title (stringlengths 2-169) | diff (stringlengths 235-19.5k) | body (stringlengths 0-30.5k) | url (stringlengths 48-84) | created_at (stringlengths 20-20) | closed_at (stringlengths 20-20) | merged_at (stringlengths 20-20) | updated_at (stringlengths 20-20) | diff_len (float64 101-3.99k) | repo_name (stringclasses 83 values) | __index_level_0__ (int64 15-52.7k) |
---|---|---|---|---|---|---|---|---|---|---|
Update README.md | diff --git a/README.md b/README.md
index 59ad89f322..03df132675 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ Written by [@xtekky](https://github.com/hlohaus) & maintained by [@hlohaus](http
> By using this repository or any code related to it, you agree to the [legal notice](LEGAL_NOTICE.md). The author is **not responsible for the usage of this repository nor endorses it**, nor is the author responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this Repository uses.
> [!Warning]
-*"gpt4free"* serves as a **PoC** (proof of concept), demonstrating the development of a an api package with multi-provider requests, with features like timeouts, load balance and flow control.
+*"gpt4free"* serves as a **PoC** (proof of concept), demonstrating the development of an API package with multi-provider requests, with features like timeouts, load balance and flow control.
> [!Note]
<sup><strong>Lastet version:</strong></sup> [](https://pypi.org/project/g4f) [](https://hub.docker.com/r/hlohaus789/g4f)
@@ -28,21 +28,21 @@ docker pull hlohaus789/g4f
- Join our Discord Group: [discord.gg/XfybzPXPH5](https://discord.gg/XfybzPXPH5)
## 🔻 Site Takedown
-Is your site on this repository and you want to take it down ? email takedown@g4f.ai with proof it is yours and it will be removed as fast as possible. - to prevent reproduction please secure your api ; )
+Is your site on this repository and you want to take it down? Send an email to takedown@g4f.ai with proof it is yours and it will be removed as fast as possible. To prevent reproduction please secure your API ;)
## 🚀 Feedback and Todo
You can always leave some feedback here: https://forms.gle/FeWV9RLEedfdkmFN6
As per the survey, here is a list of improvements to come
-- [x] update the repository to include the new openai library syntax (ex: `Openai()` class) | completed, use `g4f.client.Client`
-- [ ] golang implementation
-- [ ] 🚧 Improve Documentation (in /docs & Guides, Howtos, & Do video tutorials
+- [x] Update the repository to include the new openai library syntax (ex: `Openai()` class) | completed, use `g4f.client.Client`
+- [ ] Golang implementation
+- [ ] 🚧 Improve Documentation (in /docs & Guides, Howtos, & Do video tutorials)
- [x] Improve the provider status list & updates
- [ ] Tutorials on how to reverse sites to write your own wrapper (PoC only ofc)
- [ ] Improve the Bing wrapper. (might write a new wrapper in golang as it is very fast)
- [ ] Write a standard provider performance test to improve the stability
- [ ] Potential support and development of local models
-- [ ] 🚧 improve compatibility and error handling
+- [ ] 🚧 Improve compatibility and error handling
## 📚 Table of Contents
@@ -90,7 +90,7 @@ docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" hlohaus789/g4f:latest
```
3. Open the included client on: [http://localhost:8080/chat/](http://localhost:8080/chat/)
-or set the api base in your client to: [http://localhost:1337/v1](http://localhost:1337/v1)
+or set the API base in your client to: [http://localhost:1337/v1](http://localhost:1337/v1)
4. (Optional) If you need to log in to a provider, you can view the desktop from the container here: http://localhost:7900/?autoconnect=1&resize=scale&password=secret.
##### Use your smartphone:
@@ -191,7 +191,7 @@ See: [/docs/interference](/docs/interference.md)
##### Cookies / Access Token
-For generating images with Bing and for the OpenAi Chat you need cookies or a token from your browser session. From Bing you need the "_U" cookie and from OpenAI you need the "access_token". You can pass the cookies / the access token in the create function or you use the `set_cookies` setter before you run G4F:
+For generating images with Bing and for the OpenAI Chat you need cookies or a token from your browser session. From Bing you need the "_U" cookie and from OpenAI you need the "access_token". You can pass the cookies / the access token in the create function or you use the `set_cookies` setter before you run G4F:
```python
from g4f.cookies import set_cookies
| https://api.github.com/repos/xtekky/gpt4free/pulls/1636 | 2024-02-26T11:15:26Z | 2024-02-28T08:49:20Z | 2024-02-28T08:49:20Z | 2024-02-29T11:08:44Z | 1,203 | xtekky/gpt4free | 38,045 |
|
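The cookie-setting snippet at the end of the diff above is cut off by the dataset's length limit. A hedged sketch of the kind of call the surrounding README text describes; only the `_U`/`access_token` cookie names and the `set_cookies` import come from the diff, while the exact `set_cookies(domain, cookies)` signature is an assumption:

```python
from g4f.cookies import set_cookies

# Assumed signature: set_cookies(domain: str, cookies: dict)
set_cookies(".bing.com", {"_U": "<_U cookie from your browser session>"})
set_cookies("chat.openai.com", {"access_token": "<access_token from your session>"})
```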
ratesapi.io bought by apilayer, shut down / merged | diff --git a/README.md b/README.md
index d777d6a27f..cc1d033233 100644
--- a/README.md
+++ b/README.md
@@ -251,7 +251,6 @@ API | Description | Auth | HTTPS | CORS |
| [Exchangeratesapi.io](https://exchangeratesapi.io) | Exchange rates with currency conversion | `apiKey` | Yes | Yes |
| [Frankfurter](https://www.frankfurter.app/docs) | Exchange rates, currency conversion and time series | No | Yes | Yes |
| [National Bank of Poland](http://api.nbp.pl/en.html) | A collection of currency exchange rates (data in XML and JSON) | No | Yes | Yes |
-| [ratesapi](https://ratesapi.io) | Free exchange rates and historical rates | No | Yes | Unknown |
| [VATComply.com](https://www.vatcomply.com/documentation) | Exchange rates, geolocation and VAT number validation | No | Yes | Yes |
**[⬆ Back to Index](#index)**
| It was a good run since #902, but no longer.
- [x] My submission is formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md)
- [x] My addition is ordered alphabetically
- [x] My submission has a useful description
- [x] The description does not end with punctuation
- [x] Each table column is padded with one space on either side
- [x] I have searched the repository for any relevant issues or pull requests
- [x] Any category I am creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit
[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
| https://api.github.com/repos/public-apis/public-apis/pulls/1750 | 2021-05-27T19:49:49Z | 2021-05-28T00:48:10Z | 2021-05-28T00:48:10Z | 2021-05-28T01:23:31Z | 237 | public-apis/public-apis | 36,008 |
TF: T5 can now handle a padded past (i.e. XLA generation) | diff --git a/src/transformers/models/t5/modeling_tf_t5.py b/src/transformers/models/t5/modeling_tf_t5.py
index 5c8aec875b55d..2eebdfd1cb60e 100644
--- a/src/transformers/models/t5/modeling_tf_t5.py
+++ b/src/transformers/models/t5/modeling_tf_t5.py
@@ -23,6 +23,7 @@
import numpy as np
import tensorflow as tf
+from tensorflow.compiler.tf2xla.python.xla import dynamic_slice
from ...activations_tf import get_tf_activation
from ...modeling_tf_outputs import (
@@ -384,10 +385,19 @@ def project(hidden_states, proj_layer, key_value_states, past_key_value):
else:
position_bias = self.compute_bias(real_seq_length, key_length)
- # if key and values are already calculated
- # we want only the last query position bias
+ # if key and values are already calculated we want only the last query position bias
if past_key_value is not None:
- position_bias = position_bias[:, :, -seq_length:, :]
+ if not self.has_relative_attention_bias:
+ position_bias = position_bias[:, :, -seq_length:, :]
+ else:
+ # we might have a padded past structure, in which case we want to fetch the position bias slice
+ # right after the most recently filled past index
+ most_recently_filled_past_index = tf.reduce_max(tf.where(past_key_value[0][0, 0, :, 0] != 0.0))
+ position_bias = dynamic_slice(
+ position_bias,
+ (0, 0, most_recently_filled_past_index + 1, 0),
+ (1, self.n_heads, seq_length, real_seq_length),
+ )
if mask is not None:
position_bias = tf.cast(position_bias, dtype=mask.dtype)
diff --git a/tests/models/t5/test_modeling_tf_t5.py b/tests/models/t5/test_modeling_tf_t5.py
index c67851a054148..35f1d90886c98 100644
--- a/tests/models/t5/test_modeling_tf_t5.py
+++ b/tests/models/t5/test_modeling_tf_t5.py
@@ -590,21 +590,17 @@ def test_beam_search_xla_generate_simple(self):
]
input_ids = tokenizer(sentences, return_tensors="tf", padding=True).input_ids
- # xla_generate = tf.function(model.generate, jit_compile=True)
- xla_generate = tf.function(model.generate)
+ xla_generate = tf.function(model.generate, jit_compile=True)
- # TODO (joao): there is something not quite right with XLA T5 -- as we increase `max_length` the two outputs
- # drift appart, where the XLA version clearly degrades its quality. XLA-related variables look fine (they are
- # being padded and filled in the right places). This also happens in other generation modes. Investigate.
- output_ids = model.generate(input_ids, num_beams=2, max_length=9)
- output_ids_xla = xla_generate(input_ids, num_beams=2, max_length=9)
+ output_ids = model.generate(input_ids, num_beams=2)
+ output_ids_xla = xla_generate(input_ids, num_beams=2)
output_strings = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
output_strings_xla = tokenizer.batch_decode(output_ids_xla, skip_special_tokens=True)
expected_output_string = [
"Aujourd'hui est une belle journée.",
- "J'ai quatre chats,",
+ "J'ai quatre chats, trois chiens, deux oiseaux et un cheval.",
]
self.assertListEqual(expected_output_string, output_strings)
| # What does this PR do?
In TF T5, we now fetch the correct slice of `position_bias` -- [the same way we do it in FLAX](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_flax_t5.py#L339). The key difference is that FLAX relies on an [external variable](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_flax_t5.py#L312) for the generated length that gets incremented every time past gets updated, and here the same value is obtained dynamically from the past array (latest filled past index = generated length - 1, where latest filled past index corresponds to the maximum index with non-0 values).
All slow tests are passing and we no longer have length restrictions on the XLA beam search test, which means that:
1. Although the code for eager execution was changed, all outputs remain the same;
2. XLA generation matches non-XLA generation. | https://api.github.com/repos/huggingface/transformers/pulls/17969 | 2022-06-30T18:25:04Z | 2022-07-04T18:47:44Z | 2022-07-04T18:47:44Z | 2022-07-04T18:59:19Z | 844 | huggingface/transformers | 12,827 |
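A standalone sketch of the index computation described above and performed in the diff; the `[batch, heads, seq, dim]` past-key layout is inferred from the slice `past_key_value[0][0, 0, :, 0]`, and the concrete shapes below are illustrative assumptions:

```python
import tensorflow as tf

# A padded past: 3 positions already written, 5 still zero-padded.
past_key = tf.concat([tf.ones((1, 8, 3, 64)), tf.zeros((1, 8, 5, 64))], axis=2)

# Non-zero positions mark filled slots; the largest such index equals
# (generated length - 1), which is where the next position_bias slice starts.
most_recent = tf.reduce_max(tf.where(past_key[0, 0, :, 0] != 0.0))
assert int(most_recent) == 2
```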
Add patch_config(configurable=) arg, make with_config(configurable=) merge it with existing | diff --git a/libs/langchain/langchain/schema/runnable/base.py b/libs/langchain/langchain/schema/runnable/base.py
index 612e19d2c70b14..afce1201e184cd 100644
--- a/libs/langchain/langchain/schema/runnable/base.py
+++ b/libs/langchain/langchain/schema/runnable/base.py
@@ -2253,30 +2253,32 @@ def _invoke(
inputs: List[Input],
run_manager: CallbackManagerForChainRun,
config: RunnableConfig,
+ **kwargs: Any,
) -> List[Output]:
return self.bound.batch(
- inputs, patch_config(config, callbacks=run_manager.get_child())
+ inputs, patch_config(config, callbacks=run_manager.get_child()), **kwargs
)
def invoke(
- self, input: List[Input], config: Optional[RunnableConfig] = None
+ self, input: List[Input], config: Optional[RunnableConfig] = None, **kwargs: Any
) -> List[Output]:
- return self._call_with_config(self._invoke, input, config)
+ return self._call_with_config(self._invoke, input, config, **kwargs)
async def _ainvoke(
self,
inputs: List[Input],
run_manager: AsyncCallbackManagerForChainRun,
config: RunnableConfig,
+ **kwargs: Any,
) -> List[Output]:
return await self.bound.abatch(
- inputs, patch_config(config, callbacks=run_manager.get_child())
+ inputs, patch_config(config, callbacks=run_manager.get_child()), **kwargs
)
async def ainvoke(
self, input: List[Input], config: Optional[RunnableConfig] = None, **kwargs: Any
) -> List[Output]:
- return await self._acall_with_config(self._ainvoke, input, config)
+ return await self._acall_with_config(self._ainvoke, input, config, **kwargs)
class RunnableBinding(RunnableSerializable[Input, Output]):
@@ -2332,6 +2334,8 @@ def _merge_config(self, config: Optional[RunnableConfig]) -> RunnableConfig:
copy[key] = {**copy.get(key, {}), **config[key]} # type: ignore
elif key == "tags":
copy[key] = (copy.get(key) or []) + config[key] # type: ignore
+ elif key == "configurable":
+ copy[key] = {**copy.get(key, {}), **config[key]} # type: ignore
else:
# Even though the keys aren't literals this is correct
# because both dicts are same type
diff --git a/libs/langchain/langchain/schema/runnable/config.py b/libs/langchain/langchain/schema/runnable/config.py
index c7bf9ddc2bf905..71eb7428e0619d 100644
--- a/libs/langchain/langchain/schema/runnable/config.py
+++ b/libs/langchain/langchain/schema/runnable/config.py
@@ -135,6 +135,7 @@ def patch_config(
recursion_limit: Optional[int] = None,
max_concurrency: Optional[int] = None,
run_name: Optional[str] = None,
+ configurable: Optional[Dict[str, Any]] = None,
) -> RunnableConfig:
config = ensure_config(config)
if copy_locals:
@@ -151,6 +152,8 @@ def patch_config(
config["max_concurrency"] = max_concurrency
if run_name is not None:
config["run_name"] = run_name
+ if configurable is not None:
+ config["configurable"] = {**config.get("configurable", {}), **configurable}
return config
|
| https://api.github.com/repos/langchain-ai/langchain/pulls/11662 | 2023-10-11T13:27:23Z | 2023-10-11T13:45:32Z | 2023-10-11T13:45:32Z | 2023-10-11T13:45:33Z | 815 | langchain-ai/langchain | 43,362 |
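A usage sketch of the merge semantics this diff adds; the import path is derived from the file touched above (`langchain/schema/runnable/config.py`) and may differ in later langchain versions:

```python
from langchain.schema.runnable.config import patch_config

base = {"configurable": {"llm": "gpt-4", "temperature": 0.2}}
patched = patch_config(base, configurable={"temperature": 0.7})

# Existing keys are preserved; overlapping keys take the patched value.
assert patched["configurable"] == {"llm": "gpt-4", "temperature": 0.7}
```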
Test order randomization | diff --git a/tests/behavioral/test_command.py b/tests/behavioral/test_command.py
deleted file mode 100644
index fbee1059..00000000
--- a/tests/behavioral/test_command.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import shutil
-import unittest
-from patterns.behavioral.command import MoveFileCommand
-
-
-class CommandTest(unittest.TestCase):
- @classmethod
- def __get_test_directory(self):
- """
- Get the temporary directory for the tests.
- """
- self.test_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'test_command')
-
- @classmethod
- def setUpClass(self):
- """
- - Create a temporary directory and file
- /test_command
- /foo.txt
- - get the temporary test directory
- - and initializes the command stack.
- """
- os.mkdir('tests/behavioral/test_command')
- open('tests/behavioral/test_command/foo.txt', 'w').close()
- self.__get_test_directory()
- self.command_stack = []
- self.command_stack.append(
- MoveFileCommand(os.path.join(self.test_dir, 'foo.txt'), os.path.join(self.test_dir, 'bar.txt'))
- )
- self.command_stack.append(
- MoveFileCommand(os.path.join(self.test_dir, 'bar.txt'), os.path.join(self.test_dir, 'baz.txt'))
- )
-
- def test_sequential_execution(self):
- self.command_stack[0].execute()
- output_after_first_execution = os.listdir(self.test_dir)
- self.assertEqual(output_after_first_execution[0], 'bar.txt')
- self.command_stack[1].execute()
- output_after_second_execution = os.listdir(self.test_dir)
- self.assertEqual(output_after_second_execution[0], 'baz.txt')
-
- def test_sequential_undo(self):
- self.command_stack = list(reversed(self.command_stack))
- self.command_stack[0].undo()
- output_after_first_undo = os.listdir(self.test_dir)
- self.assertEqual(output_after_first_undo[0], 'bar.txt')
- self.command_stack[1].undo()
- output_after_second_undo = os.listdir(self.test_dir)
- self.assertEqual(output_after_second_undo[0], 'foo.txt')
-
- @classmethod
- def tearDownClass(self):
- """
- Remove the temporary directory /test_command and its content.
- """
- shutil.rmtree('tests/behavioral/test_command')
diff --git a/tests/behavioral/test_observer.py b/tests/behavioral/test_observer.py
index ae7a53a6..e24efe44 100644
--- a/tests/behavioral/test_observer.py
+++ b/tests/behavioral/test_observer.py
@@ -1,58 +1,29 @@
-import unittest
-from unittest.mock import patch
-
-from patterns.behavioral.observer import Subject, Data, DecimalViewer, HexViewer
-
-
-class TestSubject(unittest.TestCase):
- @classmethod
- def setUpClass(cls):
- cls.s = Subject()
- cls.dec_obs = DecimalViewer()
- cls.hex_obs = HexViewer()
-
- def test_a_observer_list_shall_be_empty_initially(cls):
- cls.assertEqual(len(cls.s._observers), 0)
-
- def test_b_observers_shall_be_attachable(cls):
- cls.s.attach(cls.dec_obs)
- cls.assertEqual(isinstance(cls.s._observers[0], DecimalViewer), True)
- cls.assertEqual(len(cls.s._observers), 1)
- cls.s.attach(cls.hex_obs)
- cls.assertEqual(isinstance(cls.s._observers[1], HexViewer), True)
- cls.assertEqual(len(cls.s._observers), 2)
-
- def test_c_observers_shall_be_detachable(cls):
- cls.s.detach(cls.dec_obs)
- # hex viewer shall be remaining if dec viewer is detached first
- cls.assertEqual(isinstance(cls.s._observers[0], HexViewer), True)
- cls.assertEqual(len(cls.s._observers), 1)
- cls.s.detach(cls.hex_obs)
- cls.assertEqual(len(cls.s._observers), 0)
-
-
-class TestData(unittest.TestCase):
- @classmethod
- def setUpClass(cls):
- cls.dec_obs = DecimalViewer()
- cls.hex_obs = HexViewer()
- cls.sub = Data('Data')
- # inherited behavior already tested with TestSubject
- cls.sub.attach(cls.dec_obs)
- cls.sub.attach(cls.hex_obs)
-
- def test_data_change_shall_notify_all_observers_once(cls):
- with patch.object(cls.dec_obs, 'update') as mock_dec_obs_update, patch.object(
- cls.hex_obs, 'update'
- ) as mock_hex_obs_update:
- cls.sub.data = 10
- cls.assertEqual(mock_dec_obs_update.call_count, 1)
- cls.assertEqual(mock_hex_obs_update.call_count, 1)
-
- def test_data_value_shall_be_changeable(cls):
- cls.sub.data = 20
- cls.assertEqual(cls.sub._data, 20)
-
- def test_data_name_shall_be_changeable(cls):
- cls.sub.name = 'New Data Name'
- cls.assertEqual(cls.sub.name, 'New Data Name')
+from unittest.mock import patch, Mock
+
+import pytest
+
+from patterns.behavioral.observer import Data, DecimalViewer, HexViewer
+
+
+@pytest.fixture
+def observable():
+ return Data('some data')
+
+def test_attach_detach(observable):
+ decimal_viewer = DecimalViewer()
+ assert len(observable._observers) == 0
+
+ observable.attach(decimal_viewer)
+ assert decimal_viewer in observable._observers
+
+ observable.detach(decimal_viewer)
+ assert decimal_viewer not in observable._observers
+
+def test_one_data_change_notifies_each_observer_once(observable):
+ observable.attach(DecimalViewer())
+ observable.attach(HexViewer())
+
+ with patch('patterns.behavioral.observer.DecimalViewer.update', new_callable=Mock()) as mocked_update:
+ assert mocked_update.call_count == 0
+ observable.data = 10
+ assert mocked_update.call_count == 1
diff --git a/tests/behavioral/test_state.py b/tests/behavioral/test_state.py
index 53dbf9bd..adaae509 100644
--- a/tests/behavioral/test_state.py
+++ b/tests/behavioral/test_state.py
@@ -1,54 +1,24 @@
-import unittest
+import pytest
+
from patterns.behavioral.state import Radio
-class RadioTest(unittest.TestCase):
- """
- Attention: Test case results depend on test case execution. The test cases
- in this integration test class should be executed in an explicit order:
- http://stackoverflow.com/questions/5387299/python-unittest-testcase-execution-order
- """
-
- @classmethod
- def setUpClass(self):
- self.radio = Radio()
-
- def test_initial_state(self):
- state = self.radio.state.name
- expected_state_name = 'AM'
- self.assertEqual(state, expected_state_name)
-
- def test_initial_am_station(self):
- station = self.radio.state.stations[self.radio.state.pos]
- expected_station = '1250'
- self.assertEqual(station, expected_station)
-
- def test_2nd_am_station_after_scan(self):
- self.radio.scan()
- station = self.radio.state.stations[self.radio.state.pos]
- expected_station = '1380'
- self.assertEqual(station, expected_station)
-
- def test_3rd_am_station_after_scan(self):
- self.radio.scan()
- station = self.radio.state.stations[self.radio.state.pos]
- expected_station = '1510'
- self.assertEqual(station, expected_station)
-
- def test_am_station_overflow_after_scan(self):
- self.radio.scan()
- station = self.radio.state.stations[self.radio.state.pos]
- expected_station = '1250'
- self.assertEqual(station, expected_station)
-
- def test_shall_toggle_from_am_to_fm(self):
- self.radio.toggle_amfm()
- state = self.radio.state.name
- expected_state_name = 'FM'
- self.assertEqual(state, expected_state_name)
-
- def test_shall_toggle_from_fm_to_am(self):
- self.radio.toggle_amfm()
- state = self.radio.state.name
- expected_state_name = 'AM'
- self.assertEqual(state, expected_state_name)
+@pytest.fixture
+def radio():
+ return Radio()
+
+def test_initial_state(radio):
+ assert radio.state.name == 'AM'
+
+def test_initial_am_station(radio):
+ initial_pos = radio.state.pos
+ assert radio.state.stations[initial_pos] == '1250'
+
+def test_toggle_amfm(radio):
+ assert radio.state.name == 'AM'
+
+ radio.toggle_amfm()
+ assert radio.state.name == 'FM'
+
+ radio.toggle_amfm()
+ assert radio.state.name == 'AM'
diff --git a/tox.ini b/tox.ini
index d702da2b..d86eeec9 100644
--- a/tox.ini
+++ b/tox.ini
@@ -11,8 +11,7 @@ commands =
flake8 patterns/
; `randomly-seed` option from `pytest-randomly` helps with deterministic outputs for examples like `other/blackboard.py`
pytest --randomly-seed=1234 --doctest-modules patterns/
- ; `-p no:randomly` turns off `randomly` plugin for unit tests
- pytest -s -vv --cov={envsitepackagesdir}/patterns --log-level=INFO -p no:randomly tests/
+ pytest -s -vv --cov={envsitepackagesdir}/patterns --log-level=INFO tests/
[testenv:cov-report]
| - Without these changes the test suite sometimes depends on test ordering (a bad practice in general).
- The deleted tests are fully covered by the scripts' doctests.
- Rewrote the remaining tests in pytest style. | https://api.github.com/repos/faif/python-patterns/pulls/319 | 2020-01-08T15:54:07Z | 2020-01-11T23:38:58Z | 2020-01-11T23:38:58Z | 2020-01-11T23:38:58Z | 2,165 | faif/python-patterns | 33,505
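A compact, self-contained illustration of the order-dependence problem the first bullet refers to, simplified from the `Radio` example in the diff: shared `setUpClass` state leaks across tests, while a function-scoped pytest fixture rebuilds the object for each test:

```python
import pytest

class Radio:
    def __init__(self):
        self.state = "AM"

    def toggle_amfm(self):
        self.state = "FM" if self.state == "AM" else "AM"

@pytest.fixture
def radio():
    return Radio()  # fresh instance per test, so tests pass in any order

def test_toggle(radio):
    radio.toggle_amfm()
    assert radio.state == "FM"

def test_initial_state(radio):
    assert radio.state == "AM"  # unaffected by test_toggle
```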
separate Extra options | diff --git a/extensions-builtin/extra-options-section/scripts/extra_options_section.py b/extensions-builtin/extra-options-section/scripts/extra_options_section.py
index 588b64d2386..983f87ff033 100644
--- a/extensions-builtin/extra-options-section/scripts/extra_options_section.py
+++ b/extensions-builtin/extra-options-section/scripts/extra_options_section.py
@@ -22,22 +22,23 @@ def ui(self, is_img2img):
self.comps = []
self.setting_names = []
self.infotext_fields = []
+ extra_options = shared.opts.extra_options_img2img if is_img2img else shared.opts.extra_options_txt2img
mapping = {k: v for v, k in generation_parameters_copypaste.infotext_to_setting_name_mapping}
with gr.Blocks() as interface:
- with gr.Accordion("Options", open=False) if shared.opts.extra_options_accordion and shared.opts.extra_options else gr.Group():
+ with gr.Accordion("Options", open=False) if shared.opts.extra_options_accordion and extra_options else gr.Group():
- row_count = math.ceil(len(shared.opts.extra_options) / shared.opts.extra_options_cols)
+ row_count = math.ceil(len(extra_options) / shared.opts.extra_options_cols)
for row in range(row_count):
with gr.Row():
for col in range(shared.opts.extra_options_cols):
index = row * shared.opts.extra_options_cols + col
- if index >= len(shared.opts.extra_options):
+ if index >= len(extra_options):
break
- setting_name = shared.opts.extra_options[index]
+ setting_name = extra_options[index]
with FormColumn():
comp = ui_settings.create_setting_component(setting_name)
@@ -64,7 +65,8 @@ def before_process(self, p, *args):
shared.options_templates.update(shared.options_section(('ui', "User interface"), {
- "extra_options": shared.OptionInfo([], "Options in main UI", ui_components.DropdownMulti, lambda: {"choices": list(shared.opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that also appear in txt2img/img2img interfaces").needs_reload_ui(),
+ "extra_options_txt2img": shared.OptionInfo([], "Options in main UI - txt2img", ui_components.DropdownMulti, lambda: {"choices": list(shared.opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that also appear in txt2img interfaces").needs_reload_ui(),
+ "extra_options_img2img": shared.OptionInfo([], "Options in main UI - img2img", ui_components.DropdownMulti, lambda: {"choices": list(shared.opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that also appear in img2img interfaces").needs_reload_ui(),
"extra_options_cols": shared.OptionInfo(1, "Options in main UI - number of columns", gr.Number, {"precision": 0}).needs_reload_ui(),
"extra_options_accordion": shared.OptionInfo(False, "Options in main UI - place into an accordion").needs_reload_ui()
}))
| ## Description
Separates the extra options for txt2img and img2img.
Note: the new options do not inherit the old `extra_options` values; maybe they should.
## Screenshots/videos:
settings

txt2img

img2img

## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/12551 | 2023-08-14T09:54:00Z | 2023-08-14T10:35:41Z | 2023-08-14T10:35:41Z | 2023-08-14T10:35:48Z | 682 | AUTOMATIC1111/stable-diffusion-webui | 40,439 |
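The diff above lays the options out with `row_count = math.ceil(len(extra_options) / cols)` and `index = row * cols + col`; a standalone illustration of that arithmetic, independent of Gradio:

```python
import math

def grid_rows(items, n_cols):
    # Mirrors the diff: ceil(len/cols) rows, index = row * cols + col.
    rows = math.ceil(len(items) / n_cols)
    return [items[r * n_cols:(r + 1) * n_cols] for r in range(rows)]

assert grid_rows(["a", "b", "c", "d", "e"], 2) == [["a", "b"], ["c", "d"], ["e"]]
```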
gh-107713: Reduce usage of mocks in `test_clinic.py` | diff --git a/Lib/test/test_clinic.py b/Lib/test/test_clinic.py
index f594e39a90546a..d13d8623f8093b 100644
--- a/Lib/test/test_clinic.py
+++ b/Lib/test/test_clinic.py
@@ -7,7 +7,6 @@
from test.support.os_helper import TESTFN, unlink
from textwrap import dedent
from unittest import TestCase
-import collections
import contextlib
import inspect
import os.path
@@ -21,6 +20,13 @@
from clinic import DSLParser
+def _make_clinic(*, filename='clinic_tests'):
+ clang = clinic.CLanguage(None)
+ c = clinic.Clinic(clang, filename=filename)
+ c.block_parser = clinic.BlockParser('', clang)
+ return c
+
+
def _expect_failure(tc, parser, code, errmsg, *, filename=None, lineno=None):
"""Helper for the parser tests.
@@ -41,81 +47,6 @@ def _expect_failure(tc, parser, code, errmsg, *, filename=None, lineno=None):
tc.assertEqual(cm.exception.lineno, lineno)
-class FakeConverter:
- def __init__(self, name, args):
- self.name = name
- self.args = args
-
-
-class FakeConverterFactory:
- def __init__(self, name):
- self.name = name
-
- def __call__(self, name, default, **kwargs):
- return FakeConverter(self.name, kwargs)
-
-
-class FakeConvertersDict:
- def __init__(self):
- self.used_converters = {}
-
- def get(self, name, default):
- return self.used_converters.setdefault(name, FakeConverterFactory(name))
-
-c = clinic.Clinic(language='C', filename = "file")
-
-class FakeClinic:
- def __init__(self):
- self.converters = FakeConvertersDict()
- self.legacy_converters = FakeConvertersDict()
- self.language = clinic.CLanguage(None)
- self.filename = "clinic_tests"
- self.destination_buffers = {}
- self.block_parser = clinic.BlockParser('', self.language)
- self.modules = collections.OrderedDict()
- self.classes = collections.OrderedDict()
- clinic.clinic = self
- self.name = "FakeClinic"
- self.line_prefix = self.line_suffix = ''
- self.destinations = {}
- self.add_destination("block", "buffer")
- self.add_destination("file", "buffer")
- self.add_destination("suppress", "suppress")
- d = self.destinations.get
- self.field_destinations = collections.OrderedDict((
- ('docstring_prototype', d('suppress')),
- ('docstring_definition', d('block')),
- ('methoddef_define', d('block')),
- ('impl_prototype', d('block')),
- ('parser_prototype', d('suppress')),
- ('parser_definition', d('block')),
- ('impl_definition', d('block')),
- ))
- self.functions = []
-
- def get_destination(self, name):
- d = self.destinations.get(name)
- if not d:
- sys.exit("Destination does not exist: " + repr(name))
- return d
-
- def add_destination(self, name, type, *args):
- if name in self.destinations:
- sys.exit("Destination already exists: " + repr(name))
- self.destinations[name] = clinic.Destination(name, type, self, *args)
-
- def is_directive(self, name):
- return name == "module"
-
- def directive(self, name, args):
- self.called_directives[name] = args
-
- _module_and_class = clinic.Clinic._module_and_class
-
- def __repr__(self):
- return "<FakeClinic object>"
-
-
class ClinicWholeFileTest(TestCase):
maxDiff = None
@@ -124,7 +55,7 @@ def expect_failure(self, raw, errmsg, *, filename=None, lineno=None):
filename=filename, lineno=lineno)
def setUp(self):
- self.clinic = clinic.Clinic(clinic.CLanguage(None), filename="test.c")
+ self.clinic = _make_clinic(filename="test.c")
def test_eol(self):
# regression test:
@@ -848,7 +779,7 @@ def test_clinic_1(self):
class ClinicParserTest(TestCase):
def parse(self, text):
- c = FakeClinic()
+ c = _make_clinic()
parser = DSLParser(c)
block = clinic.Block(text)
parser.parse(block)
@@ -872,7 +803,7 @@ def checkDocstring(self, fn, expected):
dedent(expected).strip())
def test_trivial(self):
- parser = DSLParser(FakeClinic())
+ parser = DSLParser(_make_clinic())
block = clinic.Block("""
module os
os.access
@@ -1119,7 +1050,7 @@ def test_cloning_nonexistent_function_correctly_fails(self):
with support.captured_stderr() as stderr:
self.expect_failure(block, err, lineno=0)
expected_debug_print = dedent("""\
- cls=None, module=<FakeClinic object>, existing='fooooooooooooooooo'
+ cls=None, module=<clinic.Clinic object>, existing='fooooooooooooooooo'
(cls or module).functions=[]
""")
stderr = stderr.getvalue()
@@ -1740,8 +1671,7 @@ def test_indent_stack_illegal_outdent(self):
self.expect_failure(block, err)
def test_directive(self):
- c = FakeClinic()
- parser = DSLParser(c)
+ parser = DSLParser(_make_clinic())
parser.flag = False
parser.directives['setflag'] = lambda : setattr(parser, 'flag', True)
block = clinic.Block("setflag")
@@ -3147,22 +3077,24 @@ def test_Block_repr(self):
self.assertEqual(repr(block3), expected_repr_3)
def test_Destination_repr(self):
+ c = _make_clinic()
+
destination = clinic.Destination(
- "foo", type="file", clinic=FakeClinic(), args=("eggs",)
+ "foo", type="file", clinic=c, args=("eggs",)
)
self.assertEqual(
repr(destination), "<clinic.Destination 'foo' type='file' file='eggs'>"
)
- destination2 = clinic.Destination("bar", type="buffer", clinic=FakeClinic())
+ destination2 = clinic.Destination("bar", type="buffer", clinic=c)
self.assertEqual(repr(destination2), "<clinic.Destination 'bar' type='buffer'>")
def test_Module_repr(self):
- module = clinic.Module("foo", FakeClinic())
+ module = clinic.Module("foo", _make_clinic())
self.assertRegex(repr(module), r"<clinic.Module 'foo' at \d+>")
def test_Class_repr(self):
- cls = clinic.Class("foo", FakeClinic(), None, 'some_typedef', 'some_type_object')
+ cls = clinic.Class("foo", _make_clinic(), None, 'some_typedef', 'some_type_object')
self.assertRegex(repr(cls), r"<clinic.Class 'foo' at \d+>")
def test_FunctionKind_repr(self):
@@ -3176,7 +3108,7 @@ def test_FunctionKind_repr(self):
def test_Function_and_Parameter_reprs(self):
function = clinic.Function(
name='foo',
- module=FakeClinic(),
+ module=_make_clinic(),
cls=None,
c_basename=None,
full_name='foofoo',
diff --git a/Tools/clinic/clinic.py b/Tools/clinic/clinic.py
index 4dfe90b314f543..2bed98c23674e0 100755
--- a/Tools/clinic/clinic.py
+++ b/Tools/clinic/clinic.py
@@ -2424,6 +2424,9 @@ def _module_and_class(
return module, cls
+ def __repr__(self) -> str:
+ return "<clinic.Clinic object>"
+
def parse_file(
filename: str,
|
* Issue: gh-107713
Fixes #107713 | https://api.github.com/repos/python/cpython/pulls/107714 | 2023-08-07T12:52:43Z | 2023-08-07T13:26:49Z | 2023-08-07T13:26:49Z | 2023-08-07T13:26:52Z | 1,825 | python/cpython | 3,873 |
Add MLEM | diff --git a/README.md b/README.md
index 67bc9517..f72b470a 100644
--- a/README.md
+++ b/README.md
@@ -1723,6 +1723,7 @@ be
* [Pythonizr](https://pythonizr.com) - An online tool to generate boilerplate machine learning code that uses scikit-learn.
* [Flyte](https://flyte.org/) - Flyte makes it easy to create concurrent, scalable, and maintainable workflows for machine learning and data processing.
* [Chaos Genius](https://github.com/chaos-genius/chaos_genius/) - ML powered analytics engine for outlier/anomaly detection and root cause analysis.
+* [MLEM](https://github.com/iterative/mlem) - Version and deploy your ML models following GitOps principles
<a name="books"></a>
## Books
| Hello,
It's a great list. I wanted to add MLEM to it because the data community would like it. Thank you for considering it. 🚀
Best, | https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/881 | 2022-09-05T14:23:56Z | 2022-09-09T16:55:12Z | 2022-09-09T16:55:12Z | 2022-09-09T16:55:12Z | 193 | josephmisiti/awesome-machine-learning | 51,853 |
Add settings UI for llama.cpp and fixed reloading of llama.cpp models | diff --git a/modules/llamacpp_model.py b/modules/llamacpp_model.py
index fa8c304564..65577ee0cf 100644
--- a/modules/llamacpp_model.py
+++ b/modules/llamacpp_model.py
@@ -16,6 +16,9 @@ class LlamaCppModel:
def __init__(self):
self.initialized = False
+ def __del__(self):
+ self.model.__del__()
+
@classmethod
def from_pretrained(self, path):
result = self()
diff --git a/modules/ui.py b/modules/ui.py
index 7abea91446..7d804fe0c5 100644
--- a/modules/ui.py
+++ b/modules/ui.py
@@ -27,7 +27,7 @@
def list_model_elements():
- elements = ['cpu_memory', 'auto_devices', 'disk', 'cpu', 'bf16', 'load_in_8bit', 'wbits', 'groupsize', 'model_type', 'pre_layer']
+ elements = ['cpu_memory', 'auto_devices', 'disk', 'cpu', 'bf16', 'load_in_8bit', 'wbits', 'groupsize', 'model_type', 'pre_layer', 'threads', 'n_batch', 'no-mmap', 'mlock', 'n_gpu_layers']
for i in range(torch.cuda.device_count()):
elements.append(f'gpu_memory_{i}')
return elements
diff --git a/server.py b/server.py
index 576bfba73d..608b4e0f2f 100644
--- a/server.py
+++ b/server.py
@@ -360,7 +360,20 @@ def create_model_menus():
shared.gradio['download_model_button'] = gr.Button("Download")
with gr.Column():
- shared.gradio['model_status'] = gr.Markdown('No model is loaded' if shared.model_name == 'None' else 'Ready')
+ with gr.Box():
+ gr.Markdown('llama.cpp parameters')
+ with gr.Row():
+ with gr.Column():
+ shared.gradio['threads'] = gr.Slider(label="threads", minimum=0, step=1, maximum=32, value=shared.args.threads)
+ shared.gradio['n_batch'] = gr.Slider(label="n_batch", minimum=1, maximum=2048, value=shared.args.n_batch)
+ shared.gradio['n_gpu_layers'] = gr.Slider(label="n-gpu-layers", minimum=0, maximum=128, value=shared.args.n_gpu_layers)
+
+ with gr.Column():
+ shared.gradio['no-mmap'] = gr.Checkbox(label="no-mmap", value=shared.args.no_mmap)
+ shared.gradio['mlock'] = gr.Checkbox(label="mlock", value=shared.args.mlock)
+
+ with gr.Row():
+ shared.gradio['model_status'] = gr.Markdown('No model is loaded' if shared.model_name == 'None' else 'Ready')
# In this event handler, the interface state is read and updated
# with the model defaults (if any), and then the model is loaded
| I added settings for llama.cpp models to the UI tab.
I also fixed unloading and reloading of llama.cpp models; previously this didn't free the memory. Unfortunately it releases only RAM, not VRAM (when offloading to the GPU), because the [llama.cpp library doesn't release that memory in its `llama_free` method](https://github.com/ggerganov/llama.cpp/issues/1456). | https://api.github.com/repos/oobabooga/text-generation-webui/pulls/2087 | 2023-05-15T20:45:09Z | 2023-05-15T22:51:23Z | 2023-05-15T22:51:23Z | 2023-05-15T22:51:23Z | 691 | oobabooga/text-generation-webui | 25,949
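A minimal sketch of why the new `__del__` hook matters when swapping models. `LlamaCppModel`, `from_pretrained`, and the module path come from the diff; the `modules.shared.model` attribute is an assumption about the repo's layout, not verified here:

```python
import gc

from modules import shared  # assumed: shared.model holds the active model
from modules.llamacpp_model import LlamaCppModel

def swap_llamacpp_model(new_path: str):
    shared.model = None  # drop the last reference to the old wrapper...
    gc.collect()         # ...so LlamaCppModel.__del__ runs and llama.cpp
                         # frees its RAM (offloaded VRAM is not released
                         # upstream; see llama.cpp issue #1456)
    shared.model = LlamaCppModel.from_pretrained(new_path)
```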
Add Grafana Mimir and Tempo | diff --git a/diagrams/onprem/monitoring.py b/diagrams/onprem/monitoring.py
index 96bf95661..2423a2db8 100644
--- a/diagrams/onprem/monitoring.py
+++ b/diagrams/onprem/monitoring.py
@@ -28,6 +28,10 @@ class Humio(_Monitoring):
_icon = "humio.png"
+class Mimir(_Monitoring):
+ _icon = "mimir.png"
+
+
class Nagios(_Monitoring):
_icon = "nagios.png"
diff --git a/diagrams/onprem/tracing.py b/diagrams/onprem/tracing.py
index e4353538d..f42fef253 100644
--- a/diagrams/onprem/tracing.py
+++ b/diagrams/onprem/tracing.py
@@ -12,4 +12,8 @@ class Jaeger(_Tracing):
_icon = "jaeger.png"
+class Tempo(_Tracing):
+ _icon = "tempo.png"
+
+
# Aliases
diff --git a/docs/nodes/onprem.md b/docs/nodes/onprem.md
index b256bc2f9..0fd3b0e25 100644
--- a/docs/nodes/onprem.md
+++ b/docs/nodes/onprem.md
@@ -350,6 +350,9 @@ Node classes list of onprem provider.
<img width="30" src="/img/resources/onprem/monitoring/humio.png" alt="Humio" style="float: left; padding-right: 5px;" >
**diagrams.onprem.monitoring.Humio**
+<img width="30" src="/img/resources/onprem/monitoring/mimir.png" alt="Mimir" style="float: left; padding-right: 5px;" >
+**diagrams.onprem.monitoring.Mimir**
+
<img width="30" src="/img/resources/onprem/monitoring/nagios.png" alt="Nagios" style="float: left; padding-right: 5px;" >
**diagrams.onprem.monitoring.Nagios**
@@ -545,6 +548,9 @@ Node classes list of onprem provider.
<img width="30" src="/img/resources/onprem/tracing/jaeger.png" alt="Jaeger" style="float: left; padding-right: 5px;" >
**diagrams.onprem.tracing.Jaeger**
+<img width="30" src="/img/resources/onprem/tracing/tempo.png" alt="Tempo" style="float: left; padding-right: 5px;" >
+**diagrams.onprem.tracing.Tempo**
+
## onprem.vcs
diff --git a/resources/onprem/monitoring/mimir.png b/resources/onprem/monitoring/mimir.png
new file mode 100644
index 000000000..fd1661d1e
Binary files /dev/null and b/resources/onprem/monitoring/mimir.png differ
diff --git a/resources/onprem/tracing/tempo.png b/resources/onprem/tracing/tempo.png
new file mode 100644
index 000000000..d0f519d0d
Binary files /dev/null and b/resources/onprem/tracing/tempo.png differ
diff --git a/website/static/img/resources/onprem/monitoring/mimir.png b/website/static/img/resources/onprem/monitoring/mimir.png
new file mode 100644
index 000000000..fd1661d1e
Binary files /dev/null and b/website/static/img/resources/onprem/monitoring/mimir.png differ
diff --git a/website/static/img/resources/onprem/tracing/tempo.png b/website/static/img/resources/onprem/tracing/tempo.png
new file mode 100644
index 000000000..d0f519d0d
Binary files /dev/null and b/website/static/img/resources/onprem/tracing/tempo.png differ
| Add Grafana Mimir to resources, onprem, monitoring and add Grafana Tempo to resources, onprem, tracing. | https://api.github.com/repos/mingrammer/diagrams/pulls/857 | 2023-02-22T15:07:09Z | 2023-05-22T23:32:16Z | 2023-05-22T23:32:16Z | 2023-05-22T23:32:16Z | 894 | mingrammer/diagrams | 52,601 |
moto Integration Fix (api gateway put integration response) | diff --git a/localstack/services/apigateway/apigateway_starter.py b/localstack/services/apigateway/apigateway_starter.py
index bf680b80bb87a..1caae22eb28df 100644
--- a/localstack/services/apigateway/apigateway_starter.py
+++ b/localstack/services/apigateway/apigateway_starter.py
@@ -40,7 +40,7 @@ def apigateway_models_resource_delete_integration(self, method_type):
return {}
- def apigateway_models_IntegrationResponse_init(self, status_code, selection_pattern=None):
+ def apigateway_models_IntegrationResponse_init(self, status_code, selection_pattern=None, response_templates=None):
self['statusCode'] = status_code
if selection_pattern:
self['selectionPattern'] = selection_pattern
| moto Integration Fix (api gateway put integration response) | https://api.github.com/repos/localstack/localstack/pulls/2404 | 2020-05-07T08:32:36Z | 2020-05-07T09:05:00Z | 2020-05-07T09:05:00Z | 2020-05-07T09:05:00Z | 177 | localstack/localstack | 28,751 |
Add option to avoid sending size between interfaces. | diff --git a/modules/generation_parameters_copypaste.py b/modules/generation_parameters_copypaste.py
index 44fe1a6c2dc..e8d5250a885 100644
--- a/modules/generation_parameters_copypaste.py
+++ b/modules/generation_parameters_copypaste.py
@@ -121,8 +121,7 @@ def run_bind():
if send_generate_info and paste_fields[tab]["fields"] is not None:
if send_generate_info in paste_fields:
- paste_field_names = ['Prompt', 'Negative prompt', 'Steps', 'Face restoration', 'Size-1', 'Size-2'] + (["Seed"] if shared.opts.send_seed else [])
-
+ paste_field_names = ['Prompt', 'Negative prompt', 'Steps', 'Face restoration'] + (['Size-1', 'Size-2'] if shared.opts.send_size else []) + (["Seed"] if shared.opts.send_seed else [])
button.click(
fn=lambda *x: x,
inputs=[field for field, name in paste_fields[send_generate_info]["fields"] if name in paste_field_names],
diff --git a/modules/shared.py b/modules/shared.py
index dc45fcaa62f..ab9012af51c 100644
--- a/modules/shared.py
+++ b/modules/shared.py
@@ -395,6 +395,7 @@ def list_samplers():
"add_model_name_to_info": OptionInfo(False, "Add model name to generation information"),
"disable_weights_auto_swap": OptionInfo(False, "When reading generation parameters from text into UI (from PNG info or pasted text), do not change the selected model/checkpoint."),
"send_seed": OptionInfo(True, "Send seed when sending prompt or image to other interface"),
+ "send_size": OptionInfo(True, "Send size when sending prompt or image to another interface"),
"font": OptionInfo("", "Font for image grids that have text"),
"js_modal_lightbox": OptionInfo(True, "Enable full page image viewer"),
"js_modal_lightbox_initially_zoomed": OptionInfo(True, "Show images zoomed in by default in full page image viewer"),
| Adds #5433, the option `send_size`, virtually identical to the extant `send_seed`:

In short: when `send_size` is unchecked, the relevant "Send to {tab}" buttons stop sending size between interfaces.
✓ Works sending txt2img -> img2img
✓ Works sending txt2img -> inpaint
\+ Doesn't break sending anything else between interfaces, or to the same interface
Note: **Gradio must refresh for `send_seed` and `send_size` to take effect**. I tried writing a `shared.opts.onchange` callback two different ways, but neither worked.
added a note to the api documentation about JSON_SORT_KEYS. Fixes #922 | diff --git a/docs/api.rst b/docs/api.rst
index 5faccc30aa..db55050075 100644
--- a/docs/api.rst
+++ b/docs/api.rst
@@ -385,6 +385,12 @@ you are using Flask 0.10 which implies that:
doSomethingWith({{ user.username|tojson|safe }});
</script>
+.. admonition:: Auto-Sort JSON Keys
+
+ The configuration variable ``JSON_SORT_KEYS`` (:ref:`config`) can be set to false to
+ stop Flask from auto-sorting keys. By default sorting is enabled and
+ outside of the app context sorting is turned on.
+
.. autofunction:: jsonify
.. autofunction:: dumps
| See #922 for the conversation and rationale.
| https://api.github.com/repos/pallets/flask/pulls/962 | 2014-01-25T01:23:30Z | 2014-02-08T16:26:27Z | 2014-02-08T16:26:27Z | 2020-11-14T06:33:21Z | 164 | pallets/flask | 20,922 |
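A minimal sketch of the setting this docs change describes, for Flask versions where the `JSON_SORT_KEYS` config key exists (it was later removed in Flask 2.3 in favor of `app.json.sort_keys`):

```python
from flask import Flask, jsonify

app = Flask(__name__)
app.config["JSON_SORT_KEYS"] = False  # stop jsonify from sorting keys alphabetically

@app.route("/user")
def user():
    # With sorting disabled, keys are no longer auto-sorted in the response.
    return jsonify(username="zed", active=True)
```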
infra: update create_api_rst | diff --git a/docs/api_reference/create_api_rst.py b/docs/api_reference/create_api_rst.py
index bd2d5a3e0b9be0..e1e976d54ee918 100644
--- a/docs/api_reference/create_api_rst.py
+++ b/docs/api_reference/create_api_rst.py
@@ -307,7 +307,14 @@ def _package_namespace(package_name: str) -> str:
def _package_dir(package_name: str = "langchain") -> Path:
"""Return the path to the directory containing the documentation."""
- if package_name in ("langchain", "experimental", "community", "core", "cli"):
+ if package_name in (
+ "langchain",
+ "experimental",
+ "community",
+ "core",
+ "cli",
+ "text-splitters",
+ ):
return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
else:
return (
| https://api.github.com/repos/langchain-ai/langchain/pulls/18361 | 2024-03-01T03:02:15Z | 2024-03-01T03:04:44Z | 2024-03-01T03:04:44Z | 2024-03-01T03:04:45Z | 212 | langchain-ai/langchain | 43,325 |
|
docs(website): generate missing website images | diff --git a/website/static/img/resources/elastic/agent/agent.png b/website/static/img/resources/elastic/agent/agent.png
new file mode 100644
index 000000000..f2a90e0d5
Binary files /dev/null and b/website/static/img/resources/elastic/agent/agent.png differ
diff --git a/website/static/img/resources/elastic/agent/endpoint.png b/website/static/img/resources/elastic/agent/endpoint.png
new file mode 100644
index 000000000..4daf3ca3e
Binary files /dev/null and b/website/static/img/resources/elastic/agent/endpoint.png differ
diff --git a/website/static/img/resources/elastic/agent/fleet.png b/website/static/img/resources/elastic/agent/fleet.png
new file mode 100644
index 000000000..55dd36861
Binary files /dev/null and b/website/static/img/resources/elastic/agent/fleet.png differ
diff --git a/website/static/img/resources/elastic/agent/integrations.png b/website/static/img/resources/elastic/agent/integrations.png
new file mode 100644
index 000000000..9bd753953
Binary files /dev/null and b/website/static/img/resources/elastic/agent/integrations.png differ
diff --git a/website/static/img/resources/elastic/beats/apm.png b/website/static/img/resources/elastic/beats/apm.png
new file mode 100644
index 000000000..fdb5796a9
Binary files /dev/null and b/website/static/img/resources/elastic/beats/apm.png differ
diff --git a/website/static/img/resources/elastic/beats/auditbeat.png b/website/static/img/resources/elastic/beats/auditbeat.png
new file mode 100644
index 000000000..4e41ae782
Binary files /dev/null and b/website/static/img/resources/elastic/beats/auditbeat.png differ
diff --git a/website/static/img/resources/elastic/beats/filebeat.png b/website/static/img/resources/elastic/beats/filebeat.png
new file mode 100644
index 000000000..778af3d3b
Binary files /dev/null and b/website/static/img/resources/elastic/beats/filebeat.png differ
diff --git a/website/static/img/resources/elastic/beats/functionbeat.png b/website/static/img/resources/elastic/beats/functionbeat.png
new file mode 100644
index 000000000..080e9f461
Binary files /dev/null and b/website/static/img/resources/elastic/beats/functionbeat.png differ
diff --git a/website/static/img/resources/elastic/beats/heartbeat.png b/website/static/img/resources/elastic/beats/heartbeat.png
new file mode 100644
index 000000000..975daa7b3
Binary files /dev/null and b/website/static/img/resources/elastic/beats/heartbeat.png differ
diff --git a/website/static/img/resources/elastic/beats/metricbeat.png b/website/static/img/resources/elastic/beats/metricbeat.png
new file mode 100644
index 000000000..80082cd4d
Binary files /dev/null and b/website/static/img/resources/elastic/beats/metricbeat.png differ
diff --git a/website/static/img/resources/elastic/beats/packetbeat.png b/website/static/img/resources/elastic/beats/packetbeat.png
new file mode 100644
index 000000000..9ede7e1ee
Binary files /dev/null and b/website/static/img/resources/elastic/beats/packetbeat.png differ
diff --git a/website/static/img/resources/elastic/beats/winlogbeat.png b/website/static/img/resources/elastic/beats/winlogbeat.png
new file mode 100644
index 000000000..70f12acbb
Binary files /dev/null and b/website/static/img/resources/elastic/beats/winlogbeat.png differ
diff --git a/website/static/img/resources/elastic/elasticsearch/logstash-pipeline.png b/website/static/img/resources/elastic/elasticsearch/logstash-pipeline.png
new file mode 100644
index 000000000..4a7724569
Binary files /dev/null and b/website/static/img/resources/elastic/elasticsearch/logstash-pipeline.png differ
diff --git a/website/static/img/resources/elastic/elasticsearch/map-services.png b/website/static/img/resources/elastic/elasticsearch/map-services.png
new file mode 100644
index 000000000..774d0c3cb
Binary files /dev/null and b/website/static/img/resources/elastic/elasticsearch/map-services.png differ
diff --git a/website/static/img/resources/elastic/elasticsearch/searchable-snapshots.png b/website/static/img/resources/elastic/elasticsearch/searchable-snapshots.png
new file mode 100644
index 000000000..3fdd73d90
Binary files /dev/null and b/website/static/img/resources/elastic/elasticsearch/searchable-snapshots.png differ
diff --git a/website/static/img/resources/elastic/elasticsearch/stack.png b/website/static/img/resources/elastic/elasticsearch/stack.png
new file mode 100644
index 000000000..71e6651b5
Binary files /dev/null and b/website/static/img/resources/elastic/elasticsearch/stack.png differ
diff --git a/website/static/img/resources/elastic/enterprisesearch/crawler.png b/website/static/img/resources/elastic/enterprisesearch/crawler.png
new file mode 100644
index 000000000..555801d00
Binary files /dev/null and b/website/static/img/resources/elastic/enterprisesearch/crawler.png differ
diff --git a/website/static/img/resources/elastic/observability/observability.png b/website/static/img/resources/elastic/observability/observability.png
index 5844caa98..f4408894c 100644
Binary files a/website/static/img/resources/elastic/observability/observability.png and b/website/static/img/resources/elastic/observability/observability.png differ
diff --git a/website/static/img/resources/elastic/security/xdr.png b/website/static/img/resources/elastic/security/xdr.png
new file mode 100644
index 000000000..972f1f16a
Binary files /dev/null and b/website/static/img/resources/elastic/security/xdr.png differ
diff --git a/website/static/img/resources/onprem/storage/portworx.png b/website/static/img/resources/onprem/storage/portworx.png
new file mode 100644
index 000000000..7464ed044
Binary files /dev/null and b/website/static/img/resources/onprem/storage/portworx.png differ
| autogen did not run after merging the following PRs:
#782
#742
The resources were not copied into the website static folder. | https://api.github.com/repos/mingrammer/diagrams/pulls/806 | 2022-12-03T08:24:02Z | 2023-01-08T08:08:43Z | 2023-01-08T08:08:43Z | 2023-01-08T08:08:43Z | 1,459 | mingrammer/diagrams | 52,609
Fix documentation for provider's release | diff --git a/dev/README_RELEASE_PROVIDER_PACKAGES.md b/dev/README_RELEASE_PROVIDER_PACKAGES.md
index 1b79d9beecb78..58eed39ddb2ee 100644
--- a/dev/README_RELEASE_PROVIDER_PACKAGES.md
+++ b/dev/README_RELEASE_PROVIDER_PACKAGES.md
@@ -30,13 +30,14 @@
- [Commit the source packages to Apache SVN repo](#commit-the-source-packages-to-apache-svn-repo)
- [Publish the Regular convenience package to PyPI](#publish-the-regular-convenience-package-to-pypi)
- [Add tags in git](#add-tags-in-git)
- - [Publish documentation](#publish-documentation)
+ - [Prepare documentation](#prepare-documentation)
- [Prepare voting email for Providers release candidate](#prepare-voting-email-for-providers-release-candidate)
- [Verify the release by PMC members](#verify-the-release-by-pmc-members)
- [Verify by Contributors](#verify-by-contributors)
- [Publish release](#publish-release)
- [Summarize the voting for the Apache Airflow release](#summarize-the-voting-for-the-apache-airflow-release)
- [Publish the Regular convenience package to PyPI](#publish-the-regular-convenience-package-to-pypi-1)
+ - [Publish documentation prepared before](#publish-documentation-prepared-before)
- [Add tags in git](#add-tags-in-git-1)
- [Notify developers of release](#notify-developers-of-release)
@@ -229,7 +230,7 @@ set tags for the providers in the repo.
./dev/provider_packages/tag_providers.sh
```
-## Publish documentation
+## Prepare documentation
Documentation is an essential part of the product and should be made available to users.
In our cases, documentation for the released versions is published in a separate repository -
@@ -646,7 +647,7 @@ export AIRFLOW_REPO_ROOT=$(pwd)
# Go to the directory where you have checked out the dev svn release
# And go to the sub-folder with RC candidates
-cd "<ROOT_OF_YOUR_DEV_REPO>/backport-providers/${VERSION_RC}"
+cd "<ROOT_OF_YOUR_DEV_REPO>/providers/"
export SOURCE_DIR=$(pwd)
# Go the folder where you have checked out the release repo
@@ -685,25 +686,18 @@ python ${AIRFLOW_REPO_ROOT}/dev/provider_packages/remove_old_releases.py \
# Commit to SVN
-svn commit -m "Release Airflow Backport Providers ${VERSION} from ${VERSION_RC}"
+svn commit -m "Release Airflow Providers on $(date)"
```
Verify that the packages appear in
-[backport-providers](https://dist.apache.org/repos/dist/release/airflow/backport-providers)
+[backport-providers](https://dist.apache.org/repos/dist/release/airflow/providers)
### Publish the final version convenience package to PyPI
-Checkout the RC Version:
+Checkout the RC Version for the RC Version released (there is a batch of providers - one of them is enough):
```shell script
-git checkout backport-providers-${VERSION_RC}
-```
-
-Tag and push the final version (providing that your apache remote is named 'apache'):
-
-```shell script
-git tag backport-providers-${VERSION}
-git push apache backport-providers-${VERSION}
+git checkout providers-<PROVIDER_NAME>/<VERSION_RC>
```
In order to publish to PyPI you just need to build and release packages.
@@ -717,7 +711,7 @@ In order to publish to PyPI you just need to build and release packages.
if you ony build few packages, run:
```shell script
-./breeze --backports prepare-provider-packages <PACKAGE> ...
+./breeze prepare-provider-packages <PACKAGE> ...
```
In case you decided to remove some of the packages. remove them from dist folder now:
@@ -763,6 +757,7 @@ rm -rf ${AIRFLOW_REPO_ROOT}/dist/*
if you ony build few packages, run:
```shell script
+rm -rf ${AIRFLOW_REPO_ROOT}/dist/*
./breeze prepare-provider-packages --package-format both PACKAGE PACKAGE ....
```
@@ -789,6 +784,12 @@ twine upload -r pypi ${AIRFLOW_REPO_ROOT}/dist/*
* Again, confirm that the packages are available under the links printed.
+## Publish documentation prepared before
+
+Merge the PR that you prepared before with the documentation. If you removed some of the providers
+from the release - remove the versions from the prepared documentation and update stable.txt with the
+previous version for those providers before merging the PR.
+
## Add tags in git
@@ -809,7 +810,7 @@ Subject:
```shell script
cat <<EOF
-Airflow Providers are released
+Airflow Providers released on $(date) are ready
EOF
```
@@ -819,7 +820,7 @@ Body:
cat <<EOF
Dear Airflow community,
-I'm happy to announce that new version of Airflow Providers packages were just released.
+I'm happy to announce that new versions of Airflow Providers packages were just released.
The source release, as well as the binary releases, are available here:
@@ -829,7 +830,7 @@ We also made those versions available on PyPi for convenience ('pip install apac
https://pypi.org/search/?q=apache-airflow-providers
-The documentation and changelogs are available in the PyPI packages:
+The documentation is available at http://airflow.apache.org/docs/ and linked from the PyPI packages:
<PASTE TWINE UPLOAD LINKS HERE. SORT THEM BEFORE!>
|
| https://api.github.com/repos/apache/airflow/pulls/14654 | 2021-03-07T23:34:29Z | 2021-03-09T20:22:37Z | 2021-03-09T20:22:37Z | 2021-03-09T20:22:38Z | 1,225 | apache/airflow | 14,882 |
fix: BigQueryVectorSearch JSON type unsupported for metadatas | diff --git a/libs/community/langchain_community/vectorstores/bigquery_vector_search.py b/libs/community/langchain_community/vectorstores/bigquery_vector_search.py
index 3c77ffb917ea4e..da89dc34c2b2f0 100644
--- a/libs/community/langchain_community/vectorstores/bigquery_vector_search.py
+++ b/libs/community/langchain_community/vectorstores/bigquery_vector_search.py
@@ -404,7 +404,8 @@ def get_documents(
if self.metadata_field:
metadata = row[self.metadata_field]
if metadata:
- metadata = json.loads(metadata)
+ if not isinstance(metadata, dict):
+ metadata = json.loads(metadata)
else:
metadata = {}
metadata["__id"] = row[self.doc_id_field]
@@ -544,7 +545,8 @@ def _search_with_score_and_embeddings_by_vector(
for row in job:
metadata = row[self.metadata_field]
if metadata:
- metadata = json.loads(metadata)
+ if not isinstance(metadata, dict):
+ metadata = json.loads(metadata)
else:
metadata = {}
metadata["__id"] = row[self.doc_id_field]
| Thank you for contributing to LangChain!
- [x] **PR title**: "community: fix error in BigQueryVectorSearch"
- [x] **PR message**:
- **Description:** Fixes the incorrect parsing of the metadata column
  - **Issue:** The metadata column can be of type JSON or STRING [1], but the code only works for the STRING type (a minimal sketch of the fix follows this checklist).
- [x] **Add tests and docs**: If you're adding a new integration, please include
1. An example notebook: [demo](https://github.com/ashleyxuu/langchain/blob/master/docs/docs/integrations/vectorstores/bigquery_vector_search.ipynb)
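A minimal sketch of the defensive-parsing pattern the diff applies; the function name and the sample values below are hypothetical stand-ins, not the library's API:

```python
import json

def parse_metadata(raw):
    """Accept either a JSON string or an already-decoded dict."""
    if not raw:
        return {}
    if isinstance(raw, dict):  # JSON column: BigQuery already decoded it
        return raw
    return json.loads(raw)     # STRING column: decode it ourselves

assert parse_metadata('{"a": 1}') == {"a": 1}
assert parse_metadata({"a": 1}) == {"a": 1}
assert parse_metadata(None) == {}
```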
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.
| https://api.github.com/repos/langchain-ai/langchain/pulls/18234 | 2024-02-27T23:35:40Z | 2024-02-28T16:19:53Z | 2024-02-28T16:19:53Z | 2024-02-28T16:19:53Z | 246 | langchain-ai/langchain | 43,506 |
[release] minor fix to pytorch_pbt_failure test when using gpu. | diff --git a/python/ray/train/examples/pytorch/tune_cifar_torch_pbt_example.py b/python/ray/train/examples/pytorch/tune_cifar_torch_pbt_example.py
index 90846eb84824e..196051971129f 100644
--- a/python/ray/train/examples/pytorch/tune_cifar_torch_pbt_example.py
+++ b/python/ray/train/examples/pytorch/tune_cifar_torch_pbt_example.py
@@ -70,6 +70,10 @@ def train_func(config):
model = resnet18()
+ # Note that `prepare_model` needs to be called before setting optimizer.
+ if not session.get_checkpoint(): # fresh start
+ model = train.torch.prepare_model(model)
+
# Create optimizer.
optimizer_config = {
"lr": config.get("lr"),
@@ -84,6 +88,7 @@ def train_func(config):
# Load in model
model_state = checkpoint_dict["model"]
model.load_state_dict(model_state)
+ model = train.torch.prepare_model(model)
# Load in optimizer
optimizer_state = checkpoint_dict["optimizer_state_dict"]
@@ -97,8 +102,6 @@ def train_func(config):
checkpoint_epoch = checkpoint_dict["epoch"]
starting_epoch = checkpoint_epoch + 1
- model = train.torch.prepare_model(model)
-
# Load in training and validation data.
transform_train = transforms.Compose(
[
| manually tested on https://console.anyscale-staging.com/o/anyscale-internal/projects/prj_qC3ZfndQWYYjx2cz8KWGNUL4/clusters/ses_wttwryzcyl6ahytbfz23vwm5fa?command-history-section=head_start_up_log
Signed-off-by: xwjiang2010 <xwjiang2010@gmail.com>
<!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've signed off every commit(by using the -s flag, i.e., `git commit -s`) in this PR.
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
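A minimal sketch of the ordering constraint this PR enforces, per the comment added in the diff: `prepare_model` must wrap the model before the optimizer is built from its parameters. The optimizer choice and hyperparameters here are assumptions for illustration:

```python
import torch
from torchvision.models import resnet18
from ray import train

def train_func(config):
    model = resnet18()
    # Wrap/move the model first (device placement, DDP wrapping, etc.) ...
    model = train.torch.prepare_model(model)
    # ... then create the optimizer over the *prepared* model's parameters.
    optimizer = torch.optim.SGD(model.parameters(), lr=config.get("lr", 0.02))
    ...
```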
| https://api.github.com/repos/ray-project/ray/pulls/32070 | 2023-01-30T21:15:28Z | 2023-01-31T16:22:28Z | 2023-01-31T16:22:28Z | 2023-07-26T19:51:24Z | 315 | ray-project/ray | 19,196 |
bpo-24758: Improve the error msg for unittest.mock.Mock()'s unsafe mode | diff --git a/Lib/unittest/mock.py b/Lib/unittest/mock.py
index 351aba5d44d7f4..a33a6208d6b584 100644
--- a/Lib/unittest/mock.py
+++ b/Lib/unittest/mock.py
@@ -571,7 +571,8 @@ def __getattr__(self, name):
raise AttributeError(name)
if not self._mock_unsafe:
if name.startswith(('assert', 'assret')):
- raise AttributeError(name)
+ raise AttributeError("Attributes cannot start with 'assert' "
+ "or 'assret'")
result = self._mock_children.get(name)
if result is _deleted:
diff --git a/Lib/unittest/test/testmock/testmock.py b/Lib/unittest/test/testmock/testmock.py
index 5f917dd20f1dde..b20b8e20e7e6fd 100644
--- a/Lib/unittest/test/testmock/testmock.py
+++ b/Lib/unittest/test/testmock/testmock.py
@@ -1453,9 +1453,10 @@ def static_method(): pass
#Issue21238
def test_mock_unsafe(self):
m = Mock()
- with self.assertRaises(AttributeError):
+ msg = "Attributes cannot start with 'assert' or 'assret'"
+ with self.assertRaisesRegex(AttributeError, msg):
m.assert_foo_call()
- with self.assertRaises(AttributeError):
+ with self.assertRaisesRegex(AttributeError, msg):
m.assret_foo_call()
m = Mock(unsafe=True)
m.assert_foo_call()
|
<!-- issue-number: [bpo-24758](https://bugs.python.org/issue24758) -->
https://bugs.python.org/issue24758
<!-- /issue-number -->
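A brief sketch of the behavior this patch touches, based on the updated tests (illustrative, assuming the patched `unittest.mock`):

```python
from unittest.mock import Mock

m = Mock()
try:
    m.assert_foo_call()  # misspelled assertion name is rejected
except AttributeError as e:
    # With this patch the message explains the restriction:
    print(e)  # Attributes cannot start with 'assert' or 'assret'

m = Mock(unsafe=True)    # opt out of the safety check
m.assert_foo_call()      # no error; returns a child mock
```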
| https://api.github.com/repos/python/cpython/pulls/12991 | 2019-04-28T05:59:43Z | 2019-05-08T17:32:24Z | 2019-05-08T17:32:24Z | 2019-05-08T17:32:28Z | 351 | python/cpython | 4,728 |
ansible-test - Use quay.io containers in plugins. | diff --git a/changelogs/fragments/ansible-test-container-images.yml b/changelogs/fragments/ansible-test-container-images.yml
new file mode 100644
index 00000000000000..d5913f5fb3496c
--- /dev/null
+++ b/changelogs/fragments/ansible-test-container-images.yml
@@ -0,0 +1,3 @@
+minor_changes:
+ - ansible-test - Update the ``galaxy`` test plugin to get its container from a copy on quay.io.
+ - ansible-test - Update the ``openshift`` test plugin to get its container from a copy on quay.io.
diff --git a/test/lib/ansible_test/_internal/commands/integration/cloud/galaxy.py b/test/lib/ansible_test/_internal/commands/integration/cloud/galaxy.py
index 9c9000715d88a2..de58cbf5bca69f 100644
--- a/test/lib/ansible_test/_internal/commands/integration/cloud/galaxy.py
+++ b/test/lib/ansible_test/_internal/commands/integration/cloud/galaxy.py
@@ -86,7 +86,7 @@ def __init__(self, args): # type: (IntegrationConfig) -> None
# the newer update is available.
self.pulp = os.environ.get(
'ANSIBLE_PULP_CONTAINER',
- 'docker.io/pulp/pulp-galaxy-ng@sha256:b79a7be64eff86d8f58db9ca83ed4967bd8b4e45c99addb17a91d11926480cf1'
+ 'quay.io/ansible/pulp-galaxy-ng:b79a7be64eff'
)
self.uses_docker = True
diff --git a/test/lib/ansible_test/_internal/commands/integration/cloud/openshift.py b/test/lib/ansible_test/_internal/commands/integration/cloud/openshift.py
index c30785afafbdd2..10f63ac05aa7c7 100644
--- a/test/lib/ansible_test/_internal/commands/integration/cloud/openshift.py
+++ b/test/lib/ansible_test/_internal/commands/integration/cloud/openshift.py
@@ -36,7 +36,7 @@ def __init__(self, args): # type: (IntegrationConfig) -> None
super().__init__(args, config_extension='.kubeconfig')
# The image must be pinned to a specific version to guarantee CI passes with the version used.
- self.image = 'openshift/origin:v3.9.0'
+ self.image = 'quay.io/ansible/openshift-origin:v3.9.0'
self.uses_docker = True
self.uses_config = True
| ##### SUMMARY
ansible-test - Use quay.io containers in plugins.
##### ISSUE TYPE
Feature Pull Request
##### COMPONENT NAME
ansible-test
| https://api.github.com/repos/ansible/ansible/pulls/77058 | 2022-02-17T21:08:25Z | 2022-02-17T21:31:44Z | 2022-02-17T21:31:44Z | 2022-03-03T14:00:19Z | 597 | ansible/ansible | 48,878 |
Fix Markdown() documentation | diff --git a/rich/markdown.py b/rich/markdown.py
index 26167286b..35ac28c1c 100644
--- a/rich/markdown.py
+++ b/rich/markdown.py
@@ -399,7 +399,7 @@ class Markdown(JupyterMixin):
style (Union[str, Style], optional): Optional style to apply to markdown.
hyperlinks (bool, optional): Enable hyperlinks. Defaults to ``True``.
inline_code_lexer: (str, optional): Lexer to use if inline code highlighting is
- enabled. Defaults to "python".
+ enabled. Defaults to None.
inline_code_theme: (Optional[str], optional): Pygments theme for inline code
highlighting, or None for no highlighting. Defaults to None.
"""
| Quick fix for #1350
## Type of changes
- [ ] Bug fix
- [ ] New feature
- [x] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
## Description
No real code added, just a fix in the docs as pointed out in #1350 | https://api.github.com/repos/Textualize/rich/pulls/1352 | 2021-07-15T17:35:22Z | 2021-07-15T20:37:40Z | 2021-07-15T20:37:39Z | 2021-07-15T20:37:40Z | 171 | Textualize/rich | 48,341 |
Query params and secrets in app tests | diff --git a/lib/streamlit/testing/v1/app_test.py b/lib/streamlit/testing/v1/app_test.py
index b5654192c2c0..246532df3e77 100644
--- a/lib/streamlit/testing/v1/app_test.py
+++ b/lib/streamlit/testing/v1/app_test.py
@@ -22,6 +22,7 @@
import traceback
from typing import Any, Callable, Sequence
from unittest.mock import MagicMock
+from urllib import parse
from streamlit import source_util, util
from streamlit.proto.WidgetStates_pb2 import WidgetStates
@@ -31,6 +32,7 @@
)
from streamlit.runtime.media_file_manager import MediaFileManager
from streamlit.runtime.memory_media_file_storage import MemoryMediaFileStorage
+from streamlit.runtime.secrets import Secrets
from streamlit.runtime.state.session_state import SessionState
from streamlit.testing.v1.element_tree import (
Block,
@@ -82,6 +84,7 @@ def __init__(self, script_path: str, *, default_timeout: float):
self.default_timeout = default_timeout
self.session_state = SessionState()
self.query_params: dict[str, Any] = {}
+ self.secrets: dict[str, Any] = {}
tree = ElementTree()
tree._runner = self
@@ -177,6 +180,10 @@ def _run(
Timeout is in seconds, or None to use the default timeout of the runner.
"""
+ # Have to import the streamlit module itself so replacing st.secrets
+ # is visible to other modules.
+ import streamlit as st
+
if timeout is None:
timeout = self.default_timeout
@@ -191,14 +198,29 @@ def _run(
self.saved_cached_pages = source_util._cached_pages
source_util._cached_pages = None
+ saved_secrets: Secrets = st.secrets
+ # Only modify global secrets stuff if we have been given secrets
+ if self.secrets:
+ new_secrets = Secrets([])
+ new_secrets._secrets = self.secrets
+ st.secrets = new_secrets
+
with patch_config_options({"runner.postScriptGC": False}):
script_runner = LocalScriptRunner(self._script_path, self.session_state)
self._tree = script_runner.run(widget_state, self.query_params, timeout)
self._tree._runner = self
+ # Last event is SHUTDOWN, so the corresponding data includes query string
+ query_string = script_runner.event_data[-1]["client_state"].query_string
+ self.query_params = parse.parse_qs(query_string)
# teardown
with source_util._pages_cache_lock:
source_util._cached_pages = self.saved_cached_pages
+
+ if self.secrets:
+ if st.secrets._secrets is not None:
+ self.secrets = dict(st.secrets._secrets)
+ st.secrets = saved_secrets
Runtime._instance = None
return self
diff --git a/lib/tests/streamlit/testing/test_runner_test.py b/lib/tests/streamlit/testing/test_runner_test.py
index 5ac15c19d110..9842ee0ce5af 100644
--- a/lib/tests/streamlit/testing/test_runner_test.py
+++ b/lib/tests/streamlit/testing/test_runner_test.py
@@ -42,7 +42,7 @@ def test_from_file():
script.run()
-def test_query_params():
+def test_get_query_params():
def script():
import streamlit as st
@@ -54,3 +54,27 @@ def script():
at.query_params["bar"] = "baz"
at.run()
assert at.get("json")[0].body == '{"foo": ["5"], "bar": ["baz"]}'
+
+
+def test_set_query_params():
+ def script():
+ import streamlit as st
+
+ st.experimental_set_query_params(foo="bar")
+
+ at = AppTest.from_function(script).run()
+ # parse.parse_qs puts everything in lists
+ assert at.query_params["foo"] == ["bar"]
+
+
+def test_secrets():
+ def script():
+ import streamlit as st
+
+ st.write(st.secrets["foo"])
+
+ at = AppTest.from_function(script)
+ at.secrets["foo"] = "bar"
+ at.run()
+ assert at.markdown[0].value == "bar"
+ assert at.secrets["foo"] == "bar"
| Query param reading, plus somewhat hacky secrets support.
## Testing Plan
Some tests added, but secrets are undertested
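A minimal sketch of how the new secrets hook is exercised, mirroring the test added in the diff (the import path is assumed from the patched module's location):

```python
from streamlit.testing.v1.app_test import AppTest

def script():
    import streamlit as st
    st.write(st.secrets["foo"])

at = AppTest.from_function(script)
at.secrets["foo"] = "bar"  # injected before the run
at.run()
assert at.markdown[0].value == "bar"
```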
---
**Contribution License Agreement**
By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
| https://api.github.com/repos/streamlit/streamlit/pulls/7561 | 2023-10-16T23:11:48Z | 2023-10-17T23:21:48Z | 2023-10-17T23:21:48Z | 2023-10-17T23:21:52Z | 966 | streamlit/streamlit | 22,108 |
Fix proxyFile regex to properly match an address with a - | diff --git a/lib/core/option.py b/lib/core/option.py
index 64722e1e276..0fcd0406fa3 100755
--- a/lib/core/option.py
+++ b/lib/core/option.py
@@ -2324,7 +2324,7 @@ def _setProxyList():
return
conf.proxyList = []
- for match in re.finditer(r"(?i)((http[^:]*|socks[^:]*)://)?([\w.]+):(\d+)", readCachedFileContent(conf.proxyFile)):
+ for match in re.finditer(r"(?i)((http[^:]*|socks[^:]*)://)?([\w\-.]+):(\d+)", readCachedFileContent(conf.proxyFile)):
_, type_, address, port = match.groups()
conf.proxyList.append("%s://%s:%s" % (type_ or "http", address, port))
| Hi!
If you have a proxy file with some hosts containing `-` (a valid character in a domain name), the regex that matches the type_, address and port doesn't work properly.
Just added the `-` to the regex and everything works fine!
```
$ python test.py
Testing string: foo.bar:80 fo-o.bar:80 foo-bar.foo.bar:80 foo-bar.fo-o.bar:80
Results for old regex:
http://foo.bar:80
http://o.bar:80
http://bar.foo.bar:80
http://o.bar:80
Results for new regex:
http://foo.bar:80
http://fo-o.bar:80
http://foo-bar.foo.bar:80
http://foo-bar.fo-o.bar:80
```
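A reproduction sketch of the comparison above (a hypothetical stand-in for the author's `test.py`):

```python
import re

OLD = r"(?i)((http[^:]*|socks[^:]*)://)?([\w.]+):(\d+)"
NEW = r"(?i)((http[^:]*|socks[^:]*)://)?([\w\-.]+):(\d+)"

line = "foo.bar:80 fo-o.bar:80 foo-bar.foo.bar:80 foo-bar.fo-o.bar:80"
for name, pattern in (("old", OLD), ("new", NEW)):
    proxies = ["%s://%s:%s" % (t or "http", a, p)
               for _, t, a, p in re.findall(pattern, line)]
    print(name, proxies)  # the old pattern drops everything before a '-'
```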
| https://api.github.com/repos/sqlmapproject/sqlmap/pulls/2401 | 2017-02-17T23:46:58Z | 2017-02-19T00:31:12Z | 2017-02-19T00:31:12Z | 2017-02-19T00:31:12Z | 201 | sqlmapproject/sqlmap | 15,085 |
Render order enforcing | diff --git a/gym/utils/env_checker.py b/gym/utils/env_checker.py
index 19e4317c719..399c5b10e27 100644
--- a/gym/utils/env_checker.py
+++ b/gym/utils/env_checker.py
@@ -388,13 +388,6 @@ def check_env(env: gym.Env, warn: bool = True, skip_render_check: bool = True) -
observation_space = env.observation_space
action_space = env.action_space
- try:
- # As environments are normally wrapped by OrderEnforcing
- # "Cannot call env.render() before calling env.reset()" is raised
- env.step(env.action_space.sample())
- except AssertionError as e:
- assert "Cannot call env.step()" in str(e)
-
# Warn the user if needed.
# A warning means that the environment may run but not work properly with popular RL libraries.
if warn:
diff --git a/gym/wrappers/order_enforcing.py b/gym/wrappers/order_enforcing.py
index 32e1f0b7149..4bb0720da9a 100644
--- a/gym/wrappers/order_enforcing.py
+++ b/gym/wrappers/order_enforcing.py
@@ -1,5 +1,6 @@
"""Wrapper to enforce the proper ordering of environment operations."""
import gym
+from gym.error import ResetNeeded
class OrderEnforcing(gym.Wrapper):
@@ -18,14 +19,21 @@ class OrderEnforcing(gym.Wrapper):
>>> env.step(0)
"""
- def __init__(self, env):
- """A wrapper that will produce an error if :meth:`step` is called before an initial :meth:`reset`."""
+ def __init__(self, env: gym.Env, disable_render_order_enforcing: bool = False):
+ """A wrapper that will produce an error if :meth:`step` is called before an initial :meth:`reset`.
+
+ Args:
+ env: The environment to wrap
+ disable_render_order_enforcing: If to disable render order enforcing
+ """
super().__init__(env)
- self._has_reset = False
+ self._has_reset: bool = False
+ self._disable_render_order_enforcing: bool = disable_render_order_enforcing
def step(self, action):
- """Steps through the environment with :param:`kwargs`."""
- assert self._has_reset, "Cannot call env.step() before calling env.reset()"
+ """Steps through the environment with `kwargs`."""
+ if not self._has_reset:
+ raise ResetNeeded("Cannot call env.step() before calling env.reset()")
return self.env.step(action)
def reset(self, **kwargs):
@@ -34,15 +42,10 @@ def reset(self, **kwargs):
return self.env.reset(**kwargs)
def render(self, **kwargs):
- """Checks that the environment has been :meth:`reset` before rendering the environment."""
- if hasattr(self.unwrapped, "disable_render_order_enforcing"):
- if not self.unwrapped.disable_render_order_enforcing:
- assert (
- self._has_reset
- ), "Cannot call env.render() before calling env.reset()"
- else:
- assert self._has_reset, (
- "Cannot call env.render() before calling env.reset(), if this is a intended property, "
- "set `disable_render_order_enforcing=True` on the base environment (env.unwrapped)."
+ """Renders the environment with `kwargs`."""
+ if not self._disable_render_order_enforcing and not self._has_reset:
+ raise ResetNeeded(
+ "Cannot call `env.render()` before calling `env.reset()`, if this is a intended action, "
+ "set `disable_render_order_enforcing=True` on the OrderEnforcer wrapper."
)
return self.env.render(**kwargs)
diff --git a/tests/wrappers/test_order_enforcing.py b/tests/wrappers/test_order_enforcing.py
index 7225fd3437c..6b77344ce29 100644
--- a/tests/wrappers/test_order_enforcing.py
+++ b/tests/wrappers/test_order_enforcing.py
@@ -2,6 +2,7 @@
import gym
from gym.envs.classic_control import CartPoleEnv
+from gym.error import ResetNeeded
from gym.wrappers import OrderEnforcing
from tests.envs.spec_list import spec_list
from tests.wrappers.utils import has_wrapper
@@ -9,12 +10,14 @@
@pytest.mark.parametrize("spec", spec_list, ids=[spec.id for spec in spec_list])
def test_gym_make_order_enforcing(spec):
+ """Checks that gym.make wrappers the environment with the OrderEnforcing wrapper."""
env = gym.make(spec.id)
assert has_wrapper(env, OrderEnforcing)
def test_order_enforcing():
+ """Checks that the order enforcing works as expected, raising an error before reset is called and not after."""
# The reason for not using gym.make is that all environments are by default wrapped in the order enforcing wrapper
env = CartPoleEnv()
assert not has_wrapper(env, OrderEnforcing)
@@ -22,19 +25,19 @@ def test_order_enforcing():
# Assert that the order enforcing works for step and render before reset
order_enforced_env = OrderEnforcing(env)
assert order_enforced_env._has_reset is False
- with pytest.raises(AssertionError):
+ with pytest.raises(ResetNeeded):
order_enforced_env.step(0)
- with pytest.raises(AssertionError):
- order_enforced_env.render()
+ with pytest.raises(ResetNeeded):
+ order_enforced_env.render(mode="rgb_array")
+ assert order_enforced_env._has_reset is False
# Assert that the Assertion errors are not raised after reset
order_enforced_env.reset()
assert order_enforced_env._has_reset is True
order_enforced_env.step(0)
- order_enforced_env.render()
+ order_enforced_env.render(mode="rgb_array")
- # Assert that with disable_render_order_enforcing works
+ # Assert that with disable_render_order_enforcing works, the environment has already been reset
env = CartPoleEnv()
- env.disable_render_order_enforcing = True
- env = OrderEnforcing(env)
- env.render() # no assertion error
+ env = OrderEnforcing(env, disable_render_order_enforcing=True)
+ env.render(mode="rgb_array") # no assertion error
diff --git a/tests/wrappers/utils.py b/tests/wrappers/utils.py
index 63cee5a922a..55db8873af0 100644
--- a/tests/wrappers/utils.py
+++ b/tests/wrappers/utils.py
@@ -1,9 +1,7 @@
-from typing import Union
-
import gym
-def has_wrapper(wrapped_env: Union[gym.Wrapper, gym.Env], wrapper_type: type):
+def has_wrapper(wrapped_env: gym.Env, wrapper_type: type) -> bool:
while isinstance(wrapped_env, gym.Wrapper):
if isinstance(wrapped_env, wrapper_type):
return True
Add render order enforcement, as most gym environments cannot be visualised before `reset` has been called
Inspired by https://github.com/openai/gym/issues/2751
As some users may want to render before reset as a feature of their environments, I have added a `disable_render_order_enforcing` setting; when it is set, this check is skipped
Added tests to verify these features are working
Deleted an old test that did not seem to test anything that `OrderEnforcing` actually implemented.
This has also changed the error raised from `AssertionError` to `ResetNeeded` | https://api.github.com/repos/openai/gym/pulls/2805 | 2022-05-06T13:25:34Z | 2022-05-18T14:07:54Z | 2022-05-18T14:07:54Z | 2022-05-18T14:07:55Z | 1,570 | openai/gym | 5,258 |
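A usage sketch for the `OrderEnforcing` change in the gym PR above (requires the patched gym; illustrative only):

```python
from gym.envs.classic_control import CartPoleEnv
from gym.error import ResetNeeded
from gym.wrappers import OrderEnforcing

env = OrderEnforcing(CartPoleEnv())
try:
    env.render(mode="rgb_array")     # before reset -> ResetNeeded
except ResetNeeded as e:
    print(e)

env.reset()
env.render(mode="rgb_array")         # fine after reset

# Opt out of render-order checking only:
env = OrderEnforcing(CartPoleEnv(), disable_render_order_enforcing=True)
env.render(mode="rgb_array")
```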
Fixed #31628 -- Updated Windows install guide to recommend venv. | diff --git a/docs/howto/windows.txt b/docs/howto/windows.txt
index c0750ab71323c..9d67bd9e5ef12 100644
--- a/docs/howto/windows.txt
+++ b/docs/howto/windows.txt
@@ -5,11 +5,10 @@ How to install Django on Windows
.. highlight:: doscon
This document will guide you through installing Python 3.7 and Django on
-Windows. It also provides instructions for installing `virtualenv`_ and
-`virtualenvwrapper`_, which make it easier to work on Python projects. This is
-meant as a beginner's guide for users working on Django projects and does not
-reflect how Django should be installed when developing patches for Django
-itself.
+Windows. It also provides instructions for setting up a virtual environment,
+which makes it easier to work on Python projects. This is meant as a beginner's
+guide for users working on Django projects and does not reflect how Django
+should be installed when developing patches for Django itself.
The steps in this guide have been tested with Windows 7, 8, and 10. In other
versions, the steps would be similar. You will need to be familiar with using
@@ -49,30 +48,34 @@ get-pip.py`` instructions.
.. _pip: https://pypi.org/project/pip/
-.. _virtualenvwrapper-win:
+.. _virtualenvironment:
-Install ``virtualenv`` and ``virtualenvwrapper``
-================================================
+Setting up a virtual environment
+================================
-`virtualenv`_ and `virtualenvwrapper`_ provide a dedicated environment for
-each Django project you create. While not mandatory, this is considered a best
-practice and will save you time in the future when you're ready to deploy your
-project. To do this, run::
+It is best practice to provide a dedicated environment for each Django project
+you create. There are many options to manage environments and packages within
+the Python ecosystem, some of which are recommended in the `Python
+documentation <https://packaging.python.org/guides/tool-recommendations/>`_.
+Python itself comes with `venv`_ for managing environments which we will use
+for this guide.
- ...\> py -m pip install virtualenvwrapper-win
+To create a virtual environment for your project, open a new command prompt,
+navigate to the folder where you want to create your project and then enter the
+following::
-Then create a virtual environment for your project::
+ ...\> py -m venv project-name
- ...\> mkvirtualenv myproject
+This will create a folder called 'project-name' if it does not already exist
+and setup the virtual environment. To activate the environment, run::
-The virtual environment will be activated automatically and you'll see
-"(myproject)" next to the command prompt to designate that. If you start a new
-command prompt, you'll need to activate the environment again using::
+ ...\> project-name\Scripts\activate.bat
- ...\> workon myproject
+The virtual environment will be activated and you'll see "(project-name)" next
+to the command prompt to designate that. Each time you start a new command
+prompt, you'll need to activate the environment again.
-.. _virtualenv: https://pypi.org/project/virtualenv/
-.. _virtualenvwrapper: https://pypi.org/project/virtualenvwrapper-win/
+.. _venv: https://docs.python.org/3/tutorial/venv.html
Install Django
==============
diff --git a/docs/intro/contributing.txt b/docs/intro/contributing.txt
index 06cc061670ddb..78d904440b431 100644
--- a/docs/intro/contributing.txt
+++ b/docs/intro/contributing.txt
@@ -163,13 +163,6 @@ more convenient.
...\> %HOMEPATH%\.virtualenvs\djangodev\Scripts\activate.bat
- or you can install :ref:`a Windows version of virtualenvwrapper
- <virtualenvwrapper-win>` and then use:
-
- .. code-block:: doscon
-
- ...\> workon djangodev
-
__ https://virtualenvwrapper.readthedocs.io/en/latest/
The name of the currently activated virtual environment is displayed on the
| [Ticket #31628](https://code.djangoproject.com/ticket/31628) | https://api.github.com/repos/django/django/pulls/12981 | 2020-05-26T21:15:50Z | 2020-05-27T09:18:24Z | 2020-05-27T09:18:24Z | 2020-05-27T09:43:31Z | 949 | django/django | 50,987 |
Custom api_base for GeminiPro | diff --git a/g4f/Provider/GeminiPro.py b/g4f/Provider/GeminiPro.py
index e1738dc8f0..87ded3ac39 100644
--- a/g4f/Provider/GeminiPro.py
+++ b/g4f/Provider/GeminiPro.py
@@ -13,6 +13,7 @@ class GeminiPro(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://ai.google.dev"
working = True
supports_message_history = True
+ needs_auth = True
default_model = "gemini-pro"
models = ["gemini-pro", "gemini-pro-vision"]
@@ -24,19 +25,27 @@ async def create_async_generator(
stream: bool = False,
proxy: str = None,
api_key: str = None,
+ api_base: str = None,
image: ImageType = None,
**kwargs
) -> AsyncResult:
model = "gemini-pro-vision" if not model and image else model
model = cls.get_model(model)
+
if not api_key:
- raise MissingAuthError('Missing "api_key" for auth')
- headers = {
- "Content-Type": "application/json",
- }
+ raise MissingAuthError('Missing "api_key"')
+ if not api_base:
+ api_base = f"https://generativelanguage.googleapis.com/v1beta"
+
+ method = "streamGenerateContent" if stream else "generateContent"
+ url = f"{api_base.rstrip('/')}/models/{model}:{method}"
+ headers = None
+ if api_base:
+ headers = {f"Authorization": "Bearer {api_key}"}
+ else:
+ url += f"?key={api_key}"
+
async with ClientSession(headers=headers) as session:
- method = "streamGenerateContent" if stream else "generateContent"
- url = f"https://generativelanguage.googleapis.com/v1beta/models/{model}:{method}"
contents = [
{
"role": "model" if message["role"] == "assistant" else message["role"],
@@ -62,7 +71,7 @@ async def create_async_generator(
"topK": kwargs.get("top_k"),
}
}
- async with session.post(url, params={"key": api_key}, json=data, proxy=proxy) as response:
+ async with session.post(url, json=data, proxy=proxy) as response:
if not response.ok:
data = await response.json()
raise RuntimeError(data[0]["error"]["message"])
@@ -73,12 +82,11 @@ async def create_async_generator(
lines = [b"{\n"]
elif chunk == b",\r\n" or chunk == b"]":
try:
- data = b"".join(lines)
- data = json.loads(data)
+ data = json.loads(b"".join(lines))
yield data["candidates"][0]["content"]["parts"][0]["text"]
except:
data = data.decode() if isinstance(data, bytes) else data
- raise RuntimeError(f"Read text failed. data: {data}")
+ raise RuntimeError(f"Read chunk failed: {data}")
lines = []
else:
lines.append(chunk)
diff --git a/g4f/Provider/Liaobots.py b/g4f/Provider/Liaobots.py
index e93642ba71..54bf7f2e7a 100644
--- a/g4f/Provider/Liaobots.py
+++ b/g4f/Provider/Liaobots.py
@@ -78,7 +78,7 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
supports_gpt_35_turbo = True
supports_gpt_4 = True
default_model = "gpt-3.5-turbo"
- models = [m for m in models]
+ models = list(models)
model_aliases = {
"claude-v2": "claude-2"
}
diff --git a/g4f/gui/client/css/style.css b/g4f/gui/client/css/style.css
index bd42280ded..bed54f88e4 100644
--- a/g4f/gui/client/css/style.css
+++ b/g4f/gui/client/css/style.css
@@ -541,7 +541,6 @@ label[for="camera"] {
display: flex;
align-items: center;
gap: 16px;
- padding-right: 15px
}
.field .about {
@@ -569,7 +568,16 @@ select {
padding: 8px 16px;
appearance: none;
- width: 250px;
+ width: 160px;
+}
+
+@media only screen and (min-width: 40em) {
+ select {
+ width: 200px;
+ }
+ .field {
+ padding-right: 15px
+ }
}
.input-box {
diff --git a/g4f/gui/client/js/chat.v1.js b/g4f/gui/client/js/chat.v1.js
index 9585ca98ae..c727dbf999 100644
--- a/g4f/gui/client/js/chat.v1.js
+++ b/g4f/gui/client/js/chat.v1.js
@@ -3,7 +3,6 @@ const markdown = window.markdownit();
const message_box = document.getElementById(`messages`);
const message_input = document.getElementById(`message-input`);
const box_conversations = document.querySelector(`.top`);
-const spinner = box_conversations.querySelector(".spinner");
const stop_generating = document.querySelector(`.stop_generating`);
const regenerate = document.querySelector(`.regenerate`);
const send_button = document.querySelector(`#send-button`);
@@ -71,6 +70,7 @@ const handle_ask = async () => {
message_input.style.height = `82px`;
message_input.focus();
window.scrollTo(0, 0);
+
message = message_input.value
if (message.length > 0) {
message_input.value = '';
@@ -268,6 +268,11 @@ const ask_gpt = async () => {
}
}
if (!error) {
+ // Remove cursor
+ html = markdown_render(text);
+ content_inner.innerHTML = html;
+ highlight(content_inner);
+
if (imageInput) imageInput.value = "";
if (cameraInput) cameraInput.value = "";
if (fileInput) fileInput.value = "";
@@ -275,26 +280,28 @@ const ask_gpt = async () => {
} catch (e) {
console.error(e);
- if (e.name != `AbortError`) {
- text = `oops ! something went wrong, please try again / reload. [stacktrace in console]`;
+ if (e.name != "AbortError") {
+ error = true;
+ text = "oops ! something went wrong, please try again / reload. [stacktrace in console]";
content_inner.innerHTML = text;
} else {
content_inner.innerHTML += ` [aborted]`;
text += ` [aborted]`
}
}
- let cursorDiv = document.getElementById(`cursor`);
- if (cursorDiv) cursorDiv.parentNode.removeChild(cursorDiv);
- if (text) {
+ if (!error) {
await add_message(window.conversation_id, "assistant", text, provider);
+ await load_conversation(window.conversation_id);
+ } else {
+ let cursorDiv = document.getElementById(`cursor`);
+ if (cursorDiv) cursorDiv.parentNode.removeChild(cursorDiv);
}
- await load_conversation(window.conversation_id);
message_box.scrollTop = message_box.scrollHeight;
await remove_cancel_button();
await register_remove_message();
prompt_lock = false;
window.scrollTo(0, 0);
- await load_conversations(20, 0);
+ await load_conversations();
regenerate.classList.remove(`regenerate-hidden`);
};
@@ -353,7 +360,7 @@ const delete_conversation = async (conversation_id) => {
await new_conversation();
}
- await load_conversations(20, 0, true);
+ await load_conversations();
};
const set_conversation = async (conversation_id) => {
@@ -362,7 +369,7 @@ const set_conversation = async (conversation_id) => {
await clear_conversation();
await load_conversation(conversation_id);
- await load_conversations(20, 0, true);
+ await load_conversations();
};
const new_conversation = async () => {
@@ -370,7 +377,7 @@ const new_conversation = async () => {
window.conversation_id = uuid();
await clear_conversation();
- await load_conversations(20, 0, true);
+ await load_conversations();
await say_hello()
};
@@ -435,14 +442,14 @@ function count_words(text) {
}
function count_tokens(model, text) {
- if (model.startsWith("gpt-3") || model.startsWith("gpt-4")) {
- return GPTTokenizer_cl100k_base?.encode(text).length
+ if (model.startsWith("gpt-3") || model.startsWith("gpt-4") || model.startsWith("text-davinci")) {
+ return GPTTokenizer_cl100k_base?.encode(text).length;
}
if (model.startsWith("llama2") || model.startsWith("codellama")) {
- return llamaTokenizer?.encode(text).length
+ return llamaTokenizer?.encode(text).length;
}
if (model.startsWith("mistral") || model.startsWith("mixtral")) {
- return mistralTokenizer?.encode(text).length
+ return mistralTokenizer?.encode(text).length;
}
}
@@ -526,7 +533,7 @@ const add_message = async (conversation_id, role, content, provider) => {
return conversation.items.length - 1;
};
-const load_conversations = async (limit, offset, loader) => {
+const load_conversations = async () => {
let conversations = [];
for (let i = 0; i < localStorage.length; i++) {
if (localStorage.key(i).startsWith("conversation:")) {
@@ -550,7 +557,6 @@ const load_conversations = async (limit, offset, loader) => {
</div>
`;
}
-
};
document.getElementById(`cancelButton`).addEventListener(`click`, async () => {
@@ -693,10 +699,8 @@ window.onload = async () => {
}
}
- if (conversations == 0) localStorage.clear();
-
await setTimeout(() => {
- load_conversations(20, 0);
+ load_conversations();
}, 1);
if (/\/chat\/.+/.test(window.location.href)) {
@@ -776,15 +780,17 @@ observer.observe(message_input, { attributes: true });
versions = await response.json()
document.title = 'g4f - gui - ' + versions["version"];
- text = "version ~ "
+ let text = "version ~ "
if (versions["version"] != versions["latest_version"]) {
- release_url = 'https://github.com/xtekky/gpt4free/releases/tag/' + versions["latest_version"];
- text += '<a href="' + release_url +'" target="_blank" title="New version: ' + versions["latest_version"] +'">' + versions["version"] + ' 🆕</a>';
+ let release_url = 'https://github.com/xtekky/gpt4free/releases/tag/' + versions["latest_version"];
+ let title = `New version: ${versions["latest_version"]}`;
+ text += `<a href="${release_url}" target="_blank" title="${title}">${versions["version"]} 🆕</a>`;
} else {
text += versions["version"];
}
document.getElementById("version_text").innerHTML = text
})()
+
for (const el of [imageInput, cameraInput]) {
el.addEventListener('click', async () => {
el.value = '';
@@ -794,6 +800,7 @@ for (const el of [imageInput, cameraInput]) {
}
});
}
+
fileInput.addEventListener('click', async (event) => {
fileInput.value = '';
delete fileInput.dataset.text;
| https://api.github.com/repos/xtekky/gpt4free/pulls/1630 | 2024-02-25T08:42:10Z | 2024-02-25T20:34:21Z | 2024-02-25T20:34:21Z | 2024-02-25T20:34:22Z | 2,694 | xtekky/gpt4free | 38,159 |
|
[serve] Restore "Get new handle to controller if killed" (#23283) | diff --git a/dashboard/optional_utils.py b/dashboard/optional_utils.py
index d36469ffbff51..c0fd147eb59c1 100644
--- a/dashboard/optional_utils.py
+++ b/dashboard/optional_utils.py
@@ -270,6 +270,7 @@ async def decorator(self, *args, **kwargs):
if connect_to_serve:
serve.start(detached=True, _override_controller_namespace="serve")
+
return await f(self, *args, **kwargs)
except Exception as e:
logger.exception(f"Unexpected error in handler: {e}")
diff --git a/python/ray/serve/api.py b/python/ray/serve/api.py
index 0f900ee6d55aa..77fc82b41c16d 100644
--- a/python/ray/serve/api.py
+++ b/python/ray/serve/api.py
@@ -26,6 +26,7 @@
)
from fastapi import APIRouter, FastAPI
+from ray.exceptions import RayActorError
from ray.experimental.dag.class_node import ClassNode
from ray.experimental.dag.function_node import FunctionNode
from starlette.requests import Request
@@ -82,7 +83,7 @@
_INTERNAL_REPLICA_CONTEXT = None
-_global_client = None
+_global_client: "Client" = None
_UUID_RE = re.compile(
"[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89aAbB][a-f0-9]{3}-[a-f0-9]{12}"
@@ -118,9 +119,28 @@ def _get_controller_namespace(
return controller_namespace
-def internal_get_global_client(_override_controller_namespace: Optional[str] = None):
- if _global_client is not None:
- return _global_client
+def internal_get_global_client(
+ _override_controller_namespace: Optional[str] = None,
+ _health_check_controller: bool = False,
+) -> "Client":
+ """Gets the global client, which stores the controller's handle.
+
+ Args:
+ _override_controller_namespace (Optional[str]): If None and there's no
+ cached client, searches for the controller in this namespace.
+ _health_check_controller (bool): If True, run a health check on the
+ cached controller if it exists. If the check fails, try reconnecting
+ to the controller.
+ """
+
+ try:
+ if _global_client is not None:
+ if _health_check_controller:
+ ray.get(_global_client._controller.check_alive.remote())
+ return _global_client
+ except RayActorError:
+ logger.info("The cached controller has died. Reconnecting.")
+ _set_global_client(None)
return _connect(_override_controller_namespace=_override_controller_namespace)
@@ -711,7 +731,8 @@ def start(
try:
client = internal_get_global_client(
- _override_controller_namespace=_override_controller_namespace
+ _override_controller_namespace=_override_controller_namespace,
+ _health_check_controller=True,
)
logger.info(
"Connecting to existing Serve instance in namespace "
diff --git a/python/ray/serve/controller.py b/python/ray/serve/controller.py
index 194fa36970184..d7e0dbdab3642 100644
--- a/python/ray/serve/controller.py
+++ b/python/ray/serve/controller.py
@@ -110,6 +110,10 @@ async def __init__(
asyncio.get_event_loop().create_task(self.run_control_loop())
+ def check_alive(self) -> None:
+ """No-op to check if this controller is alive."""
+ return
+
def record_autoscaling_metrics(self, data: Dict[str, float], send_timestamp: float):
self.autoscaling_metrics_store.add_metrics_point(data, send_timestamp)
diff --git a/python/ray/serve/tests/test_cli.py b/python/ray/serve/tests/test_cli.py
index b7564aca1830f..3d8182f4b0ff6 100644
--- a/python/ray/serve/tests/test_cli.py
+++ b/python/ray/serve/tests/test_cli.py
@@ -136,6 +136,7 @@ def test_deploy(ray_start_stop):
== "Hello shallow world!"
)
+ serve.shutdown()
ray.shutdown()
@@ -371,5 +372,43 @@ def test_run_runtime_env(ray_start_stop):
p.wait()
+@pytest.mark.skipif(sys.platform == "win32", reason="File path incorrect on Windows.")
+@pytest.mark.parametrize("use_command", [True, False])
+def test_idempotence_after_controller_death(ray_start_stop, use_command: bool):
+ """Check that CLI is idempotent even if controller dies."""
+
+ config_file_name = os.path.join(
+ os.path.dirname(__file__), "test_config_files", "two_deployments.yaml"
+ )
+ success_message_fragment = b"Sent deploy request successfully!"
+ deploy_response = subprocess.check_output(["serve", "deploy", config_file_name])
+ assert success_message_fragment in deploy_response
+
+ ray.init(address="auto", namespace="serve")
+ serve.start(detached=True)
+ assert len(serve.list_deployments()) == 2
+
+ # Kill controller
+ if use_command:
+ subprocess.check_output(["serve", "shutdown"])
+ else:
+ serve.shutdown()
+
+ info_response = subprocess.check_output(["serve", "config"])
+ info = yaml.safe_load(info_response)
+
+ assert "deployments" in info
+ assert len(info["deployments"]) == 0
+
+ deploy_response = subprocess.check_output(["serve", "deploy", config_file_name])
+ assert success_message_fragment in deploy_response
+
+ # Restore testing controller
+ serve.start(detached=True)
+ assert len(serve.list_deployments()) == 2
+ serve.shutdown()
+ ray.shutdown()
+
+
if __name__ == "__main__":
sys.exit(pytest.main(["-v", "-s", __file__]))
diff --git a/python/ray/serve/tests/test_standalone2.py b/python/ray/serve/tests/test_standalone2.py
index a3d4ecb633ed8..ce044f35fc674 100644
--- a/python/ray/serve/tests/test_standalone2.py
+++ b/python/ray/serve/tests/test_standalone2.py
@@ -1,11 +1,13 @@
import sys
import pytest
+from ray.exceptions import RayActorError
import requests
import ray
from ray import serve
from ray.serve.api import internal_get_global_client
+from ray._private.test_utils import wait_for_condition
@pytest.fixture
@@ -75,5 +77,41 @@ def f(*args):
serve.shutdown()
+@pytest.mark.parametrize("detached", [True, False])
+def test_refresh_controller_after_death(shutdown_ray, detached):
+ """Check if serve.start() refreshes the controller handle if it's dead."""
+
+ ray_namespace = "ray_namespace"
+ controller_namespace = "controller_namespace"
+
+ ray.init(namespace=ray_namespace)
+ serve.shutdown() # Ensure serve isn't running before beginning the test
+ serve.start(detached=detached, _override_controller_namespace=controller_namespace)
+
+ old_handle = internal_get_global_client()._controller
+ ray.kill(old_handle, no_restart=True)
+
+ def controller_died(handle):
+ try:
+ ray.get(handle.check_alive.remote())
+ return False
+ except RayActorError:
+ return True
+
+ wait_for_condition(controller_died, handle=old_handle, timeout=15)
+
+ # Call start again to refresh handle
+ serve.start(detached=detached, _override_controller_namespace=controller_namespace)
+
+ new_handle = internal_get_global_client()._controller
+ assert new_handle is not old_handle
+
+ # Health check should not error
+ ray.get(new_handle.check_alive.remote())
+
+ serve.shutdown()
+ ray.shutdown()
+
+
if __name__ == "__main__":
sys.exit(pytest.main(["-v", "-s", __file__]))
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
#23336 reverted #23283. #23283 did pass CI before merging. However, when it merged, it began to fail because it used commands that were outdated on the Master branch in `test_cli.py` (specifically `serve info` instead of `serve config`). This change restores #23283 and updates its tests commands.
## Related issue number
<!-- For example: "Closes #1234" -->
#23336 and #23283.
## Checks
- [X] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [X] Unit tests
- `test_cli.py` is updated to use `serve config` instead of `serve info`.
| https://api.github.com/repos/ray-project/ray/pulls/23338 | 2022-03-18T19:22:02Z | 2022-03-18T23:40:09Z | 2022-03-18T23:40:09Z | 2022-03-18T23:40:09Z | 1,794 | ray-project/ray | 19,138 |
Prepare 2.7.3 release | diff --git a/paddleocr.py b/paddleocr.py
index 0619300c88..8e4359df50 100644
--- a/paddleocr.py
+++ b/paddleocr.py
@@ -59,7 +59,7 @@ def _import_file(module_name, file_path, make_importable=False):
]
SUPPORT_DET_MODEL = ['DB']
-VERSION = '2.7.2'
+VERSION = '2.7.3'
SUPPORT_REC_MODEL = ['CRNN', 'SVTR_LCNet']
BASE_DIR = os.path.expanduser("~/.paddleocr/")
diff --git a/ppocr/utils/utility.py b/ppocr/utils/utility.py
index 91b74f022d..05c92a8d0a 100755
--- a/ppocr/utils/utility.py
+++ b/ppocr/utils/utility.py
@@ -108,7 +108,8 @@ def check_and_read(img_path):
return imgvalue, True, False
elif os.path.basename(img_path)[-3:].lower() == 'pdf':
from paddle.utils import try_import
- try_import('fitz')
+
+ fitz = try_import("fitz")
from PIL import Image
imgs = []
with fitz.open(img_path) as pdf:
diff --git a/ppstructure/pdf2word/pdf2word.py b/ppstructure/pdf2word/pdf2word.py
index d5d715ed66..2c21b661dd 100644
--- a/ppstructure/pdf2word/pdf2word.py
+++ b/ppstructure/pdf2word/pdf2word.py
@@ -21,9 +21,9 @@
import cv2
import platform
import numpy as np
-import fitz
from paddle.utils import try_import
-try_import('fitz')
+fitz = try_import("fitz")
+
from PIL import Image
from pdf2docx.converter import Converter
from qtpy.QtWidgets import QApplication, QWidget, QPushButton, QProgressBar, \
| cherry-pick #11820 and bump version to 2.7.3 | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/11826 | 2024-03-28T03:42:28Z | 2024-03-28T03:43:15Z | 2024-03-28T03:43:15Z | 2024-03-28T03:43:15Z | 423 | PaddlePaddle/PaddleOCR | 42,723 |
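A sketch of the pattern applied by the cherry-picked fix in the PaddleOCR PR above: bind `try_import`'s return value so the module object is actually usable (the file path below is hypothetical):

```python
from paddle.utils import try_import

fitz = try_import("fitz")  # returns the imported module; binding it matters

with fitz.open("example.pdf") as pdf:
    print(len(pdf))  # number of pages
```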
Fixed #34424 -- Fixed SelectDateWidget crash for inputs raising OverflowError. | diff --git a/django/forms/widgets.py b/django/forms/widgets.py
index 842b73e95c4aa..3d6091c250525 100644
--- a/django/forms/widgets.py
+++ b/django/forms/widgets.py
@@ -1161,6 +1161,8 @@ def value_from_datadict(self, data, files, name):
# Return pseudo-ISO dates with zeros for any unselected values,
# e.g. '2017-0-23'.
return "%s-%s-%s" % (y or 0, m or 0, d or 0)
+ except OverflowError:
+ return "0-0-0"
return date_value.strftime(input_format)
return data.get(name)
diff --git a/tests/forms_tests/field_tests/test_datefield.py b/tests/forms_tests/field_tests/test_datefield.py
index a9f93f40ed20b..65ac76319d1f4 100644
--- a/tests/forms_tests/field_tests/test_datefield.py
+++ b/tests/forms_tests/field_tests/test_datefield.py
@@ -1,3 +1,4 @@
+import sys
from datetime import date, datetime
from django.core.exceptions import ValidationError
@@ -36,6 +37,17 @@ def test_form_field(self):
d = GetDate({"mydate_month": "1", "mydate_day": "1", "mydate_year": "2010"})
self.assertIn('<label for="id_mydate_month">', d.as_p())
+ # Inputs raising an OverflowError.
+ e = GetDate(
+ {
+ "mydate_month": str(sys.maxsize + 1),
+ "mydate_day": "31",
+ "mydate_year": "2010",
+ }
+ )
+ self.assertIs(e.is_valid(), False)
+ self.assertEqual(e.errors, {"mydate": ["Enter a valid date."]})
+
@translation.override("nl")
def test_l10n_date_changed(self):
"""
@@ -149,6 +161,8 @@ def test_datefield_1(self):
f.clean("200a-10-25")
with self.assertRaisesMessage(ValidationError, "'Enter a valid date.'"):
f.clean("25/10/06")
+ with self.assertRaisesMessage(ValidationError, "'Enter a valid date.'"):
+ f.clean("0-0-0")
with self.assertRaisesMessage(ValidationError, "'This field is required.'"):
f.clean(None)
diff --git a/tests/forms_tests/widget_tests/test_selectdatewidget.py b/tests/forms_tests/widget_tests/test_selectdatewidget.py
index cfcd03798768a..215c41a809c04 100644
--- a/tests/forms_tests/widget_tests/test_selectdatewidget.py
+++ b/tests/forms_tests/widget_tests/test_selectdatewidget.py
@@ -1,3 +1,4 @@
+import sys
from datetime import date
from django.forms import DateField, Form, SelectDateWidget
@@ -610,6 +611,7 @@ def test_value_from_datadict(self):
((None, "12", "1"), None),
(("2000", None, "1"), None),
(("2000", "12", None), None),
+ ((str(sys.maxsize + 1), "12", "1"), "0-0-0"),
]
for values, expected in tests:
with self.subTest(values=values):
| https://code.djangoproject.com/ticket/34424 | https://api.github.com/repos/django/django/pulls/16667 | 2023-03-20T21:32:53Z | 2023-03-22T07:33:05Z | 2023-03-22T07:33:04Z | 2023-03-22T07:33:05Z | 744 | django/django | 51,170 |
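A short illustration of why the Django fix above guards against `OverflowError`: constructing a date with a component beyond C's integer range raises `OverflowError` rather than `ValueError` (illustrative):

```python
import sys
from datetime import date

try:
    date(sys.maxsize + 1, 12, 1)
except OverflowError as e:
    print(e)  # e.g. "Python int too large to convert to C int"
```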
set adamw_mode default true (follows FusedAdam and < 0.3.11 logic) | diff --git a/deepspeed/ops/adam/cpu_adam.py b/deepspeed/ops/adam/cpu_adam.py
index 2b1be7e53de2..7977d232b1fa 100755
--- a/deepspeed/ops/adam/cpu_adam.py
+++ b/deepspeed/ops/adam/cpu_adam.py
@@ -74,7 +74,7 @@ def __init__(self,
self.opt_id = DeepSpeedCPUAdam.optimizer_id
DeepSpeedCPUAdam.optimizer_id = DeepSpeedCPUAdam.optimizer_id + 1
-
+ self.adam_w_mode = adamw_mode
self.ds_opt_adam = CPUAdamBuilder().load()
self.ds_opt_adam.create_adam(self.opt_id,
diff --git a/deepspeed/runtime/config.py b/deepspeed/runtime/config.py
index 4cc09a8e3bf1..11e1d4037c8e 100755
--- a/deepspeed/runtime/config.py
+++ b/deepspeed/runtime/config.py
@@ -40,6 +40,10 @@
# extra optimizer parameters for adam/adamw
TORCH_ADAM_PARAM = "torch_adam"
+# default to adamw logic for adam/adamw optimizers unless user explictly opts out
+ADAM_W_MODE = "adam_w_mode"
+ADAM_W_MODE_DEFAULT = True
+
class DeepSpeedConfigError(Exception):
pass
diff --git a/deepspeed/runtime/engine.py b/deepspeed/runtime/engine.py
index 7faddfe566ad..1462225ac2bd 100755
--- a/deepspeed/runtime/engine.py
+++ b/deepspeed/runtime/engine.py
@@ -22,7 +22,7 @@
from deepspeed.runtime.fp16.unfused_optimizer import FP16_UnfusedOptimizer
from deepspeed.runtime.config import DeepSpeedConfig, DEEPSPEED_OPTIMIZERS, \
ADAM_OPTIMIZER, ADAMW_OPTIMIZER, LAMB_OPTIMIZER, ONEBIT_ADAM_OPTIMIZER, \
- TORCH_ADAM_PARAM
+ TORCH_ADAM_PARAM, ADAM_W_MODE, ADAM_W_MODE_DEFAULT
from deepspeed.runtime.dataloader import DeepSpeedDataLoader
from deepspeed.runtime.constants import \
@@ -640,26 +640,30 @@ def _configure_basic_optimizer(self, model_parameters):
if self.optimizer_name() in [ADAM_OPTIMIZER, ADAMW_OPTIMIZER]:
torch_adam = optimizer_parameters.pop(TORCH_ADAM_PARAM, False)
- adam_w_mode = self.optimizer_name() == ADAMW_OPTIMIZER
- # zero-offload torch-adam adam_w_mode optimizer
- # T|F T T torch.optim.AdamW
- # T|F T F torch.optim.Adam
- # T F T|F DeepSpeedCPUAdam(adam_w_mode)
- # F F T|F FusedAdam(adam_w_mode)
+ adam_w_mode = optimizer_parameters.pop(ADAM_W_MODE, ADAM_W_MODE_DEFAULT)
+
+ # Optimizer name of Adam forces AdamW logic unless adam_w_mode is explictly set
+ effective_adam_w_mode = self.optimizer_name(
+ ) == ADAMW_OPTIMIZER or adam_w_mode
+
if torch_adam:
- if adam_w_mode:
- optimizer = torch.optim.AdamW(model_parameters,
- **optimizer_parameters)
- else:
+ if not effective_adam_w_mode:
optimizer = torch.optim.Adam(model_parameters,
**optimizer_parameters)
- elif self.zero_cpu_offload():
- optimizer = DeepSpeedCPUAdam(model_parameters,
- **optimizer_parameters,
- adamw_mode=adam_w_mode)
+ else:
+ optimizer = torch.optim.AdamW(model_parameters,
+ **optimizer_parameters)
else:
- optimizer_parameters['adam_w_mode'] = adam_w_mode
- optimizer = FusedAdam(model_parameters, **optimizer_parameters)
+ if self.zero_cpu_offload():
+ from deepspeed.ops.adam import DeepSpeedCPUAdam
+ optimizer = DeepSpeedCPUAdam(model_parameters,
+ **optimizer_parameters,
+ adamw_mode=effective_adam_w_mode)
+ else:
+ from deepspeed.ops.adam import FusedAdam
+ optimizer = FusedAdam(model_parameters,
+ **optimizer_parameters,
+ adam_w_mode=effective_adam_w_mode)
elif self.optimizer_name() == LAMB_OPTIMIZER:
from deepspeed.ops.lamb import FusedLamb
diff --git a/tests/unit/test_adamw.py b/tests/unit/test_adamw.py
new file mode 100644
index 000000000000..83e0b5436546
--- /dev/null
+++ b/tests/unit/test_adamw.py
@@ -0,0 +1,73 @@
+import deepspeed
+import torch
+import pytest
+
+from common import distributed_test
+from deepspeed.ops.adam import FusedAdam
+from deepspeed.ops.adam import DeepSpeedCPUAdam
+from simple_model import SimpleModel, args_from_dict
+
+# yapf: disable
+#'optimizer, zero_offload, torch_adam, adam_w_mode, resulting_optimizer
+adam_configs = [["AdamW", False, False, False, (FusedAdam, True)],
+ ["AdamW", False, True, False, (torch.optim.AdamW, None)],
+ ["AdamW", True, False, False, (DeepSpeedCPUAdam, True)],
+ ["AdamW", True, True, False, (torch.optim.AdamW, None)],
+ ["AdamW", False, False, True, (FusedAdam, True)],
+ ["AdamW", False, True, True, (torch.optim.AdamW, None)],
+ ["AdamW", True, False, True, (DeepSpeedCPUAdam, True)],
+ ["AdamW", True, True, True, (torch.optim.AdamW, None)],
+ ["Adam", False, False, False, (FusedAdam, False)],
+ ["Adam", False, True, False, (torch.optim.Adam, None)],
+ ["Adam", True, False, False, (DeepSpeedCPUAdam, False)],
+ ["Adam", True, True, False, (torch.optim.Adam, None)],
+ ["Adam", False, False, True, (FusedAdam, True)],
+ ["Adam", False, True, True, (torch.optim.AdamW, None)],
+ ["Adam", True, False, True, (DeepSpeedCPUAdam, True)],
+ ["Adam", True, True, True, (torch.optim.AdamW, None)]]
+
+@pytest.mark.parametrize(
+ 'optimizer, zero_offload, torch_adam, adam_w_mode, resulting_optimizer',
+ adam_configs)
+def test_adam_configs(tmpdir,
+ optimizer,
+ zero_offload,
+ torch_adam,
+ adam_w_mode,
+ resulting_optimizer):
+ config_dict = {
+ "train_batch_size": 2,
+ "steps_per_print": 1,
+ "optimizer": {
+ "type": optimizer,
+ "params": {
+ "lr": 0.00015,
+ "torch_adam": torch_adam,
+ "adam_w_mode": adam_w_mode
+ }
+ },
+ "gradient_clipping": 1.0,
+ "fp16": {
+ "enabled": True
+ },
+ "zero_optimization": {
+ "stage": 2,
+ "cpu_offload": zero_offload
+ }
+ }
+ args = args_from_dict(tmpdir, config_dict)
+
+ @distributed_test(world_size=[1])
+ def helper(args):
+ model = SimpleModel(10)
+ model, _, _, _ = deepspeed.initialize(args=args,
+ model=model,
+ model_parameters=model.parameters())
+ # get base optimizer under zero
+ ds_optimizer = model.optimizer.optimizer
+ opt_class, adam_w_mode = resulting_optimizer
+ assert isinstance(ds_optimizer, opt_class)
+ if adam_w_mode in [True, False]:
+ assert ds_optimizer.adam_w_mode == adam_w_mode
+
+ helper(args)
I was also going to update our docs related to this, but noticed they already indicate this behavior anyway.

The user can explicitly set `adam_w_mode=false` if they truly want pure Adam logic. | https://api.github.com/repos/microsoft/DeepSpeed/pulls/844 | 2021-03-09T21:54:16Z | 2021-03-11T02:02:09Z | 2021-03-11T02:02:09Z | 2021-03-11T02:02:13Z | 1,839 | microsoft/DeepSpeed | 10,163 |
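A hypothetical config fragment showing the opt-out described in the DeepSpeed PR above (values are illustrative, mirroring the new unit test):

```python
config = {
    "train_batch_size": 2,
    "optimizer": {
        "type": "Adam",
        # adam_w_mode now defaults to True; set it False for plain Adam logic.
        "params": {"lr": 0.00015, "adam_w_mode": False},
    },
}
```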
Normalize percent-encoded bytes before comparison | diff --git a/.travis.yml b/.travis.yml
index efb75ddec2..436f545fa5 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -38,5 +38,5 @@ jobs:
dist: xenial
sudo: true
- stage: coverage
- python: 3.6
+ python: '3.6'
script: codecov
diff --git a/Makefile b/Makefile
index 317a7c76fb..231ce35775 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
.PHONY: docs
init:
pip install pipenv --upgrade
- pipenv install --dev --skip-lock
+ pipenv install --dev
test:
# This runs all of the tests, on both Python 2 and Python 3.
detox
diff --git a/Pipfile b/Pipfile
index 3e0fd729eb..b6705a6c68 100644
--- a/Pipfile
+++ b/Pipfile
@@ -4,7 +4,7 @@ verify_ssl = true
name = "pypi"
[dev-packages]
-pytest = ">=2.8.0"
+pytest = ">=2.8.0,<4.1"
codecov = "*"
pytest-httpbin = ">=0.0.7"
pytest-mock = "*"
diff --git a/tests/test_requests.py b/tests/test_requests.py
index 4bc1924f03..89eff885a0 100644
--- a/tests/test_requests.py
+++ b/tests/test_requests.py
@@ -9,6 +9,7 @@
import collections
import contextlib
import warnings
+import re
import io
import requests
@@ -2418,9 +2419,17 @@ class TestPreparingURLs(object):
)
)
def test_preparing_url(self, url, expected):
+
+ def normalize_percent_encode(x):
+ # Helper function that normalizes equivalent
+ # percent-encoded bytes before comparisons
+ for c in re.findall(r'%[a-fA-F0-9]{2}', x):
+ x = x.replace(c, c.upper())
+ return x
+
r = requests.Request('GET', url=url)
p = r.prepare()
- assert p.url == expected
+ assert normalize_percent_encode(p.url) == expected
@pytest.mark.parametrize(
'url',
urllib3 replaced its URL parser with `rfc3986` for more compliant parsing (urllib3/urllib3#1487). We've also got an effort underway to run downstream tests before releases (urllib3/urllib3#1508), and during that process we've discovered that requests requires percent-encoded bytes to keep their casing.
Adding this to the unit tests would allow both pre- and post-rfc3986 versions of urllib3 to pass the requests unit tests. I don't know whether the `Request.url` attribute needs to maintain its casing in this situation; if it does, this is not the solution. Just putting this together as a potential solution. | https://api.github.com/repos/psf/requests/pulls/4915 | 2018-12-23T21:54:59Z | 2019-01-21T13:58:33Z | 2019-01-21T13:58:33Z | 2021-08-31T00:07:21Z | 559 | psf/requests | 32,119 |
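The normalization helper from the requests diff above, as a standalone sketch: uppercase the hex digits in percent-escapes so equivalent encodings compare equal:

```python
import re

def normalize_percent_encode(x):
    for c in re.findall(r"%[a-fA-F0-9]{2}", x):
        x = x.replace(c, c.upper())
    return x

assert normalize_percent_encode("/a%3ab") == normalize_percent_encode("/a%3Ab")
```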
updated README.md on subnet mask definition and example | diff --git a/README.md b/README.md
index f706bb5e1..80484e09c 100644
--- a/README.md
+++ b/README.md
@@ -144,27 +144,10 @@ An Internet Protocol address (IP address) is a numerical label assigned to each
<details>
<summary>Explain subnet mask and given an example</summary><br><b>
-A Subnet mask is a 32-bit number that masks an IP address, and divides the IP address into network address and host address. Subnet Mask is made by setting network bits to all "1"s and setting host bits to all "0"s. Within a given network, two host addresses are reserved for special purpose, and cannot be assigned to hosts. The "0" address is assigned a network address and "255" is assigned to a broadcast address, and they cannot be assigned to hosts.
+A Subnet mask is a 32-bit number that masks an IP address, and divides the IP address into network address and host address. Subnet Mask is made by setting network bits to all "1"s and setting host bits to all "0"s. Within a given network, out of the total usable host addresses, two are always reserved for specific purposes and cannot be allocated to any host. These are the first address, which is reserved as a network address (a.k.a network ID) and the last address used for network broadcast.
-**For Example**
+[Example](https://github.com/philemonnwanne/o0o0o/tree/main/exes/exe-09)
-```
-| Address Class | No of Network Bits | No of Host Bits | Subnet mask | CIDR notation |
-| ------------- | ------------------ | --------------- | --------------- | ------------- |
-| A | 8 | 24 | 255.0.0.0 | /8 |
-| A | 9 | 23 | 255.128.0.0 | /9 |
-| A | 12 | 20 | 255.240.0.0 | /12 |
-| A | 14 | 18 | 255.252.0.0 | /14 |
-| B | 16 | 16 | 255.255.0.0 | /16 |
-| B | 17 | 15 | 255.255.128.0 | /17 |
-| B | 20 | 12 | 255.255.240.0 | /20 |
-| B | 22 | 10 | 255.255.252.0 | /22 |
-| C | 24 | 8 | 255.255.255.0 | /24 |
-| C | 25 | 7 | 255.255.255.128 | /25 |
-| C | 28 | 4 | 255.255.255.240 | /28 |
-| C | 30 | 2 | 255.255.255.252 | /30 |
-
-```
</b></details>
<details>
| The statement **[255] is assigned to a broadcast address** could be a bit misleading, as this is not always the case in most networks. The subnet mask column in the example table also contradicts the former statement and could leave beginners totally confused. So instead of **255**, the **last address (on the host part)** of the network should have been used, and this can be any number within the range of **0 and 255** (see the worked example below). | https://api.github.com/repos/bregman-arie/devops-exercises/pulls/300 | 2022-10-04T10:22:16Z | 2022-10-04T10:38:50Z | 2022-10-04T10:38:50Z | 2022-10-04T10:43:58Z | 697 | bregman-arie/devops-exercises | 17,649 |
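The worked example referenced above, using the standard library (the addresses are illustrative):

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.64/26")
print(net.network_address)    # 192.168.1.64  -> network ID (first address)
print(net.broadcast_address)  # 192.168.1.127 -> broadcast (last address, not .255)
```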
json encode int8 and int16 | diff --git a/gym/utils/json_utils.py b/gym/utils/json_utils.py
index 6088d4ea966..4657dfc0368 100644
--- a/gym/utils/json_utils.py
+++ b/gym/utils/json_utils.py
@@ -10,6 +10,10 @@ def json_encode_np(obj):
return float(obj)
elif isinstance(obj, np.float64):
return float(obj)
+ elif isinstance(obj, np.int8):
+ return int(obj)
+ elif isinstance(obj, np.int16):
+ return int(obj)
elif isinstance(obj, np.int32):
return int(obj)
elif isinstance(obj, np.int64):
| Currently some of the numpy types are missing from the JSON encoding; this adds support for int8 and int16 (see the usage sketch below). | https://api.github.com/repos/openai/gym/pulls/1166 | 2018-09-14T19:37:38Z | 2018-09-14T23:05:27Z | 2018-09-14T23:05:27Z | 2018-09-14T23:05:27Z | 146 | openai/gym | 5,523 |
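The usage sketch referenced above. The call site is hypothetical; it assumes `json_encode_np` is passed as the `default` hook, which matches its one-argument signature in the diff:

```python
import json
import numpy as np
from gym.utils.json_utils import json_encode_np

payload = {"action": np.int8(3), "reward": np.float32(1.5)}
json.dumps(payload, default=json_encode_np)  # '{"action": 3, "reward": 1.5}'
```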
[RLlib] Issue 15973: Trainer.with_updates(validate_config=...) behaves confusingly. | diff --git a/rllib/agents/dqn/apex.py b/rllib/agents/dqn/apex.py
index e9476229e1c9f..65e486048e2df 100644
--- a/rllib/agents/dqn/apex.py
+++ b/rllib/agents/dqn/apex.py
@@ -21,7 +21,6 @@
from ray.rllib.agents.dqn.dqn import calculate_rr_weights, \
DEFAULT_CONFIG as DQN_CONFIG, DQNTrainer, validate_config
from ray.rllib.agents.dqn.learner_thread import LearnerThread
-from ray.rllib.agents.trainer import Trainer
from ray.rllib.evaluation.worker_set import WorkerSet
from ray.rllib.execution.common import (STEPS_TRAINED_COUNTER,
_get_global_vars, _get_shared_metrics)
@@ -77,7 +76,6 @@ class OverrideDefaultResourceRequest:
@override(Trainable)
def default_resource_request(cls, config):
cf = dict(cls._default_config, **config)
- Trainer._validate_config(cf)
eval_config = cf["evaluation_config"]
diff --git a/rllib/agents/impala/impala.py b/rllib/agents/impala/impala.py
index f2bc9512fbe94..b12ed5e9b88b1 100644
--- a/rllib/agents/impala/impala.py
+++ b/rllib/agents/impala/impala.py
@@ -2,7 +2,7 @@
import ray
from ray.rllib.agents.impala.vtrace_tf_policy import VTraceTFPolicy
-from ray.rllib.agents.trainer import Trainer, with_common_config
+from ray.rllib.agents.trainer import with_common_config
from ray.rllib.agents.trainer_template import build_trainer
from ray.rllib.execution.learner_thread import LearnerThread
from ray.rllib.execution.multi_gpu_learner import TFMultiGPULearner
@@ -103,7 +103,6 @@ class OverrideDefaultResourceRequest:
@override(Trainable)
def default_resource_request(cls, config):
cf = dict(cls._default_config, **config)
- Trainer._validate_config(cf)
eval_config = cf["evaluation_config"]
diff --git a/rllib/agents/qmix/qmix.py b/rllib/agents/qmix/qmix.py
index c2584378ded03..dc8802f42da55 100644
--- a/rllib/agents/qmix/qmix.py
+++ b/rllib/agents/qmix/qmix.py
@@ -95,6 +95,8 @@
"lstm_cell_size": 64,
"max_seq_len": 999999,
},
+ # Only torch supported so far.
+ "framework": "torch",
})
# __sphinx_doc_end__
# yapf: enable
diff --git a/rllib/agents/qmix/tests/test_qmix.py b/rllib/agents/qmix/tests/test_qmix.py
index 5c214a5d859e3..c5dad9d983df3 100644
--- a/rllib/agents/qmix/tests/test_qmix.py
+++ b/rllib/agents/qmix/tests/test_qmix.py
@@ -54,6 +54,14 @@ def step(self, action_dict):
class TestQMix(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls) -> None:
+ ray.init()
+
+ @classmethod
+ def tearDownClass(cls) -> None:
+ ray.shutdown()
+
def test_avail_actions_qmix(self):
grouping = {
"group_1": ["agent_1"], # trivial grouping for testing
@@ -65,7 +73,6 @@ def test_avail_actions_qmix(self):
lambda config: AvailActionsTestEnv(config).with_agent_groups(
grouping, obs_space=obs_space, act_space=act_space))
- ray.init()
agent = QMixTrainer(
env="action_mask_test",
config={
@@ -75,7 +82,7 @@ def test_avail_actions_qmix(self):
},
"framework": "torch",
})
- for _ in range(5):
+ for _ in range(4):
agent.train() # OK if it doesn't trip the action assertion error
assert agent.train()["episode_reward_mean"] == 21.0
agent.stop()
diff --git a/rllib/agents/trainer.py b/rllib/agents/trainer.py
index d17b50f4cf67d..2abfdec87f18f 100644
--- a/rllib/agents/trainer.py
+++ b/rllib/agents/trainer.py
@@ -553,7 +553,6 @@ def default_resource_request(
cls, config: PartialTrainerConfigDict) -> \
Union[Resources, PlacementGroupFactory]:
cf = dict(cls._default_config, **config)
- Trainer._validate_config(cf)
eval_config = cf["evaluation_config"]
diff --git a/rllib/agents/trainer_template.py b/rllib/agents/trainer_template.py
index 2d28703a2d6c3..73a3c86e4dac0 100644
--- a/rllib/agents/trainer_template.py
+++ b/rllib/agents/trainer_template.py
@@ -12,8 +12,8 @@
from ray.rllib.policy import Policy
from ray.rllib.utils import add_mixins
from ray.rllib.utils.annotations import override, DeveloperAPI
-from ray.rllib.utils.typing import EnvConfigDict, EnvType, ResultDict, \
- TrainerConfigDict
+from ray.rllib.utils.typing import EnvConfigDict, EnvType, \
+ PartialTrainerConfigDict, ResultDict, TrainerConfigDict
logger = logging.getLogger(__name__)
@@ -124,9 +124,6 @@ def __init__(self, config=None, env=None, logger_creator=None):
def _init(self, config: TrainerConfigDict,
env_creator: Callable[[EnvConfigDict], EnvType]):
- # Validate config via custom validation function.
- if validate_config:
- validate_config(config)
# No `get_policy_class` function.
if get_policy_class is None:
@@ -211,6 +208,16 @@ def fn(env, env_context, task_fn):
return res
+ @staticmethod
+ @override(Trainer)
+ def _validate_config(config: PartialTrainerConfigDict,
+ trainer_obj_or_none: Optional["Trainer"] = None):
+ # Call super (Trainer) validation method first.
+ Trainer._validate_config(config, trainer_obj_or_none)
+ # Then call user defined one, if any.
+ if validate_config is not None:
+ validate_config(config)
+
@override(Trainer)
def _before_evaluate(self):
if before_evaluate_fn:
diff --git a/rllib/examples/two_step_game.py b/rllib/examples/two_step_game.py
index b014d5e0351b3..737e7389437fb 100644
--- a/rllib/examples/two_step_game.py
+++ b/rllib/examples/two_step_game.py
@@ -55,6 +55,8 @@
if __name__ == "__main__":
args = parser.parse_args()
+ ray.init(num_cpus=args.num_cpus or None)
+
grouping = {
"group_1": [0, 1],
}
@@ -123,7 +125,6 @@
},
# Use GPUs iff `RLLIB_NUM_GPUS` env var set to > 0.
"num_gpus": int(os.environ.get("RLLIB_NUM_GPUS", "0")),
- "framework": args.framework,
}
group = True
else:
@@ -134,8 +135,6 @@
}
group = False
- ray.init(num_cpus=args.num_cpus or None)
-
stop = {
"episode_reward_mean": args.stop_reward,
"timesteps_total": args.stop_timesteps,
@@ -146,7 +145,7 @@
"env": "grouped_twostep" if group else TwoStepGame,
})
- results = tune.run(args.run, stop=stop, config=config, verbose=1)
+ results = tune.run(args.run, stop=stop, config=config, verbose=2)
if args.as_test:
check_learning_achieved(results, args.stop_reward)
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
Issue 15973: Trainer.with_updates(validate_config=...) behaves confusingly.
- Trainer._validate_config is called multiple times during initialization: 1) Trainer.setup(), 2) Trainer.default_resource_request (overridden by IMPALA and APEX), 3) trainer_template._init() (<- yet another time during setup, which is unnecessary).
- This PR removes all unnecessary calls to this method and makes sure that, when overridden by the user via `Trainer.with_updates(validate_config=...)`, only the user-defined method is used from here on.
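For context, a minimal sketch of the override this PR makes well-behaved (the trainer class and the check are illustrative; `with_updates` is available on trainers built via `build_trainer`):

```python
from ray.rllib.agents.ppo import PPOTrainer

def my_validate_config(config):
    # made-up user check; with this PR it runs once, on top of the base validation
    if config["num_workers"] < 0:
        raise ValueError("num_workers must be >= 0")

MyPPO = PPOTrainer.with_updates(validate_config=my_validate_config)
```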
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
Issue #15973
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
Closes #15973
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [x] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [x] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/16429 | 2021-06-15T10:30:56Z | 2021-06-19T20:42:00Z | 2021-06-19T20:42:00Z | 2021-06-19T20:42:00Z | 1,831 | ray-project/ray | 19,679 |
Dont create sample events for new projects | diff --git a/src/sentry/web/forms/add_project.py b/src/sentry/web/forms/add_project.py
index 79f458b42a54d..a17ca62553d6e 100644
--- a/src/sentry/web/forms/add_project.py
+++ b/src/sentry/web/forms/add_project.py
@@ -4,7 +4,6 @@
from django.utils.translation import ugettext_lazy as _
from sentry.models import AuditLogEntry, AuditLogEntryEvent, Project
-from sentry.utils.samples import create_sample_event
BLANK_CHOICE = [("", "")]
@@ -37,6 +36,4 @@ def save(self, actor, team, ip_address):
data=project.get_audit_log_data(),
)
- create_sample_event(project, platform='javascript')
-
return project
| This causes more confusion than value these days, since we have the robot-onboarding.
| https://api.github.com/repos/getsentry/sentry/pulls/2597 | 2016-01-25T20:43:20Z | 2016-01-25T21:17:51Z | 2016-01-25T21:17:51Z | 2020-12-23T20:55:37Z | 174 | getsentry/sentry | 44,474 |
Fix typo for store_serialized_dags config | diff --git a/airflow/config_templates/config.yml b/airflow/config_templates/config.yml
index 1b7530fc88383..5033c342e503d 100644
--- a/airflow/config_templates/config.yml
+++ b/airflow/config_templates/config.yml
@@ -321,7 +321,7 @@
default: "0"
- name: store_serialized_dags
description: |
- Whether to serialises DAGs and persist them in DB.
+ Whether to serialise DAGs and persist them in DB.
If set to True, Webserver reads from DB instead of parsing DAG files
More details: https://airflow.apache.org/docs/stable/dag-serialization.html
version_added: 1.10.7
diff --git a/airflow/config_templates/default_airflow.cfg b/airflow/config_templates/default_airflow.cfg
index a1d78dd4cfd74..dfd779056fffe 100644
--- a/airflow/config_templates/default_airflow.cfg
+++ b/airflow/config_templates/default_airflow.cfg
@@ -184,7 +184,7 @@ dag_discovery_safe_mode = True
# The number of retries each task is going to have by default. Can be overridden at dag or task level.
default_task_retries = 0
-# Whether to serialises DAGs and persist them in DB.
+# Whether to serialise DAGs and persist them in DB.
# If set to True, Webserver reads from DB instead of parsing DAG files
# More details: https://airflow.apache.org/docs/stable/dag-serialization.html
store_serialized_dags = False
diff --git a/docs/dag-serialization.rst b/docs/dag-serialization.rst
index dccffdcb35eaa..7802a09a041a9 100644
--- a/docs/dag-serialization.rst
+++ b/docs/dag-serialization.rst
@@ -59,7 +59,7 @@ Add the following settings in ``airflow.cfg``:
store_serialized_dags = True
min_serialized_dag_update_interval = 30
-* ``store_serialized_dags``: This flag decides whether to serialises DAGs and persist them in DB.
+* ``store_serialized_dags``: This flag decides whether to serialise DAGs and persist them in DB.
If set to True, Webserver reads from DB instead of parsing DAG files
* ``min_serialized_dag_update_interval``: This flag sets the minimum interval (in seconds) after which
the serialized DAG in DB should be updated. This helps in reducing database write rate.
| Reported by a user on slack
---
Issue link: WILL BE INSERTED BY [boring-cyborg](https://github.com/kaxil/boring-cyborg)
Make sure to mark the boxes below before creating PR: [x]
- [x] Description above provides context of the change
- [x] Unit tests coverage for changes (not needed for documentation changes)
- [x] Commits follow "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)"
- [x] Relevant documentation is updated including usage instructions.
- [x] I will engage committers as explained in [Contribution Workflow Example](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#contribution-workflow-example).
---
In case of fundamental code change, Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)) is needed.
In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
In case of backwards incompatible changes please leave a note in [UPDATING.md](https://github.com/apache/airflow/blob/master/UPDATING.md).
Read the [Pull Request Guidelines](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#pull-request-guidelines) for more information.
| https://api.github.com/repos/apache/airflow/pulls/7952 | 2020-03-28T14:11:36Z | 2020-03-28T16:59:47Z | 2020-03-28T16:59:47Z | 2020-04-04T23:44:51Z | 568 | apache/airflow | 14,608 |
Fix requirements in gui api | diff --git a/g4f/gui/server/api.py b/g4f/gui/server/api.py
index ed904be8ed..3adb88f433 100644
--- a/g4f/gui/server/api.py
+++ b/g4f/gui/server/api.py
@@ -13,8 +13,13 @@
from plyer import camera
from plyer import filechooser
app_storage_path = platformdirs.user_pictures_dir
+ user_select_image = partial(
+ filechooser.open_file,
+ path=platformdirs.user_pictures_dir(),
+ filters=[["Image", "*.jpg", "*.jpeg", "*.png", "*.webp", "*.svg"]],
+ )
has_plyer = True
-except ImportError:
+except (ImportError, NameError):
has_plyer = False
try:
from android.runnable import run_on_ui_thread
@@ -26,11 +31,6 @@
has_android = True
except ImportError:
run_on_ui_thread = lambda a : a
- user_select_image = partial(
- filechooser.open_file,
- path=platformdirs.user_pictures_dir(),
- filters=[["Image", "*.jpg", "*.jpeg", "*.png", "*.webp", "*.svg"]],
- )
has_android = False
from g4f import version, models
diff --git a/main.py b/main.py
deleted file mode 100644
index 5b2cc4c200..0000000000
--- a/main.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import ssl
-import certifi
-from functools import partial
-
-ssl.default_ca_certs = certifi.where()
-ssl.create_default_context = partial(
- ssl.create_default_context,
- cafile=certifi.where()
-)
-
-from g4f.gui.webview import run_webview
-import g4f.debug
-g4f.debug.version_check = False
-
-run_webview(True);
\ No newline at end of file
| https://api.github.com/repos/xtekky/gpt4free/pulls/1729 | 2024-03-19T19:44:41Z | 2024-03-19T19:45:36Z | 2024-03-19T19:45:36Z | 2024-03-19T19:47:11Z | 429 | xtekky/gpt4free | 38,249 |
|
Removes py26reqs.txt and git dependency | diff --git a/Dockerfile b/Dockerfile
index 02aa0f0d737..da01106040b 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -49,7 +49,6 @@ COPY letsencrypt-apache /opt/letsencrypt/src/letsencrypt-apache/
COPY letsencrypt-nginx /opt/letsencrypt/src/letsencrypt-nginx/
-# py26reqs.txt not installed!
RUN virtualenv --no-site-packages -p python2 /opt/letsencrypt/venv && \
/opt/letsencrypt/venv/bin/pip install \
-e /opt/letsencrypt/src/acme \
diff --git a/Dockerfile-dev b/Dockerfile-dev
index b89411c90c2..3c5b539663b 100644
--- a/Dockerfile-dev
+++ b/Dockerfile-dev
@@ -32,7 +32,6 @@ RUN /opt/letsencrypt/src/ubuntu.sh && \
# the above is not likely to change, so by putting it further up the
# Dockerfile we make sure we cache as much as possible
-# py26reqs.txt not installed!
COPY setup.py README.rst CHANGES.rst MANIFEST.in linter_plugin.py tox.cover.sh tox.ini pep8.travis.sh .pep8 .pylintrc /opt/letsencrypt/src/
# all above files are necessary for setup.py, however, package source
diff --git a/MANIFEST.in b/MANIFEST.in
index a82c7dd8c77..a6f9ae2b648 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1,4 +1,3 @@
-include py26reqs.txt
include README.rst
include CHANGES.rst
include CONTRIBUTING.md
diff --git a/bootstrap/README b/bootstrap/README
index 89fd8b6ba8c..d917809036a 100644
--- a/bootstrap/README
+++ b/bootstrap/README
@@ -2,6 +2,5 @@ This directory contains scripts that install necessary OS-specific
prerequisite dependencies (see docs/using.rst).
General dependencies:
-- git-core: py26reqs.txt git+https://*
- ca-certificates: communication with demo ACMO server at
- https://www.letsencrypt-demo.org, py26reqs.txt git+https://*
+ https://www.letsencrypt-demo.org
diff --git a/bootstrap/_arch_common.sh b/bootstrap/_arch_common.sh
index f66067ffb90..2b512792f63 100755
--- a/bootstrap/_arch_common.sh
+++ b/bootstrap/_arch_common.sh
@@ -8,7 +8,6 @@
# ./bootstrap/dev/_common_venv.sh
deps="
- git
python2
python-virtualenv
gcc
diff --git a/bootstrap/_deb_common.sh b/bootstrap/_deb_common.sh
index 4c6b91a3349..d8b03075cf2 100755
--- a/bootstrap/_deb_common.sh
+++ b/bootstrap/_deb_common.sh
@@ -33,7 +33,6 @@ if apt-cache show python-virtualenv > /dev/null ; then
fi
apt-get install -y --no-install-recommends \
- git \
python \
python-dev \
$virtualenv \
diff --git a/bootstrap/_gentoo_common.sh b/bootstrap/_gentoo_common.sh
index a718db7ffe7..f49dc00f056 100755
--- a/bootstrap/_gentoo_common.sh
+++ b/bootstrap/_gentoo_common.sh
@@ -1,6 +1,6 @@
#!/bin/sh
-PACKAGES="dev-vcs/git
+PACKAGES="
dev-lang/python:2.7
dev-python/virtualenv
dev-util/dialog
diff --git a/bootstrap/_rpm_common.sh b/bootstrap/_rpm_common.sh
index 411d7bd9207..92b54b720a1 100755
--- a/bootstrap/_rpm_common.sh
+++ b/bootstrap/_rpm_common.sh
@@ -33,9 +33,7 @@ then
fi
fi
-# "git-core" seems to be an alias for "git" in CentOS 7 (yum search fails)
if ! $tool install -y \
- git-core \
gcc \
dialog \
augeas-libs \
diff --git a/bootstrap/_suse_common.sh b/bootstrap/_suse_common.sh
index 46f9d693bf4..efeebe4f817 100755
--- a/bootstrap/_suse_common.sh
+++ b/bootstrap/_suse_common.sh
@@ -2,7 +2,7 @@
# SLE12 don't have python-virtualenv
-zypper -nq in -l git-core \
+zypper -nq in -l \
python \
python-devel \
python-virtualenv \
diff --git a/bootstrap/dev/venv.sh b/bootstrap/dev/venv.sh
index 2bd32a89b11..11ab417dda9 100755
--- a/bootstrap/dev/venv.sh
+++ b/bootstrap/dev/venv.sh
@@ -4,7 +4,6 @@
export VENV_ARGS="--python python2"
./bootstrap/dev/_venv_common.sh \
- -r py26reqs.txt \
-e acme[testing] \
-e .[dev,docs,testing] \
-e letsencrypt-apache \
diff --git a/bootstrap/freebsd.sh b/bootstrap/freebsd.sh
index 180ee21b4c4..4482c35cd91 100755
--- a/bootstrap/freebsd.sh
+++ b/bootstrap/freebsd.sh
@@ -1,7 +1,6 @@
#!/bin/sh -xe
pkg install -Ay \
- git \
python \
py27-virtualenv \
augeas \
diff --git a/bootstrap/venv.sh b/bootstrap/venv.sh
index ff1a50c6cc4..5042178d9e0 100755
--- a/bootstrap/venv.sh
+++ b/bootstrap/venv.sh
@@ -20,7 +20,7 @@ fi
pip install -U setuptools
pip install -U pip
-pip install -U -r py26reqs.txt letsencrypt letsencrypt-apache # letsencrypt-nginx
+pip install -U letsencrypt letsencrypt-apache # letsencrypt-nginx
echo
echo "Congratulations, Let's Encrypt has been successfully installed/updated!"
diff --git a/letsencrypt-auto b/letsencrypt-auto
index 44c71883c7a..9721a79dd48 100755
--- a/letsencrypt-auto
+++ b/letsencrypt-auto
@@ -175,7 +175,7 @@ if [ "$VERBOSE" = 1 ] ; then
echo
$VENV_BIN/pip install -U setuptools
$VENV_BIN/pip install -U pip
- $VENV_BIN/pip install -r "$LEA_PATH"/py26reqs.txt -U letsencrypt letsencrypt-apache
+ $VENV_BIN/pip install -U letsencrypt letsencrypt-apache
# nginx is buggy / disabled for now, but upgrade it if the user has
# installed it manually
if $VENV_BIN/pip freeze | grep -q letsencrypt-nginx ; then
@@ -187,8 +187,6 @@ else
$VENV_BIN/pip install -U pip > /dev/null
printf .
# nginx is buggy / disabled for now...
- $VENV_BIN/pip install -r "$LEA_PATH"/py26reqs.txt > /dev/null
- printf .
$VENV_BIN/pip install -U letsencrypt > /dev/null
printf .
$VENV_BIN/pip install -U letsencrypt-apache > /dev/null
diff --git a/py26reqs.txt b/py26reqs.txt
deleted file mode 100644
index a94b22c0c8f..00000000000
--- a/py26reqs.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-# https://github.com/bw2/ConfigArgParse/issues/17
-git+https://github.com/kuba/ConfigArgParse.git@python2.6-0.9.3#egg=ConfigArgParse
diff --git a/setup.py b/setup.py
index 40c6ac16ce6..40749bf2a5c 100644
--- a/setup.py
+++ b/setup.py
@@ -32,7 +32,6 @@ def read_file(filename, encoding='utf8'):
install_requires = [
'acme=={0}'.format(version),
- 'ConfigArgParse',
'configobj',
'cryptography>=0.7', # load_pem_x509_certificate
'parsedatetime',
@@ -53,10 +52,14 @@ def read_file(filename, encoding='utf8'):
install_requires.extend([
# only some distros recognize stdlib argparse as already satisfying
'argparse',
+ 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17
'mock<1.1.0',
])
else:
- install_requires.append('mock')
+ install_requires.extend([
+ 'ConfigArgParse',
+ 'mock',
+ ])
dev_extras = [
# Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289
diff --git a/tox.ini b/tox.ini
index d1fafe20f2f..1abe1cf39a8 100644
--- a/tox.ini
+++ b/tox.ini
@@ -17,7 +17,7 @@ envlist = py26,py27,py33,py34,py35,cover,lint
commands =
pip install -e acme[testing]
nosetests -v acme
- pip install -r py26reqs.txt -e .[testing]
+ pip install -e .[testing]
nosetests -v letsencrypt
pip install -e letsencrypt-apache
nosetests -v letsencrypt_apache
| Supersedes #1661.
| https://api.github.com/repos/certbot/certbot/pulls/1908 | 2015-12-16T01:08:27Z | 2015-12-23T01:59:59Z | 2015-12-23T01:59:59Z | 2015-12-23T15:44:56Z | 2,236 | certbot/certbot | 1,194 |
Fix typo | diff --git a/modules/shared.py b/modules/shared.py
index 2dc092d6878..6b5d150b038 100644
--- a/modules/shared.py
+++ b/modules/shared.py
@@ -240,7 +240,7 @@ def options_section(section_identifier, options_dict):
options_templates.update(options_section(('ui', "User interface"), {
"show_progressbar": OptionInfo(True, "Show progressbar"),
- "show_progress_every_n_steps": OptionInfo(0, "Show show image creation progress every N sampling steps. Set 0 to disable.", gr.Slider, {"minimum": 0, "maximum": 32, "step": 1}),
+ "show_progress_every_n_steps": OptionInfo(0, "Show image creation progress every N sampling steps. Set 0 to disable.", gr.Slider, {"minimum": 0, "maximum": 32, "step": 1}),
"return_grid": OptionInfo(True, "Show grid in results for web"),
"do_not_show_images": OptionInfo(False, "Do not show any images in results for web"),
"add_model_hash_to_info": OptionInfo(True, "Add model hash to generation information"),
| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/2008 | 2022-10-08T22:21:12Z | 2022-10-09T08:10:14Z | 2022-10-09T08:10:14Z | 2022-10-09T08:10:14Z | 259 | AUTOMATIC1111/stable-diffusion-webui | 40,280 |
|
Fix a OTA bug on `_ota_chunk_data` | diff --git a/shadowsocks/tcprelay.py b/shadowsocks/tcprelay.py
index 2e4772d0e..2ff7b213a 100644
--- a/shadowsocks/tcprelay.py
+++ b/shadowsocks/tcprelay.py
@@ -358,7 +358,7 @@ def _handle_stage_addr(self, data):
b'\x00\x00\x00\x00\x10\x10'),
self._local_sock)
# spec https://shadowsocks.org/en/spec/one-time-auth.html
- # ATYP & 0x10 == 1, then OTA is enabled.
+ # ATYP & 0x10 == 0x10, then OTA is enabled.
if self._ota_enable_session:
data = common.chr(addrtype | ADDRTYPE_AUTH) + data[1:]
key = self._encryptor.cipher_iv + self._encryptor.key
@@ -458,7 +458,7 @@ def _ota_chunk_data(self, data, data_cb):
return
data_len = self._ota_buff_head[:ONETIMEAUTH_CHUNK_DATA_LEN]
self._ota_len = struct.unpack('>H', data_len)[0]
- length = min(self._ota_len, len(data))
+ length = min(self._ota_len - len(self._ota_buff_data), len(data))
self._ota_buff_data += data[:length]
data = data[length:]
if len(self._ota_buff_data) == self._ota_len:
| You can reproduce it with the following code:
``` python
from __future__ import print_function
import struct

ONETIMEAUTH_BYTES = 10
ONETIMEAUTH_CHUNK_BYTES = 12
ONETIMEAUTH_CHUNK_DATA_LEN = 2


class OTA(object):
    def __init__(self):
        self._ota_len = 0
        self._ota_chunk_idx = 0
        self._ota_buff_head = b''
        self._ota_buff_data = b''

    def _ota_chunk_data(self, data, data_cb):
        # spec https://shadowsocks.org/en/spec/one-time-auth.html
        unchunk_data = b''
        while len(data) > 0:
            if self._ota_len == 0:
                # get DATA.LEN + HMAC-SHA1
                length = ONETIMEAUTH_CHUNK_BYTES - len(self._ota_buff_head)
                self._ota_buff_head += data[:length]
                data = data[length:]
                if len(self._ota_buff_head) < ONETIMEAUTH_CHUNK_BYTES:
                    # wait more data
                    return
                data_len = self._ota_buff_head[:ONETIMEAUTH_CHUNK_DATA_LEN]
                self._ota_len = struct.unpack('>H', data_len)[0]
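            # NOTE: the next line is the unpatched one; the fix above changes it
            # to min(self._ota_len - len(self._ota_buff_data), len(data)) so a
            # chunk resumed across fragments reads only the bytes it still needs.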
            length = min(self._ota_len, len(data))
            self._ota_buff_data += data[:length]
            data = data[length:]
            if len(self._ota_buff_data) == self._ota_len:
                # get a chunk data
                _hash = self._ota_buff_head[ONETIMEAUTH_CHUNK_DATA_LEN:]
                _data = self._ota_buff_data
                index = struct.pack('>I', self._ota_chunk_idx)
                # key = self._encryptor.decipher_iv + index
                # if onetimeauth_verify(_hash, _data, key) is False:
                #     logging.warn('one time auth fail, drop chunk !')
                # else:
                unchunk_data += _data
                self._ota_chunk_idx += 1
                self._ota_buff_head = b''
                self._ota_buff_data = b''
                self._ota_len = 0
        data_cb(unchunk_data)
        return


def test(data):
    print("-" * 40)
    o = OTA()
    for frag in data:
        o._ota_chunk_data(frag, print)


if __name__ == '__main__':
    data1 = [
        b'\x00\x03hhhhhhhhhh111\x00\x03hhhhhhhhhh222',
    ]
    data2 = [
        b'\x00\x03hhhhhhhhhh1',
        b'11\x00\x03hhhhhhhhhh222',
    ]
    test(data1)
    test(data2)
```
The output:
```
----------------------------------------
111222
----------------------------------------
```
| https://api.github.com/repos/shadowsocks/shadowsocks/pulls/642 | 2016-10-10T08:30:48Z | 2016-10-10T15:05:21Z | 2016-10-10T15:05:21Z | 2016-10-10T15:05:21Z | 334 | shadowsocks/shadowsocks | 24,713 |
Fix start.vbs , test-appid , disable x-tunnel and change python path in some files | diff --git a/gae_proxy/local/cert_util.py b/gae_proxy/local/cert_util.py
index 68bed1798b..98809d46ce 100644
--- a/gae_proxy/local/cert_util.py
+++ b/gae_proxy/local/cert_util.py
@@ -13,13 +13,12 @@
import subprocess
current_path = os.path.dirname(os.path.abspath(__file__))
-root_path = os.path.abspath( os.path.join(current_path, os.pardir, os.pardir))
-python_path = os.path.abspath( os.path.join(current_path, os.pardir, os.pardir, 'python27', '1.0'))
data_path = os.path.abspath(os.path.join(current_path, os.pardir, os.pardir, 'data', 'gae_proxy'))
if not os.path.isdir(data_path):
data_path = current_path
if __name__ == "__main__":
+ root_path = os.path.abspath( os.path.join(current_path, os.pardir, os.pardir))
noarch_lib = os.path.join(root_path, 'lib', 'noarch')
sys.path.append(noarch_lib)
diff --git a/gae_proxy/local/check_local_network.py b/gae_proxy/local/check_local_network.py
index 67893735bc..f77e53127e 100644
--- a/gae_proxy/local/check_local_network.py
+++ b/gae_proxy/local/check_local_network.py
@@ -7,19 +7,22 @@
import socket
import threading
-current_path = os.path.dirname(os.path.abspath(__file__))
-
if __name__ == "__main__":
- python_path = os.path.abspath( os.path.join(current_path, os.pardir, os.pardir, 'python27', '1.0'))
+ current_path = os.path.dirname(os.path.abspath(__file__))
+ root_path = os.path.abspath(os.path.join(current_path, os.pardir, os.pardir))
+ gae_path = os.path.join(root_path, "gae_proxy")
+ sys.path.append(gae_path)
- noarch_lib = os.path.abspath( os.path.join(python_path, 'lib', 'noarch'))
+ noarch_lib = os.path.join(root_path, 'lib', 'noarch')
sys.path.append(noarch_lib)
+ common_lib = os.path.join(root_path, 'lib', 'common')
+ sys.path.append(common_lib)
if sys.platform == "win32":
- win32_lib = os.path.abspath( os.path.join(python_path, 'lib', 'win32'))
+ win32_lib = os.path.join(root_path, 'lib', 'win32')
sys.path.append(win32_lib)
elif sys.platform.startswith("linux"):
- linux_lib = os.path.abspath( os.path.join(python_path, 'lib', 'linux'))
+ linux_lib = os.path.join(root_path, 'lib', 'linux')
sys.path.append(linux_lib)
import OpenSSL
diff --git a/gae_proxy/local/google_ip.py b/gae_proxy/local/google_ip.py
index 01fbe6ffc7..252ba28b30 100644
--- a/gae_proxy/local/google_ip.py
+++ b/gae_proxy/local/google_ip.py
@@ -352,7 +352,7 @@ def update_ip(self, ip, handshake_time):
handshake_time = int(handshake_time)
if handshake_time < 5: # that's impossible
- xlog.warn("%s handshake:%d impossible", ip, 1000 * handshake_time)
+ xlog.debug("%s handshake:%d impossible or wrong", ip, 1000 * handshake_time)
return
time_now = time.time()
diff --git a/gae_proxy/local/proxy.py b/gae_proxy/local/proxy.py
index 2a4860c955..56caf920ff 100644
--- a/gae_proxy/local/proxy.py
+++ b/gae_proxy/local/proxy.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python2
+#!/usr/bin/env python3
# coding:utf-8
# Based on GAppProxy 2.0.0 by Du XiaoGang <dugang.2008@gmail.com>
# Based on WallProxy 0.4.0 by Hust Moon <www.ehust@gmail.com>
diff --git a/gae_proxy/local/test_appid.py b/gae_proxy/local/test_appid.py
index 8a93334271..a088d0b4d6 100644
--- a/gae_proxy/local/test_appid.py
+++ b/gae_proxy/local/test_appid.py
@@ -3,13 +3,13 @@
from xlog import getLogger
xlog = getLogger("gae_proxy")
-from .connect_manager import https_manager
+from local.connect_manager import https_manager
def test_appid_exist(ssl_sock, appid):
request_data = 'GET /_gh/ HTTP/1.1\r\nHost: %s.appspot.com\r\n\r\n' % appid
ssl_sock.send(request_data.encode())
- response = http.client.HTTPResponse(ssl_sock, buffering=True)
+ response = http.client.HTTPResponse(ssl_sock)
response.begin()
if response.status == 404:
@@ -24,7 +24,7 @@ def test_appid_exist(ssl_sock, appid):
xlog.warn("test appid %s status:%d", appid, response.status)
content = response.read()
- if "GoAgent" not in content:
+ if b"GoAgent" not in content:
#xlog.warn("app check %s content:%s", appid, content)
return False
diff --git a/launcher/autorun.py b/launcher/autorun.py
index 7877e53309..979f4b0ce6 100644
--- a/launcher/autorun.py
+++ b/launcher/autorun.py
@@ -44,7 +44,7 @@ def remove(name):
winreg.DeleteValue(key, name)
winreg.CloseKey(key)
- run_cmd = "\"" + os.path.abspath( os.path.join(root_path, "python27", "1.0", "pythonw.exe")) + "\" \"" +\
+ run_cmd = "\"" + os.path.abspath( os.path.join(root_path, "python3", "python.exe")) + "\" \"" +\
os.path.abspath( os.path.join(root_path, "launcher", "start.py")) + "\""
elif sys.platform.startswith('linux'):
_xdg_config_home = os.environ.get("XDG_CONFIG_HOME", "~/.config")
diff --git a/launcher/config.py b/launcher/config.py
index 4ef5b00653..b9d2aea15c 100644
--- a/launcher/config.py
+++ b/launcher/config.py
@@ -60,7 +60,7 @@ def recheck_module_path():
global config
need_save_config = False
- modules = ["gae_proxy", "launcher", "php_proxy", "x_tunnel"]
+ modules = ["gae_proxy", "launcher", "php_proxy"]
for module in modules:
if module not in ["launcher", "php_proxy"]:
if not os.path.isdir(os.path.join(root_path, module)):
diff --git a/launcher/create_shortcut.js b/launcher/create_shortcut.js
index 91798604db..d5ade771b8 100644
--- a/launcher/create_shortcut.js
+++ b/launcher/create_shortcut.js
@@ -2,7 +2,7 @@
function CreateShortcut()
{
wsh = new ActiveXObject('WScript.Shell');
- target_path = '"' + wsh.CurrentDirectory + '\\..\\python27\\1.0\\pythonw.exe"';
+ target_path = '"' + wsh.CurrentDirectory + '\\..\\python3\\python.exe"';
icon_path = wsh.CurrentDirectory + '\\web_ui\\favicon.ico';
diff --git a/launcher/setup.py b/launcher/setup.py
index 5015cee2d9..383f65e74b 100644
--- a/launcher/setup.py
+++ b/launcher/setup.py
@@ -4,7 +4,7 @@
import sys
current_path = os.path.dirname(os.path.abspath(__file__))
-python_path = os.path.abspath( os.path.join(current_path, os.pardir, 'python27', '1.0'))
+python_path = os.path.abspath( os.path.join(current_path, os.pardir, 'python3'))
noarch_lib = os.path.abspath( os.path.join(python_path, 'lib', 'noarch'))
sys.path.append(noarch_lib)
diff --git a/launcher/setup_win_python.py b/launcher/setup_win_python.py
index e46e3f6e2c..6116e6232d 100644
--- a/launcher/setup_win_python.py
+++ b/launcher/setup_win_python.py
@@ -6,7 +6,7 @@
import time
current_path = os.path.dirname(os.path.abspath(__file__))
-python_path = os.path.abspath( os.path.join(current_path, os.pardir, 'python27', '1.0'))
+python_path = os.path.abspath( os.path.join(current_path, os.pardir, 'python3'))
def copy_VCR_files():
src_path = os.path.join(python_path, "WinSxS")
diff --git a/launcher/update_from_github.py b/launcher/update_from_github.py
index 5c5f54ca0b..9792424f41 100644
--- a/launcher/update_from_github.py
+++ b/launcher/update_from_github.py
@@ -13,7 +13,7 @@
current_path = os.path.dirname(os.path.abspath(__file__))
root_path = os.path.abspath( os.path.join(current_path, os.pardir))
-python_path = os.path.join(root_path, 'python27', '1.0')
+python_path = os.path.join(root_path, 'python3')
noarch_lib = os.path.join(python_path, 'lib', 'noarch')
sys.path.append(noarch_lib)
@@ -250,4 +250,4 @@ def delete_file(file):
delete_file(os.path.join(root_path, "gae_proxy", "local", "xlog.py"))
delete_file(os.path.join(root_path, "gae_proxy", "local", "xlog.pyc"))
-clean_old_file()
\ No newline at end of file
+clean_old_file()
diff --git a/lib/noarch/socks.py b/lib/noarch/socks.py
index a9d3ec61c5..a6bfecb3d2 100644
--- a/lib/noarch/socks.py
+++ b/lib/noarch/socks.py
@@ -55,10 +55,9 @@
__version__ = "1.5.1"
import os, sys
-current_path = os.path.dirname(os.path.abspath(__file__))
-python_path = os.path.abspath( os.path.join(current_path, os.pardir, os.pardir, 'python27', '1.0'))
if sys.platform == "win32":
- win32_lib = os.path.abspath( os.path.join(python_path, 'lib', 'win32'))
+ current_path = os.path.dirname(os.path.abspath(__file__))
+ win32_lib = os.path.abspath( os.path.join(current_path, os.pardir, 'win32'))
sys.path.append(win32_lib)
import socket
@@ -726,4 +725,4 @@ def check_ip_valid(ip):
if __name__ == "__main__":
name = "abc"
name2 = name.encode('idna')
- print(name2)
\ No newline at end of file
+ print(name2)
diff --git a/start.vbs b/start.vbs
index 9d4dc5c411..512f8e6853 100644
--- a/start.vbs
+++ b/start.vbs
@@ -7,5 +7,5 @@ strFolder = objFSO.GetParentFolderName(objFile)
Dim strArgs
quo = """"
-strArgs = quo & strFolder & "/python27/1.0/pythonw.exe" & quo & " " & quo & strFolder & "/launcher/start.py " & quo
+strArgs = quo & strFolder & "/python3/python.exe" & quo & " " & quo & strFolder & "/launcher/start.py " & quo
oShell.Run strArgs, 0, false
\ No newline at end of file
| 1. Temporarily disable x_tunnel
2. Fix start.vbs
3. Fix test_appid
4. Change the Python path in some files
| https://api.github.com/repos/XX-net/XX-Net/pulls/2410 | 2016-03-17T14:40:16Z | 2016-03-18T03:23:28Z | 2016-03-18T03:23:28Z | 2016-03-18T03:23:28Z | 2,698 | XX-net/XX-Net | 17,236 |
add two JS demos | diff --git a/README.md b/README.md
index a364d439..719956ae 100644
--- a/README.md
+++ b/README.md
@@ -565,6 +565,8 @@ Further resources:
#### Demos and Scripts
* [The Bot](https://github.com/sta-ger/TheBot) - Example of how the neural network learns to predict the angle between two points created with [Synaptic](https://github.com/cazala/synaptic).
* [Half Beer](https://github.com/sta-ger/HalfBeer) - Beer glass classifier created with [Synaptic](https://github.com/cazala/synaptic).
+* [NSFWJS](http://nsfwjs.com) - Indecent content checker with TensorFlow.js
+* [Rock Paper Scissors](https://rps-tfjs.netlify.com/) - Rock Paper Scissors trained in the browser with TensorFlow.js
<a name="julia"></a>
## Julia
| https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/631 | 2019-09-19T02:07:57Z | 2019-09-19T13:41:11Z | 2019-09-19T13:41:11Z | 2019-09-19T13:41:11Z | 211 | josephmisiti/awesome-machine-learning | 51,994 |
|
Add 3D graphing via square approximation. | diff --git a/topics/three_dimensions.py b/topics/three_dimensions.py
index 752e481810..5fc3fe6894 100644
--- a/topics/three_dimensions.py
+++ b/topics/three_dimensions.py
@@ -7,7 +7,7 @@
from camera import Camera
from animation.continual_animation import AmbientMovement
from animation.transform import ApplyMethod
-
+import numpy as np
class CameraWithPerspective(Camera):
CONFIG = {
"camera_distance" : 20,
@@ -237,53 +237,139 @@ def __init__(self, *args, **kwargs):
VMobject.__init__(self, *args, **kwargs)
shade_in_3d(self)
+
class Cube(ThreeDMobject):
CONFIG = {
- "fill_opacity" : 0.75,
- "fill_color" : BLUE,
- "stroke_width" : 0,
- "propagate_style_to_family" : True,
- "side_length" : 2,
+ 'fill_opacity': 0.75,
+ 'fill_color': BLUE,
+ 'stroke_width': 0,
+ 'propagate_style_to_family': True,
+ 'side_length': 2
}
- def generate_points(self):
- for vect in IN, OUT, LEFT, RIGHT, UP, DOWN:
- face = Square(side_length = self.side_length)
- face.shift(self.side_length*OUT/2.0)
- face.apply_function(lambda p : np.dot(p, z_to_vector(vect).T))
+ def generate_points(self):
+ for vect in (IN,
+ OUT,
+ LEFT,
+ RIGHT,
+ UP,
+ DOWN):
+ face = Square(side_length=self.side_length)
+ face.shift(self.side_length * OUT / 2.0)
+ face.apply_function(lambda p: np.dot(p, z_to_vector(vect).T))
self.add(face)
-class Prism(Cube):
+
+class Sphere(ThreeDMobject):
CONFIG = {
- "dimensions" : [3, 2, 1]
+ 'fill_opacity': .75,
+ 'fill_color': BLUE,
+ 'stroke_width': 0,
+ 'propagate_style_to_family': True,
+ 'side_length': 2
}
- def generate_points(self):
- Cube.generate_points(self)
- for dim, value in enumerate(self.dimensions):
- self.rescale_to_fit(value, dim, stretch = True)
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+ def __init__(self, r, eps, opacity = .75):
+ self.r = r
+ self.eps = eps
+ self.CONFIG['fill_opacity'] = opacity
+ ThreeDMobject.__init__(self)
+ def generate_points(self):
+ points = [
+ (
+ self.r * (np.sin(phi) * np.cos(theta)),
+ self.r * (np.sin(phi) * np.sin(theta)),
+ self.r * np.cos(phi)
+ )
+ for phi in np.arange(0, 2 * np.pi, self.eps)
+ for theta in np.arange(0, 2 * np.pi, self.eps)
+ ]
+ for vect in points:
+ face = Square(side_length=self.eps)
+ scalefactor = np.linalg.norm(vect)
+ face.shift(scalefactor * OUT / 2.0)
+ face.apply_function(lambda p: np.dot(p, z_to_vector(vect).T))
+ self.add(face)
+ shade_in_3d(self)
+class Torus(ThreeDMobject):
+ CONFIG = {
+ 'fill_opacity': .75,
+ 'fill_color': BLUE,
+ 'stroke_width': 0,
+ 'propagate_style_to_family': True,
+ 'side_length': 2
+ }
+ def __init__(self, r1, r2, eps, opacity=.75):
+ self.r1 = r1
+ self.r2 = r2
+ self.eps = eps
+ self.CONFIG['fill_opacity'] = opacity
+ ThreeDMobject.__init__(self)
+ def generate_points(self):
+ points = [
+ (
+ (self.r1 + self.r2 * np.cos(theta)) * np.cos(phi),
+ (self.r1 + self.r2 * np.cos(theta)) * np.sin(phi),
+ self.r2 * np.sin(theta)
+ )
+ for phi in np.arange(0, 2 * np.pi, self.eps)
+ for theta in np.arange(0, 2 * np.pi, self.eps)
+ ]
+ for vect in points:
+ face = Square(side_length=self.eps)
+ scalefactor = np.linalg.norm(vect)
+ face.shift(scalefactor * OUT / 2.0)
+ face.apply_function(lambda p: np.dot(p, z_to_vector(vect).T))
+ self.add(face)
+ shade_in_3d(self)
+class Parametric3D(ThreeDMobject):
+ CONFIG = {
+ 'fill_opacity': 0.75,
+ 'fill_color': BLUE,
+ 'stroke_width': 0,
+ 'propagate_style_to_family': True
+ }
+ def __init__(self, f, g, h, phi_min, phi_max, theta_min, theta_max, eps, opacity = .75):
+ self.f = f
+ self.g = g
+ self.h = h
+ self.phi_min = phi_min
+ self.phi_max = phi_max
+ self.theta_min = theta_min
+ self.theta_max = theta_max
+ self.eps = eps
+ self.CONFIG['fill_opacity'] = opacity
+ ThreeDMobject.__init__(self)
+ def generate_points(self):
+ points = [
+ (
+ self.f(phi, theta),
+ self.g(phi, theta),
+ self.h(phi, theta)
+ )
+ for phi in np.arange(self.phi_min, self.phi_max, self.eps)
+ for theta in np.arange(self.theta_min, self.theta_max, self.eps)
+ ]
+ for vect in points:
+ face = Square(side_length=self.eps)
+ scalefactor = np.linalg.norm(vect)
+ face.shift(scalefactor * OUT / 2.0)
+ face.apply_function(lambda p: np.dot(p, z_to_vector(vect).T))
+ self.add(face)
+ shade_in_3d(self)
+class Prism(Cube):
+ CONFIG = {'dimensions': [3, 2, 1]}
+ def generate_points(self):
+ Cube.generate_points(self)
+ for dim, value in enumerate(self.dimensions):
+ self.rescale_to_fit(value, dim, stretch=True)
\ No newline at end of file
| While this isn't perfect, it is a way to graph a 3D surface using square approximations; a usage sketch follows below. (Also, apparently some linting changes got in, which I can revert if necessary.) | https://api.github.com/repos/3b1b/manim/pulls/145 | 2018-03-06T00:54:41Z | 2018-03-06T04:33:34Z | 2018-03-06T04:33:34Z | 2018-03-06T19:07:35Z | 1,531 | 3b1b/manim | 18,165 |
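The usage sketch referenced above (the arguments follow the `Parametric3D` signature added in the diff above; the lambdas trace a radius-2 sphere):

```python
import numpy as np

surface = Parametric3D(
    lambda phi, theta: 2 * np.sin(phi) * np.cos(theta),  # f -> x
    lambda phi, theta: 2 * np.sin(phi) * np.sin(theta),  # g -> y
    lambda phi, theta: 2 * np.cos(phi),                  # h -> z
    0, np.pi,        # phi_min, phi_max
    0, 2 * np.pi,    # theta_min, theta_max
    0.2,             # eps: side length of each approximating square
)
```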
Allow delegator to return delegate's properties | diff --git a/fundamental/delegation_pattern.py b/fundamental/delegation_pattern.py
index 7f109721..3e26be9d 100644
--- a/fundamental/delegation_pattern.py
+++ b/fundamental/delegation_pattern.py
@@ -13,26 +13,38 @@
class Delegator(object):
"""
>>> delegator = Delegator(Delegate())
+ >>> delegator.p1
+ 123
+ >>> delegator.p2
+ Traceback (most recent call last):
+ ...
+ AttributeError: 'Delegate' object has no attribute 'p2'
>>> delegator.do_something("nothing")
'Doing nothing'
>>> delegator.do_anything()
-
+ Traceback (most recent call last):
+ ...
+ AttributeError: 'Delegate' object has no attribute 'do_anything'
"""
def __init__(self, delegate):
self.delegate = delegate
def __getattr__(self, name):
- def wrapper(*args, **kwargs):
- if hasattr(self.delegate, name):
- attr = getattr(self.delegate, name)
- if callable(attr):
- return attr(*args, **kwargs)
+ attr = getattr(self.delegate, name)
+
+ if not callable(attr):
+ return attr
+ def wrapper(*args, **kwargs):
+ return attr(*args, **kwargs)
return wrapper
class Delegate(object):
+ def __init__(self):
+ self.p1 = 123
+
def do_something(self, something):
return "Doing %s" % something
| The current code does not allow the delegator to return the value of properties of the delegate; it can only call its methods. Moreover, if the method does not exist, `None` is returned rather than an exception being raised.
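A quick sketch of the behavioral difference, using the `Delegator`/`Delegate` classes from the diff above (the attribute names mirror its doctests):

```python
delegator = Delegator(Delegate())
delegator.p1             # 123 -- plain attributes are now forwarded
delegator.do_anything()  # now raises AttributeError;
                         # the old wrapper silently returned None instead
```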
This PR fixes the above. | https://api.github.com/repos/faif/python-patterns/pulls/243 | 2018-09-28T14:10:22Z | 2018-10-05T21:54:14Z | 2018-10-05T21:54:14Z | 2018-10-05T21:54:14Z | 351 | faif/python-patterns | 33,557 |
fix some error for parse dns_servre in config | diff --git a/config.json.example b/config.json.example
index 40066564a..2cee3b811 100644
--- a/config.json.example
+++ b/config.json.example
@@ -7,6 +7,7 @@
"method":"aes-256-cfb",
"local_address":"127.0.0.1",
"fast_open":false,
+ "dns_server":["8.8.8.8", 8.8.4.4],
"tunnel_remote":"8.8.8.8",
"tunnel_remote_port":53,
"tunnel_port":53
diff --git a/shadowsocks/shell.py b/shadowsocks/shell.py
index 1a7322ce8..af33456ab 100644
--- a/shadowsocks/shell.py
+++ b/shadowsocks/shell.py
@@ -167,8 +167,7 @@ def check_config(config, is_local):
config['server_port'] = int(config['server_port'])
if 'tunnel_remote_port' in config:
- config['tunnel_remote_port'] = \
- int(config['tunnel_remote_port'])
+ config['tunnel_remote_port'] = int(config['tunnel_remote_port'])
if 'tunnel_port' in config:
config['tunnel_port'] = int(config['tunnel_port'])
@@ -198,6 +197,8 @@ def check_config(config, is_local):
logging.error('user can be used only on Unix')
sys.exit(1)
if config.get('dns_server', None) is not None:
+ if type(config['dns_server']) != list:
+ config['dns_server'] = to_str(config['dns_server'])
logging.info('Specified DNS server: %s' % config['dns_server'])
cryptor.try_cipher(config['password'], config['method'])
@@ -313,8 +314,7 @@ def get_config(is_local):
config['prefer_ipv6'] = config.get('prefer_ipv6', False)
config['server_port'] = config.get('server_port', 8388)
- config['tunnel_remote'] = \
- to_str(config.get('tunnel_remote', '8.8.8.8'))
+ config['tunnel_remote'] = to_str(config.get('tunnel_remote', '8.8.8.8'))
config['tunnel_remote_port'] = config.get('tunnel_remote_port', 53)
config['tunnel_port'] = config.get('tunnel_port', 53)
config['dns_server'] = config.get('dns_server', None)
| Also fixes PEP 8 issues.
https://github.com/shadowsocks/shadowsocks/pull/739
See: https://github.com/shadowsocks/shadowsocks/issues/738#issuecomment-287591740 | https://api.github.com/repos/shadowsocks/shadowsocks/pulls/798 | 2017-03-19T04:13:47Z | 2017-03-19T04:37:57Z | 2017-03-19T04:37:57Z | 2017-03-19T09:04:44Z | 570 | shadowsocks/shadowsocks | 24,682 |
[MRG] Switch BernoulliRBM to CSR format | diff --git a/sklearn/neural_network/rbm.py b/sklearn/neural_network/rbm.py
index 88250abb02851..323a20f15b78a 100644
--- a/sklearn/neural_network/rbm.py
+++ b/sklearn/neural_network/rbm.py
@@ -16,6 +16,7 @@
from ..utils import check_arrays
from ..utils import check_random_state
from ..utils import gen_even_slices
+from ..utils import issparse
from ..utils.extmath import safe_sparse_dot
from ..utils.extmath import logistic_sigmoid
@@ -48,8 +49,10 @@ class BernoulliRBM(BaseEstimator, TransformerMixin):
Number of iterations/sweeps over the training dataset to perform
during training.
- verbose : bool, optional
- The verbosity level.
+ verbose : int, optional
+ The verbosity level. Enabling it (with a non-zero value) will compute
+ the log-likelihood of each mini-batch and hence cause a runtime overhead
+ in the order of 10%.
random_state : integer or numpy.RandomState, optional
A random number generator instance to define the state of the
@@ -112,7 +115,7 @@ def transform(self, X):
h : array, shape (n_samples, n_components)
Latent representations of the data.
"""
- X, = check_arrays(X, sparse_format='csc', dtype=np.float)
+ X, = check_arrays(X, sparse_format='csr', dtype=np.float)
return self._mean_hiddens(X)
def _mean_hiddens(self, v):
@@ -185,9 +188,9 @@ def _free_energy(self, v):
free_energy : array-like, shape (n_samples,)
The value of the free energy.
"""
- return - np.dot(v, self.intercept_visible_) - np.log(1. + np.exp(
- safe_sparse_dot(v, self.components_.T) + self.intercept_hidden_)) \
- .sum(axis=1)
+ return (- safe_sparse_dot(v, self.intercept_visible_)
+ - np.log(1. + np.exp(safe_sparse_dot(v, self.components_.T)
+ + self.intercept_hidden_)).sum(axis=1))
def gibbs(self, v):
"""Perform one Gibbs sampling step.
@@ -262,7 +265,10 @@ def score_samples(self, v):
rng = check_random_state(self.random_state)
fe = self._free_energy(v)
- v_ = v.copy()
+ if issparse(v):
+ v_ = v.toarray()
+ else:
+ v_ = v.copy()
i_ = rng.randint(0, v.shape[1], v.shape[0])
v_[np.arange(v.shape[0]), i_] = 1 - v_[np.arange(v.shape[0]), i_]
fe_ = self._free_energy(v_)
@@ -282,7 +288,7 @@ def fit(self, X, y=None):
self : BernoulliRBM
The fitted model.
"""
- X, = check_arrays(X, sparse_format='csc', dtype=np.float)
+ X, = check_arrays(X, sparse_format='csr', dtype=np.float)
n_samples = X.shape[0]
rng = check_random_state(self.random_state)
diff --git a/sklearn/neural_network/tests/test_rbm.py b/sklearn/neural_network/tests/test_rbm.py
index 34d50e53e7c0f..64eaa2dd60852 100644
--- a/sklearn/neural_network/tests/test_rbm.py
+++ b/sklearn/neural_network/tests/test_rbm.py
@@ -1,7 +1,7 @@
import sys
+import re
import numpy as np
-
from numpy.testing import assert_almost_equal, assert_array_equal
from sklearn.datasets import load_digits
@@ -121,3 +121,24 @@ def test_rbm_verbose():
rbm.fit(Xdigits)
finally:
sys.stdout = old_stdout
+
+
+def test_sparse_and_verbose():
+ """
+ Make sure RBM works with sparse input when verbose=True
+ """
+ old_stdout = sys.stdout
+ sys.stdout = StringIO()
+ from scipy.sparse import csc_matrix
+ X = csc_matrix([[0.], [1.]])
+ rbm = BernoulliRBM(n_components=2, batch_size=2, n_iter=1,
+ random_state=42, verbose=True)
+ try:
+ rbm.fit(X)
+ s = sys.stdout.getvalue()
+ # make sure output is sound
+ assert(re.match(r"Iteration 0, pseudo-likelihood = -?(\d)+(\.\d+)?",
+ s))
+ finally:
+ sio = sys.stdout
+ sys.stdout = old_stdout
| While taking a look at Issue #2455, I noticed that `BernoulliRBM` enforces CSC format for sparse data. However, it accesses the data row-wise (it trains in minibatches of `batch_size` rows), so CSR format seems like a much more natural choice.
I ran some short tests on a 2000x100000 sparse matrix with 95% sparsity on an RBM with 128 components, and saw a speedup of ~5% when switching from CSC to CSR matrices (I'd expect the speedup to be larger on larger matrices).
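A minimal sketch of the access pattern in question (the shape and density mirror the numbers above; no timings are claimed):

```python
import numpy as np
import scipy.sparse as sp

X = sp.random(2000, 100000, density=0.05, format='csr',
              random_state=np.random.RandomState(42))

# Minibatch training slices rows; on CSR each row is stored contiguously,
# while the same slice on a CSC matrix must gather from every column.
batch = X[0:10]
batch_csc = X.tocsc()[0:10]  # same values, costlier access pattern
```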
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/2457 | 2013-09-18T15:24:08Z | 2013-09-25T15:46:27Z | 2013-09-25T15:46:27Z | 2014-06-13T11:32:15Z | 1,082 | scikit-learn/scikit-learn | 46,071 |
Update llama.cpp.md | diff --git a/docs/llama.cpp.md b/docs/llama.cpp.md
index 68aa1cfa5f..c27ad65aa2 100644
--- a/docs/llama.cpp.md
+++ b/docs/llama.cpp.md
@@ -9,7 +9,10 @@ llama.cpp is the best backend in two important scenarios:
#### Pre-converted
-Download the ggml model directly into your `text-generation-webui/models` folder, making sure that its name contains `ggml` somewhere and ends in `.bin`. It's a single file.
+Download the GGUF or GGML models directly into your `text-generation-webui/models` folder. It will be a single file.
+
+* For GGUF models, make sure its name contains `.gguf`.
+* For GGML models, make sure its name contains `ggml` and ends in `.bin`.
`q4_K_M` quantization is recommended.
| Update docs to reflect the newer GGUF format.
## Checklist:
- [x] I have read the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines).
| https://api.github.com/repos/oobabooga/text-generation-webui/pulls/3702 | 2023-08-27T03:52:05Z | 2023-08-29T20:56:51Z | 2023-08-29T20:56:51Z | 2023-08-29T20:56:51Z | 205 | oobabooga/text-generation-webui | 26,410 |
Add Dask to MapReduce Section | diff --git a/README.md b/README.md
index 40e7bc668..8f53992c6 100644
--- a/README.md
+++ b/README.md
@@ -760,6 +760,7 @@ Inspired by [awesome-php](https://github.com/ziadoz/awesome-php).
* [luigi](https://github.com/spotify/luigi) - A module that helps you build complex pipelines of batch jobs.
* [mrjob](https://github.com/Yelp/mrjob) - Run MapReduce jobs on Hadoop or Amazon Web Services.
* [streamparse](https://github.com/Parsely/streamparse) - Run Python code against real-time streams of data. Integrates with [Apache Storm](http://storm.apache.org/).
+* [dask](https://dask.pydata.org/en/latest/) - A flexible parallel computing library for analytic computing.
## Microsoft Windows
| ## What is this Python project?
A Pythonic Distributed Data Science Framework
https://www.youtube.com/watch?v=RA_2qdipVng
--
Anyone who agrees with this pull request could vote for it by adding a :+1: to it, and usually, the maintainer will merge it when votes reach **20**.
| https://api.github.com/repos/vinta/awesome-python/pulls/936 | 2017-09-15T07:10:21Z | 2017-09-15T14:15:59Z | 2017-09-15T14:15:59Z | 2017-09-15T14:15:59Z | 201 | vinta/awesome-python | 26,936 |
bootstrap: use a proper dependency test for Arch | diff --git a/bootstrap/_arch_common.sh b/bootstrap/_arch_common.sh
index 6895addf4bd..f66067ffb90 100755
--- a/bootstrap/_arch_common.sh
+++ b/bootstrap/_arch_common.sh
@@ -1,29 +1,27 @@
#!/bin/sh
# Tested with:
-# - Manjaro 15.09 (x86_64)
# - ArchLinux (x86_64)
-
-# Both "gcc-multilib" and "gcc" packages provide gcc. If user already has
-# "gcc-multilib" installed, let's stick to their choice
-if pacman -Qc gcc-multilib &>/dev/null
-then
- GCC_PACKAGE="gcc-multilib";
-else
- GCC_PACKAGE="gcc";
-fi
-
+#
# "python-virtualenv" is Python3, but "python2-virtualenv" provides
# only "virtualenv2" binary, not "virtualenv" necessary in
# ./bootstrap/dev/_common_venv.sh
-pacman -S --needed \
- git \
- python2 \
- python-virtualenv \
- "$GCC_PACKAGE" \
- dialog \
- augeas \
- openssl \
- libffi \
- ca-certificates \
- pkg-config \
+
+deps="
+ git
+ python2
+ python-virtualenv
+ gcc
+ dialog
+ augeas
+ openssl
+ libffi
+ ca-certificates
+ pkg-config
+"
+
+missing=$(pacman -T $deps)
+
+if [ "$missing" ]; then
+ pacman -S --needed $missing
+fi
| `pacman -T` exists for this exact purpose; it respects `provides` entries without having to code them into the script manually.
| https://api.github.com/repos/certbot/certbot/pulls/1210 | 2015-10-30T15:14:31Z | 2015-10-30T23:10:46Z | 2015-10-30T23:10:46Z | 2016-05-06T19:21:22Z | 375 | certbot/certbot | 346 |
Set self.num_features to neck_chans if neck_chans > 0 for vision_transformer_sam | diff --git a/timm/models/vision_transformer_sam.py b/timm/models/vision_transformer_sam.py
index c561ea1b22..9beb7b0162 100644
--- a/timm/models/vision_transformer_sam.py
+++ b/timm/models/vision_transformer_sam.py
@@ -434,6 +434,7 @@ def __init__(
),
LayerNorm2d(neck_chans),
)
+ self.num_features = neck_chans
else:
self.neck = nn.Identity()
neck_chans = embed_dim
| Knowing the number of feature channels is crucial when this neural network is used as a backbone in other systems: it allows for better integration with architectures that need the dimensions of the output features. I think that `self.num_features` should be `neck_chans` if `neck_chans > 0` (see the sketch below). | https://api.github.com/repos/huggingface/pytorch-image-models/pulls/1914 | 2023-08-11T05:38:22Z | 2023-08-11T18:23:57Z | 2023-08-11T18:23:57Z | 2023-08-12T06:58:06Z | 128 | huggingface/pytorch-image-models | 16,417 |
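The sketch referenced above; the model name and the 1x1-conv head are illustrative assumptions, not part of this PR:

```python
import torch.nn as nn
import timm

backbone = timm.create_model('samvit_base_patch16', pretrained=False)
# With this change, num_features reflects the neck's output channels,
# so a downstream head can be sized without hard-coding dimensions.
head = nn.Conv2d(backbone.num_features, 10, kernel_size=1)
```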
Complete test coverage db.models.base.refresh_from_db | diff --git a/tests/basic/tests.py b/tests/basic/tests.py
index 342c39ea3403e..b3bb4d02cc6c6 100644
--- a/tests/basic/tests.py
+++ b/tests/basic/tests.py
@@ -649,6 +649,12 @@ def test_unknown_kwarg(self):
with self.assertRaisesMessage(TypeError, msg):
s.refresh_from_db(unknown_kwarg=10)
+ def test_lookup_in_fields(self):
+ s = SelfRef.objects.create()
+ msg = 'Found "__" in fields argument. Relations and transforms are not allowed in fields.'
+ with self.assertRaisesMessage(ValueError, msg):
+ s.refresh_from_db(fields=['foo__bar'])
+
def test_refresh_fk(self):
s1 = SelfRef.objects.create()
s2 = SelfRef.objects.create()
| Try to cover
https://github.com/django/django/blob/e1fc07c047f8e46c2cea0120f44011fc458f1e91/django/db/models/base.py#L595 | https://api.github.com/repos/django/django/pulls/10648 | 2018-11-15T15:47:08Z | 2018-11-15T16:38:48Z | 2018-11-15T16:38:48Z | 2018-11-15T16:38:48Z | 179 | django/django | 51,004 |
DOC: Specify use of google cloud storage for CSVs | diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8bc8470ae7658..4e26ceef0af26 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -89,7 +89,7 @@
----------
filepath_or_buffer : str, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid
- URL schemes include http, ftp, s3, and file. For file URLs, a host is
+ URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.csv.
If you want to pass in a path object, pandas accepts any ``os.PathLike``.
| - [x] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31860 | 2020-02-10T23:18:12Z | 2020-02-11T02:01:49Z | 2020-02-11T02:01:49Z | 2020-02-11T09:59:01Z | 186 | pandas-dev/pandas | 45,379 |
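A minimal usage sketch of the scheme the doc change documents (hypothetical bucket and path; pandas needs the optional gcsfs dependency installed to resolve gs:// URLs):

```python
import pandas as pd

# Requires: pip install gcsfs
df = pd.read_csv("gs://my-bucket/path/to/table.csv")
print(df.head())
```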
Refactor How to rebase PR copy | diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
index c60d1153f29d9..077e57d861701 100644
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -954,65 +954,94 @@ modified the master in your fork, you might loose those changes.
How to rebase PR
================
-A lot of people are unfamiliar with rebase workflow in Git, but we think it is an excellent workflow,
-much better than merge workflow, so here is a short guide for those who would like to learn it. It's really
-worth to spend a few minutes learning it. As opposed to merge workflow, the rebase workflow allows to
-clearly separate your changes from changes of others, puts responsibility of proper rebase on the
-author of the change. It also produces a "single-line" series of commits in master branch which
-makes it much easier to understand what was going on and to find reasons for problems (it is especially
-useful for "bisecting" when looking for a commit that introduced some bugs.
-
-First of all - you can read about rebase workflow here:
-`Merging vs. rebasing <https://www.atlassian.com/git/tutorials/merging-vs-rebasing>`_ - this is an
-excellent article that describes all ins/outs of rebase. I recommend reading it and keeping it as reference.
+A lot of people are unfamiliar with the rebase workflow in Git, but we think it is an excellent workflow,
+providing a better alternative to the merge workflow. We've therefore written a short guide for those who would like to learn it.
+
+As opposed to the merge workflow, the rebase workflow allows us to
+clearly separate your changes from the changes of others. It puts the responsibility of rebasing on the
+author of the change. It also produces a "single-line" series of commits on the master branch. This
+makes it easier to understand what was going on and to find reasons for problems (it is especially
+useful for "bisecting" when looking for a commit that introduced some bugs).
+
+First of all, we suggest you read about the rebase workflow here:
+`Merging vs. rebasing <https://www.atlassian.com/git/tutorials/merging-vs-rebasing>`_. This is an
+excellent article that describes all the ins/outs of the rebase workflow. I recommend keeping it for future reference.
The goal of rebasing your PR on top of ``apache/master`` is to "transplant" your change on top of
the latest changes that are merged by others. It also allows you to fix all the conflicts
-that are result of other people changing the same files as you and merging the changes to ``apache/master``.
+that arise as a result of other people changing the same files as you and merging the changes to ``apache/master``.
Here is how rebase looks in practice:
-1. You need to add Apache remote to your git repository. You can add it as "apache" remote so that
- you can refer to it easily:
+1. You first need to add the Apache project remote to your git repository. In this example, we will be adding the remote
+as "apache" so you can refer to it easily:
+
+* If you use ssh: ``git remote add apache git@github.com:apache/airflow.git``
+* If you use https: ``git remote add apache https://github.com/apache/airflow.git``
+
+2. You then need to make sure that you have the latest master fetched from the ``apache`` repository. You can do this
+ via:
+
+ ``git fetch apache`` (to fetch apache remote)
+
+ ``git fetch --all`` (to fetch all remotes)
+
+3. Assuming that your feature is in a branch in your repository called ``my-branch`` you can easily check
+ what is the base commit you should rebase from by:
+
+ ``git merge-base my-branch apache/master``
+
+ This will print the HASH of the base commit which you should use to rebase your feature from.
+ For example: ``5abce471e0690c6b8d06ca25685b0845c5fd270f``. You can also find this commit hash manually if you want
+ better control.
+
+ Run:
-``git remote add apache git@github.com:apache/airflow.git`` if you use ssh or
-``git remote add apache https://github.com/apache/airflow.git`` if you use https.
+ ``git log``
-Later on
+ And find the first commit that you DO NOT want to "transplant".
-2. You need to make sure that you have the latest master fetched from ``apache`` repository. You can do it
- by ``git fetch apache`` for apache remote or ``git fetch --all`` to fetch all remotes.
+ Performing:
-3. Assuming that your feature is in a branch in your repository called ``my-branch`` you can check easily
- what is the base commit you should rebase from by: ``git merge-base my-branch apache/master``.
- This will print the HASH of the base commit which you should use to rebase your feature from -
- for example: ``5abce471e0690c6b8d06ca25685b0845c5fd270f``. You can also find this commit hash manually -
- if you want better control. Run ``git log`` and find the first commit that you DO NOT want to "transplant".
- ``git rebase HASH`` will "trasplant" all commits after the commit with the HASH.
+ ``git rebase HASH``
-4. Make sure you checked out your branch locally:
+ Will "transplant" all commits after the commit with the HASH.
-``git checkout my-branch``
+4. Check out your feature branch locally via:
+
+ ``git checkout my-branch``
5. Rebase:
- Run: ``git rebase HASH --onto apache/master``
- for example: ``git rebase 5abce471e0690c6b8d06ca25685b0845c5fd270f --onto apache/master``
+
+ ``git rebase HASH --onto apache/master``
+
+ For example:
+
+ ``git rebase 5abce471e0690c6b8d06ca25685b0845c5fd270f --onto apache/master``
6. If you have no conflicts - that's cool. You rebased. You can now run ``git push --force-with-lease`` to
push your changes to your repository. That should trigger the build in our CI if you have a
- Pull Request opened already.
+ Pull Request (PR) opened already.
7. While rebasing you might have conflicts. Read carefully what git tells you when it prints information
about the conflicts. You need to solve the conflicts manually. This is sometimes the most difficult
- part and requires deliberate correcting your code looking what has changed since you developed your
- changes. There are various tools that can help you with that. You can use ``git mergetool`` (and you can
- configure different merge tools with it). Also you can use IntelliJ/PyCharm excellent merge tool.
- When you open project in PyCharm which has conflict you can go to VCS->Git->Resolve Conflicts and there
- you have a very intuitive and helpful merge tool. You can see more information
- about it in `Resolve conflicts <https://www.jetbrains.com/help/idea/resolving-conflicts.html.>`_
-
-8. After you solved conflicts simply run ``git rebase --continue`` and go either to point 6. or 7.
- above depending if you have more commits that cause conflicts in your PR (rebasing applies each
+ part and requires deliberately correcting your code and looking at what has changed since you developed your
+ changes.
+
+ There are various tools that can help you with this. You can use:
+
+ ``git mergetool``
+
+ You can configure different merge tools with it. You can also use IntelliJ/PyCharm's excellent merge tool.
+ When you open a project in PyCharm which has conflicts, you can go to VCS > Git > Resolve Conflicts and there
+ you have a very intuitive and helpful merge tool. For more information, see
+ `Resolve conflicts <https://www.jetbrains.com/help/idea/resolving-conflicts.html.>`_.
+
+8. After you've solved your conflict run:
+
+ ``git rebase --continue``
+
+ And go either to point 6. or 7, depending on whether you have more commits that cause conflicts in your PR (rebasing applies each
commit from your PR one-by-one).
How to communicate
| Brings some corrections/copy style updates/clarifications to the `How to Rebase PR` section.
| https://api.github.com/repos/apache/airflow/pulls/11030 | 2020-09-19T22:13:05Z | 2020-09-20T19:32:58Z | 2020-09-20T19:32:58Z | 2020-11-14T17:13:51Z | 1,959 | apache/airflow | 14,342 |
Backport PR #37657 on branch 1.1.x: BUG: unpickling modifies Block.ndim | diff --git a/doc/source/whatsnew/v1.1.5.rst b/doc/source/whatsnew/v1.1.5.rst
index a122154904996..e0fa68e3b9f80 100644
--- a/doc/source/whatsnew/v1.1.5.rst
+++ b/doc/source/whatsnew/v1.1.5.rst
@@ -24,6 +24,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
- Bug in metadata propagation for ``groupby`` iterator (:issue:`37343`)
+- Bug in indexing on a :class:`Series` with ``CategoricalDtype`` after unpickling (:issue:`37631`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index c9ac9cb0f140a..4c52343d08513 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -284,14 +284,17 @@ def __getstate__(self):
return axes_array, block_values, block_items, extra_state
def __setstate__(self, state):
- def unpickle_block(values, mgr_locs):
- return make_block(values, placement=mgr_locs)
+ def unpickle_block(values, mgr_locs, ndim: int):
+ # TODO(EA2D): ndim would be unnecessary with 2D EAs
+ return make_block(values, placement=mgr_locs, ndim=ndim)
if isinstance(state, tuple) and len(state) >= 4 and "0.14.1" in state[3]:
state = state[3]["0.14.1"]
self.axes = [ensure_index(ax) for ax in state["axes"]]
+ ndim = len(self.axes)
self.blocks = tuple(
- unpickle_block(b["values"], b["mgr_locs"]) for b in state["blocks"]
+ unpickle_block(b["values"], b["mgr_locs"], ndim=ndim)
+ for b in state["blocks"]
)
else:
raise NotImplementedError("pre-0.14.1 pickles are no longer supported")
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index e4d43db7834e3..376091c62619b 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -477,3 +477,15 @@ def test_read_pickle_with_subclass():
tm.assert_series_equal(result[0], expected[0])
assert isinstance(result[1], MyTz)
+
+
+def test_pickle_preserves_block_ndim():
+ # GH#37631
+ ser = pd.Series(list("abc")).astype("category").iloc[[0]]
+ res = tm.round_trip_pickle(ser)
+
+ assert res._mgr.blocks[0].ndim == 1
+ assert res._mgr.blocks[0].shape == (1,)
+
+ # GH#37631 OP issue was about indexing, underlying problem was pickle
+ tm.assert_series_equal(res[[True]], ser)
| Backport PR #37657 | https://api.github.com/repos/pandas-dev/pandas/pulls/37713 | 2020-11-09T11:19:25Z | 2020-11-09T12:49:09Z | 2020-11-09T12:49:09Z | 2020-11-09T12:49:28Z | 705 | pandas-dev/pandas | 45,269 |
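A small repro sketch of the user-visible symptom the backport fixes: round-tripping a short categorical Series through pickle and indexing the result.

```python
import pickle

import pandas as pd

ser = pd.Series(list("abc")).astype("category").iloc[[0]]
res = pickle.loads(pickle.dumps(ser))

# Before the fix, the unpickled block carried the wrong ndim,
# which surfaced as broken indexing on the restored Series.
print(res[[True]])
```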
[beeg] Fix extraction | diff --git a/youtube_dl/extractor/beeg.py b/youtube_dl/extractor/beeg.py
index d5c5822f2b2..bbeae4bacbe 100644
--- a/youtube_dl/extractor/beeg.py
+++ b/youtube_dl/extractor/beeg.py
@@ -9,6 +9,7 @@
from ..utils import (
int_or_none,
parse_iso8601,
+ urljoin,
)
@@ -36,9 +37,11 @@ def _real_extract(self, url):
webpage = self._download_webpage(url, video_id)
cpl_url = self._search_regex(
- r'<script[^>]+src=(["\'])(?P<url>(?:https?:)?//static\.beeg\.com/cpl/\d+\.js.*?)\1',
+ r'<script[^>]+src=(["\'])(?P<url>(?:/static|(?:https?:)?//static\.beeg\.com)/cpl/\d+\.js.*?)\1',
webpage, 'cpl', default=None, group='url')
+ cpl_url = urljoin(url, cpl_url)
+
beeg_version, beeg_salt = [None] * 2
if cpl_url:
@@ -54,7 +57,7 @@ def _real_extract(self, url):
r'beeg_salt\s*=\s*(["\'])(?P<beeg_salt>.+?)\1', cpl, 'beeg salt',
default=None, group='beeg_salt')
- beeg_version = beeg_version or '2000'
+ beeg_version = beeg_version or '2185'
beeg_salt = beeg_salt or 'pmweAkq8lAYKdfWcFCUj0yoVgoPlinamH5UE1CB3H'
video = self._download_json(
| ### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [adding new extractor tutorial](https://github.com/rg3/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/rg3/youtube-dl#youtube-dl-coding-conventions) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [x] Bug fix
- [ ] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
Fix #14275 | https://api.github.com/repos/ytdl-org/youtube-dl/pulls/14279 | 2017-09-20T18:59:27Z | 2017-09-20T21:05:34Z | 2017-09-20T21:05:34Z | 2017-09-20T21:06:26Z | 423 | ytdl-org/youtube-dl | 50,580 |
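A small stdlib illustration of why the fix routes the matched URL through urljoin: the cpl script reference may now be site-relative or scheme-relative (hypothetical page URL; youtube-dl's urljoin helper behaves like the stdlib one for these inputs):

```python
from urllib.parse import urljoin

page_url = "https://beeg.com/1234567"

print(urljoin(page_url, "/static/cpl/2185.js"))
# https://beeg.com/static/cpl/2185.js

print(urljoin(page_url, "//static.beeg.com/cpl/2185.js"))
# https://static.beeg.com/cpl/2185.js
```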
fix indentation | diff --git a/README.md b/README.md
index ecf5623..942f6eb 100644
--- a/README.md
+++ b/README.md
@@ -2569,17 +2569,17 @@ None
```py
def some_recursive_func(a):
if a[0] == 0:
- return
+ return
a[0] -= 1
some_recursive_func(a)
return a
def similar_recursive_func(a):
- if a == 0:
- return a
- a -= 1
- similar_recursive_func(a)
+ if a == 0:
return a
+ a -= 1
+ similar_recursive_func(a)
+ return a
```
**Output:**
| https://api.github.com/repos/satwikkansal/wtfpython/pulls/237 | 2020-10-27T04:19:55Z | 2020-10-27T08:20:27Z | 2020-10-27T08:20:27Z | 2020-10-27T08:21:00Z | 169 | satwikkansal/wtfpython | 25,735 |
|
Add debug and move pdb into secondary links | diff --git a/README.md b/README.md
index a74b2c587..6d426114e 100644
--- a/README.md
+++ b/README.md
@@ -843,8 +843,8 @@ A curated list of awesome Python frameworks, libraries and software. Inspired by
*Libraries for debugging code.*
-* [pdb](https://docs.python.org/2/library/pdb.html) - (Python standard library) The Python Debugger.
-* [ipdb](https://pypi.python.org/pypi/ipdb) - IPython-enabled pdb.
+* [ipdb](https://pypi.python.org/pypi/ipdb) - IPython-enabled [pdb](https://docs.python.org/2/library/pdb.html).
+* [debug](https://pypi.python.org/pypi/debug) - `import debug` to fall into `ipdb` with `see`.
* [winpdb](http://winpdb.org/) - A Platform Independent Python Debugger with GUI.
* [pudb](https://pypi.python.org/pypi/pudb) – A full-screen, console-based Python debugger.
* [pyringe](https://github.com/google/pyringe) - Debugger capable of attaching to and injecting code into Python processes.
| `pdb` from the standard library is not awesome (especially with all these alternatives)
| https://api.github.com/repos/vinta/awesome-python/pulls/265 | 2014-11-19T10:25:12Z | 2014-11-19T20:49:02Z | 2014-11-19T20:49:02Z | 2014-11-19T20:49:02Z | 264 | vinta/awesome-python | 27,279 |
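For context, a sketch of the manual pattern these packages shorten (assumes ipdb is installed; as the new entry describes, the `debug` package reduces this to a single `import debug`):

```python
def buggy(x):
    import ipdb; ipdb.set_trace()  # drops into an IPython-enabled pdb session here
    return x * 2


buggy(21)
```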
Readme: Fix link for faker | diff --git a/README.md b/README.md
index 0f2b89c3d..836806e2e 100644
--- a/README.md
+++ b/README.md
@@ -898,7 +898,7 @@ Inspired by [awesome-php](https://github.com/ziadoz/awesome-php).
* Code Coverage
* [coverage](https://pypi.python.org/pypi/coverage) - Code coverage measurement.
* Fake Data
- * [faker](http://www.joke2k.net/faker/) - A Python package that generates fake data.
+ * [faker](https://github.com/joke2k/faker) - A Python package that generates fake data.
* [fake2db](https://github.com/emirozer/fake2db) - Fake database generator.
* [radar](https://pypi.python.org/pypi/radar) - Generate random datetime / time.
* Error Handler
| Fixed link for the Faker library
Closes https://github.com/vinta/awesome-python/issues/700
| https://api.github.com/repos/vinta/awesome-python/pulls/702 | 2016-08-19T18:03:12Z | 2016-08-20T17:06:42Z | 2016-08-20T17:06:42Z | 2016-08-21T06:54:29Z | 206 | vinta/awesome-python | 27,020 |
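A quick usage sketch of the library behind the corrected link (output values are randomly generated):

```python
from faker import Faker

fake = Faker()
print(fake.name())     # e.g. "Lucy Cechtelar"
print(fake.address())  # a plausible-looking fake address
```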
Fix call patterns that contain as-expression on the kwargs | diff --git a/CHANGES.md b/CHANGES.md
index cb637d94c11..70397adb567 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -12,6 +12,8 @@
(#2686)
- Fix cases that contain multiple top-level as-expressions, like `case 1 as a, 2 as b`
(#2716)
+- Fix call patterns that contain as-expressions with keyword arguments, like
+ `case Foo(bar=baz as quux)` (#2749)
- No longer color diff headers white as it's unreadable in light themed terminals
(#2691)
- Tuple unpacking on `return` and `yield` constructs now implies 3.8+ (#2700)
diff --git a/src/blib2to3/Grammar.txt b/src/blib2to3/Grammar.txt
index 27776a3b65c..cf4799f8abe 100644
--- a/src/blib2to3/Grammar.txt
+++ b/src/blib2to3/Grammar.txt
@@ -187,7 +187,7 @@ arglist: argument (',' argument)* [',']
argument: ( test [comp_for] |
test ':=' test |
test 'as' test |
- test '=' test |
+ test '=' asexpr_test |
'**' test |
'*' test )
diff --git a/tests/data/pattern_matching_extras.py b/tests/data/pattern_matching_extras.py
index b652d2685ec..9f6907f7575 100644
--- a/tests/data/pattern_matching_extras.py
+++ b/tests/data/pattern_matching_extras.py
@@ -103,3 +103,17 @@ def func(match: case, case: match) -> case:
case 4 as d, (5 as e), (6 | 7 as g), *h:
pass
+
+
+match bar1:
+ case Foo(aa=Callable() as aa, bb=int()):
+ print(bar1.aa, bar1.bb)
+ case _:
+ print("no match", "\n")
+
+
+match bar1:
+ case Foo(
+ normal=x, perhaps=[list, {an: d, dict: 1.0}] as y, otherwise=something, q=t as u
+ ):
+ pass
| Fixes #2746 | https://api.github.com/repos/psf/black/pulls/2749 | 2022-01-05T23:28:28Z | 2022-01-07T16:51:36Z | 2022-01-07T16:51:36Z | 2022-01-07T16:51:36Z | 520 | psf/black | 23,730 |
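A self-contained example of the pattern family the grammar fix covers: an as-expression bound inside a keyword argument of a class pattern (requires Python 3.10+; the classes are hypothetical):

```python
from dataclasses import dataclass


@dataclass
class Foo:
    bar: int


match Foo(bar=42):
    case Foo(bar=int() as quux):
        print(quux)  # 42
```

Before the Grammar.txt change above, Black could not parse such `keyword=pattern as name` forms inside call patterns.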
Remove boulder-integration.conf.sh | diff --git a/certbot-nginx/certbot_nginx/_internal/tests/boulder-integration.conf.sh b/certbot-nginx/certbot_nginx/_internal/tests/boulder-integration.conf.sh
deleted file mode 100755
index 35cedf5edbc..00000000000
--- a/certbot-nginx/certbot_nginx/_internal/tests/boulder-integration.conf.sh
+++ /dev/null
@@ -1,107 +0,0 @@
-#!/usr/bin/env bash
-# Based on
-# https://www.exratione.com/2014/03/running-nginx-as-a-non-root-user/
-# https://github.com/exratione/non-root-nginx/blob/9a77f62e5d5cb9c9026fd62eece76b9514011019/nginx.conf
-
-# USAGE: ./boulder-integration.conf.sh /path/to/root cert.key cert.pem >> nginx.conf
-
-ROOT=$1
-CERT_KEY_PATH=$2
-CERT_PATH=$3
-
-cat <<EOF
-# This error log will be written regardless of server scope error_log
-# definitions, so we have to set this here in the main scope.
-#
-# Even doing this, Nginx will still try to create the default error file, and
-# log a non-fatal error when it fails. After that things will work, however.
-error_log $ROOT/error.log;
-
-# The pidfile will be written to /var/run unless this is set.
-pid $ROOT/nginx.pid;
-
-worker_processes 1;
-
-events {
- worker_connections 1024;
-}
-
-http {
- # Set an array of temp, cache and log file options that will otherwise default to
- # restricted locations accessible only to root.
- client_body_temp_path $ROOT/client_body;
- fastcgi_temp_path $ROOT/fastcgi_temp;
- proxy_temp_path $ROOT/proxy_temp;
- #scgi_temp_path $ROOT/scgi_temp;
- #uwsgi_temp_path $ROOT/uwsgi_temp;
- access_log $ROOT/error.log;
-
- # This should be turned off in a Virtualbox VM, as it can cause some
- # interesting issues with data corruption in delivered files.
- sendfile off;
-
- tcp_nopush on;
- tcp_nodelay on;
- keepalive_timeout 65;
- types_hash_max_size 2048;
-
- #include /etc/nginx/mime.types;
- index index.html index.htm index.php;
-
- log_format main '\$remote_addr - \$remote_user [\$time_local] \$status '
- '"\$request" \$body_bytes_sent "\$http_referer" '
- '"\$http_user_agent" "\$http_x_forwarded_for"';
-
- default_type application/octet-stream;
-
- server {
- # IPv4.
- listen 5002 $default_server;
- # IPv6.
- listen [::]:5002 $default_server;
- server_name nginx.wtf nginx2.wtf;
-
- root $ROOT/webroot;
-
- location / {
- # First attempt to serve request as file, then as directory, then fall
- # back to index.html.
- try_files \$uri \$uri/ /index.html;
- }
- }
-
- server {
- listen 5002;
- listen [::]:5002;
- server_name nginx3.wtf;
-
- root $ROOT/webroot;
-
- location /.well-known/ {
- return 404;
- }
-
- return 301 https://\$host\$request_uri;
- }
-
- server {
- listen 8082;
- listen [::]:8082;
- server_name nginx4.wtf nginx5.wtf;
- }
-
- server {
- listen 5002;
- listen [::]:5002;
- listen 5001 ssl;
- listen [::]:5001 ssl;
- if (\$scheme != "https") {
- return 301 https://\$host\$request_uri;
- }
- server_name nginx6.wtf nginx7.wtf;
-
- ssl_certificate ${CERT_PATH};
- ssl_certificate_key ${CERT_KEY_PATH};
- }
-}
-EOF
| Per Alex's suggestion at https://github.com/certbot/certbot/pull/9638#pullrequestreview-1361962084. | https://api.github.com/repos/certbot/certbot/pulls/9640 | 2023-03-28T22:02:39Z | 2023-03-28T22:23:16Z | 2023-03-28T22:23:16Z | 2023-03-28T22:23:18Z | 946 | certbot/certbot | 830 |
Minor typo fixes to the preprocessing tutorial in the docs | diff --git a/docs/source/preprocessing.rst b/docs/source/preprocessing.rst
index 5fe278e49e585..a7a91788f1c2b 100644
--- a/docs/source/preprocessing.rst
+++ b/docs/source/preprocessing.rst
@@ -51,7 +51,7 @@ The tokenizer can decode a list of token ids in a proper sentence:
>>> tokenizer.decode(encoded_input["input_ids"])
"[CLS] Hello, I'm a single sentence! [SEP]"
-As you can see, the tokenizer automatically added some special tokens that the model expect. Not all model need special
+As you can see, the tokenizer automatically added some special tokens that the model expects. Not all models need special
tokens; for instance, if we had used` gtp2-medium` instead of `bert-base-cased` to create our tokenizer, we would have
seen the same sentence as the original one here. You can disable this behavior (which is only advised if you have added
those special tokens yourself) by passing ``add_special_tokens=False``.
@@ -76,7 +76,7 @@ tokenizer:
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1]]}
-We get back a dictionary once again, this time with values being list of list of ints.
+We get back a dictionary once again, this time with values being lists of lists of ints.
If the purpose of sending several sentences at a time to the tokenizer is to build a batch to feed the model, you will
probably want:
@@ -114,7 +114,7 @@ You can do all of this by using the following options when feeding your list of
[1, 1, 1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 0]])}
-It returns a dictionary string to tensor. We can now see what the `attention_mask <glossary.html#attention-mask>`__ is
+It returns a dictionary with string keys and tensor values. We can now see what the `attention_mask <glossary.html#attention-mask>`__ is
all about: it points out which tokens the model should pay attention to and which ones it should not (because they
represent padding in this case).
@@ -127,7 +127,7 @@ can safely ignore it. You can also pass ``verbose=False`` to stop the tokenizer
Preprocessing pairs of sentences
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Sometimes you need to feed pair of sentences to your model. For instance, if you want to classify if two sentences in a
+Sometimes you need to feed a pair of sentences to your model. For instance, if you want to classify if two sentences in a
pair are similar, or for question-answering models, which take a context and a question. For BERT models, the input is
then represented like this: :obj:`[CLS] Sequence A [SEP] Sequence B [SEP]`
@@ -179,7 +179,7 @@ list of first sentences and the list of second sentences:
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
-As we can see, it returns a dictionary with the values being list of lists of ints.
+As we can see, it returns a dictionary where each value is a list of lists of ints.
To double-check what is fed to the model, we can decode each list in `input_ids` one by one:
@@ -286,7 +286,7 @@ predictions in `named entity recognition (NER) <https://en.wikipedia.org/wiki/Na
.. warning::
- Pre-tokenized does not mean your inputs are already tokenized (you wouldn't need to pass them though the tokenizer
+ Pre-tokenized does not mean your inputs are already tokenized (you wouldn't need to pass them through the tokenizer
if that was the case) but just split into words (which is often the first step in subword tokenization algorithms
like BPE).
| Minor typo fixes to the preprocessing tutorial in the docs
# What does this PR do?
Minor typo fixes to the preprocessing tutorial in the docs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
documentation: @sgugger
| https://api.github.com/repos/huggingface/transformers/pulls/8046 | 2020-10-26T10:59:53Z | 2020-10-26T14:22:30Z | 2020-10-26T14:22:30Z | 2020-10-27T16:25:42Z | 991 | huggingface/transformers | 12,475 |
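For reference, a minimal sketch of the call the corrected prose describes: batch tokenization returning a dictionary with string keys and tensor values (assumes transformers with a PyTorch backend).

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
batch = tokenizer(
    ["Hello, I'm a single sentence!", "And another sentence"],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)    # (2, longest_sequence_length)
print(batch["attention_mask"][0])  # 1 for real tokens, 0 for padding
```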
`baichuan_secret_key` use pydantic.types.SecretStr & Add Baichuan tests | diff --git a/libs/langchain/langchain/chat_models/baichuan.py b/libs/langchain/langchain/chat_models/baichuan.py
index 39b14d2d119bde..91be5c38cf302d 100644
--- a/libs/langchain/langchain/chat_models/baichuan.py
+++ b/libs/langchain/langchain/chat_models/baichuan.py
@@ -2,13 +2,13 @@
import json
import logging
import time
-from typing import Any, Dict, Iterator, List, Mapping, Optional, Type
+from typing import Any, Dict, Iterator, List, Mapping, Optional, Type, Union
import requests
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.chat_models.base import BaseChatModel, _generate_from_stream
-from langchain.pydantic_v1 import Field, root_validator
+from langchain.pydantic_v1 import Field, SecretStr, root_validator
from langchain.schema import (
AIMessage,
BaseMessage,
@@ -29,7 +29,7 @@
logger = logging.getLogger(__name__)
-def convert_message_to_dict(message: BaseMessage) -> dict:
+def _convert_message_to_dict(message: BaseMessage) -> dict:
message_dict: Dict[str, Any]
if isinstance(message, ChatMessage):
message_dict = {"role": message.role, "content": message.content}
@@ -69,6 +69,21 @@ def _convert_delta_to_message_chunk(
return default_class(content=content)
+def _to_secret(value: Union[SecretStr, str]) -> SecretStr:
+ """Convert a string to a SecretStr if needed."""
+ if isinstance(value, SecretStr):
+ return value
+ return SecretStr(value)
+
+
+# signature generation
+def _signature(secret_key: SecretStr, payload: Dict[str, Any], timestamp: int) -> str:
+ input_str = secret_key.get_secret_value() + json.dumps(payload) + str(timestamp)
+ md5 = hashlib.md5()
+ md5.update(input_str.encode("utf-8"))
+ return md5.hexdigest()
+
+
class ChatBaichuan(BaseChatModel):
"""Baichuan chat models API by Baichuan Intelligent Technology.
@@ -90,21 +105,25 @@ def lc_serializable(self) -> bool:
"""Baichuan custom endpoints"""
baichuan_api_key: Optional[str] = None
"""Baichuan API Key"""
- baichuan_secret_key: Optional[str] = None
+ baichuan_secret_key: Optional[SecretStr] = None
"""Baichuan Secret Key"""
- streaming: Optional[bool] = False
- """streaming mode."""
- request_timeout: Optional[int] = 60
+ streaming: bool = False
+ """Whether to stream the results or not."""
+ request_timeout: int = 60
"""request timeout for chat http requests"""
model = "Baichuan2-53B"
"""model name of Baichuan, default is `Baichuan2-53B`."""
temperature: float = 0.3
+ """What sampling temperature to use."""
top_k: int = 5
+ """What search sampling control to use."""
top_p: float = 0.85
+ """What probability mass to use."""
with_search_enhance: bool = False
"""Whether to use search enhance, default is False."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
+ """Holds any model parameters valid for API call not explicitly specified."""
class Config:
"""Configuration for this pydantic object."""
@@ -149,10 +168,12 @@ def validate_environment(cls, values: Dict) -> Dict:
"baichuan_api_key",
"BAICHUAN_API_KEY",
)
- values["baichuan_secret_key"] = get_from_dict_or_env(
- values,
- "baichuan_secret_key",
- "BAICHUAN_SECRET_KEY",
+ values["baichuan_secret_key"] = _to_secret(
+ get_from_dict_or_env(
+ values,
+ "baichuan_secret_key",
+ "BAICHUAN_SECRET_KEY",
+ )
)
return values
@@ -169,15 +190,6 @@ def _default_params(self) -> Dict[str, Any]:
return {**normal_params, **self.model_kwargs}
- def _signature(self, data: Dict[str, Any], timestamp: int) -> str:
- if self.baichuan_secret_key is None:
- raise ValueError("Baichuan secret key is not set.")
-
- input_str = self.baichuan_secret_key + json.dumps(data) + str(timestamp)
- md5 = hashlib.md5()
- md5.update(input_str.encode("utf-8"))
- return md5.hexdigest()
-
def _generate(
self,
messages: List[BaseMessage],
@@ -224,6 +236,9 @@ def _stream(
run_manager.on_llm_new_token(chunk.content)
def _chat(self, messages: List[BaseMessage], **kwargs: Any) -> requests.Response:
+ if self.baichuan_secret_key is None:
+ raise ValueError("Baichuan secret key is not set.")
+
parameters = {**self._default_params, **kwargs}
model = parameters.pop("model")
@@ -231,7 +246,7 @@ def _chat(self, messages: List[BaseMessage], **kwargs: Any) -> requests.Response
payload = {
"model": model,
- "messages": [convert_message_to_dict(m) for m in messages],
+ "messages": [_convert_message_to_dict(m) for m in messages],
"parameters": parameters,
}
@@ -249,7 +264,11 @@ def _chat(self, messages: List[BaseMessage], **kwargs: Any) -> requests.Response
"Content-Type": "application/json",
"Authorization": f"Bearer {self.baichuan_api_key}",
"X-BC-Timestamp": str(timestamp),
- "X-BC-Signature": self._signature(payload, timestamp),
+ "X-BC-Signature": _signature(
+ secret_key=self.baichuan_secret_key,
+ payload=payload,
+ timestamp=timestamp,
+ ),
"X-BC-Sign-Algo": "MD5",
**headers,
},
diff --git a/libs/langchain/tests/integration_tests/chat_models/test_baichuan.py b/libs/langchain/tests/integration_tests/chat_models/test_baichuan.py
new file mode 100644
index 00000000000000..58b5dd5aa285fb
--- /dev/null
+++ b/libs/langchain/tests/integration_tests/chat_models/test_baichuan.py
@@ -0,0 +1,40 @@
+from langchain.chat_models.baichuan import ChatBaichuan
+from langchain.schema.messages import AIMessage, HumanMessage
+
+
+def test_chat_baichuan() -> None:
+ chat = ChatBaichuan()
+ message = HumanMessage(content="Hello")
+ response = chat([message])
+ assert isinstance(response, AIMessage)
+ assert isinstance(response.content, str)
+
+
+def test_chat_baichuan_with_model() -> None:
+ chat = ChatBaichuan(model="Baichuan2-13B")
+ message = HumanMessage(content="Hello")
+ response = chat([message])
+ assert isinstance(response, AIMessage)
+ assert isinstance(response.content, str)
+
+
+def test_chat_baichuan_with_temperature() -> None:
+ chat = ChatBaichuan(model="Baichuan2-13B", temperature=1.0)
+ message = HumanMessage(content="Hello")
+ response = chat([message])
+ assert isinstance(response, AIMessage)
+ assert isinstance(response.content, str)
+
+
+def test_chat_baichuan_with_kwargs() -> None:
+ chat = ChatBaichuan()
+ message = HumanMessage(content="Hello")
+ response = chat([message], temperature=0.88, top_p=0.7)
+ assert isinstance(response, AIMessage)
+ assert isinstance(response.content, str)
+
+
+def test_extra_kwargs() -> None:
+ chat = ChatBaichuan(temperature=0.88, top_p=0.7)
+ assert chat.temperature == 0.88
+ assert chat.top_p == 0.7
diff --git a/libs/langchain/tests/unit_tests/chat_models/test_baichuan.py b/libs/langchain/tests/unit_tests/chat_models/test_baichuan.py
new file mode 100644
index 00000000000000..000771b7dc87a7
--- /dev/null
+++ b/libs/langchain/tests/unit_tests/chat_models/test_baichuan.py
@@ -0,0 +1,99 @@
+import pytest
+
+from langchain.chat_models.baichuan import (
+ _convert_delta_to_message_chunk,
+ _convert_dict_to_message,
+ _convert_message_to_dict,
+ _signature,
+)
+from langchain.pydantic_v1 import SecretStr
+from langchain.schema.messages import (
+ AIMessage,
+ AIMessageChunk,
+ ChatMessage,
+ FunctionMessage,
+ HumanMessage,
+ HumanMessageChunk,
+ SystemMessage,
+)
+
+
+def test__convert_message_to_dict_human() -> None:
+ message = HumanMessage(content="foo")
+ result = _convert_message_to_dict(message)
+ expected_output = {"role": "user", "content": "foo"}
+ assert result == expected_output
+
+
+def test__convert_message_to_dict_ai() -> None:
+ message = AIMessage(content="foo")
+ result = _convert_message_to_dict(message)
+ expected_output = {"role": "assistant", "content": "foo"}
+ assert result == expected_output
+
+
+def test__convert_message_to_dict_system() -> None:
+ message = SystemMessage(content="foo")
+ with pytest.raises(TypeError) as e:
+ _convert_message_to_dict(message)
+ assert "Got unknown type" in str(e)
+
+
+def test__convert_message_to_dict_function() -> None:
+ message = FunctionMessage(name="foo", content="bar")
+ with pytest.raises(TypeError) as e:
+ _convert_message_to_dict(message)
+ assert "Got unknown type" in str(e)
+
+
+def test__convert_dict_to_message_human() -> None:
+ message_dict = {"role": "user", "content": "foo"}
+ result = _convert_dict_to_message(message_dict)
+ expected_output = HumanMessage(content="foo")
+ assert result == expected_output
+
+
+def test__convert_dict_to_message_ai() -> None:
+ message_dict = {"role": "assistant", "content": "foo"}
+ result = _convert_dict_to_message(message_dict)
+ expected_output = AIMessage(content="foo")
+ assert result == expected_output
+
+
+def test__convert_dict_to_message_other_role() -> None:
+ message_dict = {"role": "system", "content": "foo"}
+ result = _convert_dict_to_message(message_dict)
+ expected_output = ChatMessage(role="system", content="foo")
+ assert result == expected_output
+
+
+def test__convert_delta_to_message_assistant() -> None:
+ delta = {"role": "assistant", "content": "foo"}
+ result = _convert_delta_to_message_chunk(delta, AIMessageChunk)
+ expected_output = AIMessageChunk(content="foo")
+ assert result == expected_output
+
+
+def test__convert_delta_to_message_human() -> None:
+ delta = {"role": "user", "content": "foo"}
+ result = _convert_delta_to_message_chunk(delta, HumanMessageChunk)
+ expected_output = HumanMessageChunk(content="foo")
+ assert result == expected_output
+
+
+def test__signature() -> None:
+ secret_key = SecretStr("YOUR_SECRET_KEY")
+
+ result = _signature(
+ secret_key=secret_key,
+ payload={
+ "model": "Baichuan2-53B",
+ "messages": [{"role": "user", "content": "Hi"}],
+ },
+ timestamp=1697734335,
+ )
+
+ # The signature was generated by the demo provided by Baichuan.
+ # https://platform.baichuan-ai.com/docs/api#4
+ expected_output = "24a50b2db1648e25a244c67c5ab57d3f"
+ assert result == expected_output
| ### Description
- `baichuan_secret_key` now uses `pydantic.types.SecretStr`
- Add Baichuan tests | https://api.github.com/repos/langchain-ai/langchain/pulls/12031 | 2023-10-19T17:03:25Z | 2023-10-19T18:37:41Z | 2023-10-19T18:37:41Z | 2023-10-19T18:37:41Z | 2,791 | langchain-ai/langchain | 43,559 |
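A small sketch of the user-facing effect of the SecretStr change: the key is masked when the model object is printed or logged (dummy key values; mirrors the constructor path shown in the diff):

```python
from langchain.chat_models.baichuan import ChatBaichuan

chat = ChatBaichuan(
    baichuan_api_key="dummy-api-key",
    baichuan_secret_key="dummy-secret-key",  # coerced to SecretStr by _to_secret
)

print(chat.baichuan_secret_key)                     # '**********'
print(chat.baichuan_secret_key.get_secret_value())  # explicit opt-in to the raw key
```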
Fixed #28265 -- Prevented renderer warning on widgets.render() with **kwargs | diff --git a/django/forms/boundfield.py b/django/forms/boundfield.py
index 771661ba6a71c..83944422ff30b 100644
--- a/django/forms/boundfield.py
+++ b/django/forms/boundfield.py
@@ -6,7 +6,7 @@
from django.utils.deprecation import RemovedInDjango21Warning
from django.utils.functional import cached_property
from django.utils.html import conditional_escape, format_html, html_safe
-from django.utils.inspect import func_supports_parameter
+from django.utils.inspect import func_accepts_kwargs, func_supports_parameter
from django.utils.safestring import mark_safe
from django.utils.translation import gettext_lazy as _
@@ -103,7 +103,7 @@ def as_widget(self, widget=None, attrs=None, only_initial=False):
name = self.html_initial_name
kwargs = {}
- if func_supports_parameter(widget.render, 'renderer'):
+ if func_supports_parameter(widget.render, 'renderer') or func_accepts_kwargs(widget.render):
kwargs['renderer'] = self.form.renderer
else:
warnings.warn(
diff --git a/docs/releases/1.11.3.txt b/docs/releases/1.11.3.txt
index 7eacbd8a36fcc..15200fafd5df0 100644
--- a/docs/releases/1.11.3.txt
+++ b/docs/releases/1.11.3.txt
@@ -9,4 +9,6 @@ Django 1.11.3 fixes several bugs in 1.11.2.
Bugfixes
========
-* ...
+* Removed an incorrect deprecation warning about a missing ``renderer``
+ argument if a ``Widget.render()`` method accepts ``**kwargs``
+ (:ticket:`28265`).
diff --git a/tests/forms_tests/widget_tests/test_render_deprecation.py b/tests/forms_tests/widget_tests/test_render_deprecation.py
new file mode 100644
index 0000000000000..4059f043e3dfe
--- /dev/null
+++ b/tests/forms_tests/widget_tests/test_render_deprecation.py
@@ -0,0 +1,35 @@
+from django import forms
+from django.test import SimpleTestCase
+from django.utils.deprecation import RemovedInDjango21Warning
+
+
+class RenderDeprecationTests(SimpleTestCase):
+ def test_custom_widget_renderer_warning(self):
+ class CustomWidget1(forms.TextInput):
+ def render(self, name, value, attrs=None, renderer=None):
+ return super().render(name, value, attrs, renderer)
+
+ class CustomWidget2(forms.TextInput):
+ def render(self, *args, **kwargs):
+ return super().render(*args, **kwargs)
+
+ class CustomWidget3(forms.TextInput):
+ def render(self, name, value, attrs=None):
+ return super().render(name, value, attrs)
+
+ class MyForm(forms.Form):
+ foo = forms.CharField(widget=CustomWidget1)
+ bar = forms.CharField(widget=CustomWidget2)
+ baz = forms.CharField(widget=CustomWidget3)
+
+ form = MyForm()
+ str(form['foo']) # No warning.
+ str(form['bar']) # No warning.
+ msg = (
+ "Add the `renderer` argument to the render() method of <class "
+ "'forms_tests.widget_tests.test_render_deprecation"
+ ".RenderDeprecationTests.test_custom_widget_renderer_warning.<locals>"
+ ".CustomWidget3'>. It will be mandatory in Django 2.1."
+ )
+ with self.assertRaisesMessage(RemovedInDjango21Warning, msg):
+ str(form['baz'])
| https://api.github.com/repos/django/django/pulls/8584 | 2017-06-02T00:40:21Z | 2017-06-02T13:46:43Z | 2017-06-02T13:46:43Z | 2017-06-03T19:55:33Z | 797 | django/django | 51,481 |
|
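A standalone sketch of the inspection idea the Django fix above relies on: a `render()` that only takes `**kwargs` still accepts `renderer`, so it should not trigger the deprecation warning (mirrors `func_accepts_kwargs` in intent, not Django's exact code):

```python
import inspect


def accepts_kwargs(func):
    # True when the callable declares a **kwargs catch-all parameter.
    return any(
        param.kind == inspect.Parameter.VAR_KEYWORD
        for param in inspect.signature(func).parameters.values()
    )


def render(self, *args, **kwargs):  # like CustomWidget2.render in the new test
    pass


print(accepts_kwargs(render))  # True -> safe to pass renderer=...
```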
Add The Guardian Open Platform | diff --git a/README.md b/README.md
index 3779d8013a..60a565a846 100644
--- a/README.md
+++ b/README.md
@@ -332,6 +332,7 @@ Please note a passing build status indicates all listed APIs are available since
|---|---|---|---|---|
| New York Times | Provides news | `apikey` | Yes | [Go!](https://developer.nytimes.com/) |
| News API | headlines currently published on a range of news sources and blogs | `apikey` | Yes | [Go!](https://newsapi.org/) |
+| The Guardian | Access all the content the Guardian creates, categorised by tags and section | `apikey` | Yes | [Go!](http://open-platform.theguardian.com/) |
### Open Source projects
| [The Open Platform](http://open-platform.theguardian.com/) is a public web service for accessing all the content the Guardian creates, categorized by tags and section.
The Developer key (for any non-commercial usage of the content, such as student dissertations, hackathons, nonprofit app developers) has the following features:
* Up to 12 calls per second
* Up to 5,000 calls per day
* Access to article text
* Access to over 1,900,000 pieces of content
* Free for non-commercial usage
Even though they do have a commercial version, the free Developer version seems quite helpful and enough for most developer needs. | https://api.github.com/repos/public-apis/public-apis/pulls/338 | 2017-04-12T15:10:38Z | 2017-04-15T18:16:09Z | 2017-04-15T18:16:09Z | 2017-04-15T18:16:50Z | 183 | public-apis/public-apis | 35,844 |
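A minimal request sketch against the platform's search endpoint (hypothetical key; endpoint and `api-key` parameter as documented on the linked Open Platform site):

```python
import requests

resp = requests.get(
    "https://content.guardianapis.com/search",
    params={"q": "python", "api-key": "YOUR_DEVELOPER_KEY"},
)
resp.raise_for_status()
for item in resp.json()["response"]["results"]:
    print(item["webTitle"])
```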
Remove reference to #certbot on OFTC | diff --git a/README.rst b/README.rst
index f986703acdf..ab12562df34 100644
--- a/README.rst
+++ b/README.rst
@@ -88,7 +88,7 @@ Main Website: https://certbot.eff.org
Let's Encrypt Website: https://letsencrypt.org
-IRC Channel: #letsencrypt on `Freenode`_ or #certbot on `OFTC`_
+IRC Channel: #letsencrypt on `Freenode`_
Community: https://community.letsencrypt.org
| The #letsencrypt channel on Freenode is much more active, and is the defacto place for questions about Certbot. Users posting questions on #certbot on OFTC are not getting prompt answers. | https://api.github.com/repos/certbot/certbot/pulls/4230 | 2017-02-16T18:24:55Z | 2017-03-01T02:33:41Z | 2017-03-01T02:33:41Z | 2022-02-28T20:01:41Z | 127 | certbot/certbot | 3,341 |
SSL handshake question | diff --git a/README.md b/README.md
index ecece1319..4d82c75f7 100644
--- a/README.md
+++ b/README.md
@@ -290,6 +290,18 @@ Bonus question: what is the RTT of LAN?
<details>
<summary>How does SSL handshake work?</summary><br><b>
+SSL handshake is a process that establishes a secure connection between a client and a server.
+
+1. The client sends a Client Hello message to the server, which includes the client's version of the SSL/TLS protocol, a list of the cryptographic algorithms supported by the client, and a random value.
+2. The server responds with a Server Hello message, which includes the server's version of the SSL/TLS protocol, a random value, and a session ID.
+3. The server sends a Certificate message, which contains the server's certificate.
+4. The server sends a Server Hello Done message, which indicates that the server is done sending messages for the Server Hello phase.
+5. The client sends a Client Key Exchange message, which contains the client's public key.
+6. The client sends a Change Cipher Spec message, which notifies the server that the client is about to send a message encrypted with the new cipher spec.
+7. The client sends an Encrypted Handshake Message, which contains the pre-master secret encrypted with the server's public key.
+8. The server sends a Change Cipher Spec message, which notifies the client that the server is about to send a message encrypted with the new cipher spec.
+9. The server sends an Encrypted Handshake Message, which contains the pre-master secret encrypted with the client's public key.
+10. The client and server can now exchange application data.
</b></details>
<details>
| Adding an answer to the "How does SSL handshake work?" question. | https://api.github.com/repos/bregman-arie/devops-exercises/pulls/326 | 2022-12-20T12:23:28Z | 2022-12-23T08:12:19Z | 2022-12-23T08:12:19Z | 2022-12-23T08:12:19Z | 383 | bregman-arie/devops-exercises | 17,529 |
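A hands-on sketch using Python's ssl module: the library performs the handshake described above, after which the negotiated parameters can be inspected.

```python
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as ssock:
        # wrap_socket() runs the full handshake before returning.
        print(ssock.version())  # e.g. 'TLSv1.3'
        print(ssock.cipher())   # negotiated cipher suite
```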
Add gym-pybullet-drones to third party environment list | diff --git a/docs/environments.md b/docs/environments.md
index c4dd3b43d6d..271b9a61ae1 100644
--- a/docs/environments.md
+++ b/docs/environments.md
@@ -296,4 +296,10 @@ Learn more here: https://github.com/addy1997/Gridworld
An environment for simulating the classical optimal control problem where the thrust of a vertically ascending rocket shall be determined such that it reaches the maximum possible altitude, while being subject to varying aerodynamic drag, gravity and mass.
-Learn more here: https://github.com/osannolik/gym-goddard
\ No newline at end of file
+Learn more here: https://github.com/osannolik/gym-goddard
+
+### gym-pybullet-drones: Learning Quadcopter Control
+
+A simple environment using [PyBullet](http://github.com/bulletphysics/bullet3) to simulate the dynamics of a [Bitcraze Crazyflie 2.x](https://www.bitcraze.io/documentation/hardware/crazyflie_2_1/crazyflie_2_1-datasheet.pdf) nanoquadrotor
+
+Learn more here: https://github.com/JacopoPan/gym-pybullet-drones
\ No newline at end of file
| 3rd party quadcopter dynamics simulator | https://api.github.com/repos/openai/gym/pulls/2024 | 2020-08-12T22:38:51Z | 2020-08-28T20:42:04Z | 2020-08-28T20:42:04Z | 2020-08-28T20:42:04Z | 285 | openai/gym | 5,478 |
[llama] fix dataloader for hybrid parallel | diff --git a/applications/Colossal-LLaMA-2/colossal_llama2/dataset/loader.py b/applications/Colossal-LLaMA-2/colossal_llama2/dataset/loader.py
index a2cfb2ef6264..5115e456349f 100644
--- a/applications/Colossal-LLaMA-2/colossal_llama2/dataset/loader.py
+++ b/applications/Colossal-LLaMA-2/colossal_llama2/dataset/loader.py
@@ -1,20 +1,16 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
-import numpy as np
import os
-import random
from dataclasses import dataclass
-from typing import Dict, List, Union, Sequence, Optional, Iterator, Callable
+from typing import Dict, Iterator, List, Optional, Sequence, Union
import torch
-from datasets import dataset_dict, load_from_disk
+import torch.nn.functional as F
from datasets import Dataset as HFDataset
-from torch.distributed import ProcessGroup
-from torch.distributed.distributed_c10d import _get_default_group
-from torch.utils.data import ConcatDataset, Dataset, DataLoader, DistributedSampler
+from datasets import dataset_dict, load_from_disk
+from torch.utils.data import ConcatDataset, Dataset, DistributedSampler
from transformers.tokenization_utils import PreTrainedTokenizer
-import torch.nn.functional as F
DatasetType = Union[Dataset, ConcatDataset, dataset_dict.Dataset]
PathType = Union[str, os.PathLike]
@@ -171,49 +167,3 @@ def __len__(self) -> int:
def set_start_index(self, start_index: int) -> None:
self.start_index = start_index
-
-
-def setup_distributed_dataloader(
- dataset: DatasetType,
- batch_size: int = 1,
- shuffle: bool = False,
- seed: int = 1024,
- drop_last: bool = False,
- pin_memory: bool = False,
- num_workers: int = 0,
- collate_fn: Callable[[Sequence[Dict[str, Union[str, List[int]]]]], Dict[str, torch.Tensor]] = None,
- process_group: Optional[ProcessGroup] = None,
- **kwargs,
-) -> DataLoader:
- """
- Setup dataloader for distributed training.
- """
- _kwargs = kwargs.copy()
- process_group = process_group or _get_default_group()
- sampler = StatefulDistributedSampler(
- dataset=dataset,
- num_replicas=process_group.size(),
- rank=process_group.rank(),
- shuffle=shuffle,
- seed=seed,
- drop_last=drop_last,
- )
-
- # Deterministic dataloader
- def seed_worker(worker_id: int) -> None:
- worker_seed = seed
- np.random.seed(worker_seed)
- torch.manual_seed(worker_seed)
- random.seed(worker_seed)
-
- return DataLoader(
- dataset=dataset,
- batch_size=batch_size,
- sampler=sampler,
- num_workers=num_workers,
- collate_fn=collate_fn,
- pin_memory=pin_memory,
- drop_last=drop_last,
- worker_init_fn=seed_worker,
- **_kwargs,
- )
diff --git a/applications/Colossal-LLaMA-2/train.py b/applications/Colossal-LLaMA-2/train.py
index 92863e8e4bba..4aecd46d7e42 100644
--- a/applications/Colossal-LLaMA-2/train.py
+++ b/applications/Colossal-LLaMA-2/train.py
@@ -16,7 +16,6 @@
DataCollatorForSupervisedDataset,
StatefulDistributedSampler,
load_tokenized_dataset,
- setup_distributed_dataloader,
)
from colossal_llama2.utils.ckpt_io import load_checkpoint, save_checkpoint
from colossal_llama2.utils.flash_attention_patch import replace_with_flash_attention
@@ -194,12 +193,13 @@ def main() -> None:
dataset = load_tokenized_dataset(dataset_paths=args.dataset, mode="train")
data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer, max_length=args.max_length)
- dataloader = setup_distributed_dataloader(
+ dataloader = plugin.prepare_dataloader(
dataset=dataset,
batch_size=args.micro_batch_size,
shuffle=True,
drop_last=True,
collate_fn=data_collator,
+ distributed_sampler_cls=StatefulDistributedSampler,
)
coordinator.print_on_master(
f"Max CUDA memory after data loader: {torch.cuda.max_memory_allocated() / 1024 ** 2:.2f} MB"
diff --git a/applications/Colossal-LLaMA-2/train_sft.py b/applications/Colossal-LLaMA-2/train_sft.py
index fd9e1cd3e747..27a7ce096a3d 100644
--- a/applications/Colossal-LLaMA-2/train_sft.py
+++ b/applications/Colossal-LLaMA-2/train_sft.py
@@ -16,7 +16,6 @@
DataCollatorForSupervisedDataset,
StatefulDistributedSampler,
load_tokenized_dataset,
- setup_distributed_dataloader,
)
from colossal_llama2.utils.ckpt_io import load_checkpoint, save_checkpoint
from colossal_llama2.utils.flash_attention_patch import replace_with_flash_attention
@@ -203,12 +202,13 @@ def main() -> None:
dataset = load_tokenized_dataset(dataset_paths=args.dataset, mode="train")
data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer, max_length=args.max_length)
- dataloader = setup_distributed_dataloader(
+ dataloader = plugin.prepare_dataloader(
dataset=dataset,
batch_size=args.micro_batch_size,
shuffle=True,
drop_last=True,
collate_fn=data_collator,
+ distributed_sampler_cls=StatefulDistributedSampler,
)
coordinator.print_on_master(
f"Max CUDA memory after data loader: {torch.cuda.max_memory_allocated() / 1024 ** 2:.2f} MB"
diff --git a/colossalai/booster/plugin/dp_plugin_base.py b/colossalai/booster/plugin/dp_plugin_base.py
index d2dd00453e32..27285f95ce52 100644
--- a/colossalai/booster/plugin/dp_plugin_base.py
+++ b/colossalai/booster/plugin/dp_plugin_base.py
@@ -21,7 +21,16 @@ def __init__(self) -> None:
self.world_size = dist.get_world_size()
def prepare_dataloader(
- self, dataset, batch_size, shuffle=False, seed=1024, drop_last=False, pin_memory=False, num_workers=0, **kwargs
+ self,
+ dataset,
+ batch_size,
+ shuffle=False,
+ seed=1024,
+ drop_last=False,
+ pin_memory=False,
+ num_workers=0,
+ distributed_sampler_cls=None,
+ **kwargs,
):
r"""
Prepare a dataloader for distributed training. The dataloader will be wrapped by
@@ -45,7 +54,8 @@ def prepare_dataloader(
:class:`torch.utils.data.DataLoader`: A DataLoader used for training or testing.
"""
_kwargs = kwargs.copy()
- sampler = DistributedSampler(dataset, num_replicas=self.world_size, rank=self.rank, shuffle=shuffle)
+ distributed_sampler_cls = distributed_sampler_cls or DistributedSampler
+ sampler = distributed_sampler_cls(dataset, num_replicas=self.world_size, rank=self.rank, shuffle=shuffle)
# Deterministic dataloader
def seed_worker(worker_id):
diff --git a/colossalai/booster/plugin/gemini_plugin.py b/colossalai/booster/plugin/gemini_plugin.py
index d14109dd43e5..95b96bbfd9ed 100644
--- a/colossalai/booster/plugin/gemini_plugin.py
+++ b/colossalai/booster/plugin/gemini_plugin.py
@@ -456,7 +456,16 @@ def supported_devices(self) -> List[str]:
return ["cuda", "npu"]
def prepare_dataloader(
- self, dataset, batch_size, shuffle=False, seed=1024, drop_last=False, pin_memory=False, num_workers=0, **kwargs
+ self,
+ dataset,
+ batch_size,
+ shuffle=False,
+ seed=1024,
+ drop_last=False,
+ pin_memory=False,
+ num_workers=0,
+ distributed_sampler_cls=None,
+ **kwargs,
):
r"""
Prepare a dataloader for distributed training. The dataloader will be wrapped by
@@ -484,7 +493,8 @@ def prepare_dataloader(
extra_dp_world_size = self.pg_mesh.size(DP_AXIS)
zero_rank = self.pg_mesh.coordinate(ZERO_AXIS)
extra_dp_rank = self.pg_mesh.coordinate(DP_AXIS)
- sampler = DistributedSampler(
+ distributed_sampler_cls = distributed_sampler_cls or DistributedSampler
+ sampler = distributed_sampler_cls(
dataset,
num_replicas=zero_world_size * extra_dp_world_size,
rank=zero_rank * extra_dp_world_size + extra_dp_rank,
diff --git a/colossalai/booster/plugin/hybrid_parallel_plugin.py b/colossalai/booster/plugin/hybrid_parallel_plugin.py
index 943e137e6f29..da67e6b41fbf 100644
--- a/colossalai/booster/plugin/hybrid_parallel_plugin.py
+++ b/colossalai/booster/plugin/hybrid_parallel_plugin.py
@@ -1205,7 +1205,16 @@ def execute_pipeline(
return outputs
def prepare_dataloader(
- self, dataset, batch_size, shuffle=False, seed=1024, drop_last=False, pin_memory=False, num_workers=0, **kwargs
+ self,
+ dataset,
+ batch_size,
+ shuffle=False,
+ seed=1024,
+ drop_last=False,
+ pin_memory=False,
+ num_workers=0,
+ distributed_sampler_cls=None,
+ **kwargs,
):
r"""
Prepare a dataloader for distributed training. The dataloader will be wrapped by
@@ -1229,7 +1238,8 @@ def prepare_dataloader(
:class:`torch.utils.data.DataLoader`: A DataLoader used for training or testing.
"""
_kwargs = kwargs.copy()
- sampler = DistributedSampler(
+ distributed_sampler_cls = distributed_sampler_cls or DistributedSampler
+ sampler = distributed_sampler_cls(
dataset, num_replicas=self.pg_mesh.size(DP_AXIS), rank=self.pg_mesh.coordinate(DP_AXIS), shuffle=shuffle
)
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/5358 | 2024-02-05T04:39:08Z | 2024-02-05T07:14:56Z | 2024-02-05T07:14:56Z | 2024-02-05T07:14:58Z | 2,386 | hpcaitech/ColossalAI | 11,780 |
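A toy, single-process illustration of the sampler contract the new `distributed_sampler_cls` hook expects, simplified from the StatefulDistributedSampler used in the training scripts:

```python
from torch.utils.data import DistributedSampler


class StatefulSampler(DistributedSampler):
    """Toy resume-aware sampler; the real one lives in colossal_llama2.dataset.loader."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.start_index = 0

    def set_start_index(self, start_index: int) -> None:
        self.start_index = start_index

    def __iter__(self):
        # Skip samples already consumed before a checkpoint, then continue.
        return iter(list(super().__iter__())[self.start_index:])


sampler = StatefulSampler(list(range(10)), num_replicas=1, rank=0, shuffle=False)
sampler.set_start_index(4)
print(list(sampler))  # [4, 5, 6, 7, 8, 9]
```

A plugin's `prepare_dataloader(..., distributed_sampler_cls=StatefulSampler)` would then build its sampler from this class instead of the plain `DistributedSampler`.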
use of multiple chocolatey package names | diff --git a/lib/ansible/modules/windows/win_chocolatey.py b/lib/ansible/modules/windows/win_chocolatey.py
index ab90ed6f999220..cf54f16c015d15 100644
--- a/lib/ansible/modules/windows/win_chocolatey.py
+++ b/lib/ansible/modules/windows/win_chocolatey.py
@@ -38,6 +38,7 @@
name:
description:
- Name of the package to be installed.
+ - This must be a single package name.
required: yes
state:
description:
@@ -147,4 +148,20 @@
win_chocolatey:
name: git
state: absent
+
+- name: install multiple packages
+ win_chocolatey:
+ name: "{{ item }}"
+ state: absent
+ with_items:
+ - pscx
+ - windirstat
+
+- name: uninstall multiple packages
+ win_chocolatey:
+ name: "{{ item }}"
+ state: absent
+ with_items:
+ - pscx
+ - windirstat
'''
| It might be helpful to users to clarify whether/when `name` must specify a single package.
Users who are familiar with chocolatey may be accustomed to installing multiple packages in a single invocation of 'choco install'.
I believe win_chocolatey currently accepts multiple package names when `state` is `latest` or `present`.
For instance, this appears to work currently:
- win_chocolatey:
    name: >-
      pscx
      windirstat
    state: latest
However, when `state` is `absent`, uninstall is not performed if multiple packages are specified.
The chocolatey.log output suggests that chocolatey treats the multiple packages as the 'exact' name of a single package:
2017-08-10 19:04:04,087 2424 [DEBUG] - Command line: "C:\ProgramData\chocolatey\choco.exe" list --local-only --exact pscx windirstat
2017-08-10 19:04:04,087 2424 [DEBUG] - Received arguments: list --local-only --exact pscx windirstat
I find the current behavior helpful in terms of accepting multiple package names, even if uninstall must be treated differently.
It might be helpful to show an example of how multiple uninstalls can be handled by looping over them.
- win_chocolatey:
    name: "{{ item }}"
    state: absent
  with_items:
    - pscx
    - windirstat
| https://api.github.com/repos/ansible/ansible/pulls/28046 | 2017-08-11T00:58:45Z | 2017-08-11T02:36:15Z | 2017-08-11T02:36:15Z | 2019-04-26T22:14:38Z | 243 | ansible/ansible | 49,549 |
catch load style.csv error | diff --git a/modules/styles.py b/modules/styles.py
index 9edcc7e4447..60bd8a7fb01 100644
--- a/modules/styles.py
+++ b/modules/styles.py
@@ -1,4 +1,5 @@
from pathlib import Path
+from modules import errors
import csv
import os
import typing
@@ -128,19 +129,22 @@ def reload(self):
self.load_from_csv(styles_file)
def load_from_csv(self, path: str | Path):
- with open(path, "r", encoding="utf-8-sig", newline="") as file:
- reader = csv.DictReader(file, skipinitialspace=True)
- for row in reader:
- # Ignore empty rows or rows starting with a comment
- if not row or row["name"].startswith("#"):
- continue
- # Support loading old CSV format with "name, text"-columns
- prompt = row["prompt"] if "prompt" in row else row["text"]
- negative_prompt = row.get("negative_prompt", "")
- # Add style to database
- self.styles[row["name"]] = PromptStyle(
- row["name"], prompt, negative_prompt, str(path)
- )
+ try:
+ with open(path, "r", encoding="utf-8-sig", newline="") as file:
+ reader = csv.DictReader(file, skipinitialspace=True)
+ for row in reader:
+ # Ignore empty rows or rows starting with a comment
+ if not row or row["name"].startswith("#"):
+ continue
+ # Support loading old CSV format with "name, text"-columns
+ prompt = row["prompt"] if "prompt" in row else row["text"]
+ negative_prompt = row.get("negative_prompt", "")
+ # Add style to database
+ self.styles[row["name"]] = PromptStyle(
+ row["name"], prompt, negative_prompt, str(path)
+ )
+ except Exception:
+ errors.report(f'Error loading styles from {path}: ', exc_info=True)
def get_style_paths(self) -> set:
"""Returns a set of all distinct paths of files that styles are loaded from."""
| ## Description
Catch errors when loading styles.csv.
- Prevents https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/14804, where an error while loading styles.csv stopped the webui from launching.
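
For context, a minimal sketch (the file contents here are assumed for illustration, not taken from the linked discussion) of how an unguarded `csv.DictReader` loop can abort startup when a styles file is malformed:

```python
# Minimal repro sketch: a styles.csv without a "name" header makes
# row["name"] raise KeyError, which previously propagated out of reload()
# and stopped the webui from launching; the patch reports it instead.
import csv
import io

broken_styles = io.StringIO("prompt\nmasterpiece, best quality\n")  # no "name" column

try:
    for row in csv.DictReader(broken_styles, skipinitialspace=True):
        if not row or row["name"].startswith("#"):  # raises KeyError: 'name'
            continue
except Exception as exc:
    print(f"Error loading styles: {exc!r}")  # the patched code calls errors.report() here
```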
## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/14814 | 2024-02-01T06:43:41Z | 2024-02-01T19:02:28Z | 2024-02-01T19:02:28Z | 2024-02-01T19:02:31Z | 484 | AUTOMATIC1111/stable-diffusion-webui | 40,516 |
Boilerplate OAuth website <=> inference | diff --git a/.gitignore b/.gitignore
index 79dbba7c1c..ba8212f101 100644
--- a/.gitignore
+++ b/.gitignore
@@ -17,3 +17,5 @@ backend/openapi.json
# edit docs using obsidian.md, these files should not appear in the repo
.obsidian/
.pytest_cache/
+
+/docker-compose.override.yml
diff --git a/docker-compose.yaml b/docker-compose.yaml
index 14a641f77c..b009aa4546 100644
--- a/docker-compose.yaml
+++ b/docker-compose.yaml
@@ -181,6 +181,7 @@ services:
POSTGRES_DB: oasst_inference
DEBUG_API_KEYS: "0000"
ALLOW_DEBUG_AUTH: "True"
+ AUTH_CALLBACK_ROOT: "http://localhost:3000/api/inference_auth"
volumes:
- "./oasst-shared:/opt/inference/lib/oasst-shared"
- "./inference/server:/opt/inference/server"
diff --git a/inference/server/oasst_inference_server/routes/auth.py b/inference/server/oasst_inference_server/routes/auth.py
index 2580f9374a..05b9889dff 100644
--- a/inference/server/oasst_inference_server/routes/auth.py
+++ b/inference/server/oasst_inference_server/routes/auth.py
@@ -13,10 +13,15 @@
)
+@router.get("/check")
+async def check_user_auth(user_id: str = Depends(auth.get_current_user_id)):
+ return user_id
+
+
@router.get("/login/discord")
-async def login_discord():
- redirect_uri = f"{settings.api_root}/auth/callback/discord"
- auth_url = f"https://discord.com/api/oauth2/authorize?client_id={settings.auth_discord_client_id}&redirect_uri={redirect_uri}&response_type=code&scope=identify"
+async def login_discord(state: str = r"{}"):
+ redirect_uri = f"{settings.auth_callback_root}/discord"
+ auth_url = f"https://discord.com/api/oauth2/authorize?client_id={settings.auth_discord_client_id}&redirect_uri={redirect_uri}&response_type=code&scope=identify&state={state}"
raise HTTPException(status_code=302, headers={"location": auth_url})
@@ -25,7 +30,7 @@ async def callback_discord(
code: str,
db: database.AsyncSession = Depends(deps.create_session),
):
- redirect_uri = f"{settings.api_root}/auth/callback/discord"
+ redirect_uri = f"{settings.auth_callback_root}/discord"
async with aiohttp.ClientSession(raise_for_status=True) as session:
# Exchange the auth code for a Discord access token
@@ -77,9 +82,9 @@ async def callback_discord(
@router.get("/login/github")
-async def login_github():
- redirect_uri = f"{settings.api_root}/auth/callback/github"
- auth_url = f"https://github.com/login/oauth/authorize?client_id={settings.auth_github_client_id}&redirect_uri={redirect_uri}"
+async def login_github(state: str = r"{}"):
+ redirect_uri = f"{settings.auth_callback_root}/github"
+ auth_url = f"https://github.com/login/oauth/authorize?client_id={settings.auth_github_client_id}&redirect_uri={redirect_uri}&state={state}"
raise HTTPException(status_code=302, headers={"location": auth_url})
@@ -88,7 +93,7 @@ async def callback_github(
code: str,
db: database.AsyncSession = Depends(deps.create_session),
):
- redirect_uri = f"{settings.api_root}/auth/callback/github"
+ redirect_uri = f"{settings.auth_callback_root}/github"
async with aiohttp.ClientSession(raise_for_status=True) as session:
# Exchange the auth code for a GitHub access token
@@ -116,7 +121,7 @@ async def callback_github(
user_response_json = await user_response.json()
try:
- github_id = user_response_json["id"]
+ github_id = str(user_response_json["id"])
github_username = user_response_json["login"]
except KeyError:
raise HTTPException(status_code=400, detail="Invalid user info response from GitHub")
diff --git a/inference/server/oasst_inference_server/settings.py b/inference/server/oasst_inference_server/settings.py
index 4b5a65046f..ee53993750 100644
--- a/inference/server/oasst_inference_server/settings.py
+++ b/inference/server/oasst_inference_server/settings.py
@@ -58,7 +58,11 @@ def debug_api_keys_list(self) -> list[str]:
compliance_check_interval: int = 60
compliance_check_timeout: int = 60
- api_root: str = "https://inference.prod.open-assistant.io"
+ # this is the URL which will be redirected to when authenticating with oauth2
+ # we decided on letting the nextjs / website backend handle the token at first
+ # and then proxy this information back to the inference server
+ # in short: this should refer to the website, not to this server
+ auth_callback_root: str = "https://open-assistant.io/api/inference_auth"
allow_debug_auth: bool = False
diff --git a/website/src/lib/oasst_inference_client.ts b/website/src/lib/oasst_inference_client.ts
index 3064074934..d20bca6514 100644
--- a/website/src/lib/oasst_inference_client.ts
+++ b/website/src/lib/oasst_inference_client.ts
@@ -3,7 +3,7 @@ import axios, { AxiosRequestConfig } from "axios";
import Cookies from "cookies";
import type { NextApiRequest, NextApiResponse } from "next";
import { JWT } from "next-auth/jwt";
-import { ChatItem, InferenceDebugTokenResponse, InferenceMessage, InferencePostMessageResponse } from "src/types/Chat";
+import { ChatItem, InferenceTokenResponse, InferenceMessage, InferencePostMessageResponse } from "src/types/Chat";
// TODO: this class could be structured better
export class OasstInferenceClient {
@@ -41,7 +41,7 @@ export class OasstInferenceClient {
// TODO: we have not decided on a format for the user yet, this is here for debug only
const res = await fetch(process.env.INFERENCE_SERVER_HOST + `/auth/login/debug?username=${this.userTokenSub}`);
- const inferenceResponse: InferenceDebugTokenResponse = await res.json();
+ const inferenceResponse: InferenceTokenResponse = await res.json();
this.inferenceToken = inferenceResponse.access_token;
this.cookies.set("inference_token", this.inferenceToken, {
maxAge: 1000 * 60 * 5, // 5 minutes
diff --git a/website/src/pages/api/inference_auth/[...parts].ts b/website/src/pages/api/inference_auth/[...parts].ts
new file mode 100644
index 0000000000..24f4af7306
--- /dev/null
+++ b/website/src/pages/api/inference_auth/[...parts].ts
@@ -0,0 +1,16 @@
+import axios from "axios";
+import type { NextApiRequest, NextApiResponse } from "next";
+import { InferenceTokenResponse } from "src/types/Chat";
+
+export default async function inferenceAuthCallback(req: NextApiRequest, res: NextApiResponse) {
+ const { code, parts } = req.query;
+ console.log(req.query);
+ if (!Array.isArray(parts) || parts.length !== 1) {
+ return res.status(400).end();
+ }
+ const [provider] = parts as string[];
+ const url = process.env.INFERENCE_SERVER_HOST + `/auth/callback/${provider}?code=${code}`;
+ const { data } = await axios<InferenceTokenResponse>(url);
+ console.log(data);
+ return res.send(data);
+}
diff --git a/website/src/pages/team.tsx b/website/src/pages/team.tsx
index 18a9ec7bc5..b7d96494cd 100644
--- a/website/src/pages/team.tsx
+++ b/website/src/pages/team.tsx
@@ -2,8 +2,8 @@ export { getDefaultStaticProps as getStaticProps } from "src/lib/default_static_
import { Avatar, Badge, Box, Card, CardBody, Flex, Grid, Heading, Text } from "@chakra-ui/react";
import { Github } from "lucide-react";
import Head from "next/head";
-import { useTranslation } from "next-i18next";
import Link from "next/link";
+import { useTranslation } from "next-i18next";
import React from "react";
import { getTransparentHeaderLayout } from "src/components/Layout";
@@ -16,7 +16,7 @@ const Team = () => {
<>
<Head>
<title>{t("who_are_we")} - Open Assistant</title>
- <meta name="description" content="The team begind Open Assistant" />
+ <meta name="description" content="The team behind Open Assistant" />
</Head>
<Box fontFamily="Inter" p="6" className="oa-basic-theme">
<Box className="max-w-6xl mx-auto">
diff --git a/website/src/types/Chat.ts b/website/src/types/Chat.ts
index 8592b1a076..afde2327bd 100644
--- a/website/src/types/Chat.ts
+++ b/website/src/types/Chat.ts
@@ -1,4 +1,4 @@
-export interface InferenceDebugTokenResponse {
+export interface InferenceTokenResponse {
access_token: string;
token_type: string;
}
| Refs #2101
Use the website's backend as the callback URL for login with Discord and GitHub, so that the website also knows the token.
To use this, you need to configure your Discord OAuth provider to use the URL `http://localhost:3000/api/inference_auth/discord`. I used the same provider I use for logging in to the website and it worked like a charm.
GitHub also: `http://localhost:3000/api/inference_auth` or `http://localhost:3000/api/inference_auth/github`; both should work since GitHub allows sub-paths.
You need to set these 4 env variables for the inference server:
```
AUTH_DISCORD_CLIENT_ID
AUTH_DISCORD_CLIENT_SECRET
AUTH_GITHUB_CLIENT_ID
AUTH_GITHUB_CLIENT_SECRET
```
Then you can navigate to:
```
localhost:8000/auth/login/github
localhost:8000/auth/login/discord
```
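
As a quick sanity check of the token round-trip, a hypothetical smoke test (not part of this PR; it assumes the debug-auth path is enabled as in docker-compose above and that the token is sent as a Bearer header):

```python
# Hypothetical smoke test: fetch a debug token from the inference server and
# verify it against the new /auth/check endpoint. The server address and the
# Bearer header scheme are assumptions.
import requests

base = "http://localhost:8000"
token = requests.get(f"{base}/auth/login/debug", params={"username": "dev"}).json()["access_token"]
resp = requests.get(f"{base}/auth/check", headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.json())  # expect 200 and the user id
```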
| https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/2127 | 2023-03-19T18:17:08Z | 2023-03-20T06:10:13Z | 2023-03-20T06:10:13Z | 2023-03-20T06:10:14Z | 2,162 | LAION-AI/Open-Assistant | 37,429 |
use pretrained_model for eval | diff --git a/doc/doc_ch/detection.md b/doc/doc_ch/detection.md
index a8dee65a22..671fda998d 100644
--- a/doc/doc_ch/detection.md
+++ b/doc/doc_ch/detection.md
@@ -108,9 +108,9 @@ PaddleOCR computes three OCR detection metrics: Precision, Recall
Run the following code to compute the evaluation metrics from the test-set detection result file specified by `save_res_path` in the config file `det_db_mv3.yml`.
When evaluating, the post-processing parameters are set to `box_thresh=0.5` and `unclip_ratio=1.5`; when training on different datasets or with different models, these two parameters can be tuned for better results.
-During training, model parameters are saved under the `Global.save_model_dir` directory by default. When evaluating metrics, `Global.checkpoints` needs to be set to point to the saved parameter file.
+During training, model parameters are saved under the `Global.save_model_dir` directory by default. When evaluating metrics, `Global.pretrained_model` needs to be set to point to the saved parameter file.
```shell
-python3 tools/eval.py -c configs/det/det_mv3_db.yml -o Global.checkpoints="{path/to/weights}/best_accuracy" PostProcess.box_thresh=0.5 PostProcess.unclip_ratio=1.5
+python3 tools/eval.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model="{path/to/weights}/best_accuracy" PostProcess.box_thresh=0.5 PostProcess.unclip_ratio=1.5
```
diff --git a/doc/doc_ch/recognition.md b/doc/doc_ch/recognition.md
index 028a248fe6..faa015b754 100644
--- a/doc/doc_ch/recognition.md
+++ b/doc/doc_ch/recognition.md
@@ -420,8 +420,8 @@ Eval:
The evaluation dataset can be set by modifying the `label_file_path` setting under Eval in `configs/rec/rec_icdar15_train.yml`.
```
-# GPU evaluation, Global.checkpoints is the weight to be evaluated
-python3 -m paddle.distributed.launch --gpus '0' tools/eval.py -c configs/rec/rec_icdar15_train.yml -o Global.checkpoints={path/to/weights}/best_accuracy
+# GPU evaluation, Global.pretrained_model is the weight to be evaluated
+python3 -m paddle.distributed.launch --gpus '0' tools/eval.py -c configs/rec/rec_icdar15_train.yml -o Global.pretrained_model={path/to/weights}/best_accuracy
```
<a name="预测"></a>
@@ -432,7 +432,7 @@ python3 -m paddle.distributed.launch --gpus '0' tools/eval.py -c configs/rec/rec
Using a model trained with PaddleOCR, you can quickly run prediction with the following script.
-The default prediction images are stored in `infer_img`, and the weights are specified via `-o Global.checkpoints`:
+The default prediction images are stored in `infer_img`, and the weights are specified via `-o Global.pretrained_model`:
```
# Predict English results
diff --git a/doc/doc_en/detection_en.md b/doc/doc_en/detection_en.md
index 3ee9092cc6..897f5b3b09 100644
--- a/doc/doc_en/detection_en.md
+++ b/doc/doc_en/detection_en.md
@@ -101,9 +101,9 @@ Run the following code to calculate the evaluation indicators. The result will b
When evaluating, set post-processing parameters `box_thresh=0.6`, `unclip_ratio=1.5`. If you use different datasets, different models for training, these two parameters should be adjusted for better result.
-The model parameters during training are saved in the `Global.save_model_dir` directory by default. When evaluating indicators, you need to set `Global.checkpoints` to point to the saved parameter file.
+The model parameters during training are saved in the `Global.save_model_dir` directory by default. When evaluating indicators, you need to set `Global.pretrained_model` to point to the saved parameter file.
```shell
-python3 tools/eval.py -c configs/det/det_mv3_db.yml -o Global.checkpoints="{path/to/weights}/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
+python3 tools/eval.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model="{path/to/weights}/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```
diff --git a/doc/doc_en/recognition_en.md b/doc/doc_en/recognition_en.md
index 73157f864c..67eece7e85 100644
--- a/doc/doc_en/recognition_en.md
+++ b/doc/doc_en/recognition_en.md
@@ -425,8 +425,8 @@ Eval:
The evaluation dataset can be set by modifying the `Eval.dataset.label_file_list` field in the `configs/rec/rec_icdar15_train.yml` file.
```
-# GPU evaluation, Global.checkpoints is the weight to be tested
-python3 -m paddle.distributed.launch --gpus '0' tools/eval.py -c configs/rec/rec_icdar15_train.yml -o Global.checkpoints={path/to/weights}/best_accuracy
+# GPU evaluation, Global.pretrained_model is the weight to be tested
+python3 -m paddle.distributed.launch --gpus '0' tools/eval.py -c configs/rec/rec_icdar15_train.yml -o Global.pretrained_model={path/to/weights}/best_accuracy
```
<a name="PREDICTION"></a>
@@ -437,7 +437,7 @@ python3 -m paddle.distributed.launch --gpus '0' tools/eval.py -c configs/rec/rec
Using the model trained by paddleocr, you can quickly get prediction through the following script.
-The default prediction picture is stored in `infer_img`, and the weight is specified via `-o Global.checkpoints`:
+The default prediction picture is stored in `infer_img`, and the weight is specified via `-o Global.pretrained_model`:
```
# Predict English results
| att | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/2334 | 2021-03-25T03:05:08Z | 2021-03-26T08:33:11Z | 2021-03-26T08:33:11Z | 2021-03-26T08:33:11Z | 1,351 | PaddlePaddle/PaddleOCR | 42,084 |
fix(trends): Reset error when requerying api | diff --git a/src/sentry/static/sentry/app/utils/discover/genericDiscoverQuery.tsx b/src/sentry/static/sentry/app/utils/discover/genericDiscoverQuery.tsx
index f6e5e1cd64782..683b30f01b0e6 100644
--- a/src/sentry/static/sentry/app/utils/discover/genericDiscoverQuery.tsx
+++ b/src/sentry/static/sentry/app/utils/discover/genericDiscoverQuery.tsx
@@ -152,6 +152,8 @@ class GenericDiscoverQuery<T, P> extends React.Component<Props<T, P>, State<T>>
this.setState({isLoading: true, tableFetchID});
+ setError?.(undefined);
+
if (limit) {
apiPayload.per_page = limit;
}
| ### Summary
Currently the error doesn't disappear when you change your API call and it succeeds. This change wipes the error state whenever another request is made.
Refs VIS-322 | https://api.github.com/repos/getsentry/sentry/pulls/21793 | 2020-11-04T20:24:51Z | 2020-11-05T19:16:39Z | 2020-11-05T19:16:38Z | 2020-12-17T23:09:25Z | 168 | getsentry/sentry | 44,208 |
Fix Trainer when model is loaded on a different GPU | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
index 590c5da195351..59ac8029e54f3 100755
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -377,12 +377,12 @@ def __init__(
else:
self.is_model_parallel = False
- if (
- getattr(model, "hf_device_map", None) is not None
- and len([device for device in set(model.hf_device_map.values()) if device not in ["cpu", "disk"]]) > 1
- and not self.is_model_parallel
- ):
- self.is_model_parallel = True
+ if getattr(model, "hf_device_map", None) is not None:
+ devices = [device for device in set(model.hf_device_map.values()) if device not in ["cpu", "disk"]]
+ if len(devices) > 1:
+ self.is_model_parallel = True
+ else:
+ self.is_model_parallel = self.args.device != torch.device(devices[0])
# warn users
logger.info(
| # What does this PR do?
When a small model is loaded with `device_map="auto"` it might end up entirely on GPU 1, so currently `is_model_parallel` is set to `False` (because there is only one device) and later on the Trainer moves the model to GPU 0, which breaks the execution of all the Accelerate hooks.
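
For illustration, a hypothetical repro sketch of that scenario (the model name and device layout are assumptions; it needs `accelerate` installed and more than one GPU for the model to land on GPU 1):

```python
# Hypothetical repro: a tiny model that fits entirely on a single non-zero GPU.
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2", device_map="auto")
print(model.hf_device_map)  # e.g. {'': 1} if everything landed on GPU 1

trainer = Trainer(model=model, args=TrainingArguments(output_dir="out"))
# After this fix, is_model_parallel is True whenever the single device differs
# from args.device, so Trainer leaves the model where accelerate placed it.
print(trainer.is_model_parallel)
```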
This PR fixes this by making sure `is_model_parallel` is set to `True` when there is one device but it's not GPU 0. | https://api.github.com/repos/huggingface/transformers/pulls/23792 | 2023-05-26T13:33:58Z | 2023-05-31T11:54:26Z | 2023-05-31T11:54:26Z | 2023-05-31T11:57:17Z | 256 | huggingface/transformers | 12,398 |
Update Open CLIP embd | diff --git a/libs/experimental/langchain_experimental/open_clip/open_clip.py b/libs/experimental/langchain_experimental/open_clip/open_clip.py
index 838148cb319aaf..b19567d179d066 100644
--- a/libs/experimental/langchain_experimental/open_clip/open_clip.py
+++ b/libs/experimental/langchain_experimental/open_clip/open_clip.py
@@ -19,8 +19,8 @@ def validate_environment(cls, values: Dict) -> Dict:
# model_name = "ViT-B-32"
# checkpoint = "laion2b_s34b_b79k"
### Larger, more performant
- model_name = "ViT-g-14"
- checkpoint = "laion2b_s34b_b88k"
+ model_name = "ViT-H-14"
+ checkpoint = "laion2b_s32b_b79k"
model, _, preprocess = open_clip.create_model_and_transforms(
model_name=model_name, pretrained=checkpoint
)
| The prior default model required a large amount of RAM and often crashed the Jupyter notebook kernel. | https://api.github.com/repos/langchain-ai/langchain/pulls/14155 | 2023-12-01T23:10:41Z | 2023-12-01T23:13:21Z | 2023-12-01T23:13:21Z | 2023-12-01T23:13:21Z | 221 | langchain-ai/langchain | 43,748 |
snap_config: set a timeout when talking to snapd | diff --git a/certbot/certbot/_internal/snap_config.py b/certbot/certbot/_internal/snap_config.py
index f006c8be1f2..dd3a6399c77 100644
--- a/certbot/certbot/_internal/snap_config.py
+++ b/certbot/certbot/_internal/snap_config.py
@@ -54,7 +54,8 @@ def prepare_env(cli_args: List[str]) -> List[str]:
session.mount('http://snapd/', _SnapdAdapter())
try:
- response = session.get('http://snapd/v2/connections?snap=certbot&interface=content')
+ response = session.get('http://snapd/v2/connections?snap=certbot&interface=content',
+ timeout=30.0)
response.raise_for_status()
except RequestException as e:
if isinstance(e, HTTPError) and e.response.status_code == 404:
| Fixes #8475.
---
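A minimal illustration (a standalone sketch, not certbot code; the target is a conventional non-routable black-hole address) of the failure mode the timeout guards against:

```python
# Without a timeout, requests.get() can block indefinitely on a hung peer
# (e.g. a stuck snapd socket); with one, it raises so the caller can report
# an error instead of hanging forever.
import requests

try:
    requests.get("http://10.255.255.1/", timeout=3.0)  # non-routable address
except requests.exceptions.RequestException as exc:
    print(f"gave up instead of hanging: {exc!r}")
```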
I came across another thread which looked suspiciously like a hung snapd process. Having a timeout here seems wise anyway. | https://api.github.com/repos/certbot/certbot/pulls/9218 | 2022-02-28T01:02:39Z | 2022-02-28T19:16:59Z | 2022-02-28T19:16:59Z | 2022-02-28T19:16:59Z | 211 | certbot/certbot | 137 |
Updating pagan description | diff --git a/README.md b/README.md
index df089eeb7..0b1af648f 100644
--- a/README.md
+++ b/README.md
@@ -320,7 +320,7 @@ Inspired by [awesome-php](https://github.com/ziadoz/awesome-php).
*Libraries for manipulating images.*
-* [pagan](https://github.com/daboth/pagan) - is avatar generator for absolute nerds.
+* [pagan](https://github.com/daboth/pagan) - Retro identicon (Avatar) generation based on input string and hash.
* [pillow](https://github.com/python-pillow/Pillow) - Pillow is the friendly [PIL](http://www.pythonware.com/products/pil/) fork.
* [hmap](https://github.com/rossgoodwin/hmap) - Image histogram remapping.
* [imgSeek](https://sourceforge.net/projects/imgseek/) - A project for searching a collection of images using visual similarity.
| Proposing to update the pagan description; the previous version had grammar issues. The new wording focuses on the functionality rather than the name.
| https://api.github.com/repos/vinta/awesome-python/pulls/752 | 2016-10-19T13:46:50Z | 2016-10-20T03:54:25Z | 2016-10-20T03:54:25Z | 2016-10-20T03:54:25Z | 219 | vinta/awesome-python | 26,975 |
[Youku]Support full length of youku's video | diff --git a/src/you_get/extractors/youku.py b/src/you_get/extractors/youku.py
index 006e5a7292..ca565b5383 100644
--- a/src/you_get/extractors/youku.py
+++ b/src/you_get/extractors/youku.py
@@ -7,6 +7,9 @@
import base64
import time
import traceback
+import urllib.parse
+import math
+import pdb
class Youku(VideoExtractor):
name = "优酷 (Youku)"
@@ -21,11 +24,32 @@ class Youku(VideoExtractor):
{'id': 'mp4', 'container': 'mp4', 'video_profile': '高清'},
{'id': 'flvhd', 'container': 'flv', 'video_profile': '标清'},
{'id': 'flv', 'container': 'flv', 'video_profile': '标清'},
- {'id': '3gphd', 'container': '3gp', 'video_profile': '标清(3GP)'},
+ {'id': '3gphd', 'container': 'mp4', 'video_profile': '标清(3GP)'},
]
- def generate_ep(vid, ep):
- f_code_1 = 'becaf9be'
+
+ def trans_e(a, c):
+ f = h = 0
+ b = list(range(256))
+ result = ''
+ while h < 256:
+ f = (f + b[h] + ord(a[h % len(a)])) % 256
+ b[h], b[f] = b[f], b[h]
+ h += 1
+ q = f = h = 0
+ while q < len(c):
+ h = (h + 1) % 256
+ f = (f + b[h]) % 256
+ b[h], b[f] = b[f], b[h]
+ if isinstance(c[q], int):
+ result += chr(c[q] ^ b[(b[h] + b[f]) % 256])
+ else:
+ result += chr(ord(c[q]) ^ b[(b[h] + b[f]) % 256])
+ q += 1
+
+ return result
+
+ def generate_ep(no,streamfileids,sid,token):
f_code_2 = 'bf7e5f01'
def trans_e(a, c):
@@ -49,13 +73,17 @@ def trans_e(a, c):
return result
- e_code = trans_e(f_code_1, base64.b64decode(bytes(ep, 'ascii')))
- sid, token = e_code.split('_')
- new_ep = trans_e(f_code_2, '%s_%s_%s' % (sid, vid, token))
- return base64.b64encode(bytes(new_ep, 'latin')), sid, token
+ number = hex(int(str(no),10))[2:].upper()
+ if len(number) == 1:
+ number = '0' + number
+ fileId = streamfileids[0:8] + number + streamfileids[10:]
+
+ ep = urllib.parse.quote(base64.b64encode(''.join(trans_e(f_code_2,sid+'_'+fileId+'_'+token)).encode('latin1')),safe='~()*!.\'')
+ return fileId,ep
def parse_m3u8(m3u8):
- return re.findall(r'(http://[^?]+)\?ts_start=0', m3u8)
+ return re.findall('(http://[^\r]+)\r',m3u8)
+# return re.findall(r'(http://[^?]+)\?ts_start=0', m3u8)
def get_vid_from_url(url):
"""Extracts video ID from URL.
@@ -102,6 +130,7 @@ def download_playlist_by_url(self, url, **kwargs):
traceback.print_exception(exc_type, exc_value, exc_traceback)
def prepare(self, **kwargs):
+ self.streams_parameter = {}
assert self.url or self.vid
if self.url and not self.vid:
@@ -112,9 +141,12 @@ def prepare(self, **kwargs):
exit(0)
api_url = 'http://play.youku.com/play/get.json?vid=%s&ct=12' % self.vid
+ api_url1 = 'http://play.youku.com/play/get.json?vid=%s&ct=10' % self.vid
try:
meta = json.loads(get_html(api_url))
+ meta1 = json.loads(get_html(api_url1))
data = meta['data']
+ data1 = meta1['data']
assert 'stream' in data
except:
if 'error' in data:
@@ -123,7 +155,10 @@ def prepare(self, **kwargs):
self.password_protected = True
self.password = input(log.sprint('Password: ', log.YELLOW))
api_url += '&pwd={}'.format(self.password)
+ api_url1 += '&pwd={}'.format(self.password)
+ meta1 = json.loads(get_html(api_url1))
meta = json.loads(get_html(api_url))
+ data1 = meta1['data']
data = meta['data']
else:
log.wtf('[Failed] ' + data['error']['note'])
@@ -135,6 +170,18 @@ def prepare(self, **kwargs):
self.ip = data['security']['ip']
stream_types = dict([(i['id'], i) for i in self.stream_types])
+
+ for stream in data1['stream']:
+ stream_id = stream['stream_type']
+ if stream_id in stream_types:
+ if 'alias-of' in stream_types[stream_id]:
+ stream_id = stream_types[stream_id]['alias-of']
+ if stream_id not in self.streams_parameter:
+ self.streams_parameter[stream_id] = {
+ 'fileid': stream['stream_fileid'],
+ 'segs': stream['segs']
+ }
+
for stream in data['stream']:
stream_id = stream['stream_type']
if stream_id in stream_types:
@@ -145,6 +192,11 @@ def prepare(self, **kwargs):
'video_profile': stream_types[stream_id]['video_profile'],
'size': stream['size']
}
+ if stream_id not in self.streams_parameter:
+ self.streams_parameter[stream_id] = {
+ 'fileid': stream['stream_fileid'],
+ 'segs': stream['segs']
+ }
# Audio languages
if 'dvd' in data and 'audiolang' in data['dvd']:
@@ -165,31 +217,45 @@ def extract(self, **kwargs):
# Extract stream with the best quality
stream_id = self.streams_sorted[0]['id']
- new_ep, sid, token = self.__class__.generate_ep(self.vid, self.ep)
- m3u8_query = parse.urlencode(dict(
- ctype=12,
- ep=new_ep,
- ev=1,
- keyframe=1,
- oip=self.ip,
- sid=sid,
- token=token,
- ts=int(time.time()),
- type=stream_id,
- vid=self.vid,
- ))
- m3u8_url = 'http://pl.youku.com/playlist/m3u8?' + m3u8_query
+ f_code_1 = 'becaf9be'
+ e_code = self.__class__.trans_e(f_code_1, base64.b64decode(bytes(self.ep, 'ascii')))
- if not kwargs['info_only']:
- if self.password_protected:
- m3u8_url += '&password={}'.format(self.password)
+ sid, token = e_code.split('_')
- m3u8 = get_html(m3u8_url)
+ m3u8 = ''
+ segs = self.streams_parameter[stream_id]['segs']
+ streamfileid = self.streams_parameter[stream_id]['fileid']
+ for no in range(0,len(segs)):
+ k = segs[no]['key']
+ if k == -1:
+ log.e('Error')
+ exit()
+ fileId,ep = self.__class__.generate_ep(no,streamfileid ,sid,token)
+# pdb.set_trace()
+ m3u8 += 'http://k.youku.com/player/getFlvPath/sid/'+ sid
+ m3u8+='_00/st/'+ self.streams[stream_id]['container']
+ m3u8+='/fileid/'+ fileId
+ m3u8+='?K='+ k
+ m3u8+='&ctype=12&ev=1&token='+ token
+ m3u8+='&oip='+ str(self.ip)
+ m3u8+='&ep='+ ep+'\r\n'
+ if not kwargs['info_only']:
self.streams[stream_id]['src'] = self.__class__.parse_m3u8(m3u8)
if not self.streams[stream_id]['src'] and self.password_protected:
log.e('[Failed] Wrong password.')
+
+# if not kwargs['info_only']:
+# if self.password_protected:
+# m3u8_url += '&password={}'.format(self.password)
+#
+# m3u8 = get_html(m3u8_url)
+#
+# self.streams[stream_id]['src'] = self.__class__.parse_m3u8(m3u8)
+# if not self.streams[stream_id]['src'] and self.password_protected:
+# log.e('[Failed] Wrong password.')
+
site = Youku()
download = site.download_by_url
download_playlist = site.download_playlist_by_url
| Comes from https://github.com/soimort/you-get/pull/463
fix #390
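For context, the `trans_e` helper in this patch is a plain RC4 implementation (key scheduling followed by keystream XOR), which is why applying it twice with the same key round-trips a string; a quick self-contained check (the key is the `f_code_1` constant from the patch, the plaintext is arbitrary):

```python
# trans_e copied verbatim from the patch above; RC4 is its own inverse, so
# encrypt-then-encrypt with the same key returns the original string.
def trans_e(a, c):
    f = h = 0
    b = list(range(256))
    result = ''
    while h < 256:
        f = (f + b[h] + ord(a[h % len(a)])) % 256
        b[h], b[f] = b[f], b[h]
        h += 1
    q = f = h = 0
    while q < len(c):
        h = (h + 1) % 256
        f = (f + b[h]) % 256
        b[h], b[f] = b[f], b[h]
        if isinstance(c[q], int):
            result += chr(c[q] ^ b[(b[h] + b[f]) % 256])
        else:
            result += chr(ord(c[q]) ^ b[(b[h] + b[f]) % 256])
        q += 1
    return result

assert trans_e('becaf9be', trans_e('becaf9be', 'sid_token')) == 'sid_token'
```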
| https://api.github.com/repos/soimort/you-get/pulls/783 | 2015-12-02T19:51:37Z | 2015-12-09T09:47:26Z | 2015-12-09T09:47:26Z | 2015-12-11T08:22:06Z | 2,206 | soimort/you-get | 21,034 |
add example notebook | diff --git a/notebooks/README.md b/notebooks/README.md
index edb5da3388..2eea7a150f 100644
--- a/notebooks/README.md
+++ b/notebooks/README.md
@@ -1,10 +1,14 @@
# Notebooks
-This is a folders with some useful notebooks, all the notebooks have a markdown
-file with the same name explaining what they do.
+This is a folder with some useful notebooks; all the notebooks have a markdown
+file with the same name (or a README.md if it's a single notebook folder)
+explaining what they do.
## Contributing
-Contributing to both notebooks and making new notebooks is very welcome. If you
-do so, make sure to make a markdown (.md) file to go with your notebook, makes
+Contributing to notebooks and making new notebooks is very welcome. If you do
+so, make sure to make a markdown (.md) file to go with your notebook, it makes
it easier for people to know what your notebook is about.
+
+Check out the [example notebook](example/) for a reference example you can use
+as a starting point.
diff --git a/notebooks/example/README.md b/notebooks/example/README.md
new file mode 100644
index 0000000000..2136834d91
--- /dev/null
+++ b/notebooks/example/README.md
@@ -0,0 +1,48 @@
+# Example Notebook
+
+[](https://colab.research.google.com/github/andrewm4894/Open-Assistant/blob/main/notebooks/example/example.ipynb)
+
+This folder contains an example reference notebook structure and approach for
+this project. Please try and follow this structure as closely as possible. While
+things will not exactly be the same for each notebook some principles we would
+like to try ensure are:
+
+1. Each notebook or collection of related or dependant notebooks should live in
+ its own folder.
+1. Each notebook should have a markdown file with the same name as the notebook
+ (or README.md if it's a single notebook folder) that explains what the
+ notebook does and how to use it.
+1. Add an "Open in Colab" badge to the top of the notebook (see the markdown
+ cell near the top of `example.ipynb` as an example you can adapt).
+1. Make it as easy as possible for someone to run the notebook in Google Colab
+ or some other environment based on standard practices like providing a
+ `requirements.txt` file or anything else needed to successfully run the
+ notebook.
+
+## Running in Google Colab
+
+At the top of the [example notebook](example.ipynb) there is a code cell that
+will (once uncommented):
+
+1. clone the repository into your colab instance.
+1. `cd` into the relevant notebook directory.
+1. run `pip install -r requirements.txt` to install the required packages.
+
+At this point you can run the notebook as normal and the folder structure will
+match that of the repository and the colab notebook will be running from the
+same directory that the notebook lives in so relative links etc should work as
+expected (for example `example.ipynb` will read some sample data from
+`data/data.csv`).
+
+If you are adding a notebook please try and add a similar cell to the top of the
+notebook so that it is easy for others to run the notebook in colab. If your
+notebook does not have any dependencies beyond what already comes as standard in
+Google Colab then you do not need such a cell, just an "Open in Colab" badge
+will suffice.
+
+## example.ipynb
+
+This notebook contains an example "Open In Colab" badge and a code cell to
+prepare the colab environment to run the notebook. It also contains a code cell
+that will read in some sample data from the `data` folder in the repository and
+display it as a pandas dataframe.
diff --git a/notebooks/example/data/data.csv b/notebooks/example/data/data.csv
new file mode 100644
index 0000000000..126a03bbc7
--- /dev/null
+++ b/notebooks/example/data/data.csv
@@ -0,0 +1,3 @@
+row,text,label
+1,some example data,1
+2,some more data,0
diff --git a/notebooks/example/example.ipynb b/notebooks/example/example.ipynb
new file mode 100644
index 0000000000..2c6b1e0184
--- /dev/null
+++ b/notebooks/example/example.ipynb
@@ -0,0 +1,160 @@
+{
+ "cells": [
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Example Notebook"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[](https://colab.research.google.com/github/andrewm4894/Open-Assistant/blob/example-notebook/notebooks/example/example.ipynb)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# uncomment and run below lines to set up if running in colab\n",
+ "# !git clone https://github.com/andrewm4894/Open-Assistant.git\n",
+ "# %cd Open-Assistant/notebooks/example\n",
+ "# !pip install -r requirements.txt"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Try to add a markdown section to the notebook that explains what the notebook is about and what it does. This will help people understand what the notebook is for and how to use it."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# import required packages\n",
+ "import pandas as pd\n",
+ "from transformers import pipeline"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Use Headings\n",
+ "\n",
+ "(it will help with link sharing to specific sections of the notebook)"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Make fancy markdown cells if you want."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "<div>\n",
+ "<style scoped>\n",
+ " .dataframe tbody tr th:only-of-type {\n",
+ " vertical-align: middle;\n",
+ " }\n",
+ "\n",
+ " .dataframe tbody tr th {\n",
+ " vertical-align: top;\n",
+ " }\n",
+ "\n",
+ " .dataframe thead th {\n",
+ " text-align: right;\n",
+ " }\n",
+ "</style>\n",
+ "<table border=\"1\" class=\"dataframe\">\n",
+ " <thead>\n",
+ " <tr style=\"text-align: right;\">\n",
+ " <th></th>\n",
+ " <th>row</th>\n",
+ " <th>text</th>\n",
+ " <th>label</th>\n",
+ " </tr>\n",
+ " </thead>\n",
+ " <tbody>\n",
+ " <tr>\n",
+ " <th>0</th>\n",
+ " <td>1</td>\n",
+ " <td>some example data</td>\n",
+ " <td>1</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>1</th>\n",
+ " <td>2</td>\n",
+ " <td>some more data</td>\n",
+ " <td>0</td>\n",
+ " </tr>\n",
+ " </tbody>\n",
+ "</table>\n",
+ "</div>"
+ ],
+ "text/plain": [
+ " row text label\n",
+ "0 1 some example data 1\n",
+ "1 2 some more data 0"
+ ]
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Do cool stuff here\n",
+ "df = pd.read_csv(\"data/data.csv\")\n",
+ "df.head()"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.4"
+ },
+ "orig_nbformat": 4,
+ "vscode": {
+ "interpreter": {
+ "hash": "3ad933181bd8a04b432d3370b9dc3b0662ad032c4dfaa4e4f1596c548f763858"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/example/requirements.txt b/notebooks/example/requirements.txt
new file mode 100644
index 0000000000..976a2b1f39
--- /dev/null
+++ b/notebooks/example/requirements.txt
@@ -0,0 +1 @@
+transformers
| This adds
- an example notebook in `notebooks/example/`
- a readme in `notebooks/example/README.md`
- a code cell at the top of `notebooks/example/example.ipynb`, to be uncommented and run if running in Colab, that sets up the Colab notebook to just work and be able to use any files or dirs that live with the notebook, for example reading from a /data/ folder or installing from a requirements.txt.
The main idea here is to give an example that's as easy as pressing the "Open In Colab" badge, uncommenting the top setup cell (if needed), and then running all cells. This is useful because you can get a GPU instance (if needed) in Colab, so it should make it easier for users to play with notebooks without crazy configurations/envs locally that might otherwise block them.
I had a look at mybinder but it did not really work well (I was not able to get it to open in a subfolder ([thread](https://discourse.jupyter.org/t/mybinder-jupyterlab-link-to-a-repo-subfolder-works-but-not-to-a-specific-notebook-in-that-folder/9443) suggests it should), and it was much slower than Colab since it needed to build images etc.).
So I think Colab + a setup cell at the top should be fine, and it does seem to work pretty well in terms of minimal time to being able to press "run all" in a notebook and have it run.
Thoughts? I don't mind helping to make sure any notebooks that come in are "runnable in Colab" as best we can.
| https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/512 | 2023-01-08T01:18:16Z | 2023-01-09T14:11:36Z | 2023-01-09T14:11:36Z | 2023-01-09T14:11:36Z | 2,484 | LAION-AI/Open-Assistant | 37,521 |
Use parentheses on method access on float and int literals | diff --git a/CHANGES.md b/CHANGES.md
index 0dc4952f069..6966a91aa11 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -42,6 +42,9 @@
- Make passing `SRC` or `--code` mandatory and mutually exclusive (#2804)
- Work around bug that causes unstable formatting in some cases in the presence of the
magic trailing comma (#2807)
+- Use parentheses for attribute access on decimal float and int literals (#2799)
+- Don't add whitespace for attribute access on hexadecimal, binary, octal, and complex
+ literals (#2799)
- Deprecate the `black-primer` tool (#2809)
### Packaging
diff --git a/src/black/linegen.py b/src/black/linegen.py
index ac60ed1986d..b572ed0b52f 100644
--- a/src/black/linegen.py
+++ b/src/black/linegen.py
@@ -197,6 +197,28 @@ def visit_decorators(self, node: Node) -> Iterator[Line]:
yield from self.line()
yield from self.visit(child)
+ def visit_power(self, node: Node) -> Iterator[Line]:
+ for idx, leaf in enumerate(node.children[:-1]):
+ next_leaf = node.children[idx + 1]
+
+ if not isinstance(leaf, Leaf):
+ continue
+
+ value = leaf.value.lower()
+ if (
+ leaf.type == token.NUMBER
+ and next_leaf.type == syms.trailer
+ # Ensure that we are in an attribute trailer
+ and next_leaf.children[0].type == token.DOT
+ # It shouldn't wrap hexadecimal, binary and octal literals
+ and not value.startswith(("0x", "0b", "0o"))
+ # It shouldn't wrap complex literals
+ and "j" not in value
+ ):
+ wrap_in_parentheses(node, leaf)
+
+ yield from self.visit_default(node)
+
def visit_SEMI(self, leaf: Leaf) -> Iterator[Line]:
"""Remove a semicolon and put the other statement on a separate line."""
yield from self.line()
diff --git a/src/black/nodes.py b/src/black/nodes.py
index 74dfa896295..51d4cb8618d 100644
--- a/src/black/nodes.py
+++ b/src/black/nodes.py
@@ -306,12 +306,7 @@ def whitespace(leaf: Leaf, *, complex_subscript: bool) -> str: # noqa: C901
return NO
if not prev:
- if t == token.DOT:
- prevp = preceding_leaf(p)
- if not prevp or prevp.type != token.NUMBER:
- return NO
-
- elif t == token.LSQB:
+ if t == token.DOT or t == token.LSQB:
return NO
elif prev.type != token.COMMA:
diff --git a/tests/data/attribute_access_on_number_literals.py b/tests/data/attribute_access_on_number_literals.py
new file mode 100644
index 00000000000..7c16bdfb3a5
--- /dev/null
+++ b/tests/data/attribute_access_on_number_literals.py
@@ -0,0 +1,47 @@
+x = 123456789 .bit_count()
+x = (123456).__abs__()
+x = .1.is_integer()
+x = 1. .imag
+x = 1E+1.imag
+x = 1E-1.real
+x = 123456789.123456789.hex()
+x = 123456789.123456789E123456789 .real
+x = 123456789E123456789 .conjugate()
+x = 123456789J.real
+x = 123456789.123456789J.__add__(0b1011.bit_length())
+x = 0XB1ACC.conjugate()
+x = 0B1011 .conjugate()
+x = 0O777 .real
+x = 0.000000006 .hex()
+x = -100.0000J
+
+if 10 .real:
+ ...
+
+y = 100[no]
+y = 100(no)
+
+# output
+
+x = (123456789).bit_count()
+x = (123456).__abs__()
+x = (0.1).is_integer()
+x = (1.0).imag
+x = (1e1).imag
+x = (1e-1).real
+x = (123456789.123456789).hex()
+x = (123456789.123456789e123456789).real
+x = (123456789e123456789).conjugate()
+x = 123456789j.real
+x = 123456789.123456789j.__add__(0b1011.bit_length())
+x = 0xB1ACC.conjugate()
+x = 0b1011.conjugate()
+x = 0o777.real
+x = (0.000000006).hex()
+x = -100.0000j
+
+if (10).real:
+ ...
+
+y = 100[no]
+y = 100(no)
diff --git a/tests/data/expression.diff b/tests/data/expression.diff
index 5f29a18dc7f..2eaaeb479f8 100644
--- a/tests/data/expression.diff
+++ b/tests/data/expression.diff
@@ -11,7 +11,7 @@
True
False
1
-@@ -21,71 +21,104 @@
+@@ -21,99 +21,135 @@
Name1 or (Name2 and Name3) or Name4
Name1 or Name2 and Name3 or Name4
v1 << 2
@@ -144,8 +144,11 @@
call(**self.screen_kwargs)
call(b, **self.screen_kwargs)
lukasz.langa.pl
-@@ -94,26 +127,29 @@
- 1.0 .real
+ call.me(maybe)
+-1 .real
+-1.0 .real
++(1).real
++(1.0).real
....__class__
list[str]
dict[str, int]
diff --git a/tests/data/expression.py b/tests/data/expression.py
index b056841027d..06096c589f1 100644
--- a/tests/data/expression.py
+++ b/tests/data/expression.py
@@ -382,8 +382,8 @@ async def f():
call(b, **self.screen_kwargs)
lukasz.langa.pl
call.me(maybe)
-1 .real
-1.0 .real
+(1).real
+(1.0).real
....__class__
list[str]
dict[str, int]
diff --git a/tests/data/expression_skip_magic_trailing_comma.diff b/tests/data/expression_skip_magic_trailing_comma.diff
index 5b722c91352..eba3fd2da7d 100644
--- a/tests/data/expression_skip_magic_trailing_comma.diff
+++ b/tests/data/expression_skip_magic_trailing_comma.diff
@@ -11,7 +11,7 @@
True
False
1
-@@ -21,71 +21,92 @@
+@@ -21,99 +21,118 @@
Name1 or (Name2 and Name3) or Name4
Name1 or Name2 and Name3 or Name4
v1 << 2
@@ -132,8 +132,11 @@
call(**self.screen_kwargs)
call(b, **self.screen_kwargs)
lukasz.langa.pl
-@@ -94,26 +115,24 @@
- 1.0 .real
+ call.me(maybe)
+-1 .real
+-1.0 .real
++(1).real
++(1.0).real
....__class__
list[str]
dict[str, int]
diff --git a/tests/test_format.py b/tests/test_format.py
index 88f084ea478..aef22545f5b 100644
--- a/tests/test_format.py
+++ b/tests/test_format.py
@@ -15,6 +15,7 @@
)
SIMPLE_CASES: List[str] = [
+ "attribute_access_on_number_literals",
"beginning_backslash",
"bracketmatch",
"class_blank_parentheses",
| Closes #2034
### Checklist - did you ...
- [X] Add a CHANGELOG entry if necessary?
- [X] Add / update tests if necessary?
- [ ] Add new / update outdated documentation? -> n/a
| https://api.github.com/repos/psf/black/pulls/2799 | 2022-01-23T11:09:06Z | 2022-01-28T05:31:50Z | 2022-01-28T05:31:50Z | 2022-01-28T05:35:02Z | 1,848 | psf/black | 23,972 |
Remove dependency from setup.py | diff --git a/setup.py b/setup.py
index 0feeb85e7fa..dd7235899bf 100644
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,6 @@
zip_safe=False,
install_requires=[
'scipy', 'numpy>=1.10.4', 'pyglet>=1.4.0,<=1.5.0', 'cloudpickle>=1.2.0,<1.4.0',
- 'enum34~=1.1.6;python_version<"3.4"',
],
extras_require=extras,
package_data={'gym': [
| Python 3.4 was dropped, so enum34 is no longer required. | https://api.github.com/repos/openai/gym/pulls/2016 | 2020-08-08T01:13:02Z | 2020-08-28T21:40:36Z | 2020-08-28T21:40:36Z | 2020-08-28T21:42:09Z | 142 | openai/gym | 5,720 |
Updated for python 3 | diff --git a/mitmproxy/contrib/wbxml/ASCommandResponse.py b/mitmproxy/contrib/wbxml/ASCommandResponse.py
index f5f62e856e..4eea05a3cd 100644
--- a/mitmproxy/contrib/wbxml/ASCommandResponse.py
+++ b/mitmproxy/contrib/wbxml/ASCommandResponse.py
@@ -41,7 +41,7 @@ def __init__(self, response):
raise ValueError("Empty WBXML body passed")
except Exception as e:
self.xmlString = None
- raise ValueError("Error: {0}".format(e.message))
+ raise ValueError("Error: {0}".format(e))
def getWBXMLBytes(self):
return self.wbxmlBytes
diff --git a/mitmproxy/contrib/wbxml/ASWBXMLByteQueue.py b/mitmproxy/contrib/wbxml/ASWBXMLByteQueue.py
index b616028c41..0174eb4f00 100644
--- a/mitmproxy/contrib/wbxml/ASWBXMLByteQueue.py
+++ b/mitmproxy/contrib/wbxml/ASWBXMLByteQueue.py
@@ -40,7 +40,7 @@ def __init__(self, wbxmlBytes):
Queue.__init__(self)
for byte in wbxmlBytes:
- self.put(ord(byte))
+ self.put(byte)
self.bytesEnqueued += 1
diff --git a/mitmproxy/contrib/wbxml/ASWBXMLCodePage.py b/mitmproxy/contrib/wbxml/ASWBXMLCodePage.py
index 1d00afd422..da84a85efa 100644
--- a/mitmproxy/contrib/wbxml/ASWBXMLCodePage.py
+++ b/mitmproxy/contrib/wbxml/ASWBXMLCodePage.py
@@ -39,12 +39,12 @@ def addToken(self, token, tag):
self.tagLookup[tag] = token
def getToken(self, tag):
- if self.tagLookup.has_key(tag):
+ if tag in self.tagLookup:
return self.tagLookup[tag]
return 0xFF
def getTag(self, token):
- if self.tokenLookup.has_key(token):
+ if token in self.tokenLookup:
return self.tokenLookup[token]
return None
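The key Python 3 semantic behind the `ASWBXMLByteQueue` change above is that iterating a `bytes` object already yields ints, so `ord()` is unnecessary (and would raise `TypeError`); a minimal sketch:

```python
# Iterating bytes in Python 3 yields ints directly; in Python 2 it yielded
# one-character str values, which is why the old code called ord(byte).
wbxml_bytes = b"\x03\x01\x6a\x00"
for byte in wbxml_bytes:
    print(byte, type(byte))  # e.g. 3 <class 'int'>
```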
| https://api.github.com/repos/mitmproxy/mitmproxy/pulls/2106 | 2017-03-06T22:31:40Z | 2017-03-07T11:12:48Z | 2017-03-07T11:12:48Z | 2017-03-07T11:12:48Z | 514 | mitmproxy/mitmproxy | 28,017 |
|
Security updates | diff --git a/.github/dependabot.yml b/.github/dependabot.yml
new file mode 100644
index 0000000000..ba1c6b80ac
--- /dev/null
+++ b/.github/dependabot.yml
@@ -0,0 +1,11 @@
+# To get started with Dependabot version updates, you'll need to specify which
+# package ecosystems to update and where the package manifests are located.
+# Please see the documentation for all configuration options:
+# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
+
+version: 2
+updates:
+ - package-ecosystem: "pip" # See documentation for possible values
+ directory: "/" # Location of package manifests
+ schedule:
+ interval: "daily"
diff --git a/Hand-Motion-Detection/requirements.txt b/Hand-Motion-Detection/requirements.txt
index a203bdfcfb..140a0c3ad3 100644
--- a/Hand-Motion-Detection/requirements.txt
+++ b/Hand-Motion-Detection/requirements.txt
@@ -1,3 +1,3 @@
-numpy==1.19.5
-opencv_python==4.5.2.52
-mediapipe==0.8.7.3
+numpy==1.23.4
+opencv_python==4.6.0.66
+mediapipe==0.8.11
diff --git a/ImageDownloader/requirements.txt b/ImageDownloader/requirements.txt
index 566083cb6b..d15ce5a755 100644
--- a/ImageDownloader/requirements.txt
+++ b/ImageDownloader/requirements.txt
@@ -1 +1 @@
-requests==2.22.0
+requests==2.28.1
diff --git a/PDF/requirements.txt b/PDF/requirements.txt
index 2822ee31a3..a8618f45b7 100644
--- a/PDF/requirements.txt
+++ b/PDF/requirements.txt
@@ -1,2 +1,2 @@
-Pillow==9.0.0
+Pillow==9.3.0
fpdf==1.7.2
\ No newline at end of file
diff --git a/PongPong_Game/requirements.txt b/PongPong_Game/requirements.txt
index b29f08dade..cc36e6a04d 100644
--- a/PongPong_Game/requirements.txt
+++ b/PongPong_Game/requirements.txt
@@ -1 +1 @@
-pyglet==1.5.14
+pyglet==1.5.27
diff --git a/async_downloader/requirements.txt b/async_downloader/requirements.txt
index 7a37a73411..b058924139 100644
--- a/async_downloader/requirements.txt
+++ b/async_downloader/requirements.txt
@@ -1 +1 @@
-aiohttp==3.7.4
+aiohttp==3.8.3
| https://api.github.com/repos/geekcomputers/Python/pulls/1787 | 2022-10-30T13:22:18Z | 2022-10-31T22:31:32Z | 2022-10-31T22:31:32Z | 2022-10-31T22:31:32Z | 669 | geekcomputers/Python | 31,330 |
|
Bumping up min version for pyarrow | diff --git a/ci/requirements-optional-conda.txt b/ci/requirements-optional-conda.txt
index c9dc385b87986..8758c8154abca 100644
--- a/ci/requirements-optional-conda.txt
+++ b/ci/requirements-optional-conda.txt
@@ -1,7 +1,7 @@
beautifulsoup4>=4.2.1
blosc
bottleneck>=1.2.0
-fastparquet
+fastparquet>=0.1.2
gcsfs
html5lib
ipython>=5.6.0
@@ -12,7 +12,7 @@ matplotlib>=2.0.0
nbsphinx
numexpr>=2.6.1
openpyxl
-pyarrow>=0.4.1
+pyarrow>=0.7.0
pymysql
pytables>=3.4.2
pytest-cov
diff --git a/ci/requirements-optional-pip.txt b/ci/requirements-optional-pip.txt
index 347ea0d9832b0..62f1c555d8544 100644
--- a/ci/requirements-optional-pip.txt
+++ b/ci/requirements-optional-pip.txt
@@ -3,7 +3,7 @@
beautifulsoup4>=4.2.1
blosc
bottleneck>=1.2.0
-fastparquet
+fastparquet>=0.1.2
gcsfs
html5lib
ipython>=5.6.0
@@ -14,9 +14,9 @@ matplotlib>=2.0.0
nbsphinx
numexpr>=2.6.1
openpyxl
-pyarrow>=0.4.1
+pyarrow>=0.7.0
pymysql
-tables
+pytables>=3.4.2
pytest-cov
pytest-xdist
s3fs
@@ -27,4 +27,4 @@ statsmodels
xarray
xlrd
xlsxwriter
-xlwt
+xlwt
\ No newline at end of file
diff --git a/ci/travis-27.yaml b/ci/travis-27.yaml
index 9641a76152d7b..28bee387a4f4a 100644
--- a/ci/travis-27.yaml
+++ b/ci/travis-27.yaml
@@ -22,7 +22,7 @@ dependencies:
- patsy
- psycopg2
- py
- - pyarrow=0.4.1
+ - pyarrow=0.7.0
- PyCrypto
- pymysql=0.6.3
- pytables
diff --git a/doc/source/install.rst b/doc/source/install.rst
index b32c5b1145e85..89f7b580303f5 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -258,8 +258,8 @@ Optional Dependencies
* `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions, Version 0.18.1 or higher
* `xarray <http://xarray.pydata.org>`__: pandas like handling for > 2 dims, needed for converting Panels to xarray objects. Version 0.7.0 or higher is recommended.
* `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage, Version 3.4.2 or higher
-* `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.4.1): necessary for feather-based storage.
-* `Apache Parquet <https://parquet.apache.org/>`__, either `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.4.1) or `fastparquet <https://fastparquet.readthedocs.io/en/latest>`__ (>= 0.0.6) for parquet-based storage. The `snappy <https://pypi.org/project/python-snappy>`__ and `brotli <https://pypi.org/project/brotlipy>`__ are available for compression support.
+* `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.7.0): necessary for feather-based storage.
+* `Apache Parquet <https://parquet.apache.org/>`__, either `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.7.0) or `fastparquet <https://fastparquet.readthedocs.io/en/latest>`__ (>= 0.1.2) for parquet-based storage. The `snappy <https://pypi.org/project/python-snappy>`__ and `brotli <https://pypi.org/project/brotlipy>`__ are available for compression support.
* `SQLAlchemy <http://www.sqlalchemy.org>`__: for SQL database support. Version 0.8.1 or higher recommended. Besides SQLAlchemy, you also need a database specific driver. You can find an overview of supported drivers for each SQL dialect in the `SQLAlchemy docs <http://docs.sqlalchemy.org/en/latest/dialects/index.html>`__. Some common drivers are:
* `psycopg2 <http://initd.org/psycopg/>`__: for PostgreSQL
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index fa748cccc0d65..ecb621fc4507d 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -250,7 +250,7 @@ Backwards incompatible API changes
Dependencies have increased minimum versions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-We have updated our minimum supported versions of dependencies (:issue:`21242`).
+We have updated our minimum supported versions of dependencies (:issue:`21242`, `18742`).
If installed, we now require:
+-----------------+-----------------+----------+
@@ -268,6 +268,10 @@ If installed, we now require:
+-----------------+-----------------+----------+
| scipy | 0.18.1 | |
+-----------------+-----------------+----------+
+| pyarrow | 0.7.0 | |
++-----------------+-----------------+----------+
+| fastparquet | 0.1.2 | |
++-----------------+-----------------+----------+
Additionally we no longer depend on `feather-format` for feather based storage
and replaced it with references to `pyarrow` (:issue:`21639` and :issue:`23053`).
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index 2c75f46385e86..160a26533fb89 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -5,7 +5,7 @@
from pandas.compat import string_types
-from pandas import DataFrame, Int64Index, RangeIndex, get_option
+from pandas import DataFrame, get_option
import pandas.core.common as com
from pandas.io.common import get_filepath_or_buffer, is_s3_url
@@ -89,29 +89,20 @@ def __init__(self):
"\nor via pip\n"
"pip install -U pyarrow\n"
)
- if LooseVersion(pyarrow.__version__) < '0.4.1':
+ if LooseVersion(pyarrow.__version__) < '0.7.0':
raise ImportError(
- "pyarrow >= 0.4.1 is required for parquet support\n\n"
+ "pyarrow >= 0.7.0 is required for parquet support\n\n"
"you can install via conda\n"
"conda install pyarrow -c conda-forge\n"
"\nor via pip\n"
"pip install -U pyarrow\n"
)
- self._pyarrow_lt_060 = (
- LooseVersion(pyarrow.__version__) < LooseVersion('0.6.0'))
- self._pyarrow_lt_070 = (
- LooseVersion(pyarrow.__version__) < LooseVersion('0.7.0'))
-
self.api = pyarrow
def write(self, df, path, compression='snappy',
coerce_timestamps='ms', index=None, **kwargs):
self.validate_dataframe(df)
-
- # Only validate the index if we're writing it.
- if self._pyarrow_lt_070 and index is not False:
- self._validate_write_lt_070(df)
path, _, _, _ = get_filepath_or_buffer(path, mode='wb')
if index is None:
@@ -119,27 +110,17 @@ def write(self, df, path, compression='snappy',
else:
from_pandas_kwargs = {'preserve_index': index}
- if self._pyarrow_lt_060:
- table = self.api.Table.from_pandas(df, timestamps_to_ms=True,
- **from_pandas_kwargs)
- self.api.parquet.write_table(
- table, path, compression=compression, **kwargs)
-
- else:
- table = self.api.Table.from_pandas(df, **from_pandas_kwargs)
- self.api.parquet.write_table(
- table, path, compression=compression,
- coerce_timestamps=coerce_timestamps, **kwargs)
+ table = self.api.Table.from_pandas(df, **from_pandas_kwargs)
+ self.api.parquet.write_table(
+ table, path, compression=compression,
+ coerce_timestamps=coerce_timestamps, **kwargs)
def read(self, path, columns=None, **kwargs):
path, _, _, should_close = get_filepath_or_buffer(path)
- if self._pyarrow_lt_070:
- result = self.api.parquet.read_pandas(path, columns=columns,
- **kwargs).to_pandas()
- else:
- kwargs['use_pandas_metadata'] = True
- result = self.api.parquet.read_table(path, columns=columns,
- **kwargs).to_pandas()
+
+ kwargs['use_pandas_metadata'] = True
+ result = self.api.parquet.read_table(path, columns=columns,
+ **kwargs).to_pandas()
if should_close:
try:
path.close()
@@ -148,39 +129,6 @@ def read(self, path, columns=None, **kwargs):
return result
- def _validate_write_lt_070(self, df):
- # Compatibility shim for pyarrow < 0.7.0
- # TODO: Remove in pandas 0.23.0
- from pandas.core.indexes.multi import MultiIndex
- if isinstance(df.index, MultiIndex):
- msg = (
- "Multi-index DataFrames are only supported "
- "with pyarrow >= 0.7.0"
- )
- raise ValueError(msg)
- # Validate index
- if not isinstance(df.index, Int64Index):
- msg = (
- "pyarrow < 0.7.0 does not support serializing {} for the "
- "index; you can .reset_index() to make the index into "
- "column(s), or install the latest version of pyarrow or "
- "fastparquet."
- )
- raise ValueError(msg.format(type(df.index)))
- if not df.index.equals(RangeIndex(len(df))):
- raise ValueError(
- "pyarrow < 0.7.0 does not support serializing a non-default "
- "index; you can .reset_index() to make the index into "
- "column(s), or install the latest version of pyarrow or "
- "fastparquet."
- )
- if df.index.name is not None:
- raise ValueError(
- "pyarrow < 0.7.0 does not serialize indexes with a name; you "
- "can set the index.name to None or install the latest version "
- "of pyarrow or fastparquet."
- )
-
class FastParquetImpl(BaseImpl):
@@ -197,9 +145,9 @@ def __init__(self):
"\nor via pip\n"
"pip install -U fastparquet"
)
- if LooseVersion(fastparquet.__version__) < '0.1.0':
+ if LooseVersion(fastparquet.__version__) < '0.1.2':
raise ImportError(
- "fastparquet >= 0.1.0 is required for parquet "
+ "fastparquet >= 0.1.2 is required for parquet "
"support\n\n"
"you can install via conda\n"
"conda install fastparquet -c conda-forge\n"
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 4c58d8ce29d8b..3b3e7f757bf60 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -41,22 +41,6 @@ def engine(request):
@pytest.fixture
def pa():
- if not _HAVE_PYARROW:
- pytest.skip("pyarrow is not installed")
- return 'pyarrow'
-
-
-@pytest.fixture
-def pa_lt_070():
- if not _HAVE_PYARROW:
- pytest.skip("pyarrow is not installed")
- if LooseVersion(pyarrow.__version__) >= LooseVersion('0.7.0'):
- pytest.skip("pyarrow is >= 0.7.0")
- return 'pyarrow'
-
-
-@pytest.fixture
-def pa_ge_070():
if not _HAVE_PYARROW:
pytest.skip("pyarrow is not installed")
if LooseVersion(pyarrow.__version__) < LooseVersion('0.7.0'):
@@ -337,9 +321,9 @@ def test_write_index(self, engine):
df.index.name = 'foo'
check_round_trip(df, engine)
- def test_write_multiindex(self, pa_ge_070):
+ def test_write_multiindex(self, pa):
# Not suppoprted in fastparquet as of 0.1.3 or older pyarrow version
- engine = pa_ge_070
+ engine = pa
df = pd.DataFrame({'A': [1, 2, 3]})
index = pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1)])
@@ -352,8 +336,8 @@ def test_write_column_multiindex(self, engine):
df = pd.DataFrame(np.random.randn(4, 3), columns=mi_columns)
self.check_error_on_write(df, engine, ValueError)
- def test_multiindex_with_columns(self, pa_ge_070):
- engine = pa_ge_070
+ def test_multiindex_with_columns(self, pa):
+ engine = pa
dates = pd.date_range('01-Jan-2018', '01-Dec-2018', freq='MS')
df = pd.DataFrame(np.random.randn(2 * len(dates), 3),
columns=list('ABC'))
@@ -456,8 +440,7 @@ def test_unsupported(self, pa):
# older pyarrows raise ArrowInvalid
self.check_error_on_write(df, pa, Exception)
- def test_categorical(self, pa_ge_070):
- pa = pa_ge_070
+ def test_categorical(self, pa):
# supported in >= 0.7.0
df = pd.DataFrame({'a': pd.Categorical(list('abc'))})
@@ -466,13 +449,6 @@ def test_categorical(self, pa_ge_070):
expected = df.assign(a=df.a.astype(object))
check_round_trip(df, pa, expected=expected)
- def test_categorical_unsupported(self, pa_lt_070):
- pa = pa_lt_070
-
- # supported in >= 0.7.0
- df = pd.DataFrame({'a': pd.Categorical(list('abc'))})
- self.check_error_on_write(df, pa, NotImplementedError)
-
def test_s3_roundtrip(self, df_compat, s3_resource, pa):
# GH #19134
check_round_trip(df_compat, pa,
| closes #18742
closes #23409
- [x] tests passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/23482 | 2018-11-04T07:05:35Z | 2018-11-05T21:46:12Z | 2018-11-05T21:46:11Z | 2018-11-05T21:46:13Z | 3,594 | pandas-dev/pandas | 45,455 |
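The engine checks in the hunk above follow a common import-time version-gating pattern. Below is a minimal, hypothetical sketch of that pattern using `packaging.version` in place of the deprecated `distutils.LooseVersion` the diff relies on; the module name and threshold are taken from the hunk.

```python
# Minimal sketch of the import-time version gate used in FastParquetImpl;
# packaging.version stands in for the deprecated distutils LooseVersion.
from packaging.version import Version

def load_fastparquet(min_version: str = "0.1.2"):
    try:
        import fastparquet
    except ImportError:
        raise ImportError("fastparquet is required for parquet support")
    if Version(fastparquet.__version__) < Version(min_version):
        raise ImportError(
            f"fastparquet >= {min_version} is required for parquet support"
        )
    return fastparquet
```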
add example of --renew-hook envvar values and hook script (#3502) | diff --git a/certbot/cli.py b/certbot/cli.py
index e100c7715bf..fea83da297e 100644
--- a/certbot/cli.py
+++ b/certbot/cli.py
@@ -1064,9 +1064,11 @@ def prepare_and_parse_args(plugins, args, detect_defaults=False): # pylint: dis
"renew", "--renew-hook",
help="Command to be run in a shell once for each successfully renewed"
" certificate. For this command, the shell variable $RENEWED_LINEAGE"
- " will point to the config live subdirectory containing the new certs"
+ " will point to the config live subdirectory (for example,"
+ " \"/etc/letsencrypt/live/example.com\") containing the new certs"
" and keys; the shell variable $RENEWED_DOMAINS will contain a"
- " space-delimited list of renewed cert domains")
+ " space-delimited list of renewed cert domains (for example,"
+ " \"example.com www.example.com\"")
helpful.add(
"renew", "--disable-hook-validation",
action='store_false', dest='validate_hooks', default=True,
diff --git a/docs/cli-help.txt b/docs/cli-help.txt
index a5f77a3a1c9..91041458e02 100644
--- a/docs/cli-help.txt
+++ b/docs/cli-help.txt
@@ -265,10 +265,12 @@ renew:
Command to be run in a shell once for each
successfully renewed certificate. For this command,
the shell variable $RENEWED_LINEAGE will point to the
- config live subdirectory containing the new certs and
- keys; the shell variable $RENEWED_DOMAINS will contain
- a space-delimited list of renewed cert domains
- (default: None)
+ config live subdirectory (for example,
+ "/etc/letsencrypt/live/example.com") containing the
+ new certs and keys; the shell variable
+ $RENEWED_DOMAINS will contain a space-delimited list
+ of renewed cert domains (for example,
+ "example.com www.example.com") (default: None)
--disable-hook-validation
Ordinarily the commands specified for --pre-hook
/--post-hook/--renew-hook will be checked for
diff --git a/docs/using.rst b/docs/using.rst
index a325ff41317..7eaa92f840a 100644
--- a/docs/using.rst
+++ b/docs/using.rst
@@ -387,6 +387,49 @@ non-zero exit code. Hooks will only be run if a certificate is due for
renewal, so you can run the above command frequently without
unnecessarily stopping your webserver.
+``--pre-hook`` and ``--post-hook`` hooks run before and after every renewal
+attempt. If you want your hook to run only after a successful renewal, use
+``--renew-hook`` in a command like this.
+
+``certbot renew --renew-hook /path/to/renew-hook-script``
+
+For example, if you have a daemon that does not read its certificates as the
+root user, a renew hook like this can copy them to the correct location and
+apply appropriate file permissions.
+
+/path/to/renew-hook-script
+
+.. code-block:: none
+
+ #!/bin/sh
+
+ set -e
+
+ for domain in $RENEWED_DOMAINS; do
+ case $domain in
+ example.com)
+ daemon_cert_root=/etc/some-daemon/certs
+
+ # Make sure the certificate and private key files are
+ # never world readable, even just for an instant while
+ # we're copying them into daemon_cert_root.
+ umask 077
+
+ cp "$RENEWED_LINEAGE/fullchain.pem" "$daemon_cert_root/$domain.cert"
+ cp "$RENEWED_LINEAGE/privkey.pem" "$daemon_cert_root/$domain.key"
+
+ # Apply the proper file ownership and permissions for
+ # the daemon to read its certificate and key.
+ chown some-daemon "$daemon_cert_root/$domain.cert" \
+ "$daemon_cert_root/$domain.key"
+ chmod 400 "$daemon_cert_root/$domain.cert" \
+ "$daemon_cert_root/$domain.key"
+
+ service some-daemon restart >/dev/null
+ ;;
+ esac
+ done
+
More information about renewal hooks can be found by running
``certbot --help renew``.
| https://api.github.com/repos/certbot/certbot/pulls/4028 | 2017-01-12T01:43:18Z | 2017-04-13T16:40:59Z | 2017-04-13T16:40:59Z | 2017-04-13T16:41:00Z | 1,002 | certbot/certbot | 3,240 |
|
Fix spelling mistake | diff --git a/README.md b/README.md
index 2c0109f08..a5128559f 100644
--- a/README.md
+++ b/README.md
@@ -71,7 +71,7 @@ REPL-y 0.3.1
...
```
-If you are scary to blindly run changed command, there's `require_confirmation`
+If you are scared to blindly run changed command, there's `require_confirmation`
[settings](#settings) option:
```bash
| Just a quick fix to a spelling mistake I spotted. Hope this helps.
| https://api.github.com/repos/nvbn/thefuck/pulls/77 | 2015-04-21T14:48:22Z | 2015-04-21T14:55:39Z | 2015-04-21T14:55:39Z | 2015-04-21T14:55:39Z | 111 | nvbn/thefuck | 30,761 |
[Cherrypick] [Doc][Serve] Add minimal docs for model wrappers and http adapters (#23536) | diff --git a/doc/requirements-doc.txt b/doc/requirements-doc.txt
index d618aa3853ba6..91d0a1e4fb6f4 100644
--- a/doc/requirements-doc.txt
+++ b/doc/requirements-doc.txt
@@ -47,6 +47,7 @@ sphinx-external-toc==0.2.3
sphinxcontrib.yt==0.2.2
sphinx-sitemap==2.2.0
sphinx-thebe==0.1.1
+autodoc_pydantic==1.6.1
# MyST
myst-parser==0.15.2
diff --git a/doc/source/conf.py b/doc/source/conf.py
index fe8a40a44b03b..e17a7fa74a6c1 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -39,6 +39,7 @@
"sphinx.ext.coverage",
"sphinx_external_toc",
"sphinx_thebe",
+ "sphinxcontrib.autodoc_pydantic",
]
myst_enable_extensions = [
diff --git a/doc/source/ray-air/getting-started.rst b/doc/source/ray-air/getting-started.rst
index a99e74b160e64..bcc3af3126a80 100644
--- a/doc/source/ray-air/getting-started.rst
+++ b/doc/source/ray-air/getting-started.rst
@@ -101,13 +101,14 @@ Predictors
:members:
:show-inheritance:
-
+.. _air-serve-integration:
Serving
~~~~~~~
-.. automodule:: ray.serve.model_wrappers
- :members:
+.. autoclass:: ray.serve.model_wrappers.ModelWrapperDeployment
+
+.. autoclass:: ray.serve.model_wrappers.ModelWrapper
Outputs
diff --git a/doc/source/serve/http-servehandle.rst b/doc/source/serve/http-servehandle.rst
index 3b38121e85c32..32f6c4ee1bbb0 100644
--- a/doc/source/serve/http-servehandle.rst
+++ b/doc/source/serve/http-servehandle.rst
@@ -152,6 +152,60 @@ To try it out, save a code snippet in a local python file (i.e. main.py) and in
ray start --head
python main.py
+.. _serve-http-adapters:
+
+HTTP Adapters
+^^^^^^^^^^^^^
+
+Ray Serve provides a suite of adapters to convert HTTP requests to ML inputs like `numpy` arrays.
+You can just use it with :ref:`Ray AI Runtime (AIR) model wrapper<air-serve-integration>` feature
+to one click deploy pre-trained models.
+Alternatively, you can directly import them and put them into your FastAPI app.
+
+For example, we provide a simple adapter for n-dimensional array.
+
+With :ref:`model wrappers<air-serve-integration>`, you can specify it via the ``input_schema`` field.
+
+.. code-block:: python
+
+ from ray import serve
+ from ray.serve.http_adapters import json_to_ndarray
+ from ray.serve.model_wrappers import ModelWrapperDeployment
+
+ ModelWrapperDeployment.options(name="my_model").deploy(
+ my_ray_air_predictor,
+ my_ray_air_checkpoint,
+ input_schema=json_to_ndarray
+ )
+
+You can also bring the adapter to your own FastAPI app using
+`Depends <https://fastapi.tiangolo.com/tutorial/dependencies/#import-depends>`_.
+The input schema will automatically be part of the generated OpenAPI schema with FastAPI.
+
+.. code-block:: python
+
+ from fastapi import FastAPI, Depends
+ from ray.serve.http_adapters import json_to_ndarray
+
+ app = FastAPI()
+
+ @app.post("/endpoint")
+ async def endpoint(np_array = Depends(json_to_ndarray)):
+ ...
+
+It has the following schema for input:
+
+.. _serve-ndarray-schema:
+
+.. autopydantic_model:: ray.serve.http_adapters.NdArray
+
+
+Here is a list of adapters and please feel free to `contribute more <https://github.com/ray-project/ray/issues/new/choose>`_!
+
+.. automodule:: ray.serve.http_adapters
+ :members: json_to_ndarray, image_to_ndarray
+
+
Configuring HTTP Server Locations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/python/ray/serve/http_adapters.py b/python/ray/serve/http_adapters.py
index 359aaf457782f..c5e7eb040dac4 100644
--- a/python/ray/serve/http_adapters.py
+++ b/python/ray/serve/http_adapters.py
@@ -6,7 +6,6 @@
import numpy as np
from ray.serve.utils import require_packages
-from ray.ml.predictor import DataBatchType
import starlette.requests
@@ -41,8 +40,8 @@ class NdArray(BaseModel):
)
-def array_to_databatch(payload: NdArray) -> DataBatchType:
- """Accepts an NdArray from an HTTP body and converts it to a DataBatchType."""
+def json_to_ndarray(payload: NdArray) -> np.ndarray:
+ """Accepts an NdArray JSON from an HTTP body and converts it to a numpy array."""
arr = np.array(payload.array)
if payload.shape:
arr = arr.reshape(*payload.shape)
@@ -60,9 +59,9 @@ def starlette_request(
@require_packages(["PIL"])
-def image_to_databatch(img: bytes = File(...)) -> DataBatchType:
+def image_to_ndarray(img: bytes = File(...)) -> np.ndarray:
"""Accepts a PIL-readable file from an HTTP form and converts
- it to a DataBatchType.
+ it to a numpy array.
"""
from PIL import Image
diff --git a/python/ray/serve/model_wrappers.py b/python/ray/serve/model_wrappers.py
index 4e2b4263b9a95..20a74bb07f5a7 100644
--- a/python/ray/serve/model_wrappers.py
+++ b/python/ray/serve/model_wrappers.py
@@ -39,36 +39,42 @@ def _load_predictor_cls(
class ModelWrapper(SimpleSchemaIngress):
+ """Serve any Ray AIR predictor from an AIR checkpoint.
+
+ Args:
+ predictor_cls(str, Type[Predictor]): The class or path for predictor class.
+ The type must be a subclass of :class:`ray.ml.predicotr.Predictor`.
+ checkpoint(Checkpoint, dict): The checkpoint object or a dictionary describe
+ the object.
+
+ - The checkpoint object must be a subclass of
+ :class:`ray.ml.checkpoint.Checkpoint`.
+ - The dictionary should be in the form of
+ ``{"checkpoint_cls": "import.path.MyCheckpoint",
+ "uri": "uri_to_load_from"}``.
+ Serve will then call ``MyCheckpoint.from_uri("uri_to_load_from")`` to
+ instantiate the object.
+
+ input_schema(str, InputSchemaFn, None): The FastAPI input conversion
+ function. By default, Serve will use the
+ :ref:`NdArray <serve-ndarray-schema>` schema and convert to numpy array.
+ You can pass in any FastAPI dependency resolver that returns
+ an array. When you pass in a string, Serve will import it.
+ Please refer to :ref:`Serve HTTP adatpers <serve-http-adapters>`
+ documentation to learn more.
+ batching_params(dict, None, False): override the default parameters to
+ :func:`ray.serve.batch`. Pass ``False`` to disable batching.
+ """
+
def __init__(
self,
predictor_cls: Union[str, Type[Predictor]],
checkpoint: Union[Checkpoint, Dict],
input_schema: Union[
str, InputSchemaFn
- ] = "ray.serve.http_adapters.array_to_databatch",
+ ] = "ray.serve.http_adapters.json_to_ndarray",
batching_params: Optional[Union[Dict[str, int], bool]] = None,
):
- """Serve any Ray ML predictor from checkpoint.
-
- Args:
- predictor_cls(str, Type[Predictor]): The class or path for predictor class.
- The type must be a subclass of ray.ml `Predictor`.
- checkpoint(Checkpoint, dict): The checkpoint object or a dictionary describe
- the object.
- - The checkpoint object must be a subclass of ray.ml `Checkpoint`.
- - The dictionary should be in the form of
- {"checkpoint_cls": "import.path.MyCheckpoint",
- "uri": "uri_to_load_from"}.
- Serve will then call `MyCheckpoint.from_uri("uri_to_load_from")` to
- instantiate the object.
- input_schema(str, InputSchemaFn, None): The FastAPI input conversion
- function. By default, Serve will use the `NdArray` schema and convert to
- numpy array. You can pass in any FastAPI dependency resolver that returns
- an array. When you pass in a string, Serve will import it.
- Please refer to Serve HTTP adatper documentation to learn more.
- batching_params(dict, None, False): override the default parameters to
- serve.batch. Pass `False` to disable batching.
- """
predictor_cls = _load_predictor_cls(predictor_cls)
checkpoint = _load_checkpoint(checkpoint)
@@ -96,3 +102,8 @@ async def batched_predict(inp):
async def predict(self, inp):
"""Perform inference directly without HTTP."""
return await self.batched_predict(inp)
+
+
+@serve.deployment
+class ModelWrapperDeployment(ModelWrapper):
+ """Ray Serve Deployment of the ModelWrapper class."""
diff --git a/python/ray/serve/tests/test_http_adapters.py b/python/ray/serve/tests/test_http_adapters.py
index 554bcd11132d7..2b9211d006d5f 100644
--- a/python/ray/serve/tests/test_http_adapters.py
+++ b/python/ray/serve/tests/test_http_adapters.py
@@ -3,7 +3,7 @@
import numpy as np
import pytest
from PIL import Image
-from ray.serve.http_adapters import NdArray, array_to_databatch, image_to_databatch
+from ray.serve.http_adapters import NdArray, json_to_ndarray, image_to_ndarray
from ray.serve.utils import require_packages
@@ -16,28 +16,28 @@ def func():
func()
-def test_array_to_databatch():
+def test_json_to_ndarray():
np.testing.assert_equal(
- array_to_databatch(NdArray(array=[1, 2], shape=None, dtype=None)),
+ json_to_ndarray(NdArray(array=[1, 2], shape=None, dtype=None)),
np.array([1, 2]),
)
np.testing.assert_equal(
- array_to_databatch(NdArray(array=[[1], [2]], shape=None, dtype=None)),
+ json_to_ndarray(NdArray(array=[[1], [2]], shape=None, dtype=None)),
np.array([[1], [2]]),
)
np.testing.assert_equal(
- array_to_databatch(NdArray(array=[[1], [2]], shape=[1, 2], dtype=None)),
+ json_to_ndarray(NdArray(array=[[1], [2]], shape=[1, 2], dtype=None)),
np.array([[1, 2]]),
)
np.testing.assert_equal(
- array_to_databatch(NdArray(array=[[1.9], [2.1]], shape=[1, 2], dtype="int")),
+ json_to_ndarray(NdArray(array=[[1.9], [2.1]], shape=[1, 2], dtype="int")),
np.array([[1.9, 2.1]]).astype("int"),
)
-def test_image_to_databatch():
+def test_image_to_ndarray():
buffer = io.BytesIO()
arr = (np.random.rand(100, 100, 3) * 255).astype("uint8")
image = Image.fromarray(arr).convert("RGB")
image.save(buffer, format="png")
- np.testing.assert_almost_equal(image_to_databatch(buffer.getvalue()), arr)
+ np.testing.assert_almost_equal(image_to_ndarray(buffer.getvalue()), arr)
diff --git a/python/ray/serve/tests/test_model_wrappers.py b/python/ray/serve/tests/test_model_wrappers.py
index d65c4b6afce31..38092ef80fb42 100644
--- a/python/ray/serve/tests/test_model_wrappers.py
+++ b/python/ray/serve/tests/test_model_wrappers.py
@@ -9,11 +9,11 @@
from ray._private.test_utils import wait_for_condition
from ray.ml.checkpoint import Checkpoint
from ray.ml.predictor import DataBatchType, Predictor
-from ray.serve.model_wrappers import ModelWrapper
+from ray.serve.model_wrappers import ModelWrapperDeployment
from ray.serve.pipeline.api import build
from ray.experimental.dag.input_node import InputNode
from ray.serve.api import RayServeDAGHandle
-from ray.serve.http_adapters import array_to_databatch
+from ray.serve.http_adapters import json_to_ndarray
import ray
from ray import serve
@@ -24,7 +24,12 @@ def __init__(self, increment: int) -> None:
@classmethod
def from_checkpoint(cls, checkpoint: "AdderCheckpoint") -> "Predictor":
- return cls(checkpoint.increment)
+ if checkpoint._data_dict:
+ return cls(checkpoint._data_dict["increment"])
+ elif checkpoint._local_path: # uri case
+ with open(checkpoint._local_path) as f:
+ return cls(json.load(f))
+ raise Exception("Unreachable")
def predict(self, data: DataBatchType) -> DataBatchType:
return [
@@ -34,17 +39,7 @@ def predict(self, data: DataBatchType) -> DataBatchType:
class AdderCheckpoint(Checkpoint):
- def __init__(self, increment: int):
- self.increment = increment
-
- @classmethod
- def from_dict(cls, data: dict) -> "Checkpoint":
- return cls(data["increment"])
-
- @classmethod
- def from_uri(cls, uri: str) -> "Checkpoint":
- with open(uri) as f:
- return cls(json.load(f))
+ pass
def adder_schema(query_param_arg: int) -> DataBatchType:
@@ -57,7 +52,7 @@ def send_request(**requests_kargs):
def test_simple_adder(serve_instance):
- serve.deployment(name="Adder")(ModelWrapper).deploy(
+ ModelWrapperDeployment.options(name="Adder").deploy(
predictor_cls=AdderPredictor,
checkpoint=AdderCheckpoint.from_dict({"increment": 2}),
)
@@ -66,7 +61,7 @@ def test_simple_adder(serve_instance):
def test_batching(serve_instance):
- serve.deployment(name="Adder")(ModelWrapper).deploy(
+ ModelWrapperDeployment.options(name="Adder").deploy(
predictor_cls=AdderPredictor,
checkpoint=AdderCheckpoint.from_dict({"increment": 2}),
batching_params=dict(max_batch_size=2, batch_wait_timeout_s=1000),
@@ -87,7 +82,7 @@ def __init__(self, dag: RayServeDAGHandle) -> None:
self.dag = dag
@app.post("/")
- async def predict(self, data=Depends(array_to_databatch)):
+ async def predict(self, data=Depends(json_to_ndarray)):
return await self.dag.remote(data)
@@ -100,7 +95,7 @@ def test_model_wrappers_in_pipeline(serve_instance):
checkpoint_cls = "ray.serve.tests.test_model_wrappers.AdderCheckpoint"
with InputNode() as dag_input:
- m1 = ray.remote(ModelWrapper).bind(
+ m1 = ModelWrapperDeployment.bind(
predictor_cls=predictor_cls, # TODO: can't be the raw class right now?
checkpoint={ # TODO: can't be the raw object right now?
"checkpoint_cls": checkpoint_cls,
@@ -140,7 +135,7 @@ def test_yaml_compatibility(serve_instance):
"deployments": [
{
"name": "Adder",
- "import_path": "ray.serve.model_wrappers.ModelWrapper",
+ "import_path": "ray.serve.model_wrappers.ModelWrapperDeployment",
"init_kwargs": {
"predictor_cls": predictor_cls,
"checkpoint": {
| Pick cb1919b8d011c877a9690e3d09dd5de79b87cdd8
<!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/23568 | 2022-03-29T18:37:42Z | 2022-03-29T20:30:45Z | 2022-03-29T20:30:45Z | 2022-03-29T20:32:35Z | 3,731 | ray-project/ray | 19,868 |
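As a quick illustration of the `NdArray` schema documented above, here is a hypothetical client call against a `ModelWrapperDeployment` named `my_model`; the host and port assume Serve's default HTTP proxy settings, and the payload fields mirror the `NdArray` model from the diff.

```python
# Hypothetical request against a deployment using json_to_ndarray; the
# JSON body carries the NdArray fields (array, shape, dtype).
import requests

payload = {"array": [1.0, 2.0, 3.0, 4.0], "shape": [2, 2], "dtype": "float32"}
resp = requests.post("http://localhost:8000/my_model", json=payload)
print(resp.status_code, resp.json())
```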
fix: continue instead of forcing exception openai astream chat | diff --git a/llama_index/llms/openai.py b/llama_index/llms/openai.py
index 92962fcf3e07c..8840e5396ff6e 100644
--- a/llama_index/llms/openai.py
+++ b/llama_index/llms/openai.py
@@ -498,7 +498,7 @@ async def gen() -> ChatResponseAsyncGen:
if len(response.choices) > 0:
delta = response.choices[0].delta
else:
- delta = {}
+ delta = ChoiceDelta()
# check if this chunk is the start of a function call
if delta.tool_calls:
| # Description
Fixes an Azure OpenAI LLM failure when an empty delta dictionary is returned. Currently, an exception is forced any time a response isn't provided through the completions API call.
Fixes #8995
## Type of Change
Please delete options that are not relevant.
- [X] Bug fix (non-breaking change which fixes an issue)
# How Has This Been Tested?
- [X] Tested using AzureOpenAI and OpenAI with OpenAIAgent
- [X] I stared at the code and made sure it makes sense
# Suggested Checklist:
- [X] I have performed a self-review of my own code
- [X] My changes generate no new warnings
| https://api.github.com/repos/run-llama/llama_index/pulls/9040 | 2023-11-21T00:11:19Z | 2023-11-21T00:55:49Z | 2023-11-21T00:55:49Z | 2023-11-21T16:36:33Z | 152 | run-llama/llama_index | 6,902 |
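The one-line fix above replaces a plain dict with an empty `ChoiceDelta`, so later attribute access such as `delta.tool_calls` keeps working. A minimal sketch of that guard, assuming the `openai>=1.0` client types:

```python
# Sketch of the defensive pattern: fall back to an empty ChoiceDelta when a
# streaming chunk carries no choices, instead of a dict (which would break
# attribute access like delta.tool_calls).
from openai.types.chat.chat_completion_chunk import ChoiceDelta

def extract_delta(chunk):
    if chunk.choices:
        return chunk.choices[0].delta
    return ChoiceDelta()  # all fields default to None
```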
[Chat] add examples of training with limited resources in chat readme | diff --git a/applications/Chat/README.md b/applications/Chat/README.md
index 8f22084953ba..e3b605d9b796 100644
--- a/applications/Chat/README.md
+++ b/applications/Chat/README.md
@@ -28,6 +28,7 @@
- [Limitation of dataset](#limitation-of-dataset)
- [FAQ](#faq)
- [How to save/load checkpoint](#how-to-saveload-checkpoint)
+ - [How to train with limited resources](#how-to-train-with-limited-resources)
- [The Plan](#the-plan)
- [Real-time progress](#real-time-progress)
- [Invitation to open-source contribution](#invitation-to-open-source-contribution)
@@ -324,6 +325,59 @@ trainer.fit()
trainer.save_model(path=args.save_path, only_rank0=True, tokenizer=tokenizer)
```
+### How to train with limited resources
+
+Here are some examples that can allow you to train a 7B model on a single or multiple consumer-grade GPUs.
+
+If you only have a single 24G GPU, you can use the following script. `batch_size` and `lora_rank` are the most important parameters to successfully train the model.
+```
+torchrun --standalone --nproc_per_node=1 train_sft.py \
+ --pretrain "/path/to/LLaMa-7B/" \
+ --model 'llama' \
+ --strategy naive \
+ --log_interval 10 \
+ --save_path /path/to/Coati-7B \
+ --dataset /path/to/data.json \
+ --batch_size 1 \
+ --accimulation_steps 8 \
+ --lr 2e-5 \
+ --max_datasets_size 512 \
+ --max_epochs 1 \
+ --lora_rank 16 \
+```
+
+`colossalai_gemini` strategy can enable a single 24G GPU to train the whole model without using LoRA if you have sufficient CPU memory. You can use the following script.
+```
+torchrun --standalone --nproc_per_node=1 train_sft.py \
+ --pretrain "/path/to/LLaMa-7B/" \
+ --model 'llama' \
+ --strategy colossalai_gemini \
+ --log_interval 10 \
+ --save_path /path/to/Coati-7B \
+ --dataset /path/to/data.json \
+ --batch_size 1 \
+ --accimulation_steps 8 \
+ --lr 2e-5 \
+ --max_datasets_size 512 \
+ --max_epochs 1 \
+```
+
+If you have 4x32 GB GPUs, you can even train the whole 7B model using our `colossalai_zero2_cpu` strategy! The script is given as follows.
+```
+torchrun --standalone --nproc_per_node=4 train_sft.py \
+ --pretrain "/path/to/LLaMa-7B/" \
+ --model 'llama' \
+ --strategy colossalai_zero2_cpu \
+ --log_interval 10 \
+ --save_path /path/to/Coati-7B \
+ --dataset /path/to/data.json \
+ --batch_size 1 \
+ --accimulation_steps 8 \
+ --lr 2e-5 \
+ --max_datasets_size 512 \
+ --max_epochs 1 \
+```
+
## The Plan
- [x] implement PPO fine-tuning
| ## 📌 Checklist before creating the PR
- [ ] I have created an issue for this PR for traceability
- [x] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [ ] I have added relevant tags if possible for us to better distinguish different PRs
## 🚨 Issue number
> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
## 📝 What does this PR do?
> Summarize your work here.
> if you have any plots/diagrams/screenshots/tables, please attach them here.
Add examples of training with limited resources in chat readme.
## 💥 Checklist before requesting a review
- [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [ ] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [x] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/3536 | 2023-04-12T07:38:35Z | 2023-04-12T07:47:09Z | 2023-04-12T07:47:09Z | 2023-04-12T07:47:09Z | 795 | hpcaitech/ColossalAI | 11,426 |
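The scripts above lean on gradient accumulation (`--accimulation_steps`, spelled that way in the script) to fit a 7B model on small GPUs: the effective batch size is `batch_size × accumulation_steps × num_gpus`. Below is a generic PyTorch sketch of the idea, not the project's actual trainer:

```python
# Generic gradient-accumulation loop: step the optimizer once every N
# micro-batches so peak memory stays at micro-batch scale.
import torch
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, accumulation_steps: int = 8):
    model.train()
    optimizer.zero_grad()
    for step, (inputs, labels) in enumerate(loader):
        loss = F.cross_entropy(model(inputs), labels)
        (loss / accumulation_steps).backward()  # scale so gradients average
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```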
Update README-tr-TR.md | diff --git a/doc/translations/README-tr-TR.md b/doc/translations/README-tr-TR.md
index 485a1fcd900..f2508202abc 100644
--- a/doc/translations/README-tr-TR.md
+++ b/doc/translations/README-tr-TR.md
@@ -37,9 +37,9 @@ Bütün seçenekleri gösterir
python sqlmap.py -hh
-Program ile ilgili örnekleri [burada](https://asciinema.org/a/46601) bulabilirsiniz. Daha fazlası içinsqlmap'in bütün açıklamaları ile birlikte bütün özelliklerinin, örnekleri ile bulunduğu [manuel sayfamıza](https://github.com/sqlmapproject/sqlmap/wiki/Usage) bakmanızı tavsiye ediyoruz
+Program ile ilgili örnekleri [burada](https://asciinema.org/a/46601) bulabilirsiniz. Daha fazlası için sqlmap'in bütün açıklamaları ile birlikte bütün özelliklerinin, örnekleri ile bulunduğu [manuel sayfamıza](https://github.com/sqlmapproject/sqlmap/wiki/Usage) bakmanızı tavsiye ediyoruz
-Links
+Bağlantılar
----
* Anasayfa: https://sqlmap.org
| Small fix for documentation. | https://api.github.com/repos/sqlmapproject/sqlmap/pulls/4832 | 2021-09-27T16:57:54Z | 2021-09-28T12:25:28Z | 2021-09-28T12:25:27Z | 2021-09-28T12:25:28Z | 277 | sqlmapproject/sqlmap | 14,963 |
Corrected typo. | diff --git a/docs/topics/practices.rst b/docs/topics/practices.rst
index 79a249482cf..525818d57d2 100644
--- a/docs/topics/practices.rst
+++ b/docs/topics/practices.rst
@@ -148,7 +148,7 @@ If you are still unable to prevent your bot getting banned, consider contacting
Dynamic Creation of Item Classes
================================
-For applications in which the structure of item classs is to be determined by
+For applications in which the structure of item class is to be determined by
user input, or other changing conditions, you can dynamically create item
classes instead of manually coding them.
| https://api.github.com/repos/scrapy/scrapy/pulls/402 | 2013-09-29T21:08:26Z | 2013-09-29T21:21:04Z | 2013-09-29T21:21:04Z | 2014-06-12T16:08:02Z | 144 | scrapy/scrapy | 35,176 |
|
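The documentation passage corrected above refers to Scrapy's dynamic creation of item classes. A minimal sketch of that technique, building an `Item` subclass at runtime with `type()` (class and field names are illustrative):

```python
# Build a Scrapy Item class at runtime from user-supplied field names.
import scrapy

def create_item_class(class_name: str, field_names: list) -> type:
    fields = {name: scrapy.Field() for name in field_names}
    return type(class_name, (scrapy.Item,), fields)

ProductItem = create_item_class("ProductItem", ["name", "price"])
item = ProductItem(name="widget", price="9.99")
print(dict(item))  # {'name': 'widget', 'price': '9.99'}
```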
AstraDB VectorStore: implement pre_delete_collection | diff --git a/libs/langchain/langchain/vectorstores/astradb.py b/libs/langchain/langchain/vectorstores/astradb.py
index 0eacbd2d9264ae..0b5274c90badce 100644
--- a/libs/langchain/langchain/vectorstores/astradb.py
+++ b/libs/langchain/langchain/vectorstores/astradb.py
@@ -78,43 +78,46 @@ class AstraDB(VectorStore):
vectorstore.add_texts(["Giraffes", "All good here"])
results = vectorstore.similarity_search("Everything's ok", k=1)
- Constructor args (only keyword-arguments accepted):
- embedding (Embeddings): embedding function to use.
- collection_name (str): name of the Astra DB collection to create/use.
- token (Optional[str]): API token for Astra DB usage.
- api_endpoint (Optional[str]): full URL to the API endpoint,
- such as "https://<DB-ID>-us-east1.apps.astra.datastax.com".
- astra_db_client (Optional[Any]): *alternative to token+api_endpoint*,
- you can pass an already-created 'astrapy.db.AstraDB' instance.
- namespace (Optional[str]): namespace (aka keyspace) where the
- collection is created. Defaults to the database's "default namespace".
- metric (Optional[str]): similarity function to use out of those
- available in Astra DB. If left out, it will use Astra DB API's
- defaults (i.e. "cosine" - but, for performance reasons,
- "dot_product" is suggested if embeddings are normalized to one).
-
- Advanced arguments (coming with sensible defaults):
- batch_size (Optional[int]): Size of batches for bulk insertions.
- bulk_insert_batch_concurrency (Optional[int]): Number of threads
- to insert batches concurrently.
- bulk_insert_overwrite_concurrency (Optional[int]): Number of
- threads in a batch to insert pre-existing entries.
- bulk_delete_concurrency (Optional[int]): Number of threads
- (for deleting multiple rows concurrently).
-
- A note on concurrency: as a rule of thumb, on a typical client machine
- it is suggested to keep the quantity
- bulk_insert_batch_concurrency * bulk_insert_overwrite_concurrency
- much below 1000 to avoid exhausting the client multithreading/networking
- resources. The hardcoded defaults are somewhat conservative to meet
- most machines' specs, but a sensible choice to test may be:
- bulk_insert_batch_concurrency = 80
- bulk_insert_overwrite_concurrency = 10
- A bit of experimentation is required to nail the best results here,
- depending on both the machine/network specs and the expected workload
- (specifically, how often a write is an update of an existing id).
- Remember you can pass concurrency settings to individual calls to
- add_texts and add_documents as well.
+ Constructor Args (only keyword-arguments accepted):
+ embedding (Embeddings): embedding function to use.
+ collection_name (str): name of the Astra DB collection to create/use.
+ token (Optional[str]): API token for Astra DB usage.
+ api_endpoint (Optional[str]): full URL to the API endpoint,
+ such as "https://<DB-ID>-us-east1.apps.astra.datastax.com".
+ astra_db_client (Optional[Any]): *alternative to token+api_endpoint*,
+ you can pass an already-created 'astrapy.db.AstraDB' instance.
+ namespace (Optional[str]): namespace (aka keyspace) where the
+ collection is created. Defaults to the database's "default namespace".
+ metric (Optional[str]): similarity function to use out of those
+ available in Astra DB. If left out, it will use Astra DB API's
+ defaults (i.e. "cosine" - but, for performance reasons,
+ "dot_product" is suggested if embeddings are normalized to one).
+
+ Advanced arguments (coming with sensible defaults):
+ batch_size (Optional[int]): Size of batches for bulk insertions.
+ bulk_insert_batch_concurrency (Optional[int]): Number of threads
+ to insert batches concurrently.
+ bulk_insert_overwrite_concurrency (Optional[int]): Number of
+ threads in a batch to insert pre-existing entries.
+ bulk_delete_concurrency (Optional[int]): Number of threads
+ (for deleting multiple rows concurrently).
+ pre_delete_collection (Optional[bool]): whether to delete the collection
+ before creating it. If False and the collection already exists,
+ the collection will be used as is.
+
+ A note on concurrency: as a rule of thumb, on a typical client machine
+ it is suggested to keep the quantity
+ bulk_insert_batch_concurrency * bulk_insert_overwrite_concurrency
+ much below 1000 to avoid exhausting the client multithreading/networking
+ resources. The hardcoded defaults are somewhat conservative to meet
+ most machines' specs, but a sensible choice to test may be:
+ bulk_insert_batch_concurrency = 80
+ bulk_insert_overwrite_concurrency = 10
+ A bit of experimentation is required to nail the best results here,
+ depending on both the machine/network specs and the expected workload
+ (specifically, how often a write is an update of an existing id).
+ Remember you can pass concurrency settings to individual calls to
+ add_texts and add_documents as well.
"""
@staticmethod
@@ -138,6 +141,7 @@ def __init__(
bulk_insert_batch_concurrency: Optional[int] = None,
bulk_insert_overwrite_concurrency: Optional[int] = None,
bulk_delete_concurrency: Optional[int] = None,
+ pre_delete_collection: bool = False,
) -> None:
"""
Create an AstraDB vector store object. See class docstring for help.
@@ -154,6 +158,7 @@ def __init__(
"Could not import a recent astrapy python package. "
"Please install it with `pip install --upgrade astrapy`."
)
+
# Conflicting-arg checks:
if astra_db_client is not None:
if token is not None or api_endpoint is not None:
@@ -191,7 +196,10 @@ def __init__(
api_endpoint=self.api_endpoint,
namespace=self.namespace,
)
- self._provision_collection()
+ if not pre_delete_collection:
+ self._provision_collection()
+ else:
+ self.clear()
self.collection = LibAstraDBCollection(
collection_name=self.collection_name,
diff --git a/libs/langchain/tests/integration_tests/vectorstores/test_astradb.py b/libs/langchain/tests/integration_tests/vectorstores/test_astradb.py
index d8f4fb494e8125..a4718036fb35b4 100644
--- a/libs/langchain/tests/integration_tests/vectorstores/test_astradb.py
+++ b/libs/langchain/tests/integration_tests/vectorstores/test_astradb.py
@@ -148,6 +148,41 @@ def test_astradb_vectorstore_create_delete(self) -> None:
)
v_store_2.delete_collection()
+ def test_astradb_vectorstore_pre_delete_collection(self) -> None:
+ """Create and delete."""
+ emb = SomeEmbeddings(dimension=2)
+ # creation by passing the connection secrets
+
+ v_store = AstraDB(
+ embedding=emb,
+ collection_name="lc_test_pre_del",
+ token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
+ api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
+ namespace=os.environ.get("ASTRA_DB_KEYSPACE"),
+ )
+ try:
+ v_store.add_texts(
+ texts=["aa"],
+ metadatas=[
+ {"k": "a", "ord": 0},
+ ],
+ ids=["a"],
+ )
+ res1 = v_store.similarity_search("aa", k=5)
+ assert len(res1) == 1
+ v_store = AstraDB(
+ embedding=emb,
+ pre_delete_collection=True,
+ collection_name="lc_test_pre_del",
+ token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
+ api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
+ namespace=os.environ.get("ASTRA_DB_KEYSPACE"),
+ )
+ res1 = v_store.similarity_search("aa", k=5)
+ assert len(res1) == 0
+ finally:
+ v_store.delete_collection()
+
def test_astradb_vectorstore_from_x(self) -> None:
"""from_texts and from_documents methods."""
emb = SomeEmbeddings(dimension=2)
| - **Description:** some vector stores have a flag for deleting the collection before creating it (such as `pgvector`). This is a useful flag when prototyping indexing pipelines and also for integration tests. Added the bool flag `pre_delete_collection` to the constructor (default `False`).
- **Tag maintainer:** @hemidactylus
- **Twitter handle:** nicoloboschi
| https://api.github.com/repos/langchain-ai/langchain/pulls/13780 | 2023-11-23T15:34:21Z | 2023-12-03T20:06:20Z | 2023-12-03T20:06:20Z | 2023-12-03T20:06:20Z | 1,917 | langchain-ai/langchain | 43,255 |
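A hypothetical usage sketch of the new flag; the token and endpoint are placeholders, and `FakeEmbeddings` stands in for a real embedding model:

```python
# Rebuild the collection from scratch on startup by passing the new flag.
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores.astradb import AstraDB

store = AstraDB(
    embedding=FakeEmbeddings(size=2),
    collection_name="my_docs",
    token="AstraCS:...",  # placeholder credentials
    api_endpoint="https://<DB-ID>-us-east1.apps.astra.datastax.com",
    pre_delete_collection=True,  # drop any pre-existing collection first
)
```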
ref(saml): Clean up metadata view error messages | diff --git a/src/sentry/auth/providers/saml2.py b/src/sentry/auth/providers/saml2.py
index 5a38dbca4c3cf..ef67f2b5a75b5 100644
--- a/src/sentry/auth/providers/saml2.py
+++ b/src/sentry/auth/providers/saml2.py
@@ -342,11 +342,11 @@ def dispatch(self, request, organization_slug):
metadata = saml_settings.get_sp_metadata()
errors = saml_settings.validate_metadata(metadata)
- if len(errors) == 0:
- resp = HttpResponse(content=metadata, content_type='text/xml')
- else:
- resp = HttpResponseServerError(content=', '.join(errors))
- return resp
+ if len(errors) > 0:
+ message = '\n'.join(errors)
+ return HttpResponseServerError(content=message, content_type='plain/text')
+
+ return HttpResponse(content=metadata, content_type='text/xml')
class SAML2Provider(Provider):
| I'm okay with these being plain text, since the metadata is XML anyway. | https://api.github.com/repos/getsentry/sentry/pulls/6174 | 2017-09-22T18:21:36Z | 2017-09-22T20:05:13Z | 2017-09-22T20:05:13Z | 2020-12-22T19:31:03Z | 219 | getsentry/sentry | 44,537 |
Fixed typo in CHANGES.md | diff --git a/CHANGES.md b/CHANGES.md
index 79b5c6034e8..a75b54d8d81 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -68,7 +68,7 @@
### Parser
-- Fix bug where attributes named `type` were not acccepted inside `match` statements
+- Fix bug where attributes named `type` were not accepted inside `match` statements
(#3950)
- Add support for PEP 695 type aliases containing lambdas and other unusual expressions
(#3949)
@@ -926,7 +926,7 @@ and the first release covered by our new
[`master`](https://github.com/psf/black/tree/main) branch with the
[`main`](https://github.com/psf/black/tree/main) branch. Some additional changes in
the source code were also made. (#2210)
-- Sigificantly reorganized the documentation to make much more sense. Check them out by
+- Significantly reorganized the documentation to make much more sense. Check them out by
heading over to [the stable docs on RTD](https://black.readthedocs.io/en/stable/).
(#2174)
| acccepted -> accepted
Sigificantly -> Significantly | https://api.github.com/repos/psf/black/pulls/3963 | 2023-10-22T07:36:38Z | 2023-10-22T21:16:44Z | 2023-10-22T21:16:43Z | 2023-10-22T21:16:44Z | 270 | psf/black | 24,503 |
Decrease default line length for iter_lines | diff --git a/requests/models.py b/requests/models.py
index 5202e6f4ba..bbef7ad832 100644
--- a/requests/models.py
+++ b/requests/models.py
@@ -29,7 +29,7 @@
REDIRECT_STATI = (codes.moved, codes.found, codes.other, codes.temporary_moved)
CONTENT_CHUNK_SIZE = 10 * 1024
-ITER_CHUNK_SIZE = 10 * 1024
+ITER_CHUNK_SIZE = 512
log = logging.getLogger(__name__)
@@ -121,7 +121,7 @@ def _encode_files(files, data):
fp = StringIO(fp)
if isinstance(fp, bytes):
fp = BytesIO(fp)
-
+
if ft:
new_v = (fn, fp.read(), ft)
else:
| Improve the responsiveness of `iter_lines` when called by unsuspecting users. =)
Belatedly changed in response to #989. Obviously, feel free to pick a different number if you think this is no good.
| https://api.github.com/repos/psf/requests/pulls/1122 | 2013-01-21T21:17:17Z | 2013-01-22T13:09:12Z | 2013-01-22T13:09:12Z | 2021-09-08T23:06:17Z | 181 | psf/requests | 32,547 |
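For context, `chunk_size` is also exposed per call, so the sketch below shows the streaming pattern the smaller default is meant to make more responsive (the URL is just a public echo service used for illustration):

```python
# Stream a response line by line; with a 512-byte chunk size, lines are
# surfaced as soon as a small chunk completes them.
import requests

resp = requests.get("https://httpbin.org/stream/5", stream=True)
for line in resp.iter_lines(chunk_size=512):  # matches the new default
    if line:  # skip keep-alive empty lines
        print(line.decode("utf-8"))
```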
NLP: Add VDCNN for NLP and FastText | diff --git a/README.md b/README.md
index a2e01d2..5eb3a51 100644
--- a/README.md
+++ b/README.md
@@ -200,6 +200,10 @@ I would continue adding papers to this roadmap.
**[7]** Karl Moritz Hermann, et al. "**Teaching Machines to Read and Comprehend**." arXiv preprint arXiv:1506.03340(2015) [[pdf]](https://arxiv.org/abs/1506.03340) **(CNN/DailyMail cloze style questions)** :star::star:
+**[8]** Alexis Conneau, et al. "**Very Deep Convolutional Networks for Natural Language Processing**." arXiv preprint arXiv:1606.01781(2016) [[pdf]](https://arxiv.org/abs/1606.01781) **(state-of-the-art in text classification)** :star::star::star:
+
+**[9]** Armand Joulin, et al. "**Bag of Tricks for Efficient Text Classification**." arXiv preprint arXiv:1607.01759(2016) [[pdf]](https://arxiv.org/abs/1607.01759) **(slightly worse than state-of-the-art, but a lot faster)** :star::star::star:
+
## 3.2 Object Detection
**[1]** Szegedy, Christian, Alexander Toshev, and Dumitru Erhan. "**Deep neural networks for object detection**." Advances in Neural Information Processing Systems. 2013. [[pdf]](http://papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection.pdf) :star::star::star:
| These papers describe two of the best algorithms for text classification
| https://api.github.com/repos/floodsung/Deep-Learning-Papers-Reading-Roadmap/pulls/16 | 2016-10-23T17:43:03Z | 2016-10-23T22:58:20Z | 2016-10-23T22:58:20Z | 2016-10-23T22:58:20Z | 385 | floodsung/Deep-Learning-Papers-Reading-Roadmap | 51,710 |
Solution for Euler Problem 26 | diff --git a/project_euler/problem_26/__init__.py b/project_euler/problem_26/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/project_euler/problem_26/sol1.py b/project_euler/problem_26/sol1.py
new file mode 100644
index 000000000000..7b8c44c9c828
--- /dev/null
+++ b/project_euler/problem_26/sol1.py
@@ -0,0 +1,41 @@
+"""
+Euler Problem 26
+https://projecteuler.net/problem=26
+Find the value of d < 1000 for which 1/d contains the longest recurring cycle
+in its decimal fraction part.
+"""
+
+def find_digit(numerator: int, digit: int) -> int:
+ """
+ Considering any range can be provided,
+ because as per the problem, the digit d < 1000
+ >>> find_digit(1, 10)
+ 7
+ >>> find_digit(10, 100)
+ 97
+ >>> find_digit(10, 1000)
+ 983
+ """
+ the_digit = 1
+ longest_list_length = 0
+
+ for divide_by_number in range(numerator, digit + 1):
+ has_been_divided = []
+ now_divide = numerator
+ for division_cycle in range(1, digit + 1):
+ if now_divide in has_been_divided:
+ if longest_list_length < len(has_been_divided):
+ longest_list_length = len(has_been_divided)
+ the_digit = divide_by_number
+ else:
+ has_been_divided.append(now_divide)
+ now_divide = now_divide * 10 % divide_by_number
+
+ return the_digit
+
+
+# Tests
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
| Adding a solution for Euler Problem 26.
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| https://api.github.com/repos/TheAlgorithms/Python/pulls/1939 | 2020-05-03T19:30:36Z | 2020-05-03T20:48:17Z | 2020-05-03T20:48:17Z | 2020-05-03T20:51:07Z | 457 | TheAlgorithms/Python | 29,633 |
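For comparison, the cycle length can also be computed directly by tracking long-division remainders; the following independent sketch reproduces the PR's answer of 983 for d < 1000:

```python
# Cycle length of 1/d: distance between repeats of the first repeated remainder.
def cycle_length(d: int) -> int:
    seen = {}
    remainder, position = 1, 0
    while remainder and remainder not in seen:
        seen[remainder] = position
        remainder = remainder * 10 % d
        position += 1
    return position - seen[remainder] if remainder else 0

print(max(range(2, 1000), key=cycle_length))  # 983
```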
Update README.md | diff --git a/README.md b/README.md
index 6a13c0f..002529f 100644
--- a/README.md
+++ b/README.md
@@ -59,6 +59,7 @@ So, here we go...
+ [▶ The disappearing variable from outer scope](#-the-disappearing-variable-from-outer-scope)
+ [▶ The mysterious key type conversion](#-the-mysterious-key-type-conversion)
+ [▶ Let's see if you can guess this?](#-lets-see-if-you-can-guess-this)
+ + [▶ Exceeds the limit for integer string conversion](#-exceeds-the-limit-for-integer-string-conversion)
* [Section: Slippery Slopes](#section-slippery-slopes)
+ [▶ Modifying a dictionary while iterating over it](#-modifying-a-dictionary-while-iterating-over-it)
+ [▶ Stubborn `del` operation](#-stubborn-del-operation)
@@ -1975,9 +1976,45 @@ a, b = a[b] = {}, 5
True
```
+
---
+
+### ▶ Exceeds the limit for integer string conversion
+```py
+>>> # Python 3.10.6
+>>> int("2" * 5432)
+
+>>> # Python 3.10.8
+>>> int("2" * 5432)
+```
+
+**Output:**
+```py
+>>> # Python 3.10.6
+222222222222222222222222222222222222222222222222222222222222222...
+
+>>> # Python 3.10.8
+Traceback (most recent call last):
+ ...
+ValueError: Exceeds the limit (4300) for integer string conversion:
+ value has 5432 digits; use sys.set_int_max_str_digits()
+ to increase the limit.
+```
+
+#### 💡 Explanation:
+This call to `int()` works fine in Python 3.10.6 and raises a ValueError in Python 3.10.8. Note that Python can still work with large integers. The error is only raised when converting between integers and strings.
+
+Fortunately, you can increase the limit for the allowed number of digits when you expect an operation to exceed it. To do this, you can use one of the following:
+- The -X int_max_str_digits command-line flag
+- The set_int_max_str_digits() function from the sys module
+- The PYTHONINTMAXSTRDIGITS environment variable
+
+[Check the documentation](https://docs.python.org/3/library/stdtypes.html#int-max-str-digits) for more details on changing the default limit if you expect your code to exceed this value.
+
+
---
+
## Section: Slippery Slopes
### ▶ Modifying a dictionary while iterating over it
| Add new feature: Exceeds the limit for integer string conversion | https://api.github.com/repos/satwikkansal/wtfpython/pulls/300 | 2022-10-21T20:01:02Z | 2022-11-01T09:21:09Z | 2022-11-01T09:21:09Z | 2022-11-01T09:21:22Z | 638 | satwikkansal/wtfpython | 25,791 |
support 'on prompt' event handler on backend | diff --git a/server.py b/server.py
index d1295342bb..be1048be08 100644
--- a/server.py
+++ b/server.py
@@ -1,6 +1,8 @@
import os
import sys
import asyncio
+import traceback
+
import nodes
import folder_paths
import execution
@@ -88,6 +90,8 @@ def __init__(self, loop):
self.last_node_id = None
self.client_id = None
+ self.on_prompt_handlers = []
+
@routes.get('/ws')
async def websocket_handler(request):
ws = web.WebSocketResponse()
@@ -438,6 +442,7 @@ async def post_prompt(request):
resp_code = 200
out_string = ""
json_data = await request.json()
+ json_data = self.trigger_on_prompt(json_data)
if "number" in json_data:
number = float(json_data['number'])
@@ -606,3 +611,15 @@ async def start(self, address, port, verbose=True, call_on_start=None):
if call_on_start is not None:
call_on_start(address, port)
+ def add_on_prompt_handler(self, handler):
+ self.on_prompt_handlers.append(handler)
+
+ def trigger_on_prompt(self, json_data):
+ for handler in self.on_prompt_handlers:
+ try:
+ json_data = handler(json_data)
+ except Exception as e:
+ print(f"[ERROR] An error occurred during the on_prompt_handler processing")
+ traceback.print_exc()
+
+ return json_data
| Allowing pre-prompt and pre-workflow checks before the prompt is executed enables pre-execution tasks that have a global impact, such as extension garbage collection, prompt editing, and other related actions. | https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/765 | 2023-06-13T08:46:58Z | 2023-08-28T04:52:22Z | 2023-08-28T04:52:22Z | 2023-08-28T04:54:01Z | 340 | comfyanonymous/ComfyUI | 17,725 |
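A hypothetical extension-side sketch of the new hook; it assumes the conventional `PromptServer.instance` singleton access used by ComfyUI custom nodes:

```python
# Register an on-prompt handler: inspect or edit the queued payload, then
# return it (trigger_on_prompt feeds each handler's return value onward).
import server

def log_prompt(json_data):
    prompt = json_data.get("prompt", {})
    print(f"[on_prompt] queued prompt with {len(prompt)} nodes")
    return json_data

server.PromptServer.instance.add_on_prompt_handler(log_prompt)
```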
BUG: Correct check for local import | diff --git a/sklearn/__check_build/__init__.py b/sklearn/__check_build/__init__.py
index 7303c9ea0a586..6256b99408256 100644
--- a/sklearn/__check_build/__init__.py
+++ b/sklearn/__check_build/__init__.py
@@ -18,7 +18,7 @@ def raise_build_error(e):
# directory to help debugging on the mailing list.
local_dir = os.path.split(__file__)[0]
msg = STANDARD_MSG
- if local_dir == "sklearn/check_build":
+ if local_dir == "sklearn/__check_build":
# Picking up the local install: this will work only if the
# install is an 'inplace build'
msg = INPLACE_MSG
| With the move of the `check_build` directory to `__check_build` in cd7404706, the comparison for displaying the error message indicating an import from a source tree that has not been built in place was incorrect. This fixes the bug so that the **INPLACE_MSG** is displayed in these cases.
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/4095 | 2015-01-13T20:44:25Z | 2015-01-14T13:30:16Z | 2015-01-14T13:30:16Z | 2015-01-14T13:30:16Z | 178 | scikit-learn/scikit-learn | 46,140 |
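A tiny sketch of the comparison being corrected, showing why the old literal never matched after the directory rename:

```python
# The directory half of __file__ is what gets compared against the literal.
import os

local_dir = os.path.split("sklearn/__check_build/__init__.py")[0]
print(local_dir)                             # sklearn/__check_build
print(local_dir == "sklearn/check_build")    # False: the old, broken check
print(local_dir == "sklearn/__check_build")  # True: the corrected check
```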