text1 stringlengths 2 269k | text2 stringlengths 2 242k | label int64 0 1 |
|---|---|---|
There's no e2e test for dynamic volume provisioning AFAICT.
Also, on Ubernetes-Lite, we should verify that a dynamically provisioned
volume gets zone labels.
|
I'm excited to see that auto-provisioned volumes are going to be in 1.2 (even
if in alpha), but AFAICT there's no e2e coverage yet.
| 1 |
#### Describe the bug
There is an issue when using `SimpleImputer` with a pandas DataFrame,
specifically when it has a column of type `Int64` containing `NA` values
in the training data.
#### Code to Reproduce
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

def test_simple_imputer_with_Int64_column():
    index = pd.Index(['A', 'B', 'C'], name='group')
    df = pd.DataFrame({
        'att-1': [10, 20, np.nan],
        'att-2': [30, 40, 30]
    }, index=index)
    # TODO: This line breaks the test! Comment out and it works
    df = df.astype('Int64')
    imputer = SimpleImputer()
    imputer.fit(df)
    imputed = imputer.transform(df)
    df_imputed = pd.DataFrame(imputed, columns=['att-1', 'att-2'], index=index)
    assert df_imputed.loc['C', 'att-1'] == 15
#### Expected Results
The correct value (15, the mean of the column) is imputed
#### Actual Results
Exception raised:
TypeError: float() argument must be a string or a number, not 'NAType'
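A possible workaround (my own suggestion, not part of the original report) is to cast the nullable `Int64` columns back to `float64` before fitting, which converts `pd.NA` to `np.nan`, a value the imputer does understand:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

index = pd.Index(['A', 'B', 'C'], name='group')
df = pd.DataFrame({'att-1': [10, 20, np.nan],
                   'att-2': [30, 40, 30]}, index=index).astype('Int64')

# Casting to float64 turns pd.NA into np.nan, which SimpleImputer handles;
# the missing 'att-1' value is then imputed with the column mean, 15.0.
imputed = SimpleImputer().fit_transform(df.astype('float64'))
```

This loses the nullable integer dtype, of course, but it unblocks the pipeline until `check_array` learns about `pd.NA`.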
#### Versions
System:
python: 3.7.4 (default, Aug 13 2019, 15:17:50) [Clang 4.0.1 (tags/RELEASE_401/final)]
executable: <path-to-my-project>/.venv/bin/python
machine: Darwin-19.3.0-x86_64-i386-64bit
Python dependencies:
pip: 19.3.1
setuptools: 42.0.2
sklearn: 0.22.1
numpy: 1.18.1
scipy: 1.4.1
Cython: None
pandas: 1.0.1
matplotlib: None
joblib: 0.14.1
Built with OpenMP: True
|
#### Describe the bug
Starting from pandas 1.0, an experimental pd.NA value (singleton) is available
to represent scalar missing values. At this moment, it is used in the nullable
integer, boolean and dedicated string data types as the missing value
indicator.
I get the error `TypeError: float() argument must be a string or a number, not
'NAType'` when passing integer data containing NaN in the form of a pandas
dataframe to the preprocessing module, in particular QuantileTransformer and
StandardScaler, after updating pandas to the current version.
#### Steps/Code to Reproduce
Example:
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
df = pd.DataFrame({'a': [1, 2, 3, np.nan, np.nan],
                   'b': [np.nan, np.nan, 8, 4, 6]},
                  dtype=pd.Int64Dtype())
scaler = StandardScaler()
scaler.fit_transform(df)
#### Expected Results
array([[-1.22474487, nan],
[ 0. , nan],
[ 1.22474487, 1.22474487],
[ nan, -1.22474487],
[ nan, 0. ]])
#### Actual Results
TypeError Traceback (most recent call last)
<ipython-input-42-2104609ef9c0> in <module>
7 print(df)
8 scaler = StandardScaler()
----> 9 scaler.fit_transform(df)
/anaconda3/lib/python3.6/site-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params)
569 if y is None:
570 # fit method of arity 1 (unsupervised transformation)
--> 571 return self.fit(X, **fit_params).transform(X)
572 else:
573 # fit method of arity 2 (supervised transformation)
/anaconda3/lib/python3.6/site-packages/sklearn/preprocessing/_data.py in fit(self, X, y)
667 # Reset internal state before fitting
668 self._reset()
--> 669 return self.partial_fit(X, y)
670
671 def partial_fit(self, X, y=None):
/anaconda3/lib/python3.6/site-packages/sklearn/preprocessing/_data.py in partial_fit(self, X, y)
698 X = check_array(X, accept_sparse=('csr', 'csc'),
699 estimator=self, dtype=FLOAT_DTYPES,
--> 700 force_all_finite='allow-nan')
701
702 # Even in the case of `with_mean=False`, we update the mean anyway
/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
529 array = array.astype(dtype, casting="unsafe", copy=False)
530 else:
--> 531 array = np.asarray(array, order=order, dtype=dtype)
532 except ComplexWarning:
533 raise ValueError("Complex data not supported\n"
/anaconda3/lib/python3.6/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order)
83
84 """
---> 85 return array(a, dtype, copy=False, order=order)
86
87
TypeError: float() argument must be a string or a number, not 'NAType'
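A possible workaround (my own suggestion, not from sklearn's docs for this case) is to materialize a plain float array first, mapping `pd.NA` to `np.nan`; `DataFrame.to_numpy` grew an `na_value` argument in pandas 1.1, if I recall correctly:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({'a': [1, 2, 3, np.nan, np.nan],
                   'b': [np.nan, np.nan, 8, 4, 6]},
                  dtype=pd.Int64Dtype())

# Convert the nullable Int64 columns to float64, turning pd.NA into
# np.nan; StandardScaler ignores NaN when fitting and keeps it in the
# transformed output, matching the expected array above.
X = df.to_numpy(dtype='float64', na_value=np.nan)
scaled = StandardScaler().fit_transform(X)
```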
#### Versions
System:
python: 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 15:01:53) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
executable: /anaconda3/bin/python
machine: Darwin-17.7.0-x86_64-i386-64bit
Python dependencies:
pip: 20.0.2
setuptools: 45.2.0.post20200210
sklearn: 0.22.1
numpy: 1.18.1
scipy: 1.4.1
Cython: None
pandas: 1.0.1
matplotlib: 3.1.3
joblib: 0.14.1
Built with OpenMP: True
| 1 |
### Describe the issue:
numpy 1.20 encouraged specifying plain `bool` as a dtype as an equivalent to
`np.bool_`, but these aliases don't behave the same as the explicit numpy
versions. mypy infers the dtype as "Any" instead. See the example below, where
I expected both lines to output the same type.
### Reproduce the code example:
import numpy as np
def what_the() -> None:
reveal_type(np.arange(10, dtype=bool))
reveal_type(np.arange(10, dtype=np.bool_))
### Error message:
No error, but output from mypy 0.961:
show_type2.py:4: note: Revealed type is "numpy.ndarray[Any, numpy.dtype[Any]]"
show_type2.py:5: note: Revealed type is "numpy.ndarray[Any, numpy.dtype[numpy.bool_]]"
### NumPy/Python version information:
1.23.0 3.10.5 (main, Jun 11 2022, 16:53:24) [GCC 9.4.0]
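Until the builtin alias carries type information, a stopgap (my own workaround, not an official recommendation) is to use the explicit numpy scalar type, or to annotate the result, so the dtype parameter is preserved:

```python
import numpy as np
import numpy.typing as npt

# With the builtin alias, mypy infers ndarray[Any, dtype[Any]]; the
# explicit numpy scalar type preserves the dtype[np.bool_] parameter.
loose = np.zeros(10, dtype=bool)
strict: npt.NDArray[np.bool_] = np.zeros(10, dtype=np.bool_)
```

At runtime both arrays are identical; the difference only shows up in static analysis.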
|
Back in #17719 the first steps were taken toward introducing static typing
support for array dtypes.
Since the dtype has a substantial effect on the semantics of an array, there
is a lot of type-safety
to be gained if the various function-annotations in numpy can actually utilize
this information.
Examples of this would be the rejection of string-arrays for arithmetic
operations, or inferring the
output dtype of mixed float/integer operations.
## The Plan
With this in mind I'd ideally like to implement some basic dtype support
throughout the main numpy
namespace (xref #16546) before the release of 1.22.
Now, what does "basic" mean in this context? Namely, any array-/dtype-like
that can be parametrized
w.r.t. `np.generic`. Notably this excludes builtin scalar types and character
codes (literal strings), as the
only way of implementing the latter two is via excessive use of overloads.
With this in mind, I realistically only expect dtype-support for builtin
scalar types (_e.g._ `func(..., dtype=float)`) to be added with the help of
a mypy plugin, _e.g._ via injecting a type-check-only method into the likes
of `builtins.int` that holds some sort of explicit reference to `np.int_`.
## Examples
Two examples wherein the dtype can be automatically inferred:
from typing import TYPE_CHECKING
import numpy as np
AR_1 = np.array(np.float64(1))
AR_2 = np.array(1, dtype=np.float64)
if TYPE_CHECKING:
reveal_type(AR_1) # note: Revealed type is "numpy.ndarray[Any, numpy.dtype[numpy.floating*[numpy.typing._64Bit*]]]"
reveal_type(AR_2) # note: Revealed type is "numpy.ndarray[Any, numpy.dtype[numpy.floating*[numpy.typing._64Bit*]]]"
Three examples wherein dtype-support is substantially more difficult to
implement:
AR_3 = np.array(1.0)
AR_4 = np.array(1, dtype=float)
AR_5 = np.array(1, dtype="f8")
if TYPE_CHECKING:
reveal_type(AR_3) # note: Revealed type is "numpy.ndarray[Any, numpy.dtype[Any]]"
reveal_type(AR_4) # note: Revealed type is "numpy.ndarray[Any, numpy.dtype[Any]]"
reveal_type(AR_5) # note: Revealed type is "numpy.ndarray[Any, numpy.dtype[Any]]"
In the latter three cases one can always manually declare the dtype of the
array:
import numpy.typing as npt
AR_6: npt.NDArray[np.float64] = np.array(1.0)
if TYPE_CHECKING:
reveal_type(AR_6) # note: Revealed type is "numpy.ndarray[Any, numpy.dtype[numpy.floating*[numpy.typing._64Bit*]]]"
| 1 |
When I run `flutter test --coverage` on
`https://github.com/dnfield/flutter_svg`, I eventually get this output over
and over again:
unhandled error during test:
/Users/dnfield/src/flutter_svg/test/xml_svg_test.dart
Bad state: Couldn't find line and column for token 2529 in file:///b/build/slave/Linux_Engine/build/src/third_party/dart/sdk/lib/collection/list.dart.
#0 VMScript._lineAndColumn (package:vm_service_client/src/script.dart:243:5)
#1 _ScriptLocation._ensureLineAndColumn (package:vm_service_client/src/script.dart:314:26)
#2 _ScriptLocation.line (package:vm_service_client/src/script.dart:295:5)
#3 _getCoverageJson (package:coverage/src/collect.dart:103:46)
<asynchronous suspension>
#4 _getAllCoverage (package:coverage/src/collect.dart:51:26)
<asynchronous suspension>
#5 collect (package:coverage/src/collect.dart:35:18)
<asynchronous suspension>
#6 CoverageCollector.collectCoverage (package:flutter_tools/src/test/coverage_collector.dart:55:45)
<asynchronous suspension>
#7 CoverageCollector.handleFinishedTest (package:flutter_tools/src/test/coverage_collector.dart:27:11)
<asynchronous suspension>
#8 _FlutterPlatform._startTest (package:flutter_tools/src/test/flutter_platform.dart:650:30)
<asynchronous suspension>
#9 _FlutterPlatform.loadChannel (package:flutter_tools/src/test/flutter_platform.dart:408:36)
#10 PlatformPlugin.load (package:test/src/runner/plugin/platform.dart:65:19)
<asynchronous suspension>
#11 Loader.loadFile.<anonymous closure> (package:test/src/runner/loader.dart:248:36)
<asynchronous suspension>
#12 new LoadSuite.<anonymous closure>.<anonymous closure> (package:test/src/runner/load_suite.dart:92:31)
<asynchronous suspension>
#13 invoke (package:test/src/utils.dart:241:5)
#14 new LoadSuite.<anonymous closure> (package:test/src/runner/load_suite.dart:91:7)
#15 Invoker._onRun.<anonymous closure>.<anonymous closure>.<anonymous closure>.<anonymous closure> (package:test/src/backend/invoker.dart:404:25)
<asynchronous suspension>
#16 new Future.<anonymous closure> (dart:async/future.dart:176:37)
#17 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209:15)
#18 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119:48)
#19 _rootRun (dart:async/zone.dart:1120:38)
#20 _CustomZone.run (dart:async/zone.dart:1021:19)
#21 _CustomZone.runGuarded (dart:async/zone.dart:923:7)
#22 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:963:23)
#23 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209:15)
#24 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119:48)
#25 _rootRun (dart:async/zone.dart:1124:13)
#26 _CustomZone.run (dart:async/zone.dart:1021:19)
#27 _CustomZone.bindCallback.<anonymous closure> (dart:async/zone.dart:947:23)
#28 Timer._createTimer.<anonymous closure> (dart:async/runtime/libtimer_patch.dart:21:15)
#29 _Timer._runTimers (dart:isolate/runtime/libtimer_impl.dart:382:19)
#30 _Timer._handleMessage (dart:isolate/runtime/libtimer_impl.dart:416:5)
#31 _RawReceivePortImpl._handleMessage (dart:isolate/runtime/libisolate_patch.dart:171:12)
00:16 +14 -1: loading /Users/dnfield/src/flutter_svg/test/xml_svg_test.dart [E]
Bad state: Couldn't find line and column for token 2529 in file:///b/build/slave/Linux_Engine/build/src/third_party/dart/sdk/lib/collection/list.dart.
package:vm_service_client/src/script.dart 243:5 VMScript._lineAndColumn
package:vm_service_client/src/script.dart 314:26 _ScriptLocation._ensureLineAndColumn
package:vm_service_client/src/script.dart 295:5 _ScriptLocation.line
package:coverage/src/collect.dart 103:46 _getCoverageJson
===== asynchronous gap ===========================
dart:async/future_impl.dart 22:43 _Completer.completeError
dart:async/runtime/libasync_patch.dart 40:18 _AsyncAwaitCompleter.completeError
package:coverage/src/collect.dart _getCoverageJson
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async/runtime/libasync_patch.dart 77:23 _asyncThenWrapperHelper
package:coverage/src/collect.dart _getCoverageJson
package:coverage/src/collect.dart 51:26 _getAllCoverage
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async/runtime/libasync_patch.dart 77:23 _asyncThenWrapperHelper
package:coverage/src/collect.dart _getAllCoverage
package:coverage/src/collect.dart 35:18 collect
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async/runtime/libasync_patch.dart 77:23 _asyncThenWrapperHelper
package:coverage/src/collect.dart collect
package:flutter_tools/src/test/coverage_collector.dart 55:45 CoverageCollector.collectCoverage
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async/runtime/libasync_patch.dart 77:23 _asyncThenWrapperHelper
package:flutter_tools/src/test/flutter_platform.dart _FlutterPlatform._startTest
package:flutter_tools/src/test/flutter_platform.dart 408:36 _FlutterPlatform.loadChannel
package:test/src/runner/plugin/platform.dart 65:19 PlatformPlugin.load
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async/runtime/libasync_patch.dart 77:23 _asyncThenWrapperHelper
package:test/src/runner/loader.dart Loader.loadFile.<fn>
package:test/src/runner/load_suite.dart 92:31 new LoadSuite.<fn>.<fn>
===== asynchronous gap ===========================
dart:async/zone.dart 1045:19 _CustomZone.registerCallback
dart:async/zone.dart 962:22 _CustomZone.bindCallbackGuarded
dart:async/timer.dart 52:45 new Timer
dart:async/timer.dart 87:9 Timer.run
dart:async/future.dart 174:11 new Future
package:test/src/backend/invoker.dart 403:15 Invoker._onRun.<fn>.<fn>.<fn>
It eventually causes the process to hang and CI to time out. It reproduces
locally as well. Upstream Dart issue?
| 0 | |
You need to double-transpose vectors to make them into 1-column matrices so
that
you can add them to 1-column matrices or multiply them with one-row matrices
(on
either side).
|
julia> X = [ i^2 - j | i=1:10, j=1:10 ];
julia> typeof(X)
Array{Int64,2}
julia> X[:,1]
10x1 Int64 Array
0
3
8
15
24
35
48
63
80
99
| 1 |
React v16.3 context provided in `pages/_app.js` can be consumed and rendered
in pages on the client, but is undefined in SSR. This causes React SSR markup
mismatch errors.
Note that context can be universally provided/consumed _within_
`pages/_app.js`, the issue is specifically when providing context in
`pages/_app.js` and consuming it in a page such as `pages/index.js`.
* I have searched the issues of this repository and believe that this is not a duplicate.
## Expected Behavior
Context provided in `pages/_app.js` should be consumable in pages both on the
server for SSR and when browser rendering.
## Current Behavior
Context provided in `pages/_app.js` is undefined when consumed in pages for
SSR. It can only be consumed for client rendering.
## Steps to Reproduce (for bugs)
In `pages/_app.js`:
import App, { Container } from 'next/app'
import React from 'react'
import TestContext from '../context'
export default class MyApp extends App {
  render () {
    const { Component, pageProps } = this.props
    return (
      <Container>
        <TestContext.Provider value="Test value.">
          <Component {...pageProps} />
        </TestContext.Provider>
      </Container>
    )
  }
}
In `pages/index.js`:
import TestContext from '../context'
export default () => (
  <TestContext.Consumer>
    {value => value}
  </TestContext.Consumer>
)
In `context.js`:
import React from 'react'
export default React.createContext()
Will result in:

## Context
A large motivation for the `pages/_app.js` feature is to be able to provide
context persistently available across pages. It's unfortunate the current
implementation does not support this basic use case.
I'm attempting to isomorphically provide the cookie in context so that
`graphql-react` `<Query />` components can get the user's access token to make
GraphQL API requests. This approach used to work with separately decorated
pages.
## Your Environment
Tech | Version
---|---
next | v6.0.0-canary.5
node | v9.11.1
|
Relates to #2438.
If we add the same `<link />` tag multiple times in different `<Head />`
components, they are duplicated in the rendered DOM.
* I have searched the issues of this repository and believe that this is not a duplicate.
## Expected Behavior
Multiple `<link />` tags with exactly the same attributes should be de-duped
when rendering HTML `<head>` elements,
e.g.
component1.js
import Head from 'next/head'
<Head>
<link rel="stylesheet" href="/static/style.css" />
</Head>
component2.js
import Head from 'next/head'
<Head>
<link rel="stylesheet" href="/static/style.css" />
</Head>
Then the rendered HTML should be the following, where duplicated tags are de-duplicated:
<html>
<head>
<link rel="stylesheet" href="/static/style.css" class="next-head" />
</head>
</html>
## Current Behavior
There will be duplicated tags rendered
<html>
<head>
<link rel="stylesheet" href="/static/style.css" class="next-head" />
<link rel="stylesheet" href="/static/style.css" class="next-head" />
</head>
</html>
## Context
I am currently working on a solution with `react-apollo` which allows data
to be pre-fetched before a new page is rendered on the client side. The with-apollo-redux
example only supports prefetching data on the server side; therefore, on the
client side I have to walk through the whole component tree again and do data
fetching if a node in the component tree has been wrapped by the `graphql`
HOC.
Since the component tree contains `<Head />` tags that produce side effects
while going through the tree, duplicated `<link />` tags get inserted in
the DOM. I would have called `Head.rewind()`, but it is not allowed on the
client side.
As far as I know, react-helmet does not generate duplicated `<link />` tags.
I know we should leave the user the liberty of inserting duplicated `<link />`
tags if desired. But I believe in most cases the user doesn't want that. So I
am wondering if it would be OK to add a `unique` flag on the tags so that
head.js would de-dupe tags with `unique` specified.
import Head from 'next/head'
<Head>
<link rel="stylesheet" href="/static/style.css" unique />
</Head>
| 0 |
In Bootstrap 3.0.3, when I use the "table table-condensed table-bordered
table-striped" classes on a table, the table-striped defeats all the table
contextual classes (.success, .warning, .danger, .active) in its rows or
cells.
When only the table-striped class is removed, the contextual classes then work
perfectly within the rest of the table-level classes listed above.
I tried substituting the BS CSS "table-striped" rule so it would colorize the
even rows instead of the odd, but it still fails.
Is this a bug or by design?
|
I have a table with class .table-striped. It has 3 rows with tr.danger. Only
the middle one is red; the other two are the default color.
When I remove .table-striped, it works correctly.
| 1 |
**TypeScript Version:**
1.9.0-dev / nightly (1.9.0-dev.20160311)
**Code**
class ModelA<T extends BaseData>
{
    public prop : T;
    public constructor($prop : T) { }
    public Foo<TData, TModel>(
        $fn1 : ($x : T) => TData,
        $fn2 : ($x : TData) => TModel) { }
    public Foo1<TData, TModel extends ModelA<any>>(
        $fn1 : ($x : T) => TData,
        $fn2 : ($x : TData) => TModel) { }
    public Foo2<TData extends BaseData, TModel extends ModelA<TData>>(
        $fn1 : ($x : T) => TData,
        $fn2 : ($x : TData) => TModel) { }
}
class ModelB extends ModelA<Data> { }
class BaseData
{
    public a : string;
}
class Data extends BaseData
{
    public b : Data;
}
class P
{
    public static Run()
    {
        var modelA = new ModelA<Data>(new Data());
        modelA.Foo(x1 => x1.b, x2 => new ModelB(x2));
        modelA.Foo1(x1 => x1.b, x2 => new ModelB(x2));
        // Why is this not working??? inferred type for x2 : BaseData
        modelA.Foo2(x1 => x1.b, x2 => new ModelB(x2)); // Error
    }
}
**Expected behavior:**
The modelA.Foo2 call should infer the type.
**Actual behavior:**
It does not infer the actual type.
|
**TypeScript Version:**
1.8.0
**Code**
interface Class<T> {
new(): T;
}
declare function create1<T>(ctor: Class<T>): T;
declare function create2<T, C extends Class<T>>(ctor: C): T;
class A {}
let a1 = create1(A); // a: A --> OK
let a2 = create2(A); // a: {} --> Should be A
**Context**
The example above is simplified to illustrate the difference between `create1`
and `create2`. I need both type parameters for the use case I have in mind
(React) because it returns a type which is parameterized by both `T` and `C`:
declare function createElement<T, C extends Class<T>>(type: C): Element<T, C>;
var e = createElement(A); // e: Element<{}, typeof A> --> Should be Element<A, typeof A>
declare function render<T>(e: Element<T, any>): T;
var a = render(e); // a: {} --> Should be A
Again, this is simplified, but the motivation is to improve the return type
inference of `ReactDOM.render`.
| 1 |
## Steps to Reproduce
Ran the Hello World guide; the app crashes when I run it in debug mode. But
then I can start the app normally and everything works fine, though I could
not find a way to attach a debugger.
I am using a tablet: Lenovo TB3 710I
## Logs
Exception from flutter run: FormatException: Bad UTF-8 encoding 0xb4
dart:convert/utf.dart 558 _Utf8Decoder.convert
dart:convert/string_conversion.dart 333 _Utf8ConversionSink.addSlice
dart:convert/string_conversion.dart 329 _Utf8ConversionSink.add
dart:convert/chunked_conversion.dart 92 _ConverterStreamEventSink.add
dart:async/stream_transformers.dart 119 _SinkTransformerStreamSubscription._handleData
package:stack_trace/src/stack_zone_specification.dart 107 StackZoneSpecification._registerUnaryCallback.<fn>.<fn>
package:stack_trace/src/stack_zone_specification.dart 185 StackZoneSpecification._run
package:stack_trace/src/stack_zone_specification.dart 107 StackZoneSpecification._registerUnaryCallback.<fn>
package:stack_trace/src/stack_zone_specification.dart 107 StackZoneSpecification._registerUnaryCallback.<fn>.<fn>
package:stack_trace/src/stack_zone_specification.dart 185 StackZoneSpecification._run
package:stack_trace/src/stack_zone_specification.dart 107 StackZoneSpecification._registerUnaryCallback.<fn>
dart:async/zone.dart 1158 _rootRunUnary
dart:async/zone.dart 1037 _CustomZone.runUnary
dart:async/zone.dart 932 _CustomZone.runUnaryGuarded
dart:async/stream_impl.dart 331 _BufferingStreamSubscription._sendData
dart:async/stream_impl.dart 258 _BufferingStreamSubscription._add
dart:async/stream_controller.dart 768 _StreamController&&_SyncStreamControllerDispatch._sendData
dart:async/stream_controller.dart 635 _StreamController._add
dart:async/stream_controller.dart 581 _StreamController.add
dart:io-patch/socket_patch.dart 1680 _Socket._onData
package:stack_trace/src/stack_zone_specification.dart 107 StackZoneSpecification._registerUnaryCallback.<fn>.<fn>
package:stack_trace/src/stack_zone_specification.dart 185 StackZoneSpecification._run
package:stack_trace/src/stack_zone_specification.dart 107 StackZoneSpecification._registerUnaryCallback.<fn>
package:stack_trace/src/stack_zone_specification.dart 107 StackZoneSpecification._registerUnaryCallback.<fn>.<fn>
package:stack_trace/src/stack_zone_specification.dart 185 StackZoneSpecification._run
package:stack_trace/src/stack_zone_specification.dart 107 StackZoneSpecification._registerUnaryCallback.<fn>
dart:async/zone.dart 1162 _rootRunUnary
dart:async/zone.dart 1037 _CustomZone.runUnary
dart:async/zone.dart 932 _CustomZone.runUnaryGuarded
dart:async/stream_impl.dart 331 _BufferingStreamSubscription._sendData
dart:async/stream_impl.dart 258 _BufferingStreamSubscription._add
dart:async/stream_controller.dart 768 _StreamController&&_SyncStreamControllerDispatch._sendData
dart:async/stream_controller.dart 635 _StreamController._add
dart:async/stream_controller.dart 581 _StreamController.add
dart:io-patch/socket_patch.dart 1247 _RawSocket._RawSocket.<fn>
dart:io-patch/socket_patch.dart 781 _NativeSocket.issueReadEvent.issue
dart:async/schedule_microtask.dart 41 _microtaskLoop
dart:async/schedule_microtask.dart 50 _startMicrotaskLoop
dart:isolate-patch/isolate_patch.dart 96 _runPendingImmediateCallback
dart:isolate-patch/isolate_patch.dart 149 _RawReceivePortImpl._handleMessage
## Flutter Doctor
I can't run any flutter command from the IntelliJ terminal.
Output:
flutter: command not found
Flutter --version output:
Flutter • channel master • https://github.com/flutter/flutter.git
Framework • revision `3150e3f` (2 hours ago) • 2017-01-13 12:46:13
Engine • revision `b3ed791`
Tools • Dart 1.21.
|
## Steps to Reproduce
1. cd `/examples/hello_world/`
2. connect a physical Android device
3. `flutter run`
Output:
mit-macbookpro2:hello_world mit$ flutter run -d 00ca05b380789730
Launching lib/main.dart on Nexus 5X in debug mode...
Exception from flutter run: FormatException: Bad UTF-8 encoding 0xff
dart:isolate _RawReceivePortImpl._handleMessage
Building APK in debug mode (android-arm)... 5081ms
Installing build/app.apk... 2435ms
Syncing files to device... 3968ms
Running on an Android or iOS simulator does not throw the exception!
## Flutter Doctor
Paste the output of running `flutter doctor` here.
$ flutter doctor
[✓] Flutter (on Mac OS, channel master)
• Flutter at /Users/mit/dev/github/flutter
• Framework revision 3a43fc88b6 (5 hours ago), 2017-01-31 23:32:10
• Engine revision 2d54edf0f9
• Tools Dart version 1.22.0-dev.9.1
[✓] Android toolchain - develop for Android devices (Android SDK 25.0.0)
• Android SDK at /Users/mit/Library/Android/sdk
• Platform android-25, build-tools 25.0.0
• ANDROID_HOME = /Users/mit/Library/Android/sdk
• Java(TM) SE Runtime Environment (build 1.8.0_112-b16)
[✓] iOS toolchain - develop for iOS devices (Xcode 8.2.1)
• XCode at /Applications/Xcode.app/Contents/Developer
• Xcode 8.2.1, Build version 8C1002
[✓] IntelliJ IDEA Ultimate Edition (version 2016.3.4)
• Dart plugin version 163.12753
• Flutter plugin version 0.1.8.1
[✓] Connected devices
• Nexus 5X • 00ca05b380789730 • android-arm • Android 7.0 (API 24)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 6.0 (API 23) (emulator)
$
| 1 |
deno: 0.21.0
v8: 7.9.304
typescript: 3.6.3
Run this code in both node.js and deno:
let o = {};
o.a = 1;
o.b = 2;
o.c = 3;
o.d = 4;
o.e = 5;
o.f = 6;
console.log(o);
Node.js output: { a: 1, b: 2, c: 3, d: 4, e: 5, f: 6 }
deno output: { a, b, c, d, e, f }
So, if there are six or more properties, deno's console.log doesn't show the
property values.
|
Match colors and semantics.
Node's stringify has been tweaked and optimized over many years; changing
anything is likely to be surprising in a bad way.
| 1 |
Julia has kick ass features for computation.
However what is the point of computing so much if the data cannot be
visualized/inspected ?
Current 2d plotting capabilities are nice, but no real matlab/scilab/scipy
competitor would be credible without some kind of 3d plotting.
One way of going would be to "port" mayavi to Julia.
http://github.enthought.com/mayavi/mayavi/auto/examples.html
Mayavi builds on top of VTK, so the 3d system itself would not be reinvented
from scratch.
I think this would be a relevant feature to have for release 2.0...
ps: 2d or 3d plotting systems in Julia should have SVG/PDF output, so it can
be used in scientific publishing...
|
I would like to discuss moving the Julia buildbots to something more
maintainable than `buildbot`. It's worked well for us for many years, but the
siloing of configuration into a separate repository, combined with the
realtively slow pace of development as compared to many other competitors (and
also the amount of cruft we've built up to get it as far as it is today) means
it's time to move to something new.
Anything we use must have the following features:
* Multi-platform; it _must_ support runners on all platforms Julia itself is built on. This encompasses:
* Linux: `x86_64` (`glibc` and `musl`), `i686`, `armv7l`, `aarch64`, `ppc64le`
* Windows: `x86_64`, `i686`
* MacOS: `x86_64`, `aarch64`
* FreeBSD: `x86_64`
* I would like the build configuration to live in an easily-modifiable format, such as a `.yml` file in the Julia repository. It's nice if we can test out different configurations just by making a PR against a repository somewhere.
Possible options include:
* GitLab CI (Note: currently missing linux ppc64le support)
* buildkite
* Azure Pipelines (Note: incomplete runner support)
* GitHub Actions (Note: incomplete runner support)
I unfortunately likely won't have time to do this all by myself, but if we
could get 1-2 community members interested in learning more about CI/CD and
who want to help have a hand in bringing the Julia CI story into a better age,
I'll be happy to work alongside them.
| 0 |
Hi,
If I use a custom distDir:
* the following error appears in the console: `http://myUrl/_next/-/page/_error` / 404 not found
* hot reloading does not work properly anymore
If I go back to the default, it goes away.
my `next.config.js` file:
// see source file server/config.js
module.exports = {
webpack: null,
poweredByHeader: false,
distDir: '.build',
assetPrefix: ''
}
Thanks,
Paul
|
I cannot get the custom distDir to work. Whenever I add a next.config.js file
and set the distDir as described in the documentation
module.exports = {
distDir: 'build'
};
the build is created in that directory, but there is always an exception being
thrown in the browser when loading the page.
Error when loading route: /_error
Error: Error when loading route: /_error
at HTMLScriptElement.script.onerror
What am I missing? This is reproducible with the simple hello world example.
| 1 |
Worked around in CI by pinning to 1.8.5, see gh-9987
For the issue, see e.g. https://circleci.com/gh/scipy/scipy/12334. It ends
with:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!
./scipy-ref.tex:25: fontspec error: "font-not-found"
!
! The font "FreeSerif" cannot be found.
!
! See the fontspec documentation for further information.
!
! For immediate help type H <return>.
!...............................................
l.25 ]
No pages of output.
Transcript written on scipy-ref.log.
Latexmk: Log file says no output from latex
Latexmk: For rule 'pdflatex', no output was made
Latexmk: Errors, so I did not complete making targets
Collected error summary (may duplicate other messages):
pdflatex: Command for 'pdflatex' gave return code 256
Latexmk: Use the -f option to force complete processing,
unless error was exceeding maximum runs of latex/pdflatex.
Makefile:32: recipe for target 'scipy-ref.pdf' failed
make: *** [scipy-ref.pdf] Error 12
make: Leaving directory '/home/circleci/repo/doc/build/latex'
Exited with code 2
|
_Original ticket: http://projects.scipy.org/scipy/ticket/1889 on 2013-04-10 by
trac user avneesh, assigned to unknown._
Hello all,
I notice a somewhat bizarre issue when constructing sparse matrices by
initializing with 3-tuples (row index, column index, value).
The following is a slight abstraction to what my exact code is, but it shows
the behavior:
kNN = 10
dataset_size = 1661165
rowIdx = np.empty((kNN+1)*dataset_size)
colIdx = np.empty((kNN+1)*dataset_size)
vals = np.empty((kNN+1)*dataset_size)
for i, line in enumerate(data):
    # perform certain operations
print vals.size, colIdx.size, rowIdx.size
print vals[np.nonzero(vals)].size
W = sp.csc_matrix((vals, (rowIdx, colIdx)), shape=(dataset_size, dataset_size))
print W.nnz
The printed outputs I get are the following:
18272815 18272815 18272815
18272815
18272465
Therefore, as you can see, there is a difference of 18272815-18272465 = 350
elements that should be non-zero in the resulting sparse matrix, but are not.
I have verified in the rowIdx and colIdx arrays that there are no duplicates,
i.e., a given (rowIdx, colIdx) pair does not appear twice (otherwise two
values would map to the same position in the sparse matrix). As per my
understanding, I should get 18272815 elements in the resulting sparse matrix,
but I fall 350 elements short.
Is this expected behavior? Am I doing something wrong?
I am running Linux x86-64-bit OpenSuSE 11.4, NumPy version 1.5.1, SciPy
version 0.9.0, Python 2.7.
| 0 |
With version 3.1.0, is the HintManagerHolder.clear method executed each time
SQL is executed?
When using HintShardingAlgorithm, you need to set HintManagerHolder before
executing SQL. I execute multiple SQL statements in a DAO, or call multiple
DAO methods in a service. I want to set HintManagerHolder via AOP interception
before the method is called and clear it after the method returns, but there
is no way to achieve this.
|
## Question
Why does DriverJDBCExecutor#executor not handle the error result of branch
thread execution?
DriverJDBCExecutor#executor:

JDBCExecutorCallback#execute:

In this way, the result of the overall sql execution is inconsistent with the
expected result.
eg:
insert into xx_table (id,xxx,xxx) values (1,xx,xxx),(2,xxx,xxx);
This statement will route to two datasources, ds0, ds1.
If there is a row with primary key 1 in ds1, it will throw exception:
Duplicate entry '1' for key 'PRIMARY';
The final result is: the sql routed to ds0 is successfully executed, and the
sql execution of ds1 fails.
Why is it not processed as a total failure? The partial success leads to data
inconsistency, and distributed transactions will not be rolled back.
| 0 |
Hi
I have a select box that has to contain the following options. Note that the
options "audi" and "ford" share the same label.
<select>
<option value="volvo">Volvo</option>
<option value="saab">Saab</option>
<option value="opel">Opel</option>
<option value="audi">Audi</option>
<option value="ford">Audi</option>
</select>
When I try to render this in Symfony 2.7, my FormType looks like this.
$builder->add('brand', 'choice', array(
'choices' => array(
'volvo' => 'Volvo',
'saab' => 'Saab',
'opel' => 'Opel',
'audi' => 'Audi',
'ford' => 'Audi'
)
));
I would assume all 5 fields are going to get rendered. In fact, only 4 get
rendered and the view looks like this:
<select id="...">
<option value="volvo">Volvo</option>
<option value="saab">Saab</option>
<option value="opel">Opel</option>
<option value="ford">Audi</option>
</select>
It seems the option with value "audi" is overridden by the option with value
"ford". I don't know if this is standard behaviour or a bug, but it's quite
annoying. Can any of you help me?
Thanks in advance!
| Q | A
---|---
Bug report? | no
Feature request? | yes
BC Break report? | ?
RFC? | ?
Symfony version | 3.2.7
In the documentation for data transformers, the input is validated in the
IssueToNumberTransformer class, and the error message is set in the TaskType
class. What if there are multiple types of invalid input, can there be a way
to have multiple error messages?
I know this can be done with Symfony\Component\Validator\Constraints in the
Entity class, but since validation must also take place in the data
transformer, I think there should be a way to have more control over the error
messages.
| 0 |
System information
OS : Windows 10
TensorFlow installed from (source or binary): from pip
TensorFlow version: 1.11.0
Python version: 3.6
Installed using virtualenv? pip? conda?: conda
Bazel version (if compiling from source): No
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version: 10
GPU model and memory: Nvidia 1050Ti
**Describe the problem**
Recently I tried updating my TensorFlow, and afterwards my tensorflow-gpu
stopped working.
I have now downgraded to 1.11.0, but my tensorflow-gpu is still not
working.
nvidia-smi is working fine.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 416.34 Driver Version: 416.34 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 105... WDDM | 00000000:01:00.0 Off | N/A |
| N/A 48C P8 N/A / N/A | 78MiB / 4096MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|==================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
tf.test.is_gpu_available(
cuda_only=False,
min_cuda_compute_capability=None
)
I ran this and it returned False.
Then I ran sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) and got:
Device mapping: no known devices.
2018-11-26 17:27:10.277030: I
tensorflow/core/common_runtime/direct_session.cc:291] Device mapping:
The device mapping is empty. I read a lot of threads; usually people had an
error with CUDA, and installing it made things work for a lot of them.
But in my case things were working fine earlier; now it got messed up, even
though CUDA is installed properly.
What should I do?
|
Hi:
my program has a bug on these lines of code:
word_embeddings = tf.scatter_nd_update(var_output, error_word_f, sum_all)
word_embeddings_2 = tf.nn.dropout(word_embeddings, self.dropout_pl)
# The error is as follows:
ValueError: Tensor conversion requested dtype float32_ref for Tensor with
dtype float32: 'Tensor("dropout:0", shape=(), dtype=float32)
it looks like word_embeddings's dtype is float32_ref, but the function
tf.nn.dropout actually needs word_embeddings with dtype float32. How can I
convert word_embeddings's dtype from float32_ref to float32 before running
tf.nn.dropout(word_embeddings, self.dropout_pl)?
| 0 |
https://babeljs.io/repl/#?experimental=true&evaluate=true&loose=false&spec=false&playground=false&code=%40test(()%20%3D%3E%20123)%0Aclass%20A%20%7B%7D%0A%0A%40test(()%20%3D%3E%20%7B%20return%20123%20%7D)%0Aclass%20B%20%7B%7D%0A%0A%40test(function()%20%7B%20return%20123%20%7D)%0Aclass%20%D0%A1%20%7B%7D
Compare the output of these two to show the difference.
https://babeljs.io/repl/#?experimental=true&evaluate=true&loose=false&spec=false&code=%40decorate((arg)%20%3D%3E%20null)%0Aclass%20Example1%20%7B%0A%7D%0A%0A%40decorate(arg%20%3D%3E%20null)%0Aclass%20Example2%20%7B%0A%7D
@decorate((arg) => null)
class Example1 {
}
@decorate(arg => null)
class Example2 {
}
| 1 |
Challenge using-the-justifycontent-property-in-the-tweet-embed has an issue.
User Agent is: `Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML,
like Gecko) Ubuntu Chromium/55.0.2883.87 Chrome/55.0.2883.87 Safari/537.36`.
Please describe how to reproduce this issue, and include links to screenshots
if possible.
It makes me pass the challenge even if I have not inserted the css statement
justify-content:center;
<style>
body {
font-family: Arial, sans-serif;
}
header, footer {
display: flex;
flex-direction: row;
}
header .profile-thumbnail {
width: 50px;
height: 50px;
border-radius: 4px;
}
header .profile-name {
display: flex;
flex-direction: column;
margin-left: 10px;
}
header .follow-btn {
display: flex;
margin: 0 0 0 auto;
}
header .follow-btn button {
border: 0;
border-radius: 3px;
padding: 5px;
}
header h3, header h4 {
display: flex;
margin: 0;
}
#inner p {
margin-bottom: 10px;
font-size: 20px;
}
#inner hr {
margin: 20px 0;
border-style: solid;
opacity: 0.1;
}
footer .stats {
display: flex;
font-size: 15px;
}
footer .stats strong {
font-size: 18px;
}
footer .stats .likes {
margin-left: 10px;
}
footer .cta {
margin-left: auto;
}
footer .cta button {
border: 0;
background: transparent;
}
</style>
<header>
<img src="https://pbs.twimg.com/profile_images/378800000147359764/54dc9a5c34e912f34db8662d53d16a39_400x400.png" alt="Quincy Larson's profile picture" class="profile-thumbnail">
<div class="profile-name">
<h3>Quincy Larson</h3>
<h4>@ossia</h4>
</div>
<div class="follow-btn">
<button>Follow</button>
</div>
</header>
<div id="inner">
<p>How would you describe to a layperson the relationship between Node, Express, and npm in a single tweet? An analogy would be helpful.</p>
<span class="date">7:24 PM - 17 Aug 2016</span>
<hr>
</div>
<footer>
<div class="stats">
<div class="retweets">
<strong>56,203</strong> RETWEETS
</div>
<div class="likes">
<strong>84,703</strong> LIKES
</div>
</div>
<div class="cta">
<button class="share-btn">Share</button>
<button class="retweet-btn">Retweet</button>
<button class="like-btn">Like</button>
</div>
</footer>
|
Challenge [Steamroller](https://www.freecodecamp.com/challenges/steamroller)
has an issue.
**User Agent is** :
`Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML,
like Gecko) Chrome/52.0.2743.116 Safari/537.36`.
**Issue Description** :
Global Variables value is not getting flushed.
My code:
var tempArr = [];
function steamrollArray(arr) {
// I'm a steamroller, baby
tempArr = []; // To fix the global var issue , we need to reset that global var
arr = arr.map(function(val,index){
return checkForArray(val);
});
console.log('tempArr',tempArr);
return tempArr;
}
function checkForArray(val_arr){
if(!Array.isArray(val_arr)) {tempArr.push(val_arr);return val_arr;}
val_arr.map(function(val,index){
if(Array.isArray(val)){
return checkForArray(val);
}
else {
tempArr.push(val);
return val;
}
});
}
steamrollArray([1, {}, [3, [[4]]]]);
| 0 |
Hi,
I just installed Deno and tried to run the welcome.ts file mentioned in the
docs, and I am facing this issue. Details are below.
D:\deno>deno run https://deno.land/welcome.ts
Downloading https://deno.land/welcome.ts
WARN RS - Sending fatal alert BadCertificate
an error occurred trying to connect: invalid certificate: UnknownIssuer
an error occurred trying to connect: invalid certificate: UnknownIssuer
D:\deno>deno version
deno: 0.4.0
v8: 7.6.53
typescript: 3.4.1
Has anybody faced this issue?
|
There should be a way to allow insecure https requests, using `window.fetch()`
for example.
It should disable certificate validation for all requests made from the
program.
As for implementation, there are two options that come to my mind: an
environment variable and a flag. Below are some examples of how it's done in
other web-related software, so we can come up with an intuitive name and
approach.
Examples:
* In NodeJS, there is a `NODE_TLS_REJECT_UNAUTHORIZED` environment variable that can be set to `0` to disable TLS certificate validation. It takes name from tls.connect options.rejectUnauthorized
* In curl, there's a `-k` or `--insecure` flag, which allows insecure server connections when using SSL
* In Chrome, there's a `--unsafely-treat-insecure-origin-as-secure="https://example.com"` for the same purposes
* For WebDrivers, there's acceptInsecureCerts capability. It can allow self-signed or otherwise invalid certificates to be implicitly trusted by the browser.
That said, I'd like to see `--accept-insecure-certs` flag option, because deno
already uses flags for a lot of things, including permissions.
* * *
My use-case: I'm using a corporate Windows laptop with network monitoring.
AFAIK, all https requests are also monitored, so there's a custom SSL
certificate installed system-wide. Most of the software that uses the system
CA storage works just fine, but some uses a custom or bundled CA, and it seems
like that's the case with deno. Anyway, it downloads deps just fine, but fails
to perform any https request.
Issue created as a follow up to gitter conversation: December 18, 2018 1:24 PM
| 1 |
### Problem description
When props.animated is true, Popover calls setState from within a setTimeout.
// in componentWillReceiveProps
if (nextProps.animated) {
this.setState({ closing: true });
this.timeout = setTimeout(function () {
_this2.setState({
open: false
});
}, 500);
}
Because componentWillReceiveProps doesn't mean that props changed, Popover has
the potential to call setState from within the setTimeout multiple times, when
no props have changed. For me, this is consistently causing an error where
setState is called on an unmounted component.
### Steps to reproduce
Rapidly re-render Popover (animated=true) after having changed open to false.
### Versions
* Material-UI: 0.15.3
* React: 15.3.0
* Browser: Chrome Version 52.0.2743.116 (64-bit)
|
## Problem Description
On occasion clicking the SelectField will result in the label text
disappearing and the dropdown menu not displaying. Clicking the SelectField
again will cause the label text to reappear and clicking again brings up the
dropdown menu as expected. The dropdown seems to briefly load before
disappearing.
I'm encountering this issue on both my own app and the material-ui examples
for the SelectField component. I've attached a gif of the bug on the material-
ui examples.

## Versions
* Material-UI: 0.14.4
* React: 0.14.8
* Browser: Chrome 50.0.2661.94 (64-bit) and Safari 9.1
| 1 |
We need a shortcut to move application windows to another virtual desktop. I
use MoveToDesktop by Eun: MoveToDesktop, ~~but it cannot be configured~~.
This tool provides a key combination for moving an application to the next
virtual desktop using the "win+alt+arrow" keystroke.
|
For example, pressing 'Alt' twice to open PowerToys Run. Alfred can do this;
it is powerful, and it would help people who, like me, want to migrate from
macOS. Thanks.
| 0 |
#### Twitch TV challenge
https://www.freecodecamp.com/challenges/use-the-twitchtv-json-api
#### Issue Description
Your example site https://codepen.io/FreeCodeCamp/full/Myvqmo/ is not working.
The link to each Twitch channel is a 404. Also, channel information is not
displayed properly - it just says "Account Closed" for all the channels.
#### Browser Information
Chrome on MAC desktop
|
#### Challenge Name
https://www.freecodecamp.com/challenges/use-the-twitchtv-json-api
#### Issue Description
JSONP calls to the channels API result in a bad request. This is due to an
update to the Twitchtv API which now requires a client id, which means
creating an account and registering your application with the service. Details
can be read from the official developer blog:
https://blog.twitch.tv/client-id-required-for-kraken-api-calls-
afbb8e95f843#.pm46cq40d
I confirmed this by console logging the response object from the channels API
on my application and comparing it with the example provided by Free Code Camp
(see screenshots).
#### Browser Information
N/A
#### Your Code
#### Screenshot


| 1 |
## Bug Report
* I would like to work on a fix!
**Current behavior**
`generate()` produces incorrect code for arrow function expression.
const generate = require('@babel/generator').default;
const node = t.arrowFunctionExpression( [], t.objectExpression( [] ) );
console.log( generate( node ) );
Output:
() => {}
Output should be:
() => ({})
**Babel Configuration (babel.config.js, .babelrc, package.json#babel, cli
command, .eslintrc)**
No config used. The above is the complete reproduction case.
**Environment**
System:
OS: macOS Mojave 10.14.6
Binaries:
Node: 14.9.0 - ~/.nvm/versions/node/v14.9.0/bin/node
npm: 6.14.8 - ~/.nvm/versions/node/v14.9.0/bin/npm
npmPackages:
@babel/core: ^7.11.6 => 7.11.6
@babel/generator: ^7.11.6 => 7.11.6
@babel/helper-module-transforms: ^7.11.0 => 7.11.0
@babel/parser: ^7.11.5 => 7.11.5
@babel/plugin-transform-modules-commonjs: ^7.10.4 => 7.10.4
@babel/plugin-transform-react-jsx: ^7.10.4 => 7.10.4
@babel/register: ^7.11.5 => 7.11.5
@babel/traverse: ^7.11.5 => 7.11.5
@babel/types: ^7.11.5 => 7.11.5
babel-jest: ^26.3.0 => 26.3.0
babel-plugin-dynamic-import-node: ^2.3.3 => 2.3.3
eslint: ^7.8.1 => 7.8.1
jest: ^26.4.2 => 26.4.2
|
> Issue originally made by @also
### Bug information
* **Babel version:** 6.2.0
* **Node version:** 4.1.2
* **npm version:** 3.4.1
### Options
none
### Input code
Dependencies in package.json:
{
"dependencies": {
"babel-runtime": "6.2.0"
},
"devDependencies": {
"babel-cli": "6.2.0",
"babel-plugin-transform-runtime": "6.1.18",
"babel-preset-es2015": "6.1.18"
}
}
### Description
I have a relatively simple `package.json` (here).
Using npm 3.4 and Node.js 4.1, `babel-doctor` complains about 43 duplicate
`babel-runtime` packages, even after running `npm dedupe`.
Output from Travis CI
$ babel-doctor
Babel Doctor
Running sanity checks on your system. This may take a few minutes...
✔ Found config at /home/travis/build/also/babel-6-runtime-test/.babelrc
✖ Found these duplicate packages:
- babel-runtime x 43
Recommend running `npm dedupe`
✔ All babel packages appear to be up to date
✔ You're on npm >=3.3.0
Found potential issues on your machine :(
It seems that it is possible to work around the issue by switching to the
version of `babel-runtime` being duplicated, currently `5.8.34`.
| 0 |
Hello,
I've been creating notifications with
https://github.com/electron/electron/blob/master/docs/tutorial/desktop-
environment-integration.md#notifications-windows-linux-macos
Can someone please confirm what I suspect:
* There is no way to bind some kind of click handler to a notification; all clicking ever does is close the notification
* There is no way to add actions/buttons to a notification.
If these are true then I think as they stand, notifications are not very
useful. If this functionality is not possible to implement, maybe it would be
better to provide a standardised BrowserWindow implementation?
As an aside - can anyone recommend a third party package for implementing
cross-platform rich notifications?
Thanks!
|
It would be really great if Electron supported the "notification actions"
feature added in Chrome 48:
https://www.chromestatus.com/features/5906566364528640
https://developers.google.com/web/updates/2016/01/notification-actions?hl=en
Essentially, you say:
new Notification("123", {title: "123", silent: true, actions: [{action: 'A', title: 'A'}, {action: 'B', title: 'B'}]})
To show the notification, and then listen for 'notificationclick' DOM event,
which gives you the name of the chosen action.
| 1 |
### Description
Ability to clear or mark task groups as success/failure and have that
propagate to the tasks within that task group. Sometimes there is a need to
adjust the status of tasks within a task group, which can get unwieldy
depending on the number of tasks in that task group. A great quality of life
upgrade, and something that seems like an intuitive feature, would be the
ability to clear or change the status of all tasks at their taskgroup level
through the UI.
### Use case/motivation
In the event a large number of tasks, or a whole task group in this case, need
to be cleared or their status set to success/failure this would be a great
improvement. For example, a manual DAG run triggered through the UI or the API
that has a number of task sensors or tasks that otherwise don't matter for
that DAG run - instead of setting each one as success by hand, doing so for
each task group would be great.
### Related issues
_No response_
### Are you willing to submit a PR?
* Yes I am willing to submit a PR!
### Code of Conduct
* I agree to follow this project's Code of Conduct
|
### Description
Hi,
It would be very interesting to be able to filter DagRuns by using the state
field. That would affect the following methods:
* /api/v1/dags/{dag_id}/dagRuns
* /api/v1/dags/~/dagRuns/list
Currently accepting the following query/body filter parameters:
* execution_date_gte
* execution_date_lte
* start_date_gte
* start_date_lte
* end_date_gte
* end_date_lte
Our proposal is to add the state parameter ("queued", "running", "success",
"failed") to be able to filter by the field "state" of the DagRun.
Thanks.
### Use case/motivation
Being able to filter DagRuns by their state value.
### Related issues
_No response_
### Are you willing to submit a PR?
* Yes I am willing to submit a PR!
### Code of Conduct
* I agree to follow this project's Code of Conduct
| 0 |
Profiling vet on a large corpus shows more than 10% of time spent in syscalls initiated by
gcimporter.(*parser).next. Many of these reads are avoidable; there is high import
overlap across packages, particularly within a given project.
Concretely, instrumenting calls to Import (in gcimporter.go) and then running 'go vet'
on camlistore yields these top duplicate imports:
153 fmt.a
147 testing.a
120 io.a
119 strings.a
113 os.a
108 bytes.a
99 time.a
97 errors.a
82 io/ioutil.a
80 log.a
76 sync.a
70 strconv.a
64 net/http.a
56 path/filepath.a
51 camlistore.org/pkg/blob.a
44 runtime.a
39 sort.a
39 flag.a
35 reflect.a
35 net/url.a
These 20 account for 1627 of the 2750 import reads.
Hacking in a quick LRU that simply caches the raw data in the files cuts 'go vet' user
time for camlistore by ~10%. I'm not sure that that is the right long-term approach,
though.
|
Currently gc generates the following code:
var s1 string
400c19: 48 c7 44 24 08 00 00 movq $0x0,0x8(%rsp)
400c22: 48 c7 44 24 10 00 00 movq $0x0,0x10(%rsp)
s2 := ""
400c2b: 48 8d 1c 25 20 63 42 lea 0x426320,%rbx
400c33: 48 8b 2b mov (%rbx),%rbp
400c36: 48 89 6c 24 18 mov %rbp,0x18(%rsp)
400c3b: 48 8b 6b 08 mov 0x8(%rbx),%rbp
400c3f: 48 89 6c 24 20 mov %rbp,0x20(%rsp)
s3 = ""
400c44: 48 8d 1c 25 20 63 42 lea 0x426320,%rbx
400c4c: 48 8b 2b mov (%rbx),%rbp
400c4f: 48 89 2c 25 f0 34 46 mov %rbp,0x4634f0
400c57: 48 8b 6b 08 mov 0x8(%rbx),%rbp
400c5b: 48 89 2c 25 f8 34 46 mov %rbp,0x4634f8
Ideally it is:
var s1 string
400c19: 48 c7 44 24 08 00 00 movq $0x0,0x8(%rsp)
400c22: 48 c7 44 24 10 00 00 movq $0x0,0x10(%rsp)
s2 := ""
400c19: 48 c7 44 24 08 00 00 movq $0x0,0x8(%rsp)
400c22: 48 c7 44 24 10 00 00 movq $0x0,0x10(%rsp)
s3 = ""
400c19: 48 c7 44 24 08 00 00 movq $0x0,0x8(%rsp)
400c22: 48 c7 44 24 10 00 00 movq $0x0,0x10(%rsp)
For := "", compiler can just remove the initializer.
For = "", compiler can recognize "" and store zeros.
| 0 |
In the bonfire problem, it states:
> Return the number of total permutations of the provided string that don't
> have repeated consecutive letters.
> For example, 'aab' should return 2 because it has 6 total permutations, but
> only 2 of them don't have the same letter (in this case 'a') repeating.
I believe this is incorrect.
The possible permutations of 'aab' are 'aab', 'aba', and 'baa', so the count
should be calculated as 3! / 2! (= 3), not 3! (= 6) as was done in the description.
The total number of permutations should be 3 not 6. Then you want to eliminate
cases wherein identical letters are adjacent, namely, 'aab' and 'baa'. This
leaves 'aba' which means the answer ought to be 1, not 2.
Let me give another example. For the string 'aabb', the possible permutations
ought not to be calculated as 4! (= 24) before eliminating the ones with
adjacent identical characters; that approach leads to an incorrect answer of 8.
The possible permutations of 'aabb' is actually just 6, namely 'aabb', 'abab',
'baba', 'bbaa', 'abba', 'baab'. This can also be calculated 4! / (2! * 2!)
(=6). Then when the strings wherein there are adjacent identical characters
are eliminated, this leaves 'abab' and 'baba' which means the answer is 2 (not
8 as the test case indicates).
Please advise. Apologies if this is deemed spam.
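To make the arithmetic concrete, here is a small brute-force check of the counts claimed above (it verifies the multiset counting in this report; the challenge's figures of 6 and 2 come from treating the repeated letters as distinguishable):

```python
from itertools import permutations

def count_permutations(s):
    """Return (distinct permutations, those with no equal adjacent letters)."""
    distinct = {"".join(p) for p in permutations(s)}
    valid = [p for p in distinct
             if all(a != b for a, b in zip(p, p[1:]))]
    return len(distinct), len(valid)

print(count_permutations("aab"))   # (3, 1)
print(count_permutations("aabb"))  # (6, 2)
```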
|
I selected the buttons as checked but the waypoint is not noticing that the
buttons are checked.
| 0 |
TypeScript typings for ChipProps include tabIndex of type number | string,
which is incompatible with the tabIndex property inherited from HTMLElement
via HTMLDivElement.
* I have searched the issues of this repository and believe that this is not a duplicate.
## Expected Behavior
No error. The property tabIndex?: number | string on interface ChipProps
should either be tabIndex?: number or be removed completely, since it's
inherited by virtue of extension.
## Current Behavior
Receive the error message
`ERROR in <path-to-project-root>/node_modules/material-ui/Chip/Chip.d.ts
(4,18): error TS2430: Interface 'ChipProps' incorrectly extends interface
'HTMLAttributes<HTMLDivElement>'. Types of property 'tabIndex' are
incompatible. Type 'ReactText' is not assignable to type 'number'. Type
'string' is not assignable to type 'number'.`
## Steps to Reproduce (for bugs)
Add Chip to any typescript project and compile with tsc
## Your Environment
Tech | Version
---|---
Material-UI | 1.0.0-beta.9
React | 15.6.1
|
When using the Card component together with SSR, React 16 gives the following
message:
> Did not expect server HTML to contain a <div> in <div>.
* I have searched the issues of this repository and believe that this is not a duplicate.
## Expected Behavior
A Card component should render on the server side, so the client side doesn't
have to do anything with it, and there should be no warning.
## Current Behavior
When I use the Card component with React 16 like this:
<Card className={classes.card}>
<CardContent>
<Typography type="headline" component="h2" >
Login
</Typography>
<TextField
id="username"
label="Username"
autoComplete="username"
className={classes.input}
/>
<TextField
id="password"
label="Password"
type="password"
autoComplete="current-password"
margin="normal"
className={classes.input}
/>
</CardContent>
<CardActions className={classes.cardActions}>
<Button onClick={() => props.handleLogin()}>Login</Button>
</CardActions>
</Card>
it gives the following error on hydration:
Warning: Did not expect server HTML to contain a <div> in <div>.
printWarning @ bundle.js:sourcemap:423
warning @ bundle.js:sourcemap:447
warnForDeletedHydratableElement$1 @ bundle.js:sourcemap:16895
didNotHydrateInstance @ bundle.js:sourcemap:17573
deleteHydratableInstance @ bundle.js:sourcemap:11892
popHydrationState @ bundle.js:sourcemap:12099
completeWork @ bundle.js:sourcemap:11067
completeUnitOfWork @ bundle.js:sourcemap:12568
performUnitOfWork @ bundle.js:sourcemap:12670
workLoop @ bundle.js:sourcemap:12724
callCallback @ bundle.js:sourcemap:2978
invokeGuardedCallbackDev @ bundle.js:sourcemap:3017
invokeGuardedCallback @ bundle.js:sourcemap:2874
renderRoot @ bundle.js:sourcemap:12802
performWorkOnRoot @ bundle.js:sourcemap:13450
performWork @ bundle.js:sourcemap:13403
requestWork @ bundle.js:sourcemap:13314
scheduleWorkImpl @ bundle.js:sourcemap:13168
scheduleWork @ bundle.js:sourcemap:13125
scheduleTopLevelUpdate @ bundle.js:sourcemap:13629
updateContainer @ bundle.js:sourcemap:13667
(anonymous) @ bundle.js:sourcemap:17658
unbatchedUpdates @ bundle.js:sourcemap:13538
renderSubtreeIntoContainer @ bundle.js:sourcemap:17657
hydrate @ bundle.js:sourcemap:17719
(anonymous) @ bundle.js:sourcemap:66465
The description of the warning does not say much, so I couldn't investigate
further. However, when I change the above code to a simple `<div>` tag, the
warning disappears, so I suspect that there is something wrong with the
server-side rendering of the Card component.
If there's a way to get more information about the warning I could look into
it, I just have no idea how :).
## Context
## Your Environment
Tech | Version
---|---
Material-UI | 1.0.0-beta.25
React | 16.2.0
browser | Google Chrome 63.0.3239.108 (Official Build) (64-bit)
| 0 |
### Preflight Checklist
* I have read the Contributing Guidelines for this project.
* I agree to follow the Code of Conduct that this project adheres to.
* I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:**
* 9.1.0
* **Operating System:**
* Windows 10 (19041, 18363, 18362, 16299)
* **Last Known Working Electron version:**
* N/A
### Expected Behavior
The application should not crash.
### Actual Behavior
The application crashes.
### To Reproduce
This seems to happen sporadically when the application exits (all windows get
closed and destroyed, and the app quits).
It happens only on Windows; macOS does not seem to be affected.
My code snippet:
app.removeAllListeners('window-all-closed')
BrowserWindow.getAllWindows().forEach((browserWindow) => {
browserWindow.close()
browserWindow.destroy()
})
app.quit()
### Stack Trace
Pastebin: https://pastebin.com/raw/MYvbpWe2
Snippet:
Fatal Error: EXCEPTION_ACCESS_VIOLATION_READ
Thread 10196 Crashed:
0 MyApp.exe 0x7ff75f43a632 [inlined] base::internal::UncheckedObserverAdapter::IsEqual (observer_list_internal.h:30)
1 MyApp.exe 0x7ff75f43a632 [inlined] base::ObserverList<T>::RemoveObserver::<T>::operator() (observer_list.h:283)
2 MyApp.exe 0x7ff75f43a632 [inlined] std::__1::find_if (algorithm:933)
3 MyApp.exe 0x7ff75f43a632 base::ObserverList<T>::RemoveObserver (observer_list.h:281)
4 MyApp.exe 0x7ff76101f28c extensions::ProcessManager::Shutdown (process_manager.cc:289)
5 MyApp.exe 0x7ff761e3a08d [inlined] DependencyManager::ShutdownFactoriesInOrder (dependency_manager.cc:127)
6 MyApp.exe 0x7ff761e3a08d DependencyManager::DestroyContextServices (dependency_manager.cc:83)
7 MyApp.exe 0x7ff75f2e8de6 electron::ElectronBrowserContext::~ElectronBrowserContext (electron_browser_context.cc:158)
8 MyApp.exe 0x7ff75f2e9abf electron::ElectronBrowserContext::~ElectronBrowserContext (electron_browser_context.cc:154)
9 MyApp.exe 0x7ff75f287a9d [inlined] base::RefCountedDeleteOnSequence<T>::Release (ref_counted_delete_on_sequence.h:52)
10 MyApp.exe 0x7ff75f287a9d [inlined] scoped_refptr<T>::Release (scoped_refptr.h:322)
11 MyApp.exe 0x7ff75f287a9d [inlined] scoped_refptr<T>::~scoped_refptr (scoped_refptr.h:224)
12 MyApp.exe 0x7ff75f287a9d electron::api::Session::~Session (electron_api_session.cc:313)
13 MyApp.exe 0x7ff75f28cdaf electron::api::Session::~Session (electron_api_session.cc:294)
14 MyApp.exe 0x7ff75f2eb641 [inlined] base::OnceCallback<T>::Run (callback.h:98)
15 MyApp.exe 0x7ff75f2eb641 electron::ElectronBrowserMainParts::PostMainMessageLoopRun (electron_browser_main_parts.cc:545)
16 MyApp.exe 0x7ff760a01750 content::BrowserMainLoop::ShutdownThreadsAndCleanUp (browser_main_loop.cc:1095)
17 MyApp.exe 0x7ff760a031a6 content::BrowserMainRunnerImpl::Shutdown (browser_main_runner_impl.cc:178)
18 MyApp.exe 0x7ff7609fee69 content::BrowserMain (browser_main.cc:49)
19 MyApp.exe 0x7ff76090405c content::RunBrowserProcessMain (content_main_runner_impl.cc:530)
20 MyApp.exe 0x7ff760904c00 content::ContentMainRunnerImpl::RunServiceManager (content_main_runner_impl.cc:980)
21 MyApp.exe 0x7ff7609048c2 content::ContentMainRunnerImpl::Run (content_main_runner_impl.cc:879)
22 MyApp.exe 0x7ff761870252 service_manager::Main (main.cc:454)
23 MyApp.exe 0x7ff75fcd8265 content::ContentMain (content_main.cc:19)
24 MyApp.exe 0x7ff75f23140a wWinMain (electron_main.cc:210)
25 MyApp.exe 0x7ff7646a6e91 [inlined] invoke_main (exe_common.inl:118)
26 MyApp.exe 0x7ff7646a6e91 __scrt_common_main_seh (exe_common.inl:288)
27 KERNEL32.DLL 0x7ffed8a97033 BaseThreadInitThunk
28 ntdll.dll 0x7ffed97dcec0 RtlUserThreadStart
### Additional Information
This seems to be a crash on shutdown/destroy of windows (from
`base::Process::Terminate (process_win.cc)`); not sure if it's chromium-related
(https://chromium.googlesource.com/chromium/src/+/master/docs/shutdown.md)
|
Detected in CI:
https://app.circleci.com/pipelines/github/electron/electron/31161/workflows/e022a2a8-d5fb-47b4-806f-84f69ecf0a8a/jobs/686542
Received signal 11 SEGV_MAPERR ffffffffffffffff
0 Electron Framework 0x0000000114bcb869 base::debug::CollectStackTrace(void**, unsigned long) + 9
1 Electron Framework 0x0000000114ac6203 base::debug::StackTrace::StackTrace() + 19
2 Electron Framework 0x0000000114bcb731 base::debug::(anonymous namespace)::StackDumpSignalHandler(int, __siginfo*, void*) + 2385
3 libsystem_platform.dylib 0x00007fff6fcda5fd _sigtramp + 29
4 ??? 0x00007ff9fa10d6e0 0x0 + 140711618991840
5 Electron Framework 0x0000000114710a51 extensions::ProcessManager::Shutdown() + 33
6 Electron Framework 0x000000011651d4fc DependencyManager::DestroyContextServices(void*) + 140
7 Electron Framework 0x000000010f34d708 electron::ElectronBrowserContext::~ElectronBrowserContext() + 184
8 Electron Framework 0x000000010f34d9fe electron::ElectronBrowserContext::~ElectronBrowserContext() + 14
9 Electron Framework 0x000000010f35160d std::__1::__tree<std::__1::__value_type<electron::ElectronBrowserContext::PartitionKey, std::__1::unique_ptr<electron::ElectronBrowserContext, std::__1::default_delete<electron::ElectronBrowserContext> > >, std::__1::__map_value_compare<electron::ElectronBrowserContext::PartitionKey, std::__1::__value_type<electron::ElectronBrowserContext::PartitionKey, std::__1::unique_ptr<electron::ElectronBrowserContext, std::__1::default_delete<electron::ElectronBrowserContext> > >, std::__1::less<electron::ElectronBrowserContext::PartitionKey>, true>, std::__1::allocator<std::__1::__value_type<electron::ElectronBrowserContext::PartitionKey, std::__1::unique_ptr<electron::ElectronBrowserContext, std::__1::default_delete<electron::ElectronBrowserContext> > > > >::destroy(std::__1::__tree_node<std::__1::__value_type<electron::ElectronBrowserContext::PartitionKey, std::__1::unique_ptr<electron::ElectronBrowserContext, std::__1::default_delete<electron::ElectronBrowserContext> > >, void*>*) + 61
10 Electron Framework 0x000000010f3515f6 std::__1::__tree<std::__1::__value_type<electron::ElectronBrowserContext::PartitionKey, std::__1::unique_ptr<electron::ElectronBrowserContext, std::__1::default_delete<electron::ElectronBrowserContext> > >, std::__1::__map_value_compare<electron::ElectronBrowserContext::PartitionKey, std::__1::__value_type<electron::ElectronBrowserContext::PartitionKey, std::__1::unique_ptr<electron::ElectronBrowserContext, std::__1::default_delete<electron::ElectronBrowserContext> > >, std::__1::less<electron::ElectronBrowserContext::PartitionKey>, true>, std::__1::allocator<std::__1::__value_type<electron::ElectronBrowserContext::PartitionKey, std::__1::unique_ptr<electron::ElectronBrowserContext, std::__1::default_delete<electron::ElectronBrowserContext> > > > >::destroy(std::__1::__tree_node<std::__1::__value_type<electron::ElectronBrowserContext::PartitionKey, std::__1::unique_ptr<electron::ElectronBrowserContext, std::__1::default_delete<electron::ElectronBrowserContext> > >, void*>*) + 38
11 Electron Framework 0x000000010f35121b electron::ElectronBrowserMainParts::PostMainMessageLoopRun() + 219
12 Electron Framework 0x00000001134a7477 content::BrowserMainLoop::ShutdownThreadsAndCleanUp() + 647
13 Electron Framework 0x00000001134a9510 content::BrowserMainRunnerImpl::Shutdown() + 224
14 Electron Framework 0x00000001134a3c57 content::BrowserMain(content::MainFunctionParams const&) + 279
15 Electron Framework 0x00000001132bd757 content::ContentMainRunnerImpl::RunServiceManager(content::MainFunctionParams&, bool) + 1191
16 Electron Framework 0x00000001132bd283 content::ContentMainRunnerImpl::Run(bool) + 467
17 Electron Framework 0x00000001110e92be content::RunContentProcess(content::ContentMainParams const&, content::ContentMainRunner*) + 2782
18 Electron Framework 0x00000001110e93ac content::ContentMain(content::ContentMainParams const&) + 44
19 Electron Framework 0x000000010f2285a9 ElectronMain + 137
20 Electron 0x0000000108e99631 main + 289
21 libdyld.dylib 0x00007fff6faddcc9 start + 1
[end of stack trace]
| 1 |
I am trying to figure out how to center an item in the navbar. I would
essentially like to be able to have two navs, one left and one right. Then
have an element (logo or CSS styled div) in the middle of the nav. It would be
nice to have something similar to .pull-right, a .pull-center or something to
assist with this. I have been trying to override the CSS and write it in
myself, but for the life of me I can't get it to work right.
| 1 | |
Is there a supported way to write an image using the `clipboard` API? I've
tried writing a data URL using `writeText` (similar to this) but that isn't
cutting it. Perhaps the `type` parameter is involved but the documentation
isn't clear as to what that should be.
|
Is it possible to use `clipboard.read()` to access an image copied to OS X's
pasteboard?
| 1 |
Hi,
again, a bug which doesn't need fixing right now, but in the long run.
It would be good to be more FHS- and distribution-friendly, which means at least:
- honor libexecdir (binary architecture files) and datadir (documentation, etc.)
- allow to have standard library files to be read only (mostly goinstall issues)
- install additional packages to /usr/local/golang (e.g.)
- allow to have more than one path in $GOROOT (something like $PYTHONPATH, etc.), so the
users can install packages/libraries to their home directories (and godoc, goinstall,
etc. would know about that).
There's probably more, but it will pop up over time.
|
What steps will reproduce the problem?
1. visit the documentation for any type (e.g.
http://golang.org/pkg/net/http/#CanonicalHeaderKey)
2. click on the title of the section, to go see the corresponding code
What is the expected output? What do you see instead?
I expect to get to a page where I can directly see the code for the clicked element.
Instead I get to the first line of the file containing the code I'm interested in.
Please use labels and text to provide additional information.
| 0 |
# Bug report
**What is the current behavior?**
Destructuring DefinePlugin variables causes runtime error `Uncaught
ReferenceError: process is not defined`
**If the current behavior is a bug, please provide the steps to reproduce.**
1. have DefinePlugin plugin defined in the webpack config like this:
plugins: [
new webpack.DefinePlugin({
'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV)
})
]
2. try to access `NODE_ENV` via destructuring: `const { NODE_ENV } = process.env;`
3. you get runtime failure `Uncaught ReferenceError: process is not defined`
However!
If you access NODE_ENV like `const NODE_ENV = process.env.NODE_ENV;` instead
of `const { NODE_ENV } = process.env;`, you don't get the runtime error and
everything works as expected.
**What is the expected behavior?**
Both ways of accessing the variable should be equal and should not cause
runtime error.
It is a pretty confusing and hard-to-debug problem, and also problematic due
to the widely used `prefer-destructuring` eslint rule.
**Other relevant information:**
webpack version: 5.64.4
Node.js version: v16.13.0
Operating System: MacOS Big Sur
Additional tools:
|
# Bug report
**What is the current behavior?**
Hi, I am working with webpack 4 and yarn berry workspaces.
whenever I try to import a "package (workspace)" the browser console gives me
a warning:
"export 'IAppError' was not found in './src/reduce_request_error'"
The app however works, and no other place throws this warning. It happens to
all of the packages that we import and export in our index files (`index.ts`).
Best regards.
TC
**If the current behavior is a bug, please provide the steps to reproduce.**



**What is the expected behavior?**
No warnings... because the app works.
Do you think it's an issue on your side?
Thanks in advance,
best regards,
TC
**Other relevant information:**
webpack version: "^4.43.0",
Node.js version: 12.10
Operating System: macos
Additional tools: yarn berry
| 0 |
**Context:**
* Playwright Version: playwright-chromium@1.2.1
* Operating System: Windows 10 Build 18363.900
* Node.js version: 12.18.3
* Browser: MS Edge Chromium Version 84.0.522.52 (Official build) (64-bit)
**Code Snippet**
const EDGE_PATH = require("edge-paths").getEdgePath();
const chromium = require('playwright-chromium').chromium;
const test = async () => {
console.warn('Starting browser');
const browser = await chromium.launch({
executablePath: EDGE_PATH,
headless: true,
// logger: {
// isEnabled: (name, severity) => {
// return name === 'protocol';
// },
// log: (name, severity, message, args) => console.log(`${name} [${severity}] ${message}`)
// }
});
const pageUrl = 'https://google.com';
const closeBrowser = async timeout => {
return new Promise(resolve => {
setTimeout(async () => {
console.log('Closing browser');
await browser.close();
resolve();
}, timeout);
});
};
const context = await browser.newContext();
const page = await context.newPage();
await page.goto(pageUrl);
await closeBrowser(1000);
};
test();
**Describe the bug**
Hi!
When I try to use MS Edge Chromium for testing, it crashes when the browser
closes.
I provided a simple script that reproduces my problem.
After the browser closes I get a console error. The error is because
"...\AppData\Local\Temp\playwright_chromiumdev_profile-8VPUjL\CrashpadMetrics-
active.pma" can't be unlinked.
[Error: EPERM: operation not permitted, unlink '....\AppData\Local\Temp\playwright_chromiumdev_profile-8VPUjL\CrashpadMetrics-active.pma'] {
errno: -4048,
code: 'EPERM',
syscall: 'unlink',
path: '....\\AppData\\Local\\Temp\\playwright_chromiumdev_profile-8VPUjL\\CrashpadMetrics-active.pma'
}
I suppose that when playwright-chromium cleans the temp folder, MS Edge writes
a crash report there.
The same error appears when browser running in headless mode inside Windows
Docker container.
Additionally I see a system window with an error saying that MS Edge has
stopped working.

This problem is not reproduced when the browser is started with headless=false.
Another interesting thing: when a logger with the 'protocol' level is attached
to Playwright, the code often stops on the page.goto() command without any
errors.
|
**Context:**
* Playwright Version: tested on versions 1.6.1, 1.3.0 and 1.4.0
* Operating System:
Edition Windows 10 Enterprise
Version 20H2
Installed on 8/20/2020
OS build 19042.630
Experience Windows Feature Experience Pack 120.2212.31.0
* Node.js version: v12.10.0
* Browser: MS Edge Chromium Version 86.0.622.69
* Extra: Jest v. 23.6 on Visual Studio Code
**Code Snippet**
Help us help you! Put down a short code snippet that illustrates your bug and
that we can run and debug locally. For example:
import { chromium } from "playwright";
import { getEdgePath } from "edge-paths";
jest.setTimeout(10000);
test("Basic Test", async () => {
const browser = await chromium.launch({
headless: false,
executablePath: getEdgePath()
});
const context = await browser.newContext();
const page = await context.newPage();
await page.goto("https://www.google.com");
await page.close();
await context.close();
await browser.close();
});
**Describe the bug**
If you run this test it will pass but then you will see

console.error node_modules/playwright/lib/server/helper.js:59
[Error: EPERM: operation not permitted, unlink
'[userpath]\AppData\Local\Temp\playwright_chromiumdev_profile-
IAueWz\CrashpadMetrics-active.pma'] {
errno: -4048,
code: 'EPERM',
syscall: 'unlink',
path: '[userpath]\AppData\Local\Temp\playwright_chromiumdev_profile-
IAueWz\CrashpadMetrics-active.pma'
}
This error won't occur if you run the test in headed mode.
_And if you open the task manager you will see two msedge.exe processes that
haven't ended_ <-- This issue is specific to the Edge browser, I believe
| 1 |
I'm not super awesome at Python, so I'd appreciate any help you can provide to
help nail down this issue.
Newer versions of Python 3 on Windows appear to have an issue when doing an
SSL handshake. This appears to be a problem with urllib3 on the Windows Python
packages only. I was able to successfully use Python 3.3.2 and 3.2.4 on Ubuntu
without this issue.
Affected versions: Windows Python 3.3.x, 3.2.x
Works in: non-Windows Python 3.3.x, 3.2.x; Windows 3.1.x
While I'm almost certain the bug here is actually in Python's urllib3 and not
Requests, I'm not super experienced, so I feel a little over my head. Of
course, this bug affects anyone who uses Requests too. It appears related to
this bug: http://bugs.python.org/issue16361
In order to communicate with the server, I am using Lantern:
https://github.com/dechols/lantern
v = lantern.AbstractAPI(username, password)
v.get_app_list()
Which causes the failure.
But even a very simple GET with requests directly causes the same issue:
r = requests.get(url="https://analysiscenter.veracode.com/api/4.0/getapplist.do", auth=(username, password))
Please help! I'd love to get this fixed, wherever it might need fixing.
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 428, in urlopen
body=body, headers=headers)
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 280, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Python33\lib\http\client.py", line 1061, in request
self._send_request(method, url, body, headers)
File "C:\Python33\lib\http\client.py", line 1099, in _send_request
self.endheaders(body)
File "C:\Python33\lib\http\client.py", line 1057, in endheaders
self._send_output(message_body)
File "C:\Python33\lib\http\client.py", line 902, in _send_output
self.send(msg)
File "C:\Python33\lib\http\client.py", line 840, in send
self.connect()
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 107, in connect
ssl_version=resolved_ssl_version)
File "C:\Python33\lib\site-packages\requests\packages\urllib3\util.py", line 369, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Python33\lib\ssl.py", line 210, in wrap_socket
_context=self)
File "C:\Python33\lib\ssl.py", line 310, in __init__
raise x
File "C:\Python33\lib\ssl.py", line 306, in __init__
self.do_handshake()
File "C:\Python33\lib\ssl.py", line 513, in do_handshake
self._sslobj.do_handshake()
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\requests\adapters.py", line 292, in send
timeout=timeout
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 474, in urlopen
raise MaxRetryError(self, url, e)
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='analysiscenter.veracode.com', port=443): Max retries exceeded with url: /api/4.0/getapplist.do (Caused by <class 'ConnectionResetError'>: [WinError 10054] An existing connection was forcibly closed by the remote host)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\tfs\Vertafore_TFSDev\CQ\veracode\pythontestscriptonation.py", line 11, in <module>
r = requests.get(url="https://analysiscenter.veracode.com/api/4.0/getapplist.do", auth=(username, password))
File "C:\Python33\lib\site-packages\requests\api.py", line 55, in get
return request('get', url, **kwargs)
File "C:\Python33\lib\site-packages\requests\api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python33\lib\site-packages\requests\sessions.py", line 335, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python33\lib\site-packages\requests\sessions.py", line 438, in send
r = adapter.send(request, **kwargs)
File "C:\Python33\lib\site-packages\requests\adapters.py", line 327, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='analysiscenter.veracode.com', port=443): Max retries exceeded with url: /api/4.0/getapplist.do (Caused by <class 'ConnectionResetError'>: [WinError 10054] An existing connection was forcibly closed by the remote host)
[Finished in 61.3s with exit code 1]
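For what it's worth, a workaround that is sometimes suggested for this class of handshake resets is the transport-adapter recipe from the Requests documentation, which pins the SSL/TLS protocol version. A sketch — the `ForcedSSLAdapter` name is made up, and whether TLSv1 specifically helps against this server is an assumption:

```python
import ssl

import requests
from requests.adapters import HTTPAdapter
try:  # vendored location used by Requests of this era
    from requests.packages.urllib3.poolmanager import PoolManager
except ImportError:
    from urllib3.poolmanager import PoolManager


class ForcedSSLAdapter(HTTPAdapter):
    """Transport adapter that pins the SSL/TLS protocol version."""

    def __init__(self, ssl_version, **kwargs):
        # Must be set before super().__init__, which calls init_poolmanager.
        self.ssl_version = ssl_version
        super(ForcedSSLAdapter, self).__init__(**kwargs)

    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=self.ssl_version)


session = requests.Session()
session.mount('https://', ForcedSSLAdapter(ssl.PROTOCOL_TLSv1))
# session.get(...) would now negotiate TLSv1 on every https:// request.
```

If forcing the protocol version makes the reset go away, that would point at the handshake defaults in Python 3.3's ssl module rather than at Requests itself.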
|
I'm getting a strange error when using Requests in Python 3.3 (other flavors
of Python 3 do not get this error):
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 421, in urlopen
body=body, headers=headers)
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 273, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Python33\lib\http\client.py", line 1049, in request
self._send_request(method, url, body, headers)
File "C:\Python33\lib\http\client.py", line 1087, in _send_request
self.endheaders(body)
File "C:\Python33\lib\http\client.py", line 1045, in endheaders
self._send_output(message_body)
File "C:\Python33\lib\http\client.py", line 890, in _send_output
self.send(msg)
File "C:\Python33\lib\http\client.py", line 828, in send
self.connect()
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 104, in connect
ssl_version=resolved_ssl_version)
File "C:\Python33\lib\site-packages\requests\packages\urllib3\util.py", line 329, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Python33\lib\ssl.py", line 210, in wrap_socket
_context=self)
File "C:\Python33\lib\ssl.py", line 310, in __init__
raise x
File "C:\Python33\lib\ssl.py", line 306, in __init__
self.do_handshake()
File "C:\Python33\lib\ssl.py", line 513, in do_handshake
self._sslobj.do_handshake()
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\requests\adapters.py", line 211, in send
timeout=timeout
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 465, in urlopen
raise MaxRetryError(self, url, e)
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='analysiscenter.veracode.com', port=443): Max retries exceeded with url: /api/2.0/getappbuilds.do (Caused by <class 'ConnectionResetError'>: [WinError 10054] An existing connection was forcibly closed by the remote host)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\tfs\Vertafore_TFSDev\CQ\python\veracode\pythontestscriptonation.py", line 33, in <module>
print(v.get_app_builds())
File "C:\tfs\Vertafore_TFSDev\CQ\python\veracode\apiwrapper.py", line 184, in get_app_builds
{}
File "C:\tfs\Vertafore_TFSDev\CQ\python\veracode\apiwrapper.py", line 57, in request
r = requests.get(URL, params=data, auth=username_password)
File "C:\Python33\lib\site-packages\requests\api.py", line 55, in get
return request('get', url, **kwargs)
File "C:\Python33\lib\site-packages\requests\api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python33\lib\site-packages\requests\sessions.py", line 354, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python33\lib\site-packages\requests\sessions.py", line 460, in send
r = adapter.send(request, **kwargs)
File "C:\Python33\lib\site-packages\requests\adapters.py", line 246, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='analysiscenter.veracode.com', port=443): Max retries exceeded with url: /api/2.0/getappbuilds.do (Caused by <class 'ConnectionResetError'>: [WinError 10054] An existing connection was forcibly closed by the remote host)
[Finished in 61.0s with exit code 1]
This automation has run for months on Python 3.2 and these errors have never
occurred. I don't really know enough about requests to investigate this issue,
but I'd be happy to help recreate the issue or debug if someone else can.
Perhaps there's a bug in how Requests is handling HTTPS requests with Python
3.3? (Did 3.3 change how urllib works?...I don't know offhand...)
Again, I'm not getting any of these issues in Python 3.2 or Python 3.1. Please
help! :)
| 1 |
The current implementation of autowiring seems to force unnecessary coupling
and to be unstable in the face of changes in a vendor's service implementation.
Example 1:
interface IA
interface IB extends IA
*service* class C implements IB
I can autowire using the type hint `IB`, but not `IA`.
Example 2:
interface I
class CA implements I
*service* class CB extends CA
I cannot type hint on `I`.
This means, autowiring would break if an intermediary interface or an abstract
class were added. Nothing in the documentation
(http://symfony.com/doc/current/components/dependency_injection/autowiring.html)
mentions this limitation and it seems highly counter-intuitive.
If you agree this should be addressed, I am ready to provide a PR to fix this.
|
With a fresh 4.1.1 install, I override the framework's router:
namespace App\Service;
use Symfony\Bundle\FrameworkBundle\Routing\Router;
class AdminRouter extends Router
{
}
And make a service with:
App\Service\AdminRouter:
arguments:
$resource: 'resource'
$options: []
Which generates a rather ominous error message:
The service ".service_locator.G69Xsbl.App\Service\AdminRouter"
has a dependency on a non-existent service "Symfony\Component\Config\Loader\LoaderInterface".
LoaderInterface? Where the heck does that come from? Looking at the
framework's router:
# Symfony\Bundle\FrameworkBundle\Routing\Router;
public function __construct(
ContainerInterface $container,
$resource,
array $options = array(),
RequestContext $context = null,
ContainerInterface $parameters = null,
LoggerInterface $logger = null)
{
Notice the two ContainerInterfaces. Someone got a bit "clever" with the
parameters injection.
Changing the service definition fixes the problem:
App\Service\AdminRouter:
arguments:
$container: '@service_container'
$parameters: '@service_container'
$resource: 'resource'
$options: []
Is this a bug or some sort of feature? Obviously injecting the same type twice
is a fairly rare thing. If it is a bug then is it even worth fixing?
| 0 |
### Describe your issue.
When the `hybr` method doesn't converge, it may sometimes erroneously report
`success = True`. For example, looking at the function $f(x) = x^2-2x$ for the
initial value $x_0=1$ (or values very close to $1$), this method reports
convergence at `1.01`.
Of the methods available in `root`, `hybr` is the only one to falsely report
success while clearly not having converged.
I would propose adding a final function evaluation to `hybr` and setting
`success` accordingly.
### Reproducing Code Example
>>> root(lambda x: x**2-2*x, x0=1.)
fjac: array([[-1.]])
fun: array([-0.9999])
message: 'The solution converged.'
nfev: 6
qtf: array([0.9999])
r: array([-0.02000001])
status: 1
success: True
x: array([1.01])
>>> methods = ['hybr', 'lm', 'broyden1', 'broyden2',
... 'anderson', 'linearmixing', 'diagbroyden', 'excitingmixing',
... 'krylov', 'df-sane']
>>> for m in methods:
... r = root(lambda x: x**2-2*x, x0=1., method=m)
... print(f"{m}: {r.success = }, {r.x = }")
...
hybr: r.success = True, r.x = array([1.01])
lm: r.success = True, r.x = array([2.])
broyden1: r.success = True, r.x = array(4.46196389e-07)
broyden2: r.success = True, r.x = array(4.46196389e-07)
anderson: r.success = True, r.x = array(1.4289777e-06)
linearmixing: r.success = True, r.x = array(4.65661287e-10)
diagbroyden: r.success = True, r.x = array(4.46196389e-07)
excitingmixing: r.success = True, r.x = array(1.18976516e-07)
krylov: r.success = False, r.x = array(1.)
df-sane: r.success = True, r.x = array(2.)
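A caller-side workaround (until `hybr` verifies its own result) could be to re-check the residual after `root` returns; a minimal sketch, where `checked_root` is just an illustrative helper name:

```python
import numpy as np
from scipy.optimize import root


def checked_root(fun, x0, tol=1e-8, **kwargs):
    """Call scipy.optimize.root, then re-verify success from the residual."""
    res = root(fun, x0, **kwargs)
    # hybr can report status 1 ("converged") even when ||f(x)|| is far
    # from zero, so recompute the residual norm at the returned point.
    if res.success and np.linalg.norm(np.atleast_1d(res.fun)) > tol:
        res.success = False
        res.message = 'Residual too large at the returned point.'
    return res


# From x0=3.0 the Jacobian is well-behaved and hybr finds the root x=2.
good = checked_root(lambda x: x**2 - 2*x, x0=3.0)
# From x0=1.0 (where f'(x)=0) the spurious "converged" result from the
# report above would be downgraded to success=False.
bad = checked_root(lambda x: x**2 - 2*x, x0=1.0)
```

This keeps the fast `hybr` path but downgrades `success` whenever the returned point does not actually satisfy the equations.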
### Error message
-
### SciPy/NumPy/Python version information
1.9.3 1.23.4 sys.version_info(major=3, minor=10, micro=6,
releaselevel='final', serial=0)
|
Running
from scipy.optimize import root
print(root(lambda x: (x - 1) ** 2 - 1, 1))
print(root(lambda x: (x - 1.6667898)**2 - 2, 1.66678977))
gives the ridiculous results
fjac: array([[-1.]])
fun: array([-0.9999])
message: 'The solution converged.'
nfev: 6
qtf: array([ 0.9999])
r: array([-0.02000001])
status: 1
success: True
x: array([ 1.01])
and
fjac: array([[-1.]])
fun: array([-1.99985602])
message: 'The solution converged.'
nfev: 6
qtf: array([ 1.99985602])
r: array([ 0.02399826])
status: 1
success: True
x: array([ 1.65479065])
These should not happen.
### Scipy/Numpy/Python version information:
0.19.0, 1.12.1, version_info(major=2, minor=7, micro=13, releaselevel='final', serial=0)
| 1 |
##### Description of the problem
There seems to be a bug when copying from a buffer geometry into another
existing buffer geometry, which causes groups to be appended, instead of
replaced.
I don't think this is working as intended.
Problematic code
I think this needs a clearGroups() call before starting to add groups.
##### Three.js version
* Dev
* r82
* ...
##### Browser
* All of them
* Chrome
* Firefox
* Internet Explorer
##### OS
* All of them
* Windows
* Linux
* Android
* IOS
|
I was loading an .obj with ~70k vertices, and it was taking ~20s to load the
model. The slowness was being caused by `mergeVertices` call here:
https://github.com/mrdoob/three.js/blob/master/examples/js/loaders/OBJMTLLoader.js#L101
I commented it out just to see what would happen, and it took about ~1s to
load the model and there was no visible change. Any ideas?
| 0 |
I think this should be marked as a feature request, but it's not so much a
request as a curious question (and a scream of pain, yes). From now on I'm
going to talk about the neo4j-embedded API, not Cypher.
Since most methods on most entities return Iterables or Iterators, further
processing of the results turns into a fairly painful procedure. I don't know
whether I'm one of just a few or there are a lot of people with the same
problem, but if you could, please consider (if not replacing, at least
providing alongside) some methods that return Streams. Wouldn't it be great if
end-users could so something like this:
`node.getRelationships(type, direction).map(mapper).collect(collector)`?
IMHO, switching to the Stream API might even turn out to be a good choice
since there are all kinds of facilities: lazy loading of huge amounts of data,
short-circuiting on some operations and so on.
Currently we are using helpers like the following across our neo4j projects:
`StreamSupport.stream(iterable.spliterator(), false);`
It works, but introduces some amount of boilerplate and duplicated code and,
what's more important, it's just a trivial wrapper. Knowing that Stream API
provides a fair amount of additional `Characteristic`s for optimizing
operations on streams, it seems to me that bringing this functionality to
neo4j kernel would be more effective.
P.S. By the way, let me thank you for being so close to the community. Seeing
your developers kindly address issues here and on stackoverflow/google groups
and communicate closely with the user base is quite an inspiration, really.
P.P.S. This is _totally_ unrelated, but is there any possibility of
introducing a typed `getProperty()` method that could infer its return type?
Now we have just this: `String s = (String)node.getProperty("key")`, but it
would be (again, not that important, but probably just nice) `String s =
node.getProperty("key")` or (dummy example)
`System.out.println(node.<String>getProperty(someKey))`. Here's an excerpt
from the code we use for this:
public <T> T getProperty(final String key, final T deflt) {
Objects.requireNonNull(key);
return (T)node.getProperty(key, deflt);
}
|
when our neo4j backup runs daily, it seems to be generating inconsistency
checker report files in the /root folder filling up space
**Neo4j Version:** 3.5.7
**Operating System:** Ubuntu 18.04.2 LTS
### Steps to reproduce
1. Install neo4j 3.5.7
2. Take backups nightly until problems arise
### Expected behavior
Backups should work, consistency checker should not be getting
NullPointerExceptions
### Actual behavior
/root/ filling up with these NullPointerExceptions
Example:
`ERROR: Failed to check record: java.lang.NullPointerException at
org.neo4j.consistency.checking.full.PropertyReader.propertyValue(PropertyReader.java:107)
at
org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.getPropertyValues(PropertyAndNodeIndexedCheck.java:217)
at
org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.matchIndexesToNode(PropertyAndNodeIndexedCheck.java:117)
at
org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:83)
at
org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:59)
at
org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:116)
at
org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:384)
at
org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
at
org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
at
org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
at`
Note that prior to some date, around Nov 19, we are seeing much smaller
"inconsistency" report files containing details such as
`Node[6042535,used=true,rel=-1,prop=13355857,labels=Inline(0x1000000002:[2]),light,secondaryUnitId=-1]
ERROR: The property chain contains multiple properties that have the same
property key id, which means that the entity has at least one duplicate
property.` and at around Nov 19 the NullPointerException appears which causes
growth in file sizes on the disk
Not aware of any process that may have changed
| 0 |
I have simple dropdown buttons we built with 2.04
links are A tags, proper quotes, href='#'
Used to work fine.
Upgraded today to 2.1 and the links in any dropdown button don't work. The
dropdown menu opens, but clicking on a link closes the menu without any
action. tested on Android 2.3 and iOS 5
Rolledback to 2.04 and everything works again. Anyone else has this issue?
|
Heya!
We downloaded the server backend for the custom bootstrap builds. I would like
to add a custom prefix as a field to the customizer frontend.
In general, would you merge a PR for the customizer frontend and the node
backend with the possibility to prefix the css classes?
| 0 |
##### ISSUE TYPE
* Bug Report
##### ANSIBLE VERSION
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
No changes to default ansible.cfg
##### OS / ENVIRONMENT
Ubuntu
Directory Structure:
devops
ansible
farscape
group_vars
all
qa
qa-ui
inventories
ec2.ini
ec2.py
qa
security
keys
platform
qa
key.pem
qa-ui
ansible
site.yml
roles
qa-ui
##### SUMMARY
Ansible can't connect to hosts after upgrading from 2.1.0.0 to 2.1.2.0.
##### STEPS TO REPRODUCE
site.yml:
- name: Test Ping
hosts: "{{ hosts }}"
gather_facts: no
tags: test
tasks:
- name: Test Ping
ping:
qa group var file:
ansible_ssh_user: ubuntu
ansible_ssh_private_key_file: ../../../security/keys/platform/qa/key.pem
Executed from the devops -> ansible -> farscape directory (see directory structure above):
ansible-playbook -i inventories/ ../../../qa-ui/ansible/site.yml -t test -e "hosts=qa-ui"
##### EXPECTED RESULTS
Successful connection and ping result.
##### ACTUAL RESULTS
Using /etc/ansible/ansible.cfg as config file
Loaded callback default of type stdout, v2.0
PLAYBOOK: site.yml *************************************************
2 plays in ../../../qa-ui/ansible/site.yml
PLAY [Test Ping] ***************************************************************
TASK [Test Ping] ***************************************************************
task path: /opt/teamcity-agent/work/56ef8995252d7315/qa-ui/ansible/site.yml:6
<10.2.10.13> ESTABLISH SSH CONNECTION FOR USER: None
<10.2.10.12> ESTABLISH SSH CONNECTION FOR USER: None
<10.2.10.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r 10.2.10.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477325466.67-100845165082436 `" && echo ansible-tmp-1477325466.67-100845165082436="` echo $HOME/.ansible/tmp/ansible-tmp-1477325466.67-100845165082436 `" ) && sleep 0'"'"''
<10.2.10.12> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r 10.2.10.12 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477325466.67-37577927945882 `" && echo ansible-tmp-1477325466.67-37577927945882="` echo $HOME/.ansible/tmp/ansible-tmp-1477325466.67-37577927945882 `" ) && sleep 0'"'"''
fatal: [10.2.10.13]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
fatal: [10.2.10.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
PLAY RECAP *********************************************************************
10.2.10.12 : ok=0 changed=0 unreachable=1 failed=0
10.2.10.13 : ok=0 changed=0 unreachable=1 failed=0
This error happens across all our ansible tasks after upgrading from 2.1.0.0
to 2.1.2.0.
|
##### ISSUE TYPE
* Bug Report
##### COMPONENT NAME
ec2_group
##### ANSIBLE VERSION
ansible 2.3.0.0
config file = /home/con5cience/git/Ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible', u'library']
python version = 2.7.13 (default, Jan 12 2017, 17:59:37) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)]
##### CONFIGURATION
[defaults]
inventory=inventory
library=/usr/share/ansible:library
roles_path=roles
vault_password_file = vault_pass
forks=20
ask_sudo_pass=yes
nocows=0
cow_selection = random
host_key_checking = False
[ssh_connection]
pipelining=True
ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m
control_path = ~/.ssh/ansible-%%r@%%h:%%p
##### OS / ENVIRONMENT
From: Fedora 25
To: Amazon EC2 API
##### SUMMARY
Managing the rules of EC2 security groups using the `ec2_group` module is not
idempotent.
##### STEPS TO REPRODUCE
1. Use Ansible to configure a security group and add rules to the group.
2. Run Ansible again.
ec2_group:
  name: "{{ security_group_name|default( inventory_hostname ~ '-' ~ env ~ '-security-group' ) }}"
  description: "{{ security_group_name|default( inventory_hostname ~ '-' ~ env ~ 'Security Group' ) }}"
  vpc_id: "{{ ec2_vpc_id }}"
  region: "{{ ec2_region }}"
  rules: "{{ common_rules_ingress + rules_ingress }}"
  rules_egress: "{{ common_rules_egress + rules_egress }}"
delegate_to: localhost
Where `common_rules_ingress`, `common_rules_egress`, `rules_ingress`, and
`rules_egress` are lists of port rules in the style of:
- proto: tcp
  from_port: 443
  to_port: 443
  cidr_ip: 0.0.0.0/0
- proto: tcp
  from_port: 80
  to_port: 80
  cidr_ip: 0.0.0.0/0
##### EXPECTED RESULTS
Task exits with 0 and playbook continues if the rule already exists and no
changes are made.
##### ACTUAL RESULTS
Task exits with 1 (or 2, not entirely sure) and playbook run immediately bails
when a rule already exists and no changes are made.
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_em0SmM/ansible_module_ec2_group.py", line 487, in <module>
main()
File "/tmp/ansible_em0SmM/ansible_module_ec2_group.py", line 439, in main
cidr_ip=thisip)
File "/usr/lib/python2.7/site-packages/boto/ec2/connection.py", line 3245, in authorize_security_group_egress
params, verb='POST')
File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1227, in get_status
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidPermission.Duplicate</Code><Message>the specified rule "peer: 0.0.0.0/0, TCP, from port: 443, to port: 443, ALLOW" already exists</Message></Error></Errors><RequestID>871be7ab-957c-4c41-8982-706e7a5dc64c</RequestID></Response>
fatal: [remotehost -> localhost]: FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_em0SmM/ansible_module_ec2_group.py\", line 487, in <module>\n main()\n File \"/tmp/ansible_em0SmM/ansible_module_ec2_group.py\", line 439, in main\n cidr_ip=thisip)\n File \"/usr/lib/python2.7/site-packages/boto/ec2/connection.py\", line 3245, in authorize_security_group_egress\n params, verb='POST')\n File \"/usr/lib/python2.7/site-packages/boto/connection.py\", line 1227, in get_status\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Response><Errors><Error><Code>InvalidPermission.Duplicate</Code><Message>the specified rule \"peer: 0.0.0.0/0, TCP, from port: 443, to port: 443, ALLOW\" already exists</Message></Error></Errors><RequestID>871be7ab-957c-4c41-8982-706e7a5dc64c</RequestID></Response>\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
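Since the AWS response already names the condition (`InvalidPermission.Duplicate`), one possible fix is to treat that specific error as "rule already present, nothing changed" rather than as a failure. A rough sketch in Python (not the module's actual code; the fake backend below just stands in for boto's `authorize_security_group_egress`):

```python
def authorize_idempotently(add_rule, rule):
    """Call add_rule; swallow only the AWS duplicate-rule error."""
    try:
        add_rule(rule)
        return True                      # rule added -> changed=true
    except Exception as exc:
        if "InvalidPermission.Duplicate" in str(exc):
            return False                 # already present -> changed=false
        raise                            # any other AWS error still fails

# Tiny fake standing in for boto's authorize_security_group_egress:
existing = set()
def fake_add(rule):
    if rule in existing:
        raise RuntimeError("400 Bad Request: InvalidPermission.Duplicate")
    existing.add(rule)

print(authorize_idempotently(fake_add, "tcp/443"))  # True on the first run
print(authorize_idempotently(fake_add, "tcp/443"))  # False on the re-run, no failure
```

With this shape, a second playbook run would report the task as ok/unchanged instead of bailing out.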
See also:
#5643 (arbitrarily closed)
ansible/ansible-modules-core#2576 (open but in locked repo)
| 0 |
# Environment
Windows build number: 10.0.19041.264
PowerToys version: 0.18.2
PowerToy module for which you are reporting the bug (if applicable): FancyZones (Keyboard Manager?)
# Steps to reproduce
1. In FancyZones enable "Override Windows Snap hotkeys (Win + arrow) to move windows between zones"
2. In Keyboard Manager use "Remap a key" to swap Alt (Left) and Win (Left), by mapping:
* Win (Left) to Alt (Left)
* Alt (Left) to Win (Left)
3. Hold `Alt (Left)` (mapped to Win) and press `Left` (arrow key).
# Expected behavior
Window moves between zones.
# Actual behavior
Window uses normal Windows Snap behavior.
# Additional test scenario
This re-binding of Win (Left) to another key seems to cascade to mapped
shortcuts as well.
## Steps to reproduce
1. Follow steps to reproduce described above
2. In Keyboard Manager use "Remap shortcuts" to give a VIM style movement binding:
* Win (Left), Shift (Left), H to Win (Left), Left
3. Hold `Alt (Left)` (mapped to Win Left) and `Shift (Left)`, then press the `H` key.
## Expected behavior
Window moves between zones.
## Actual behavior
Window uses normal Windows Snap behavior.
# Workaround
Now that I understand the issue, I'm working around it by leaving Win/Alt
unmodified and instead creating shortcut mappings for any hotkeys I'd like to
use with the key next to the spacebar. This works well enough and means I can
do things such as map Alt-L, which was not possible with Win-L.
If the issue reported isn't reasonably achievable, my request would be to add
a note in Keyboard Manager that rebinding Win (Left) may cause unpredictable
behavior.
|
KBM currently does not always work consistently with FZ and Shortcut Guide if
the Win key is remapped. The current workaround for a user is to disable and
re-enable KBM (since this restarts the hook), but we should do this
automatically for v1.
| 1 |
##### ISSUE TYPE
* Bug Report
##### ANSIBLE VERSION
ansible 2.1.0.0
config file = /path/to/whatever/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
[ssh_connection]
pipelining=True
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The Unix domain socket path created by Ansible was too long, so ssh failed.
The error message is very vague:
this-is-a-long-hostname.example.org | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
Giving the same ssh command as ansible, the error was more descriptive:
unix_listener: "/Users/user12/.ansible/cp/ansible-ssh-this-is-a-long-hostname.example.org-1022-user12345.ulAxrSDBy3jA13KO" too long for Unix domain socket
I was able to solve this by putting the following in my `ansible.cfg`:
control_path = %(directory)s/%%h-%%r
The socket path that Ansible generates by default is clearly too long. This
means that, as of now, long hostnames don't work with Ansible out of the box,
which can be quite restricting.
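For context (my own illustration, not from the Ansible docs): Unix domain socket paths are limited to roughly 104 bytes on macOS (108 on Linux) by the size of `sockaddr_un.sun_path`, and the generated ControlPath above is just over that limit:

```python
# Illustration only: the ~104-byte limit is macOS's sockaddr_un.sun_path size
# (Linux allows 108); the ControlPath from the error message exceeds it.
control_path = ("/Users/user12/.ansible/cp/ansible-ssh-"
                "this-is-a-long-hostname.example.org-1022-user12345.ulAxrSDBy3jA13KO")
print(len(control_path))  # 105 -> "too long for Unix domain socket"
```

This is why shortening the template with `control_path = %(directory)s/%%h-%%r` fixes the connection.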
##### STEPS TO REPRODUCE
* Have a long hostname in your inventory (e.g. `this-is-a-long-hostname.example.org`)
* Try a simple ping command (with `ansible -vvvvvvvvvvvvv -m ping` to that host)
##### EXPECTED RESULTS
You can connect to the machine
##### ACTUAL RESULTS
* You cannot connect to the machine
* SSH error is very vague (maybe addressed by #16649 )
|
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
ssh control persist
##### ANSIBLE VERSION
2.0
##### SUMMARY
When trying to use the ec2 plugin, ssh fails with this error:
`SSH Error: unix_listener: "/Users/luke/.ansible/cp/ansible-ssh-
ec2-255-255-255-255.compute-1.amazonaws.com-22-ubuntu.CErvOvRE5U0urCgm" too
long for Unix domain socket`
Here's the full example:
$ ansible -vvvv -i ec2.py -u ubuntu us-east-1 -m ping
<ec2-255-255-255-255.compute-1.amazonaws.com> ESTABLISH CONNECTION FOR USER: ubuntu
<ec2-255-255-255-255.compute-1.amazonaws.com> REMOTE_MODULE ping
<ec2-255-255-255-255.compute-1.amazonaws.com> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/luke/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 ec2-255-255-255-255.compute-1.amazonaws.com /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1436458336.4-21039895766180 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1436458336.4-21039895766180 && echo $HOME/.ansible/tmp/ansible-tmp-1436458336.4-21039895766180'
ec2-255-255-255-255.compute-1.amazonaws.com | FAILED => SSH Error: unix_listener: "/Users/luke/.ansible/cp/ansible-ssh-ec2-255-255-255-255.compute-1.amazonaws.com-22-ubuntu.CErvOvRE5U0urCgm" too long for Unix domain socket
while connecting to 255.255.255.255:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
I've changed some of the sensitive info in here like the IP etc.
| 1 |
Currently, applications do not stay in their FancyZones when the monitor is
powered off and on again.
All applications pile up in the upper-left corner.
It would be nice to keep the state/position of the applications after the
screen is powered on again, the same as when the computer returns from sleep
mode.
|
This was mentioned in the README so I'm creating the issue as it sounds like a
neat idea to me
I'm assuming this would allow for more flexible window arrangements than the
current quadrant snapping, etc.
| 1 |
### What problem does this feature solve?
Allow resolution of promises inside templates.
### What does the proposed API look like?
`{{ await someFunction(value) }}`
|
### Version
2.3.4
### Reproduction link
https://github.com/geekdada/vue-hackernews-2.0
### Steps to reproduce
$ yarn
$ MICRO_CACHE=false node --expose-gc --inspect server
$ ab -n 50 -c 20 http://127.0.0.1:8080/
Trigger a garbage collection before dumping the memory heap.
### What is expected?
All `Vue$3` instances that are no longer in use are garbage collected.
### What is actually happening?
The opposite: the instances are retained in the heap.

* * *
My heap dump file can be found here: http://d.pr/f/H0ypK
| 0 |
I hit an ICE - I've reduced a reproducible test case here: http://is.gd/M5LB6P
pub struct Foo<T, P>
    where P: DerefMut<Target=Bar<T>>
{
    bar: P,
}

pub struct Bar<T> {
    nzp: NonZero<*mut Option<T>>,
}

impl<T, P> Foo<T, P>
    where P: DerefMut<Target=Bar<T>>
{
    fn fun(&mut self) {
        let p: *mut Option<T> = *self.bar.nzp;
        match unsafe { *p } {
            None => (),
            Some(t) => (),
        }
    }
}
error is
<anon>:22:23: 22:25 error: internal compiler error: this path should not cause illegal move
<anon>:22 match unsafe {*p} {
^~
I'm not even sure what the workaround for this is. Any ideas?
|
Input:
struct T(u8);

fn t() -> *mut T {
    unsafe { 0u8 as *mut T }
}

fn main() {
    let a = unsafe { *t() };
}
Output:
$ rustc ../test.rs
../test.rs:9:19: 9:23 error: internal compiler error: this path should not cause illegal move
../test.rs:9 let a = unsafe { *t() };
^~~~
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: http://doc.rust-lang.org/complement-bugreport.html
note: run with `RUST_BACKTRACE=1` for a backtrace
thread 'rustc' panicked at 'Box<Any>', /Users/John/Documents/dev/rust/src/libsyntax/diagnostic.rs:123
stack backtrace:
1: 0x10f825b75 - sys::backtrace::write::h757d4037fec4513elCt
2: 0x10f84800f - failure::on_fail::he99e1d2cd81a67a80Hz
3: 0x10f7b3c8a - rt::unwind::begin_unwind_inner::hede15ebc165353e0Qpz
4: 0x10d508707 - rt::unwind::begin_unwind::h5150449308391082809
5: 0x10d50869c - diagnostic::SpanHandler::span_bug::h1cc7aa850b4525b9nQF
6: 0x10c93bc1d - session::Session::span_bug::h9dff6f0c981e0b95mRq
7: 0x10c547999 - borrowck::build_borrowck_dataflow_data::hbfab9f3785e58ec8QRe
8: 0x10c5432fb - borrowck::borrowck_fn::h9d4d5a57ec1e26a2cPe
9: 0x10c5440f2 - borrowck::borrowck_item::hd3de64f0b51b624a9Ne
10: 0x10c54461f - borrowck::check_crate::hab49ad1d67fb67e9ZIe
11: 0x10c09e8aa - driver::phase_3_run_analysis_passes::h3bf5eb3f470c8788gwa
12: 0x10c082d90 - driver::compile_input::h63293298907e332cxba
13: 0x10c14e7ba - monitor::unboxed_closure.22558
14: 0x10c14cf15 - thunk::F.Invoke<A, R>::invoke::h15985566512806182469
15: 0x10c14bcf0 - rt::unwind::try::try_fn::h5957420952141477940
16: 0x10f8b1189 - rust_try_inner
17: 0x10f8b1176 - rust_try
18: 0x10c14c3ec - thunk::F.Invoke<A, R>::invoke::h12578415658120090831
19: 0x10f835814 - sys::thread::thread_start::he6c5dcba45c95bf2drw
20: 0x7fff933e22fc - _pthread_body
21: 0x7fff933e2279 - _pthread_body
| 1 |
## Feature request
**What is the expected behavior?**
The documentation says that
> The `[contenthash]` substitution will add a unique hash based on the content
> of an asset. When the asset's content changes, `[contenthash]` will change
> as well.
However, the `[contenthash]` does **not** actually match the hash of the file,
i.e. the result you would get if you ran `md5sum` or `sha256sum` on the file.
This StackOverflow answer explains why: the `[contenthash]` is computed before
minification.
The feature request is to compute the `[contenthash]` _after_ minification/any
other post-processing steps, or at least provide an option to do so.
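To see why this matters, here is a small standalone illustration (my own, not webpack code): a digest of the pre-minification source has nothing to do with a digest of the minified file that is actually shipped, so a filename derived from the former can never be validated against the latter:

```python
import hashlib

def digest(text):
    """Short content digest, standing in for webpack's [contenthash]."""
    return hashlib.md5(text.encode()).hexdigest()[:8]

source = "function add(a, b) {\n  return a + b;\n}\n"  # what gets hashed today
minified = "function add(a,b){return a+b}"             # what actually ships
print(digest(source) == digest(minified))              # False: filename hash != file hash
```

For what it's worth, webpack 5 later added `optimization.realContentHash` (enabled by default in production mode) to recompute hashes from the final processed assets.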
**What is motivation or use case for adding/changing the behavior?**
This would have nice properties, such as allowing proxies to validate the
integrity of a file just by comparing its hash to the filename.
**How should this be implemented in your opinion?**
Change the point in the process where the `[contenthash]` is computed. I don't
know anything about webpack's architecture so I have no idea how hard this is.
**Are you willing to work on this yourself?**
I'm not a webpack developer but if there was some guidance from an expert I
could take a stab at it.
|
## Feature request
Please natively support output chunk hashing that is based on chunks _output_
rather than their _input_ (i.e., `output.filename = "[name]-[chunkhash].js"`).
_Why?_ you may ask? Because hashing is mainly used for production asset
caching. Right now, there are at least three ways the content of a chunk can
change, and only one of those causes the `[chunkhash]` value to change.
When you deploy a new version of your code, you want browsers not to use old,
stale assets. In theory, hashes should be based on the content of the files,
such that when the files changes, the hash changes, and you can leverage the
asset manifest to write out a new `<script>` or `<link>` tag with the new
hash. And obviously, the advantage of `[chunkhash]` vs `[hash]` is that if you
make a change that only changes a single chunk, you do not "invalidate" the
cache of unchanged chunks, thus improving the performance for end users who
have already downloaded unchanged chunks.
Going back to those three ways a chunk's content can change:
1. You make a change to your entrypoint or its dependencies.
2. You make a change to the webpack config (e.g. adding/removing/changing a plugin/loader).
3. You upgrade a loader/plugin version.
Right now, only **1** is supported, which leaves a pretty glaring hole. You
may, say, add source maps ( **2** ) only to discover that your CDN is still
serving a stale version of your code without source maps because the
`[chunkhash]` was not updated.
It seems like tools like https://github.com/erm0l0v/webpack-md5-hash _may_
address this, but this seems like a pretty huge flaw in the expected behavior
out-of-the-box.
**What is the expected behavior?**
The expected behavior is that when the content of a chunk changes, the hash
for that chunk should change too.
**What is motivation or use case for adding/changing the behavior?**
As explained above, the motivation is the principle of least surprise. Right
now, it's surprising that changing configuration, which may have profound
effects on the output, silently slips by as an output asset with the same name
as a stale version of the asset.
**How should this be implemented in your opinion?**
I think there are a few options:
1. Reimplement `[chunkhash]`, though I seem to remember reading issues about challenges with sourcemaps.
2. Implement a new token value (e.g. `output.filename = "[name]-[chunkhash]-[webpackhash].js"`) such that changing anything about your webpack config or its dependencies allows you to bust the cache.
**Other Considerations**
To hack around this in the meantime, I've employed a custom hashing function
that uses the final JSONified value of the webpack config as a hash salt:
// webpack.config.js
const config = {};

class Hasher {
  constructor() {
    const hash = require("crypto").createHash("sha256");
    hash.update(JSON.stringify(config));
    return hash;
  }
}

Object.assign(config, {
  output: {
    filename: isProd ? "[name]-[chunkhash].js" : "[name].js",
    hashFunction: Hasher
  },
  ...
});

module.exports = config;
This creates a custom hashing function that injects a JSONified version of the
webpack config such that changes to webpack's configuration cause the hash to
change. In theory, we could use `output.hashSalt`, but that cannot be lazily
evaluated once the entire webpack config has been constructed. Furthermore,
output.hashSalt does not get used for MiniCssExtractPlugin's [contenthash],
but (confusingly) output.hashFunction does. Finally, this only accounts for
changes in the webpack config itself; it does not account for underlying
changes in plugins/loaders due to, e.g., version upgrades.
**Are you willing to work on this yourself?**
Yes! But I think I need help.
| 1 |
## Environment info
* Python 3.7.7 (default, Mar 26 2020, 15:48:22)
* [GCC 7.3.0] :: Anaconda, Inc. on linux
* Operating System: Ubuntu 16.04.6 LTS / running on Docker
* CPU/GPU model: CPU Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
* C++ compiler version:
* CMake version:
#### Steps/Code to Reproduce
Suppose you create a class like the following to store variables passed in
arguments as temporary local attributes for later use.
At version `scikit-learn==0.22.2.post1`, this original estimator `MyClass`
print works fine.
>>> from sklearn.base import BaseEstimator
>>>
>>> class MyClass(BaseEstimator):
... def __init__(self, objective='logloss', **kwrgs):
... self.objective = objective
... self._other_params = kwrgs
...
... def get_params(self, deep=True):
... params = super().get_params(deep)
... params.update(self._other_params)
... return params
...
>>> clf = MyClass(metric='auc')
>>>
>>> print(clf)
MyClass(metric='auc', objective='logloss')
But in `scikit-learn==0.23.0`, I get the following `KeyError` saying the
instance has no `metric` key:
KeyErrorTraceback (most recent call last)
<ipython-input-31-2e0afdc6afe0> in <module>
14 clf = MyClass(metric='auc')
15
---> 16 print(clf)
/opt/conda/lib/python3.7/site-packages/sklearn/base.py in __repr__(self, N_CHAR_MAX)
277 n_max_elements_to_show=N_MAX_ELEMENTS_TO_SHOW)
278
--> 279 repr_ = pp.pformat(self)
280
281 # Use bruteforce ellipsis when there are a lot of non-blank characters
/opt/conda/lib/python3.7/pprint.py in pformat(self, object)
142 def pformat(self, object):
143 sio = _StringIO()
--> 144 self._format(object, sio, 0, 0, {}, 0)
145 return sio.getvalue()
146
/opt/conda/lib/python3.7/pprint.py in _format(self, object, stream, indent, allowance, context, level)
159 self._readable = False
160 return
--> 161 rep = self._repr(object, context, level)
162 max_width = self._width - indent - allowance
163 if len(rep) > max_width:
/opt/conda/lib/python3.7/pprint.py in _repr(self, object, context, level)
391 def _repr(self, object, context, level):
392 repr, readable, recursive = self.format(object, context.copy(),
--> 393 self._depth, level)
394 if not readable:
395 self._readable = False
/opt/conda/lib/python3.7/site-packages/sklearn/utils/_pprint.py in format(self, object, context, maxlevels, level)
168 def format(self, object, context, maxlevels, level):
169 return _safe_repr(object, context, maxlevels, level,
--> 170 changed_only=self._changed_only)
171
172 def _pprint_estimator(self, object, stream, indent, allowance, context,
/opt/conda/lib/python3.7/site-packages/sklearn/utils/_pprint.py in _safe_repr(object, context, maxlevels, level, changed_only)
412 recursive = False
413 if changed_only:
--> 414 params = _changed_params(object)
415 else:
416 params = object.get_params(deep=False)
/opt/conda/lib/python3.7/site-packages/sklearn/utils/_pprint.py in _changed_params(estimator)
96 init_params = {name: param.default for name, param in init_params.items()}
97 for k, v in params.items():
---> 98 if (repr(v) != repr(init_params[k]) and
99 not (is_scalar_nan(init_params[k]) and is_scalar_nan(v))):
100 filtered_params[k] = v
KeyError: 'metric'
Such a mechanism is also used in other well-known libraries such as LightGBM.
Concretely, it is used to apply aliases to variable names in the fit method
(e.g. converting metric -> metrics).
You can refer to the detailed description in the LightGBM issue I reported
here:
microsoft/LightGBM#3100
Since the error occurs only when you print the estimator or evaluate it as a
string, and there is no ill effect on the fit (predict) methods, I think it
might be a good idea for the print behavior to revert to the older version's.
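For what it's worth, the failing lookup can be reproduced without scikit-learn at all. A minimal sketch of what `_changed_params` does, per the traceback above: it builds a dict of defaults from `__init__`'s signature and then indexes it with every parameter that `get_params` reports, so any key coming from `**kwargs` is missing:

```python
import inspect

class MyClass:
    def __init__(self, objective='logloss', **kwargs):
        self.objective = objective
        self._other_params = kwargs

# Build the defaults dict the way _changed_params does, from the signature.
sig = inspect.signature(MyClass.__init__)
defaults = {name: p.default for name, p in sig.parameters.items()
            if name != 'self' and p.kind is not inspect.Parameter.VAR_KEYWORD}

print('metric' in defaults)  # False -> defaults['metric'] is the KeyError
```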
|
I'm opening the issue just to have a reference
from sklearn import set_config
from lightgbm import LGBMClassifier
set_config(print_changed_only=True)
print(LGBMClassifier(metric='auc'))
will fail because `metric` is not part of the signature of `__init__`; it's
part of a kwargs parameter.
Arguably, that's LightGBM not being super compliant, but there's nothing in
our docs explicitly preventing this.
Fixed by #17205
| 1 |
[root@pcbsd-7889] /home/jc/atom# script/build
Node: v0.10.32
npm: v1.4.28
Installing build modules...
Installing apm...
events.js:72
throw er; // Unhandled 'error' event
^
Error: incorrect header check
at Zlib._binding.onerror (zlib.js:295:17)
npm ERR! atom-package-manager@0.133.0 install: `node ./script/download-
node.js`
npm ERR! Exit status 8
npm ERR!
npm ERR! Failed at the atom-package-manager@0.133.0 install script.
npm ERR! This is most likely a problem with the atom-package-manager package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node ./script/download-node.js
npm ERR! You can get their info via:
npm ERR! npm owner ls atom-package-manager
npm ERR! There is likely additional logging output above.
npm ERR! System FreeBSD 10.1-RELEASE-p8
npm ERR! command "node" "/usr/home/jc/atom/build/node_modules/.bin/npm" "--
userconfig=/usr/home/jc/atom/.npmrc" "install" "--loglevel" "error"
npm ERR! cwd /usr/home/jc/atom/apm
npm ERR! node -v v0.10.32
npm ERR! npm -v 1.4.28
npm ERR! code ELIFECYCLE
npm ERR! not ok code 0
Any idea how to solve this?
|
I followed the build instructions given for the FreeBSD (10.0 x86_64) here-
https://github.com/atom/atom/blob/master/docs/build-instructions/freebsd.md
I compiled node and npm from sources (FreeBSD ports) but still no luck. Here
is the full error log:
http://pastebin.com/WQx8UXWs
| 1 |
**Description**
Add an option for the Celery worker to create a new virtual env, install some
packages, and run the `airflow run` command inside it (based on
`executor_config` params).
Really nice to have: a reusable virtual env that can be shared between tasks
with the same params (based on user configuration).
**Use case / motivation**
Once you get to the point where you want to create a cluster for different
types of Python tasks and you have multiple teams working on the same cluster,
you need to start splitting the business logic code into separate Python
packages to allow better version control and to avoid restarting the workers
when deploying new utility code.
I think it would be amazing if we can allow creating new virtual envs as part
of Airflow and control the package versions.
I know that `PythonVirtualenvOperator` exists, but:
1. Creating environments feels like an executor's job to me; the coder should not have to use specific operators for it.
2. The big downside is that if I want to use `ShortCircuitOperator`, `BranchPythonOperator`, or any new Python-based operator, I have to create a new operator that inherits from `PythonVirtualenvOperator` and duplicates the desired functionality.
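To make the idea concrete, here is a rough standalone sketch (my own, using only the standard library; none of this is an existing Airflow API) of the worker-side step: create a throwaway venv and run a snippet with its interpreter. A real implementation would also `pip install` the packages named in `executor_config`:

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

def run_in_fresh_venv(code: str) -> str:
    """Create a throwaway virtual env and run a Python snippet inside it."""
    with tempfile.TemporaryDirectory() as tmp:
        # with_pip=False keeps the sketch fast; a real version would
        # bootstrap pip and install the requested packages here.
        venv.EnvBuilder(with_pip=False).create(tmp)
        bindir = "Scripts" if sys.platform == "win32" else "bin"
        py = Path(tmp) / bindir / "python"
        result = subprocess.run([str(py), "-c", code],
                                capture_output=True, text=True, check=True)
        return result.stdout

print(run_in_fresh_venv("print(40 + 2)"))
```

The reusable-env variant would key a cache of such directories on the requested package set instead of using a temporary directory.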
**Are you willing to submit a PR?**
Yes, would love to.
**Related Issues**
Not that I can find.
|
### Apache Airflow version
2.3.0 (latest released)
### What happened
After upgrading Airflow from 2.2.4 to 2.3.0, the Airflow webserver encounters
the following errors:
[2022-05-10 05:04:43,530] {manager.py:543} INFO - Removed Permission View: can_create on Users
[2022-05-10 05:04:43,639] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-10 05:04:43,645] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-10 05:04:43,650] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-10 05:04:43,656] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-10 05:04:43,718] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-10 05:04:43,727] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
[2022-05-10 05:04:43,728] {manager.py:570} ERROR - Add Permission to Role Error: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
DETAIL: Key (permission_view_id, role_id)=(410, 1) already exists.
[SQL: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), %(permission_view_id)s, %(role_id)s) RETURNING ab_permission_view_role.id]
[parameters: {'permission_view_id': 410, 'role_id': 1}]
(Background on this error at: http://sqlalche.me/e/14/gkpj)
[2022-05-10 05:04:45,666] {manager.py:508} INFO - Created Permission View: can create on Users
[2022-05-10 05:04:45,666] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_permission_id_view_menu_id_key"
DETAIL: Key (permission_id, view_menu_id)=(5, 51) already exists.
[SQL: INSERT INTO ab_permission_view (id, permission_id, view_menu_id) VALUES (nextval('ab_permission_view_id_seq'), %(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 5, 'view_menu_id': 51}]
(Background on this error at: http://sqlalche.me/e/14/gkpj)
[2022-05-10 05:04:45 -0400] [650935] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
DETAIL: Key (permission_view_id, role_id)=(412, 1) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
worker.init_process()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/util.py", line 412, in import_app
app = app(*args, **kwargs)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/app.py", line 158, in cached_app
app = create_app(config=config, testing=testing)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/app.py", line 146, in create_app
sync_appbuilder_roles(flask_app)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/app.py", line 68, in sync_appbuilder_roles
flask_app.appbuilder.sm.sync_roles()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/security.py", line 580, in sync_roles
self.update_admin_permission()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/security.py", line 562, in update_admin_permission
self.get_session.commit()
File "<string>", line 2, in commit
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1423, in commit
self._transaction.commit(_to_root=self.future)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3255, in flush
self._flush(objects)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3395, in _flush
transaction.rollback(_capture_exception=True)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3355, in _flush
flush_context.execute()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 453, in execute
rec.execute(self)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 576, in execute
self.dependency_processor.process_saves(uow, states)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1182, in process_saves
self._run_crud(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1245, in _run_crud
connection.execute(statement, secondary_insert)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 313, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1389, in _execute_clauseelement
ret = self._execute_context(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
DETAIL: Key (permission_view_id, role_id)=(412, 1) already exists.
[SQL: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), %(permission_view_id)s, %(role_id)s) RETURNING ab_permission_view_role.id]
[parameters: {'permission_view_id': 412, 'role_id': 1}]
(Background on this error at: http://sqlalche.me/e/14/gkpj)
[2022-05-10 05:04:45 -0400] [650935] [INFO] Worker exiting (pid: 650935)
[2022-05-10 05:04:45 -0400] [650934] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
DETAIL: Key (permission_view_id, role_id)=(412, 1) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
worker.init_process()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/util.py", line 412, in import_app
app = app(*args, **kwargs)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/app.py", line 158, in cached_app
app = create_app(config=config, testing=testing)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/app.py", line 146, in create_app
sync_appbuilder_roles(flask_app)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/app.py", line 68, in sync_appbuilder_roles
flask_app.appbuilder.sm.sync_roles()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/security.py", line 580, in sync_roles
self.update_admin_permission()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/airflow/www/security.py", line 562, in update_admin_permission
self.get_session.commit()
File "<string>", line 2, in commit
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1423, in commit
self._transaction.commit(_to_root=self.future)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3255, in flush
self._flush(objects)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3395, in _flush
transaction.rollback(_capture_exception=True)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3355, in _flush
flush_context.execute()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 453, in execute
rec.execute(self)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 576, in execute
self.dependency_processor.process_saves(uow, states)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1182, in process_saves
self._run_crud(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1245, in _run_crud
connection.execute(statement, secondary_insert)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 313, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1389, in _execute_clauseelement
ret = self._execute_context(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
DETAIL: Key (permission_view_id, role_id)=(412, 1) already exists.
[SQL: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), %(permission_view_id)s, %(role_id)s) RETURNING ab_permission_view_role.id]
[parameters: {'permission_view_id': 412, 'role_id': 1}]
(Background on this error at: http://sqlalche.me/e/14/gkpj)
[2022-05-10 05:04:45 -0400] [650934] [INFO] Worker exiting (pid: 650934)
[2022-05-10 05:04:46 -0400] [650932] [INFO] Worker exiting (pid: 650932)
[2022-05-10 05:04:46 -0400] [650933] [INFO] Worker exiting (pid: 650933)
Traceback (most recent call last):
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 209, in run
self.sleep()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 357, in sleep
ready = select.select([self.PIPE[0]], [], [], 1.0)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
self.reap_workers()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/__main__.py", line 7, in <module>
run()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 67, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/app/base.py", line 231, in run
super().run()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 229, in run
self.halt(reason=inst.reason, exit_status=inst.exit_status)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 342, in halt
self.stop()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 393, in stop
time.sleep(0.1)
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
self.reap_workers()
File "/home/etlprod/venv/airflow-3141031@1652080547/lib/python3.8/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
### What you think should happen instead
Obviously, the Airflow webserver is trying to fix some permission issues every
time it starts. Unfortunately, when this process fails, it leaves the
webserver hanging without any active gunicorn workers, thus preventing it
from serving the UI.
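For illustration only, here is a hedged sketch (not Airflow's actual code) of how the failing INSERT from the traceback could be made idempotent on PostgreSQL, so that a second worker syncing the same role/permission pair would not hit the unique constraint. The table and column names are copied from the traceback; the variable names are my own.

```python
# NOT Airflow's actual fix: the INSERT from the traceback, rewritten with
# ON CONFLICT so that repeating it for an existing (permission_view_id,
# role_id) pair is a no-op instead of a UniqueViolation.
insert_sql = """
INSERT INTO ab_permission_view_role (permission_view_id, role_id)
VALUES (%(permission_view_id)s, %(role_id)s)
ON CONFLICT (permission_view_id, role_id) DO NOTHING
"""

# The parameters that failed in the log above.
params = {"permission_view_id": 412, "role_id": 1}
# Executing this statement twice with the same params inserts at most one row.
```

Whether Airflow should use ON CONFLICT or simply tolerate the IntegrityError during `sync_roles()` is a design choice for the maintainers; the point is that the startup sync must be safe to run concurrently from several gunicorn workers.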
### How to reproduce
upgrade from 2.2.4 to 2.3.0
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_amazon-3.3.0-py3-none-any.whl
apache-airflow-providers-ftp @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_ftp-2.1.2-py3-none-any.whl
apache-airflow-providers-http @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_http-2.1.2-py3-none-any.whl
apache-airflow-providers-imap @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_imap-2.2.3-py3-none-any.whl
apache-airflow-providers-mongo @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_mongo-2.3.3-py3-none-any.whl
apache-airflow-providers-mysql @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_mysql-2.2.3-py3-none-any.whl
apache-airflow-providers-pagerduty @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_pagerduty-2.1.3-py3-none-any.whl
apache-airflow-providers-postgres @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_postgres-4.1.0-py3-none-any.whl
apache-airflow-providers-sendgrid @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_sendgrid-2.0.4-py3-none-any.whl
apache-airflow-providers-slack @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_slack-4.2.3-py3-none-any.whl
apache-airflow-providers-sqlite @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_sqlite-2.1.3-py3-none-any.whl
apache-airflow-providers-ssh @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_ssh-2.4.3-py3-none-any.whl
apache-airflow-providers-vertica @ file:///home/airflow/deploy/wheelfreeze/wheels/apache_airflow_providers_vertica-2.1.3-py3-none-any.whl
### Deployment
Virtualenv installation
### Deployment details
python 3.8.10
### Anything else
_No response_
### Are you willing to submit PR?
* Yes I am willing to submit a PR!
### Code of Conduct
* I agree to follow this project's Code of Conduct
| 0 |
include: playbook.yml a=2 b=3 c=4
We can do it with task includes, so why not with playbook includes?
See also: handlers.
Make sure with_items also works with these includes, as it now works with
task includes.
|
Unable to connect to ec2-host (errors with permission denied errors) in AWS.
1. Have the following entry in inventory file
2. Following command was executed
ansible -vvv test_ansible --user ansible --private-key=/home/ansible/.ssh/id_rsa -m ping --become-user=ansible
It is trying to connect as ec2-user@ec2_remotehost |FAILED> SSH Error:
Permission denied (publickey).
while connecting to ec2_remotehost:22
If this is already fixed, let me know the bug number and I can close this as
a duplicate.
| 0 |
#### Code Sample, a copy-pastable example if possible
import numpy as np
import pandas as pd

dat = pd.DataFrame(
    {'number': [2, 2, 3], 'string': ['a', 'a', 'b']},
    index=pd.date_range('2018-01-01', periods=3, freq='1s')
)
dat.rolling('2min').apply(lambda x: len(np.unique(x)))
number string
2018-01-01 00:00:00 1.0 a
2018-01-01 00:00:01 1.0 a
2018-01-01 00:00:02 2.0 b
#### Problem description
What I am trying to do is count how many unique values fall in a rolling
window. This works well for the numeric column _dat.number_, but the string
column _dat.string_ simply returns the original values.
In the above example, I expect the two columns in the output to be the same,
since the numbers of unique values are 1, 1, 2 starting from the first row.
However, the string column returns a, a, b.
#### Expected Output
number string
2018-01-01 00:00:00 1.0 1.0
2018-01-01 00:00:01 1.0 1.0
2018-01-01 00:00:02 2.0 2.0
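A possible workaround, sketched below (my own approach, not an existing pandas feature for object columns): map the strings to integer codes with `pd.factorize`, then roll over the numeric codes. The names `codes`, `coded`, and `n_unique` are my own.

```python
import numpy as np
import pandas as pd

dat = pd.DataFrame(
    {'number': [2, 2, 3], 'string': ['a', 'a', 'b']},
    index=pd.date_range('2018-01-01', periods=3, freq='1s')
)

# Encode each distinct string as an integer code, then count unique codes
# inside the rolling time window (rolling.apply works on numeric data).
codes, _ = pd.factorize(dat['string'])
coded = pd.Series(codes, index=dat.index, dtype='float64')
n_unique = coded.rolling('2min').apply(lambda x: len(np.unique(x)))
# n_unique is 1.0, 1.0, 2.0 — the expected unique counts for the strings.
```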
#### Output of `pd.show_versions()`
## INSTALLED VERSIONS
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.13.0-38-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_AU.UTF-8
LOCALE: en_AU.UTF-8
pandas: 0.22.0
pytest: None
pip: 9.0.3
setuptools: 39.0.1
Cython: None
numpy: 1.14.2
scipy: 1.0.1
pyarrow: None
xarray: None
IPython: 6.3.0
sphinx: 1.7.2
patsy: 0.5.0
dateutil: 2.7.2
pytz: 2018.3
blosc: None
bottleneck: None
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.2.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 1.0.1
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
|
Hi the `Pandas` dream team.
I think it would be nice if `rolling` could accept `strings` as well (see
https://stackoverflow.com/questions/52657429/rolling-with-string-variables)
With the abundance of textual data nowadays, we want `Pandas` to stay at the
top of the curve!
import pandas as pd
import numpy as np
df = pd.DataFrame({'mytime' : [pd.to_datetime('2018-01-01 14:34:12.340'),
pd.to_datetime('2018-01-01 14:34:13.0'),
pd.to_datetime('2018-01-01 14:34:15.342'),
pd.to_datetime('2018-01-01 14:34:16.42'),
pd.to_datetime('2018-01-01 14:34:28.742')],
'myvalue' : [1,2,np.NaN,3,1],
'mychart' : ['a','b','c','d','e']})
df.set_index('mytime', inplace = True)
df
Out[15]:
mychart myvalue
mytime
2018-01-01 14:34:12.340 a 1.0
2018-01-01 14:34:13.000 b 2.0
2018-01-01 14:34:15.342 c NaN
2018-01-01 14:34:16.420 d 3.0
2018-01-01 14:34:28.742 e 1.0
Here I want to concatenate the strings in mychart using the values in the last
2 seconds (not the last two observations).
Unfortunately, both attempts below fail miserably
df.mychart.rolling(window = '2s', closed = 'right').apply(lambda x: ' '.join(x), raw = False)
df.mychart.rolling(window = '2s', closed = 'right').apply(lambda x: (x + ' ').cumsum(), raw = False)
TypeError: cannot handle this type -> object
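As a workaround (a hand-rolled sketch, not a pandas API), the time-based window can be emulated with a boolean mask per row, which works fine for object dtype. The `window` and `joined` names are my own; the half-open window `(t - 2s, t]` mirrors rolling's default `closed='right'`.

```python
import pandas as pd

df = pd.DataFrame({'mytime': pd.to_datetime(['2018-01-01 14:34:12.340',
                                             '2018-01-01 14:34:13.0',
                                             '2018-01-01 14:34:15.342',
                                             '2018-01-01 14:34:16.42',
                                             '2018-01-01 14:34:28.742']),
                   'mychart': list('abcde')}).set_index('mytime')

# For each timestamp t, join every string observed in (t - 2s, t].
window = pd.Timedelta('2s')
joined = [' '.join(df.loc[(df.index > t - window) & (df.index <= t), 'mychart'])
          for t in df.index]
df['joined'] = joined
```

This is O(n²) in the worst case, so it only suits modest frames, but it sidesteps the `TypeError` entirely.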
What do you think?
Thanks!
| 1 |
**Steps to reproduce:**
1. Start Terminal.
2. Press `Win`+`Left` key to dock Terminal window to the left half of my monitor. The Terminal window starts at (0|0) left top position.
3. Exit Terminal.
4. Start Terminal.
**Actual results:**
After step 4, the Terminal window starts at some probably random (X|Y)
position.
Now I have to repeat step 2 from above:
1. Press `Win`+`Left` key to dock the Terminal window to the left half of my monitor.
This is really annoying.
**Expected results:**
The Terminal window should start pixel-perfect at the very same position where
I closed it the last time. No offset or anything. Just start with the same
**size** and the same **position**, no matter whether I manually resized the
window or whether it was docked before.
Google Chrome does this very well, Microsoft Edge based on Chromium does this
very well, too, same does e.g. Visual Studio Code. I hope Terminal can do
better, too.
**Additional information:**
I've described a similar issue some years back over at Super User, together
with some UI mockups.
|
# Summary of the new feature/enhancement
When the window is closed, remember its position/size for next time, either
automatically or with some sort of explicit "remember" option.
Alternatively, as a minimal implementation, be able to manually specify
initial position (similar to how you already can with column and row size).
Already covered by #1043.
| 1 |
Firstly, thanks for such an awesome project!
When trying to add Javascript to customise onClick, Tooltips, etc. in the
deck.gl visualisations, the text box in which you enter the code behaves
erratically. It is also unclear what is actually persisted onto the
visualisation; it doesn't appear to be the code as entered.
#### How to reproduce the bug
(I've given instructions for the deck.gl Scatterplot, but seems to apply to
all deck.gl visualisations)
1. Go to Charts, Add new chart
2. Select a dataset with spatial attributes, choose deck.gl Scatterplot, click on Create New Chart
3. Configure Chart to display some data (i.e. configure the Lat-Long values)
4. Under the Data configuration pane on the left of the screen, expand the "Advanced" collapse. Attempt to enter text in the "Javascript Tooltip Generator". The typed text will appear duplicated.
5. In the browser console, there should be `Uncaught TypeError: (validationErrors || []).forEach is not a function`
### Expected results
Text typed in the Javascript fields in deck.gl visualisations should appear
as typed, and the code entered should be executed in the context of the
visualisation.
### Actual results
Text appearing in field does not match what was typed. Custom JS code doesn't
appear to be executed in map.
The following error appears in the browser console for each character typed:
explore.fd9dbc001a8d2b732fc9.entry.js:624 Uncaught TypeError: (validationErrors || []).forEach is not a function
at Object.SET_FIELD_VALUE (explore.fd9dbc001a8d2b732fc9.entry.js:624:32)
at exploreReducer (explore.fd9dbc001a8d2b732fc9.entry.js:694:39)
at combination (vendors.866d9853ec9ca701f3b8.entry.js:198222:29)
at dispatch (vendors.866d9853ec9ca701f3b8.entry.js:197988:22)
at 3236.54993c7b99382ace8b98.entry.js:242:12
at 1844.8922f8dcb86356245bf9.entry.js:1075:16
at vendors.866d9853ec9ca701f3b8.entry.js:198240:12
at Object.onChange (7173.0ceb268407a17642e1ec.entry.js:12551:61)
at ReactAce.push.93946.ReactAce.onChange (437abb94798b28dd8787.chunk.js:25959:24)
at Editor.EventEmitter._signal (600b0291f89941e46ffa.chunk.js:3870:21)
#### Screenshots
This is what appeared after typing a single `d` character:

This is after typing `d =>`:

### Environment
* browser type and version:
* Mozilla Firefox 98.0 (64-bit)
* Google Chrome Version 99.0.4844.51 (Official Build) (64-bit)
(built off `apache/superset:1.4.0` Docker image tag)
* superset version: `1.4.0`
* python version: `python 3.8.12`
* node.js version: _sorry, not sure how to get the node version inside the running docker container_?
* any feature flags active:
* `ENABLE_TEMPLATE_PROCESSING`
* `ALERT_REPORTS`
* `THUMBNAILS`
* `LISTVIEWS_DEFAULT_CARD_VIEW`
* `DASHBOARD_NATIVE_FILTERS`
* `ENABLE_JAVASCRIPT_CONTROLS` (in `tooltips` config section)
### Checklist
Make sure to follow these steps before submitting your issue - thank you!
* I have checked the superset logs for python stacktraces and included it here as text if there are any.
* I have reproduced the issue with at least the latest released version of superset.
* I have checked the issue tracker for the same issue and I haven't found one similar.
### Additional context
I'm aware this functionality is fairly old (#4173), so I wonder if maybe a
subsequent change has broken the in-browser JS parsing?
|
Current size of dashboard.entry.js is around 7+ MB, which causes slight
sluggishness in rendering.
Are there any optimizations that I can do to reduce its size and improve
overall rendering speed?
| 0 |
* I have searched the issues of this repository and believe that this is not a duplicate.
* * *
**Previous Behavior**
In version 1.0.0.beta31, you could put anything inside a `<Fade>` component
and it would get faded.
**Current Behavior**
As of version 1.0.0.beta32, it seems that `Fade` relies on the child component
doing the right thing with the `style` prop.
Here's a CodeSandbox example:
https://codesandbox.io/s/7j8y2rqyn6
I am not sure if this is intended behavior or not, but the change caught me
off guard and I figured I should mention it.
|
* [y] I have searched the issues of this repository and believe that this is not a duplicate.
I have multiple `Paper` components, and each of them need to have a different
background color.
Do i simply wrap each of the components around a `MuiThemeProvider` with
custom theme? Is there any major disadvantage to this approach?
Or is `MuiThemeProvider` meant to be one per application (like Redux)? If so,
what's the easiest way to meet my requirement?
| 0 |
So Babel is compiling a `for of` loop into code that uses Symbol, which is an
ES6-specific feature. Example:
for (let i of x) {
  console.log(i)
}
Babel output:
"use strict";
var _iteratorNormalCompletion = true;
var _didIteratorError = false;
var _iteratorError = undefined;
try {
  for (var _iterator = x[Symbol.iterator](), _step; !(_iteratorNormalCompletion = (_step = _iterator.next()).done); _iteratorNormalCompletion = true) {
    var i = _step.value;
    console.log(i);
  }
} catch (err) {
  _didIteratorError = true;
  _iteratorError = err;
} finally {
  try {
    if (!_iteratorNormalCompletion && _iterator["return"]) {
      _iterator["return"]();
    }
  } finally {
    if (_didIteratorError) {
      throw _iteratorError;
    }
  }
}
|
I have some code like:
for (var n of geoHashes) {
console.log(n);
}
Only in Firefox does it throw an error: ReferenceError: Symbol is not defined
| 1 |
The select component in `1.0-beta.27` does not change selection to match what
you type
* I have searched the issues of this repository and believe that this is not a duplicate.
## Expected Behavior
When you have a select field selected and begin to type, it should change
selection to match what you typed.
## Current Behavior
Typing while a select is open does nothing.
## Steps to Reproduce (for bugs)
https://codesandbox.io/s/20xx3owq8n
## Context
Need to be able to quickly select from a large list
## Your Environment
Tech | Version
---|---
Material-UI | 1.0-beta.27
React | 15,6
browser | Chrome (latest)
|
* I have searched the issues of this repository and believe that this is not a duplicate.
## Expected Behavior
I would expect native and non-native Selects to left justify their text with
the same text indent as an Input.
## Current Behavior
Currently on firefox on Windows and Linux the native select (the middle one in
the screenshot below) has a text indent where the others don't:

## Steps to Reproduce (for bugs)
https://codesandbox.io/s/zn4k1yq153
## Context
I'm making lists of Inputs and native Selects, and it looks odd when they have
different text indents
## Your Environment
Tech | Version
---|---
Material-UI | 1.0.0 beta 19
React | 15.5.3
browser | firefox
etc |
| 0 |
I just updated to the latest version of webpack. Running `webpack -h` cuts off
halfway through the process. Previous versions had more options and output.

|

And I found that the official Node manual does not recommend using:
if (someConditionNotMet()) {
  printUsageToStdout();
  process.exit(1);
}
I think this is the reason why the help message was truncated.
| 1 |
I was just messing around with some toy problems and discovered that you could
create multiple methods with the same signature when using concrete types in
parametric methods:
julia> foo(a::Int64, b::Int64) = a + b
foo (generic function with 1 method)
julia> foo{T<:Int64}(a::T, b::T) = a - b
foo (generic function with 2 methods)
julia> foo{A<:Int64,B<:Int64}(a::A, b::B) = a * b
foo (generic function with 3 methods)
julia> methods(foo)
#3 methods for generic function "foo":
foo{T<:Int64}(a::T, b::T) at none:1
foo{A<:Int64,B<:Int64}(a::A, b::B) at none:1
foo(a::Int64, b::Int64) at none:1
If a parametric type variable uses a concrete type, shouldn't that type
replace the type variable in the signature?
|
@jakebolewski, @jiahao and I have been thinking through some type system
improvements and things are starting to materialize. I believe this will
consist of:
* Use `DataType` to implement tuple types, which will allow for more efficient tuples, and make tuple types less of a special case in various places. This might also open the door for user-defined covariant types in the future. (DONE)
* Add `UnionAll` types around all uses of TypeVars. Each `UnionAll` will bind one TypeVar, and we will nest them to allow for triangular dispatch. So far we've experimented with just sticking TypeVars wherever, and it has not really worked. We don't have a great syntax for this yet, but for now will probably use `@UnionAll T<:Int Array{T}`. These are mostly equivalent to existential types.
* Unspecialized parametric types will actually be UnionAll types. For example `Complex` will be bound to `@UnionAll T<:Real _Complex{T}` where `_Complex` is an internal type constructor.
* Technically, `(Int,String)` could be a subtype of `@UnionAll T (T,T)` if `T` were `Union(Int,String)`, or `Any`. However dispatch doesn't work this way because it's not very useful. We effectively have a rule that if a method parameter `T` only appears in covariant position, it ranges over only concrete types. We should apply this rule explicitly by marking TypeVars as appropriate. So far this is the best way I can think of to model this behavior, but suggestions are sought.
* TypeVars should only be compared by identity, and are not types themselves. We will not have syntax for introducing lone TypeVars; `@UnionAll` should suffice.
This is being prototyped in `examples/juliatypes.jl`. I hope this will fix
most of the following issues:
method lookup and sorting issues:
#8959 #8915 #7221 #8163
method ambiguity issues:
#8652 #8045 #6383 #6190 #6187 #6048 #5460 #5384 #3025 #1631 #270
misc. type system:
#8920 #8625 #6984 #6721 #3738 #2552 #8470
In particular, type intersection needs to be trustworthy enough that we can
remove the ambiguity warning and generate a runtime error instead.
| 1 |
I don't know which project this bug belongs to, so I'm reporting it here.
I'll move it to the correct repo if someone helps me debug it.
**Do you want to request a _feature_ or report a _bug_?**
Bug
**What is the current behavior?**
Please watch below screencast:
https://drive.google.com/file/d/1KMP44qsZ4y3MwrLLDdnOzPZ8z5mMElFP/view
1. Goto https://react-devtools-tutorial.now.sh/editing-props-and-state
2. Change the last ListItem prop to isComplete from `false` to `true`.
3. Click the checkbox in the view to change the state again from `true` to `false`.
**What is the expected behavior?**
It should just change the state of that ListItem. Instead, it's adding 3 more
in the list with duplicate keys.
**Which versions of React, and which browser / OS are affected by this issue?
Did this work in previous versions of React?**
Latest React.
Mac, Chrome Version 75.0.3770.142 (Official Build) (64-bit)
|
Where does the PropTypes doc live? It's confusing that it's still on the
React object but inaccessible from the website through search or the table of
contents. Let's either reinstate it or move it fully to the standalone repo.
It's also confusing that we're deprecating createClass but it still lives in
the docs ("React without ES6"), whereas we're just "moving" PropTypes but
they're gone from the docs.
| 0 |
(This may well be a duplicate, but I didn't find a specific duplicate and I'm
not clueful enough to figure out if it's included in any of the others.)
# Environment
Windows build number: 10.0.18945.1001
Windows Terminal version (if applicable): 0.3.2142.0
Any other software?
Reproduced using Debian Buster; also, font in use in both Terminal and console is Consolas.
# Steps to reproduce
1. Ensure that you are configured to use the UTF-8 locale and UTF-8 codeset; i.e., that the output of the `locale` command is as follows:
LANG=en_US.UTF-8
LANGUAGE=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=en_US.UTF-8
2. Run `pstree`, or another command which detects and uses UTF-8 line-drawing characters.
Note that this is not the default configuration of Debian for the en_US.UTF-8
locale, which leaves LC_ALL set to C; `sudo update-locale LC_ALL=en_US.UTF-8`
is required.
# Expected behavior
# Actual behavior
Expected behavior is line-drawing characters, as seen in console on the right.
Actual behavior is all such characters rendering as "underline", as seen in
Terminal on the left.

|
# Environment
Windows build number: Microsoft Windows [Version 10.0.18362.267]
Windows Terminal version (if applicable): 0.3.2171.0
Any other software?
# Steps to reproduce
1. Edit config.
2. For any keybinding, add an additional key definition to the "keys" list.
3. Close and re-open Terminal
4. See that the entire command has been removed from the "keybindings" section of the config
In my case, I was trying to add shift+insert as an additional paste
keybinding. At first, I thought the problem was an incorrect key specification
on my part, but using "shift+insert" as the sole keybinding worked fine.
# Expected behavior
At the very least, I would expect both the keybinding element, and the first
item in the "keys" list, to be maintained.
Ideally, all keybindings in the list would work to activate the function
(assuming no conflicts with keybindings in other functions).
# Actual behavior
The entire keybinding element is removed
| 0 |
#### Code Sample, a copy-pastable example if possible
import pandas as pd
df = pd.DataFrame({
    'Foo64': [1.2, 2.4, 7.24],
})
df['Foo32'] = df['Foo64'].astype('float32')
df.eval('-1 + Foo64') # Works
df.eval('-1 + Foo32') # Throws Exception
Traceback (most recent call last):
File "C:\Users\vmuriart\Desktop\bug_eval.py", line 10, in <module>
df.eval('-1 + Foo32') # Throws Exception
File "C:\Anaconda\lib\site-packages\pandas\core\frame.py", line 2186, in eval
return _eval(expr, inplace=inplace, **kwargs)
File "C:\Anaconda\lib\site-packages\pandas\core\computation\eval.py", line 262, in eval
truediv=truediv)
File "C:\Anaconda\lib\site-packages\pandas\core\computation\expr.py", line 727, in __init__
self.terms = self.parse()
File "C:\Anaconda\lib\site-packages\pandas\core\computation\expr.py", line 744, in parse
return self._visitor.visit(self.expr)
File "C:\Anaconda\lib\site-packages\pandas\core\computation\expr.py", line 313, in visit
return visitor(node, **kwargs)
File "C:\Anaconda\lib\site-packages\pandas\core\computation\expr.py", line 319, in visit_Module
return self.visit(expr, **kwargs)
File "C:\Anaconda\lib\site-packages\pandas\core\computation\expr.py", line 313, in visit
return visitor(node, **kwargs)
File "C:\Anaconda\lib\site-packages\pandas\core\computation\expr.py", line 322, in visit_Expr
return self.visit(node.value, **kwargs)
File "C:\Anaconda\lib\site-packages\pandas\core\computation\expr.py", line 313, in visit
return visitor(node, **kwargs)
File "C:\Anaconda\lib\site-packages\pandas\core\computation\expr.py", line 417, in visit_BinOp
left, right = self._maybe_downcast_constants(left, right)
File "C:\Anaconda\lib\site-packages\pandas\core\computation\expr.py", line 365, in _maybe_downcast_constants
name = self.env.add_tmp(np.float32(left.value))
AttributeError: 'UnaryOp' object has no attribute 'value'
#### Problem description
`df.eval(...)` operations fail if one of the columns involved is of type
`float32` and the expression applies a unary operation to a constant. In the
example above, `df.eval('-1 + Foo32')` fails, but `df.eval('-Foo32')`
succeeds, and `df.eval('-1*Foo32')` also fails.
Originally I tested fixing this by just adding `self.value = operand.value` to
the `UnaryOp` object, before I realized that this issue only affects `float32`
objects. I haven't looked to see where the code branches off to cause this
misbehavior.
This was tested and reproduced on both `0.20.3` and `0.22.0`
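Until this is fixed, a workaround (a sketch on my part, not code from the report) is to upcast the `float32` column to `float64` before calling `eval`, so the float32-specific constant-downcast path in `_maybe_downcast_constants` is never taken:

```python
import pandas as pd

df = pd.DataFrame({'Foo64': [1.2, 2.4, 7.24]})
df['Foo32'] = df['Foo64'].astype('float32')

# Hypothetical workaround: upcast to float64 first, so eval never hits
# the float32 branch that touches UnaryOp.value
result = df['Foo32'].astype('float64').to_frame('x').eval('-1 + x')
```

The results differ from the `float64` column only by float32 rounding of the original values.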
#### Expected Output
No exception thrown.
#### Output of `pd.show_versions()`
## INSTALLED VERSIONS
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 79 Stepping 1, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.22.0
pytest: 3.2.1
pip: 9.0.1
setuptools: 36.5.0.post20170921
Cython: 0.26.1
numpy: 1.12.1
scipy: 0.19.1
pyarrow: 0.7.1
xarray: None
IPython: 6.1.0
sphinx: 1.6.3
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.2
feather: None
matplotlib: 2.1.0
openpyxl: 2.4.8
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.2
lxml: 4.1.0
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: 1.1.13
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
|
#### Code Sample, a copy-pastable example if possible
def test_unary():
df = pd.DataFrame({'x': np.array([0.11, 0], dtype=np.float32)})
res = df.eval('(x > 0.1) | (x < -0.1)')
assert np.array_equal(res, np.array([True, False])), res
#### Problem description
This is related to #11235.
On Python 3.6, pandas 0.20.1, this raises an error; the traceback ends with:
File ".../envs/py3/lib/python3.6/site-packages/pandas/core/computation/expr.py", line 370, in _maybe_downcast_constants
name = self.env.add_tmp(np.float32(right.value))
AttributeError: 'UnaryOp' object has no attribute 'value'
In that case, the right operand is `-(0.1)`.
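The same mask can be computed with plain boolean indexing, which sidesteps the `UnaryOp` downcast path entirely (a sketch of a workaround, not part of the original report):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': np.array([0.11, 0], dtype=np.float32)})
# Ordinary vectorized comparisons never go through
# _maybe_downcast_constants, so unary minus on a constant is safe here
res = (df['x'] > 0.1) | (df['x'] < -0.1)
```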
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Linux
OS-release: 4.8.0-49-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.1
pytest: None
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.25.2
numpy: 1.12.1
scipy: 0.19.0
xarray: None
IPython: 6.0.0
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.0.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
pandas_gbq: None
pandas_datareader: None
Another example:
>>> df = pd.DataFrame({'x':[1,2,3,4,5]})
>>> df.eval('x.shift(-1)')
| 1 |
This is a regression from 0.6.2 which affects DataFramesMeta
(JuliaData/DataFramesMeta.jl#88). Under some very particular circumstances,
expression interpolation reverses the order of keys in a dictionary, compared
with the order obtained in other places inside the same function. This is a
problem when the function relies on the order of keys matching that of values
(as with DataFrames macros).
In the following example, `:a` appears before `:b` in the first two lines
inside the quoted block, but the order is reversed in the third line.
function with_helper()
membernames = Dict{Symbol,Int}(:a => 1, :b => 2)
quote
$(keys(membernames)...)
$(map(key -> :($key), keys(membernames))...)
$(map(key -> :(somedict[$key]), keys(membernames))...)
end
end
julia> with_helper()
quote
#= REPL[1]:4 =#
a
b
#= REPL[1]:5 =#
a
b
#= REPL[1]:6 =#
somedict[b]
somedict[a]
end
|
When computing the matrix product of a transposed real matrix with a complex
matrix, I encountered a performance issue.
Consider the following computations:
julia> A=randn(1000,1000);
julia> B=randn(1000,1000);
julia> Z=complex(randn(1000,1000),randn(1000,1000));
julia> # pre-compiling
julia> A'*B;
julia> A'*Z;
julia> complex(A'*real(Z),A'*imag(Z));
julia> # timing
julia> @time A'*B;
0.025224 seconds (130 allocations: 7.637 MB)
julia> @time A'*Z;
1.899244 seconds (6 allocations: 15.259 MB)
julia> @time complex(A'*real(Z),A'*imag(Z));
0.062084 seconds (16 allocations: 45.777 MB, 8.73% gc time)
The latter two computations are mathematically identical, yet the last (more
verbose) expression is ~30x faster than the straightforward expression
`A'*Z`, and only ~2x slower than the corresponding real-real product, as one
would expect. Shouldn't both expressions be comparable in performance?
I am running Julia 0.5.2 on Ubuntu 14.04.
sysinfo() output:
Julia Version 0.5.2
Commit f4c6c9d4bb (2017-05-06 16:34 UTC)
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: Intel(R) Core(TM) i7-4702HQ CPU @ 2.20GHz
WORD_SIZE: 64
BLAS: libopenblas (NO_LAPACK NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Haswell)
LAPACK: liblapack.so.3
LIBM: libopenlibm
LLVM: libLLVM-3.7.1 (ORCJIT, haswell)
| 0 |
Continuing #3059.
See also #3092
* allow dupe columns when they are in the same block/dtype
* Perhaps figure out a way to handle that case as well.
|
#### Code Sample, a copy-pastable example if possible
>>> import pandas as pd
>>> s = pd.Series()
>>> ts = pd.Timestamp('2016-01-01')
>>> s['a'] = None
>>> s['b'] = ts
>>> s
a None
b 1451606400000000000
dtype: object
OK, no worries, we got coerced to integer. Now let's just redo the **same
assignment** :
>>> s['b'] = ts
>>> s
a None
b 2016-01-01 00:00:00
dtype: object
That's ... surprising. This is probably just an unfortunate feature of a
type-inference algorithm, but it's awfully shocking.
#### Related examples for testing
#18410
#21143
#### Expected Output
The two outputs above would be identical; I'd prefer that they were both the
second form (with timestamp information preserved), but anything consistent
would be better than the current state.
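One way to get a consistent result regardless of how many times the assignment runs (a sketch, not code from the report) is to build the Series in a single constructor call, so type inference runs once over all values instead of incrementally on each item assignment:

```python
import pandas as pd

ts = pd.Timestamp('2016-01-01')
# One constructor call: inference sees both values together, so the
# Timestamp is kept and repeating the construction gives the same result
s = pd.Series({'a': None, 'b': ts})
```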
#### output of `pd.show_versions()`
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.11.final.0
python-bits: 64
OS: Darwin
OS-release: 15.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.18.1
nose: None
pip: 8.1.2
setuptools: 25.1.4
Cython: None
numpy: 1.11.1
scipy: None
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.5.3
pytz: 2016.6.1
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
boto: None
pandas_datareader: None
| 0 |
Trying to implement a tree-like structure:
extern mod core(vers = "0.5");
priv struct Node<K, V> {
left: core::Option<Node<K, V>>
}
I hit the following compiler error:
rust: task 7f3b6c105dd0 ran out of stack
/usr/local/bin/../lib/librustrt.so(_ZN9rust_task13begin_failureEPKcS1_m+0x4b)[0x7f3b73b0e45b]
/usr/local/bin/../lib/librustrt.so(_ZN9rust_task9new_stackEm+0x158)[0x7f3b73b0e928]
/usr/local/bin/../lib/librustrt.so(+0x30229)[0x7f3b73b22229]
/usr/local/bin/../lib/librustrt.so(upcall_new_stack+0x280)[0x7f3b73b12040]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(+0x767fa2)[0x7f3b742bbfa2]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(+0x2f2c59)[0x7f3b73e46c59]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(_ZN6middle2ty14__extensions__10meth_3526010iter_bytes15_abfbbb3dea17df3_05E+0xc3f)[0x7f3b73e1bc3f]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(_ZN6middle2ty14__extensions__10meth_3525510iter_bytes15_abfbbb3dea17df3_05E+0x85)[0x7f3b73e1adb5]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(+0x2d6ebf)[0x7f3b73e2aebf]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(+0x2d62ef)[0x7f3b73e2a2ef]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(_ZN6middle2ty9mk_struct16_beb59be6919256b3_05E+0x11e)[0x7f3b73e311de]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(_ZN6middle2ty19fold_regions_and_ty17_4c9b12843e648dc73_05E+0x93a)[0x7f3b73e33dfa]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(+0x2e1f54)[0x7f3b73e35f54]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(+0x111321)[0x7f3b73c65321]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(+0x2e03ea)[0x7f3b73e343ea]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(_ZN6middle2ty19fold_regions_and_ty17_4c9b12843e648dc73_05E+0x26a)[0x7f3b73e3372a]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(+0x2e1f54)[0x7f3b73e35f54]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(_ZN6middle2ty5subst16_b2fc4a1db6bca5d3_05E+0x2fb)[0x7f3b73d0772b]
/usr/local/bin/../lib/librustc-c84825241471686d-0.5.so(_ZN6middle2ty17lookup_field_type17_f6b5a6d1e0e9fa5f3_05E+0x1bb)[0x7f3b73d6472b]
|
This is an ICE on some obviously bogus input code.
struct A<X>;
fn foo<I:A<&'self int>>() { }
fn main() { }
Transcript:
% RUST_LOG=rustc=1 rustc --version /tmp/baz2.rs
/Users/pnkfelix/opt/rust-dbg/bin/rustc 0.8-pre (dd5c737 2013-09-08 12:05:55 -0700)
host: x86_64-apple-darwin
% RUST_LOG=rustc=1 rustc /tmp/baz2.rs
task <unnamed> failed at 'assertion failed: rp.is_none()', /Users/pnkfelix/Dev/Mozilla/rust.git/src/librustc/middle/typeck/collect.rs:1108
error: internal compiler error: unexpected failure
note: the compiler hit an unexpected failure path. this is a bug
note: try running with RUST_LOG=rustc=1 to get further details and report the results to github.com/mozilla/rust/issues
task <unnamed> failed at 'explicit failure', /Users/pnkfelix/Dev/Mozilla/rust.git/src/librustc/rustc.rs:376
%
| 0 |