| text1 | text2 | label |
|---|---|---|
I downloaded neo4j-master from github, today.
Within neo4j-master there is a directory :
neo4j-master/community/io/src/main/java/org/neo4j/io
I also obtained the community edition 2.1.6, which should contain all the jars
for Neo4j projects.
However, there is no neo4j.io jar corresponding to the neo4j-master code
above.
I tried to compile neo4j-master/community/embedded-
examples/src/main/java/org/neo4j/examples/NewMatrix.java
The last import, "import org.neo4j.io.fs.FileUtils;", could not be resolved.
I obtained neo4j-io-2.2.0-M02.jar from
http://mvnrepository.com/artifact/org.neo4j/neo4j-io/2.2.0-M02
and added it to my project. The project then compiled and ran fine...
Further, the online javadoc for Neo4j 2.1.6 does not cover the package org.neo4j.io
(http://neo4j.com/docs/2.1.6/javadocs/)
It would be nice if there were a community version of Neo4j that compiled
the examples out of the box.
|
I have a total of 200,001 nodes, of which one node joins with the 200,000
other nodes, so 200,000 relationships total.
All these nodes come from Kafka: my Kafka consumer reads a set of
nodes (a batch) from Kafka and applies the following operation.
MATCH (a:Dense1) WHERE a.id <> "1"
WITH a
MATCH (b:Dense1) WHERE b.id = "1"
WITH a, b
WHERE a.key = b.key
MERGE (a)-[:PARENT_OF]->(b)
And this takes forever to build the 200,000 relationships, with or without an
index on id and key. If I change `MERGE` to `CREATE` then it is super
quick (the fastest)! However, since I have to read a batch from Kafka and apply
the same operation incrementally, the relationships get duplicated if I
use `CREATE` for every batch.
Ideally, if a relationship exists I don't want Neo4j to do anything, or even
better throw an exception or something to the client driver, so that the
application can do something useful with it. I also tried changing `MERGE` to
`CREATE UNIQUE` in the above code; it is 50% faster than `MERGE` but still slow
compared to `CREATE`.
If `MERGE` is this slow due to double locking, as explained here, then it almost
becomes unusable.
Any approach to make it better would be great!
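One possible direction (a sketch only, not a tested recommendation): keep the fast `CREATE` path and filter duplicates client-side before sending each batch. The code below is plain Python with hypothetical names; a relationship is assumed to be identified by a `(source_id, target_id)` pair, and in a real consumer the `seen` set would need to be persisted across restarts.

```python
# Sketch: de-duplicate relationships client-side so the fast CREATE path can
# be used instead of MERGE. All names here are hypothetical.

def filter_new_relationships(batch, seen):
    """Return only the pairs not created by any earlier batch, updating `seen`."""
    fresh = []
    for pair in batch:
        if pair not in seen:
            seen.add(pair)
            fresh.append(pair)
    return fresh

seen = set()
print(filter_new_relationships([("2", "1"), ("3", "1")], seen))  # both are new
print(filter_new_relationships([("3", "1"), ("4", "1")], seen))  # only ("4", "1")
```

The surviving pairs could then be sent with a plain `CREATE`, since duplicates never reach the database.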
| 0 |
It would be awesome if there were an option for an off-canvas navigation. There
is nothing wrong with the current dropdown-style navigation on mobile and I'm
not against it; I just like the off-canvas style more.
Of course this would not be applicable for every website, but neither is the
dropdown style. So it would be good if we could have an option depending
on what kind of website we are working on. In my opinion, the off-canvas
style is more "mobile" than the dropdown because we see it more often in
mobile apps.
Here is a very good example:
http://designmodo.github.io/startup-demo/framework/samples/sample-04/index.html
As you can see, the links automatically become the off-canvas navigation on
small screens.
Anyone with me?
|
Please make the navbar lateral (off-canvas) when it collapses on mobile
devices, as in Foundation or PureCSS.
Thanks
| 1 |
## Feature request
add "hot module introspection"
as a feedback channel from browser to editor
**situation in my code editor**
class Class1 { constructor() { this.key1 = 'val1' } }
class Class2 { constructor() { this.key2 = 'val2' } }
const obj1 = new Class1();
const obj2 = new Class2();
obj1.k
// ^
// at this point I want "hot code completion" in my code editor,
// so only "key1" is suggested, but not "key2"

// code introspection at runtime:
Object.keys(obj1)
// = [ 'key1' ]

// the Node.js shell can do it:
obj1.k
// ^
// the tab key does the right completion to "key1"
**Ideal?**
**What is the expected behavior?**
obj1.k
// ^
// at this point i want "hot code completion" in my code editor
// so only "key1" is suggested, but not "key2"
As far as I know,
all code editors fail at this point of "dynamic code analysis";
the best they can offer is: `key1` or `key2`.
VSCode and Eclipse do this by default
(VSCode calls it "IntelliSense text suggestions"),
but I don't want to be limited by "static code analysis"
when the program is started anyway after every file change.
**Solution?**
**How should this be implemented in your opinion?**
extend the "hot module replacement" system
to feed back "hot module introspection" data to the editor

this introspection-data can be sent over http
so the editor has a keep-alive connection
and is waiting for the server to push new introspection data
.... or use the browser as code editor, using CodeMirror
and show the program inside a frame, like on jsfiddle.net
[ the code completion function of codemirror seems broken to me ]
the data format should be optimized for machine-readability
for example by using length-prefixed lists and strings,
like in BSON, messagepack, python-pickle, EXI, flatbuffers, ....
In an ideal world, the JavaScript runtime offers a fast way
to access the "internal representation" of the running program.
**limits**
Introspection requires a valid program,
so you must have a running "last version"
to provide introspection data for your not-yet-running "current version".
**potential problems**
circular references must be detected and handled
like `object.child.parent.child.parent.child`....
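A sketch of how such cycle detection could look (illustrative Python, using object identity to mark visited nodes; dicts stand in for arbitrary objects):

```python
# Cycle-safe introspection sketch: walk an object graph and collect property
# names, skipping any object already visited (tracked via id()).

def collect_keys(obj, seen=None):
    if seen is None:
        seen = set()
    if id(obj) in seen or not isinstance(obj, dict):
        return []          # already visited, or a leaf value
    seen.add(id(obj))
    keys = []
    for k, v in obj.items():
        keys.append(k)
        keys.extend(collect_keys(v, seen))
    return keys

child = {}
parent = {"child": child}
child["parent"] = parent   # circular reference, as in object.child.parent...
print(collect_keys(parent))  # ['child', 'parent'] — no infinite recursion
```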
**related**
recursive introspection in javascript
get inherited properties of javascript objects
VS Code to autocomplete JavaScript class 'this' properties automatically
\-- "doesn't work too well if you bind things to the class at runtime"
the ahaa moment of hot reloading in clojurescript/figwheel, by bruce hauman
**Why?**
**What is motivation or use case for adding/changing the behavior?**
this allows for "zero knowledge programming"
let the machine do the boring-precise part of
"how did i call this property? where is it hidden?"
and focus on the creative-fuzzy part of
"let me just add something like ...."
this also makes it much easier to learn new libraries.
instead of depending on good documentation,
you can make full use of the existing introspection functions.
for "distant properties" who are hidden in child/parent objects,
you can browse a "map of properties", like a mind-map.
also, why not? : P
**Are you willing to work on this yourself?**
No, not today.
I hope that the "insider people" can solve this much faster than me,
and I can avoid digging into unfamiliar projects.
**more keywords**
code hinting, runtime analysis, dynamic analysis, runtime introspection, live
object introspection, hierarchy of variable names, javascript object graph
|
# Bug report
**What is the current behavior?**
Many modules published to npm use "auto" exports
(https://rollupjs.org/guide/en#output-exports-exports; there is also a
popular Babel plugin which adds this behaviour:
https://github.com/59naga/babel-plugin-add-module-exports#readme), which is
supposed to ease interop with Node (removing the "pesky" `.default` for CJS
consumers when there is only a default export in the module).
Because of that, depending on a package authored **solely** in CJS (which is
still really common) which in turn depends on a package authored in the
mentioned "auto" mode is dangerous and broken.
Why? Because webpack uses the "module" entry from package.json (thus the
real default export) without checking the requester module type (which is CJS
here). The CJS requester did not use `.default` when requiring the package with
auto mode, because from its perspective there was no such thing.
**If the current behavior is a bug, please provide the steps to reproduce.**
https://github.com/Andarist/webpack-module-entry-from-cjs-issue . Exported
value should be `"foobar42"` instead of `"foo[object Module]42"`
**What is the expected behavior?**
Webpack should deopt (ignoring `.mjs` & "module") its requiring behaviour based
on the requester type.
**Other relevant information:**
webpack version: latest
Node.js version: irrelevant
Operating System: irrelevant
Additional tools: irrelevant
Mentioning the Rollup team, as it's probably the tool outputting the most
auto-mode libraries ( @lukastaegert @guybedford ), and @developit (who I think
might be interested in the discussion).
| 0 |
In a number of places, sklearn controls flow according to the existence of
some method on an estimator. For example: `*SearchCV.score` checks for `score`
on the estimator; `Scorer` and `multiclass` functions check for
`decision_function`; and it is used for validation in
`AdaBoostClassifier.fit`, `multiclass._check_estimator` and `Pipeline`; and
for testing in `test_common`.
Meta-estimators such as `*SearchCV`, `Pipeline`, `RFECV`, etc. should respond
to such `hasattr`s in agreement with their underlying estimators (or else the
`hasattr` approach should be avoided).
This is possible by implementing such methods with a `property` that returns
the correct method from the sub-estimator (or a closure around it), or raises
`AttributeError` if the sub-estimator is found lacking (see #1801). `hasattr`
would then function correctly. Caveats: the code would be less straightforward
in some cases; `help()`/`pydoc` won't show the methods as methods (with an
argument list, etc.), though the `property`'s docstring will show.
|
### Describe the bug
For some samples and requested n_clusters, MiniBatchKMeans does not return a
proper clustering in terms of the number of clusters and consecutive labels.
The example given below shows that when requesting 11 clusters the result
only consists of 9, and requesting 12 results in 11 clusters. Requesting 13
clusters then yields only 10 clusters.
When using KMeans instead of MiniBatchKMeans there is no such issue.
### Steps/Code to Reproduce
import numpy as np
from sklearn.cluster import MiniBatchKMeans
points = [
[-2636.705, 892.6364, 239.4284], [-2676.219, 922.741, 227.3839], [-2628.628, 902.6482, 245.5609], [-2612.497, 860.9032, 248.924],
[-2639.552, 993.8482, 211.2253], [-2602.453, 958.7801, 211.5786], [-2598.118, 1032.398, 177.4023], [-2582.155, 972.5088, 203.5048],
[-2548.377, 803.9934, 279.4388], [-2550.095, 979.9586, 222.6467], [-2746.966, 1021.456, 188.8456], [-2745.181, 984.1931, 199.6674],
[-2729.113, 973.8251, 201.8876], [-2720.765, 1014.262, 205.0213], [-2747.317, 1099.313, 146.2305], [-2739.32, 1005.173, 200.297]
]
for numClusters in range(7, 17):
    model = MiniBatchKMeans(n_clusters=numClusters, random_state=0)
    clusters = model.fit_predict(points)
    unique = np.unique(clusters)
    print("requested", str(numClusters).rjust(2), "clusters and result has", str(len(unique)).rjust(2), "clusters with labels", unique)
### Expected Results
requested 7 clusters and result has 7 clusters with labels [ 0 1 2 3 4 5 6]
requested 8 clusters and result has 8 clusters with labels [ 0 1 2 3 4 5 6 7]
requested 9 clusters and result has 9 clusters with labels [ 0 1 2 3 4 5 6 7 8]
requested 10 clusters and result has 10 clusters with labels [ 0 1 2 3 4 5 6 7 8 9]
requested 11 clusters and result has 11 clusters with labels [ 0 1 2 3 4 5 6 7 8 9 10]
requested 12 clusters and result has 12 clusters with labels [ 0 1 2 3 4 5 6 7 8 9 10 11]
requested 13 clusters and result has 13 clusters with labels [ 0 1 2 3 4 5 6 7 8 9 10 11 12]
requested 14 clusters and result has 14 clusters with labels [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13]
requested 15 clusters and result has 15 clusters with labels [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14]
requested 16 clusters and result has 16 clusters with labels [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]
### Actual Results
requested 7 clusters and result has 7 clusters with labels [ 0 1 2 3 4 5 6]
requested 8 clusters and result has 7 clusters with labels [ 0 1 3 4 5 6 7]
requested 9 clusters and result has 9 clusters with labels [ 0 1 2 3 4 5 6 7 8]
requested 10 clusters and result has 10 clusters with labels [ 0 1 2 3 4 5 6 7 8 9]
requested 11 clusters and result has 9 clusters with labels [ 0 1 2 3 5 6 7 8 9]
requested 12 clusters and result has 11 clusters with labels [ 1 2 3 4 5 6 7 8 9 10 11]
requested 13 clusters and result has 10 clusters with labels [ 0 2 4 5 6 7 9 10 11 12]
requested 14 clusters and result has 12 clusters with labels [ 1 2 3 4 5 6 7 8 9 10 11 12]
requested 15 clusters and result has 11 clusters with labels [ 0 1 3 4 6 7 8 10 11 12 14]
requested 16 clusters and result has 13 clusters with labels [ 0 1 3 4 5 6 7 9 10 11 12 13 15]
### Versions
System:
python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:20:16) [MSC v.1916 64 bit (AMD64)]
executable: C:\Users\USER\.conda\envs\sklearn-env\python.exe
machine: Windows-10-10.0.22000-SP0
Python dependencies:
pip: 21.3.1
setuptools: 58.5.3
sklearn: 1.0.1
numpy: 1.21.4
scipy: 1.7.2
Cython: None
pandas: None
matplotlib: 3.4.3
joblib: 1.1.0
threadpoolctl: 3.0.0
Built with OpenMP: True
| 0 |
I was hoping to find a routine in base that would give me the first `k`
elements of `sortperm(v)`.
This would be much like `select(v, 1:k)`, but instead of returning the actual
elements, it would return the index where those elements can be found.
Does such a function exist?
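For reference, this is what `sortperm(v)[1:k]` computes in full, and newer Julia versions appear to provide `partialsortperm(v, 1:k)` for exactly this without sorting everything. The same idea can be sketched in NumPy (hypothetical helper name) via `argpartition`, which finds the k smallest without a full sort:

```python
import numpy as np

# Sketch: indices of the k smallest elements of v, in ascending order of
# value — i.e. the first k entries of a full argsort, computed partially.

def first_k_sortperm(v, k):
    v = np.asarray(v)
    idx = np.argpartition(v, k - 1)[:k]  # indices of the k smallest, unordered
    return idx[np.argsort(v[idx])]       # order just those k by their values

v = [30, 10, 50, 20, 40]
print(first_k_sortperm(v, 3))  # [1 3 0] -> elements 10, 20, 30
```

`argpartition` is O(n) on average, so for small k this avoids the O(n log n) cost of sorting the whole array.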
|
One of the first things we need to do is make the runtime thread-safe. This
work is on the `threads` branch, and this tracker predates the new GC. I
thought it was worth capturing a tracker that @StefanKarpinski prepared earlier
in an issue to ease the thread-safety work.
This list is organized as `Variable; Approach`.
# builtins.c
* extern size_t jl_page_size; constant
* extern int jl_in_inference; lock
* extern int jl_boot_file_loaded; constant
* int in_jl_ = 0; thread-local
# ccall.cpp
* static std::map<std::string, std::string> sonameMap; lock
* static bool got_sonames = false; lock, write-once
* static std::map<std::string, uv_lib_t*> libMap; lock
* static std::map<std::string, GlobalVariable*> libMapGV; lock
* static std::map<std::string, GlobalVariable*> symMapGV; lock
* ~~static char *temp_arg_area; thread-local (will be deleted very soon)~~
* ~~static const uint32_t arg_area_sz = 4196; constant (will be deleted very soon)~~
* ~~static uint32_t arg_area_loc; thread-local (will be deleted very soon)~~
* ~~static void *temp_arg_blocks[N_TEMP_ARG_BLOCKS]; thread-local (will be deleted very soon)~~
* ~~static uint32_t arg_block_n = 0; thread-local (will be deleted very soon)~~
* ~~static Function *save_arg_area_loc_func; constant (will be deleted very soon)~~
* ~~static Function *restore_arg_area_loc_func; constant (will be deleted very soon)~~
# cgutils.cpp
* static std::map<const std::string, GlobalVariable*> stringConstants; lock
* static std::map<void*, jl_value_llvm> jl_value_to_llvm; lock
* static std::map<Value*, void*> llvm_to_jl_value; lock
* static std::vector<Constant*> jl_sysimg_gvars; lock
* static std::map<int, jl_value_t*> typeIdToType; lock
* jl_array_t *typeToTypeId; lock
* static int cur_type_id = 1; lock
# codegen.cpp
* void *__stack_chk_guard = NULL; thread-local (jwn: why is this on the list? it's a constant and not thread local)
# debuginfo.cpp
* extern "C" volatile int jl_in_stackwalk;
* JuliaJITEventListener *jl_jit_events;
* static obfiletype objfilemap;
* extern char *jl_sysimage_name; constant
* static logdata_t coverageData;
* static logdata_t mallocData;
# dump.c
* static jl_array_t *tree_literal_values=NULL; thread-local
* static jl_value_t *jl_idtable_type=NULL; constant
* static jl_array_t *datatype_list=NULL; thread 0 only
* jl_value_t ***sysimg_gvars = NULL; thread 0 only
* extern int globalUnique; thread 0 only
* static size_t delayed_fptrs_n = 0; thread 0 only
* static size_t delayed_fptrs_max = 0; thread 0 only
# gc.c
* static volatile size_t allocd_bytes = 0; thread-local
* static volatile int64_t total_allocd_bytes = 0; thread-local
* static int64_t last_gc_total_bytes = 0; thread-local
* static size_t freed_bytes = 0; barrier
* static uint64_t total_gc_time=0; barrier
* int jl_in_gc=0; * referenced from switchto task.c barrier
* static htable_t obj_counts; barrier
* static size_t total_freed_bytes=0; barrier
* static arraylist_t to_finalize; barrier
* static jl_value_t **mark_stack = NULL; barrier
* static size_t mark_stack_size = 0; barrier
* static size_t mark_sp = 0; barrier
* extern jl_module_t *jl_old_base_module; constant
* extern jl_array_t *typeToTypeId; barrier
* extern jl_array_t *jl_module_init_order; barrier
* static int is_gc_enabled = 1; atomic
* static double process_t0; constant
# init.c
* char *jl_stack_lo; thread-local
* char *jl_stack_hi; thread-local
* volatile sig_atomic_t jl_signal_pending = 0; thread-local
* volatile sig_atomic_t jl_defer_signal = 0; thread-local
* uv_loop_t *jl_io_loop; I/O thread ?
* static void *signal_stack; thread-local (see #9763 (comment))
* static mach_port_t segv_port = 0; constant
* extern void * __stack_chk_guard; thread-local (duplicate of above)
# jltypes.c
* int inside_typedef = 0; thread-local
* static int match_intersection_mode = 0; thread-local
* static int has_ntuple_intersect_tuple = 0; thread-local
* static int t_uid_ctr = 1; lock
# llvm-simdloop.cpp
* static unsigned simd_loop_mdkind = 0; constant
* static MDNode* simd_loop_md = NULL; constant
* char LowerSIMDLoop::ID = 0; lock
# module.c
* jl_module_t *jl_main_module=NULL; constant
* jl_module_t *jl_core_module=NULL; constant
* jl_module_t *jl_base_module=NULL; constant
* jl_module_t *jl_current_module=NULL; thread-local
* jl_array_t *jl_module_init_order = NULL; lock (this code is badly broken anyway: #9799)
# profile.c
* static volatile ptrint_t* bt_data_prof = NULL;
* static volatile size_t bt_size_max = 0;
* static volatile size_t bt_size_cur = 0;
* static volatile u_int64_t nsecprof = 0;
* static volatile int running = 0;
* volatile HANDLE hBtThread = 0;
* static pthread_t profiler_thread;
* static mach_port_t main_thread;
* clock_serv_t clk;
* static int profile_started = 0;
* static mach_port_t profile_port = 0;
* volatile static int forceDwarf = -2;
* volatile mach_port_t mach_profiler_thread = 0;
* static unw_context_t profiler_uc;
* mach_timespec_t timerprof;
* struct itimerval timerprof;
* static timer_t timerprof;
* static struct itimerspec itsprof;
# sys.c
* JL_STREAM *JL_STDIN=0; constant
* JL_STREAM *JL_STDOUT=0; constant
* JL_STREAM *JL_STDERR=0; constant
# task.c
* volatile int jl_in_stackwalk = 0; thread-local
* static size_t _frame_offset; constant
* DLLEXPORT jl_task_t * volatile jl_current_task; thread-local
* jl_task_t *jl_root_task; constant
* jl_value_t * volatile jl_task_arg_in_transit; thread-local
* jl_value_t *jl_exception_in_transit; thread-local
* __JL_THREAD jl_gcframe_t *jl_pgcstack = NULL; thread-local
* jl_jmp_buf * volatile jl_jmp_target; thread-local
* extern int jl_in_gc; barrier
* static jl_function_t *task__hook_func=NULL; constant
* ptrint_t bt_data[MAX_BT_SIZE+1]; thread-local
* size_t bt_size = 0; thread-local
* int needsSymRefreshModuleList; lock
* jl_function_t *jl_unprotect_stack_func; constant
# toplevel.c
* int jl_lineno = 0; thread-local
* jl_module_t *jl_old_base_module = NULL; constant
* jl_module_t *jl_internal_main_module = NULL; constant
* extern int jl_in_inference; lock
| 0 |
Comment by Kenneth Reitz:
> We need to support MultiDict. This is long overdue.
|
Duplicate #6261
Nate: Sorry, I posted a duplicate. I probably posted in the wrong place — I'm a
newbie and this was my first post on the forum.
You closed #6314 as a duplicate of #6261, but I was not making a feature
request; I was seeking help.
I was hoping to get help in solving the issue I am experiencing with
RequestsDependencyWarning error messages.
| 0 |
In some cases a DataFrame exported to Excel contains bad values.
It is not a problem with Excel's reading (the data inside the sheet1.xml of the
.xlsx file is also incorrect).
The same DataFrame exported to ".csv" is correct.
The problem can be "solved" by renaming the column headers as [col-1,
col-2, ...]. Maybe an encoding problem?
The issue is that there is no warning/error during the export, so it's very easy
to miss.
To reproduce:
import pandas as pd
df = pd.read_pickle('problematic_df.pkl')
df.to_excel('problematic_df.xlsx')
df.to_csv('problematic_df.csv')
with the file available here:
https://drive.google.com/file/d/0Bzz_ZaP_wS_HMFdlMkVzaTR0cjA/view?usp=sharing
Note that the content of cell M14 is different in the two files (at least when
run on my computer).
Using:
* Python 3.4.3 |Anaconda 2.3.0 (64-bit)
* pandas 0.16.2
* Windows 7 64 bits
|
store_id_map
> <class 'pandas.io.pytables.HDFStore'>
> File path: C:\output\identifier_map.h5
> /identifier_map frame_table
> (typ->appendable_multi,nrows->26779823,ncols->9,indexers->[index],dc->[RefIdentifierID])
store_id_map.select('identifier_map')
> * * *
>
> KeyError Traceback (most recent call last)
> in ()
> \----> 1 store_id_map.select('identifier_map')
>
> C:\Python27\lib\site-packages\pandas\io\pytables.pyc in select(self, key,
> where, start, stop, columns, iterator, chunksize, auto_close, *_kwargs)
> 456 return TableIterator(self, func, nrows=s.nrows, start=start, stop=stop,
> chunksize=chunksize, auto_close=auto_close)
> 457
> \--> 458 return TableIterator(self, func, nrows=s.nrows, start=start,
> stop=stop, auto_close=auto_close).get_values()
> 459
> 460 def select_as_coordinates(self, key, where=None, start=None, stop=None,
> *_kwargs):
>
> C:\Python27\lib\site-packages\pandas\io\pytables.pyc in get_values(self)
> 982
> 983 def get_values(self):
> \--> 984 results = self.func(self.start, self.stop)
> 985 self.close()
> 986 return results
>
> C:\Python27\lib\site-packages\pandas\io\pytables.pyc in func(_start, _stop)
> 449 # what we are actually going to do for a chunk
> 450 def func(_start, _stop):
> \--> 451 return s.read(where=where, start=_start, stop=_stop,
> columns=columns, **kwargs)
> 452
> 453 if iterator or chunksize is not None:
>
> C:\Python27\lib\site-packages\pandas\io\pytables.pyc in read(self, columns,
> *_kwargs)
> 3259 columns.insert(0, n)
> 3260 df = super(AppendableMultiFrameTable, self).read(columns=columns,
> *_kwargs)
> -> 3261 df.set_index(self.levels, inplace=True)
> 3262 return df
> 3263
>
> C:\Python27\lib\site-packages\pandas\core\frame.pyc in set_index(self, keys,
> drop, append, inplace, verify_integrity)
> 2827 names.append(None)
> 2828 else:
> -> 2829 level = frame[col].values
> 2830 names.append(col)
> 2831 if drop:
>
> C:\Python27\lib\site-packages\pandas\core\frame.pyc in **getitem** (self,
> key)
> 2001 # get column
> 2002 if self.columns.is_unique:
> -> 2003 return self._get_item_cache(key)
> 2004
> 2005 # duplicate columns
>
> C:\Python27\lib\site-packages\pandas\core\generic.pyc in
> _get_item_cache(self, item)
> 665 return cache[item]
> 666 except Exception:
> \--> 667 values = self._data.get(item)
> 668 res = self._box_item_values(item, values)
> 669 cache[item] = res
>
> C:\Python27\lib\site-packages\pandas\core\internals.pyc in get(self, item)
> 1653 def get(self, item):
> 1654 if self.items.is_unique:
> -> 1655 _, block = self._find_block(item)
> 1656 return block.get(item)
> 1657 else:
>
> C:\Python27\lib\site-packages\pandas\core\internals.pyc in _find_block(self,
> item)
> 1933
> 1934 def _find_block(self, item):
> -> 1935 self._check_have(item)
> 1936 for i, block in enumerate(self.blocks):
> 1937 if item in block:
>
> C:\Python27\lib\site-packages\pandas\core\internals.pyc in _check_have(self,
> item)
> 1940 def _check_have(self, item):
> 1941 if item not in self.items:
> -> 1942 raise KeyError('no item named %s' % com.pprint_thing(item))
> 1943
> 1944 def reindex_axis(self, new_axis, method=None, axis=0, copy=True):
>
> KeyError: u'no item named None'
I am at a loss. What does that mean? I successfully saved the DataFrame but I
cannot read it back.
| 0 |
I've seen other similar posts, but they seem not to apply to me. The role in
question worked fine in 2.2.
##### ISSUE TYPE
* Bug Report
##### COMPONENT NAME
Loop
##### ANSIBLE VERSION
ansible 2.3.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
##### CONFIGURATION
[defaults]
host_key_checking = False
ansible_managed = DO NOT MODIFY by hand. This file is under control of Ansible on {host}.
vault_password_file = /var/lib/semaphore/.vpf
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When trying to create a list of users, it throws an error about an invalid
value.
##### STEPS TO REPRODUCE
This is the task I am trying to run:
- name: create user
  user:
    name: "{{ item.username }}"
    password: "{{ item.password|default(omit) }}"
    shell: /bin/bash
  become: true
  with_items: '{{ users }}'
  no_log: true
This is the `users` variable. I have truncated the vault values for brevity.
users:
  - username: ptadmin
    password: !vault-encrypted |
      $ANSIBLE_VAULT;1.1;AES256
      ...
    use_sudo: true
    use_ssh: false
  - username: ansibleremote
    password: "{{ petra_ansibleremote_password }}"
    use_sudo: true
    use_ssh: true
    public_key: !vault-encrypted |
      $ANSIBLE_VAULT;1.1;AES256
      ...
  - username: semaphore
    password: "{{ petra_ansibleremote_password }}"
    use_sudo: true
    use_ssh: true
  - username: frosty
    password: !vault-encrypted |
      $ANSIBLE_VAULT;1.1;AES256
      ...
    use_sudo: true
    use_ssh: true
    public_key: !vault-encrypted |
      $ANSIBLE_VAULT;1.1;AES256
      ...
  - username: thebeardedone
    password: !vault-encrypted |
      $ANSIBLE_VAULT;1.1;AES256
      ...
    use_sudo: true
    use_ssh: true
    public_key: !vault-encrypted |
      $ANSIBLE_VAULT;1.1;AES256
      ...
  - username: senanufc
    password: !vault-encrypted |
      $ANSIBLE_VAULT;1.1;AES256
      ...
    use_sudo: true
    use_ssh: true
    public_key: !vault-encrypted |
      $ANSIBLE_VAULT;1.1;AES256
      ...
I tried a `debug` before it and the results are in this gist.
I also tried the below and the same thing happens:
- name: test loop
  debug:
    msg: "{{ item.username }}"
  with_items: "{{ users }}"
##### EXPECTED RESULTS
Users created
##### ACTUAL RESULTS
fatal: [petra-hq-dev-master]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'username'\n\nThe error appears to have been in '/etc/ansible/roles/thedumbtechguy.manage-users/tasks/create_users.yml': line 6, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: create user\n ^ here\n"}
to retry, use: --limit @/var/lib/semaphore/repository_1/playbooks/setup_new_hosts.retry
|
##### ISSUE TYPE
* Bug Report
##### COMPONENT NAME
ansible-vault or jinja
##### ANSIBLE VERSION
ansible 2.3.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
##### CONFIGURATION
No special configuration
##### OS / ENVIRONMENT
Ubuntu 16.04
##### SUMMARY
When using the newly introduced single encrypted variables in lists or
dictionaries, these cannot be looped over as expected with `with_items` or
`with_dict`. The list or dictionary is somehow treated as one object and not
as a "loopable" object.
##### STEPS TO REPRODUCE
A simple playbook containing a single encrypted variable `vaulted` in the list
`test_with_vaulted_variable`. The vault password is "test" (without quotes)
and the content of the `vaulted` variable is "vaulted variable".
---
- hosts: all
  vars:
    test_without_vaulted_variable:
      - not_vaulted: not vaulted variable
      - another_standard_variable: standard
    test_with_vaulted_variable:
      - not_vaulted: not vaulted variable
      - vaulted: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          66376230363937306331353166333731633037326166626530393462636666346630366463313134
          6635313236366537346339313338633539643665313931390a373264326437663530616630623734
          31666136343232666235323865653838393830613432343561633465333837633531643564343064
          3237353766313835310a643963313163663632623064313034363531356330653131303833646138
          65366139376134396231353864383662623832376239336433623630383464303161
  tasks:
    - debug: var=test_without_vaulted_variable
    - debug: var=test_with_vaulted_variable
    - debug:
      with_items: "{{ test_without_vaulted_variable }}"
    - debug:
      with_items: "{{ test_with_vaulted_variable }}"
Start the play with (and enter the vault pass "test"):
ansible-playbook -i 'lavego-test,' vault-with-items.yml --ask-vault-pass
##### EXPECTED RESULTS
The list processing should be the same for the lists
`test_without_vaulted_variable` and `test_with_vaulted_variable`: the loops
should output each element of each list.
##### ACTUAL RESULTS
The list containing the vaulted variable is not looped over as expected. In the
last debug statement, note that only one "Hello world!" message is printed
instead of two (one per element), unlike for the list with no vaulted variable.
Note also that the normal debug output of the two lists differs (first and
second debug statements): the list containing the vaulted variable "has no
structure".
$ ansible-playbook -i 'localhost,' vault-with-items.yml --ask-vault-pass
Vault password:
PLAY [all] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] *******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"changed": false,
"test_without_vaulted_variable": [
{
"not_vaulted": "not vaulted variable"
},
{
"another_standard_variable": "standard"
}
]
}
TASK [debug] *******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"changed": false,
"test_with_vaulted_variable": "[{u'not_vaulted': u'not vaulted variable'}, {u'vaulted': AnsibleVaultEncryptedUnicode($ANSIBLE_VAULT;1.1;AES256\n66376230363937306331353166333731633037326166626530393462636666346630366463313134\n6635313236366537346339313338633539643665313931390a373264326437663530616630623734\n31666136343232666235323865653838393830613432343561633465333837633531643564343064\n3237353766313835310a643963313163663632623064313034363531356330653131303833646138\n65366139376134396231353864383662623832376239336433623630383464303161\n)}]"
}
TASK [debug] *******************************************************************************************************************************************************************************************************
ok: [localhost] => (item={u'not_vaulted': u'not vaulted variable'}) => {
"item": {
"not_vaulted": "not vaulted variable"
},
"msg": "Hello world!"
}
ok: [localhost] => (item={u'another_standard_variable': u'standard'}) => {
"item": {
"another_standard_variable": "standard"
},
"msg": "Hello world!"
}
TASK [debug] *******************************************************************************************************************************************************************************************************
ok: [localhost => (item=[{u'not_vaulted': u'not vaulted variable'}, {u'vaulted': AnsibleVaultEncryptedUnicode($ANSIBLE_VAULT;1.1;AES256
66376230363937306331353166333731633037326166626530393462636666346630366463313134
6635313236366537346339313338633539643665313931390a373264326437663530616630623734
31666136343232666235323865653838393830613432343561633465333837633531643564343064
3237353766313835310a643963313163663632623064313034363531356330653131303833646138
65366139376134396231353864383662623832376239336433623630383464303161
)}]) => {
"item": "[{u'not_vaulted': u'not vaulted variable'}, {u'vaulted': AnsibleVaultEncryptedUnicode($ANSIBLE_VAULT;1.1;AES256\n66376230363937306331353166333731633037326166626530393462636666346630366463313134\n6635313236366537346339313338633539643665313931390a373264326437663530616630623734\n31666136343232666235323865653838393830613432343561633465333837633531643564343064\n3237353766313835310a643963313163663632623064313034363531356330653131303833646138\n65366139376134396231353864383662623832376239336433623630383464303161\n)}]",
"msg": "Hello world!"
}
PLAY RECAP *********************************************************************************************************************************************************************************************************
localhost : ok=5 changed=0 unreachable=0 failed=0
Thank you for your help!
| 1 |
Let's add a link on the atom-packages website, which will open the editor and
install the selected package. Something like an App Store link.
|

I believe we casually discussed this in person or in Chat, but I couldn't find
a follow-up issue. I think it would be really excellent. In the meantime we
can work around on the site with instructions for install through the editor
or CLI, but that's a bit clunky.
| 1 |
# Summary of the new feature/enhancement
I think there are some applications that are used very frequently, so it's a
good idea to show a list of applications that are pinned and most used.
That way, we don't need to type in a keyword to find them again.
# Proposed technical implementation details (optional)
The list can be above the input textbox or at the sides of it.
Items could be selected with "ALT + number", so my finger does not have to
leave the ALT key after an "ALT + Space".
|
Hi everybody! What about a new feature for FancyZones: dynamic resizing of the
zone hosting the currently focused application? I'm thinking of something
like resizing windows in tiling window managers such as i3 :)
| 0 |
**Context:**
* Playwright Version: 1.12
* Operating System: Mac
* Node.js version: 14.6
* Browser: Firefox
* Extra: NA
**Code Snippet**
const {firefox} = require('playwright');
(async () => {
const browser = await firefox.launch();
const page = await browser.newPage("Page with SELF SIGNED certificate");
// ...
})();
**Describe the bug**
We are facing issues with Self Signed Certificates on firefox and while doing
browser.newPage() we are getting SEC_ERROR_UNKNOWN_ISSUER.
Can you please fix this behaviour for Self Signed Certificates on
firefox?
|
We are facing issues with Self Signed Certificates on firefox and while doing
browser.newPage() we are getting SEC_ERROR_UNKNOWN_ISSUER which gets resolved
by setting ignoreHTTPSErrors: true.
Can we allow Self Signed Certificates by default, or provide an option to pass
this in the launchServer options?
Working code:
const browser = await firefox.connect({
wsEndpoint: "ws://127.0.0.1:55614/a4de39415b37282b3f8ee16845753bf8",
});
const context = await browser.newContext({
ignoreHTTPSErrors: true
});
// Use the default browser context to create a new tab and navigate to URL
const page = await context.newPage();
Not Working:
const browser = await firefox.connect({
wsEndpoint: "ws://127.0.0.1:55614/a4de39415b37282b3f8ee16845753bf8",
});
// Use the default browser context to create a new tab and navigate to URL
const page = await browser.newPage();
| 1 |
I am trying to compile tensorflow from source. I can build it **successfully**
with CPU support only (i.e. without `--config=cuda`).
But when I try to build it with GPU support, I get this error:
[chaowei@node07 tensorflow]$ export EXTRA_BAZEL_ARGS='-s --verbose_failures --ignore_unsupported_sandboxing --genrule_strategy=standalone --spawn_strategy=standalone --jobs 8'
[chaowei@node07 tensorflow]$
[chaowei@node07 tensorflow]$ /gpfs/home/chaowei/download/bazel-0.1.5/output/bazel build -c opt --config=cuda --linkopt '-lrt' --copt="-DGPR_BACKWARDS_COMPATIBILITY_MODE" --conlyopt="-std=c99" //tensorflow/tools/pip_package:build_pip_package
...........
WARNING: Sandboxed execution is not supported on your system and thus hermeticity of actions cannot be guaranteed. See http://bazel.io/docs/bazel-user-manual.html#sandboxing for more information. You can turn off this warning via --ignore_unsupported_sandboxing.
INFO: Found 1 target...
ERROR: /gpfs/home/chaowei/.cache/bazel/_bazel_chaowei/2ce35f089de902cec16e4a2c6a450834/external/grpc/BUILD:485:1: C++ compilation of rule '@grpc//:grpc_unsecure' failed: gcc failed: error executing command /gpfs/home/chaowei/software/gcc-6.1.0/bin/gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -fPIE -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 ... (remaining 39 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
external/grpc/src/core/compression/message_compress.c:41:18: fatal error: zlib.h: No such file or directory
#include <zlib.h>
^
compilation terminated.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 71.894s, Critical Path: 58.77s
**I also compiled python3 from source on my computer, and when I `import zlib`,
it works fine.**
Here is the information of my system:
`[chaowei@mgt ~]$ cat /etc/redhat-release Red Hat Enterprise Linux Server
release 6.5 (Santiago)`
[chaowei@node07 gcc-6.1.0]$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/gpfs/home/chaowei/software/gcc-6.1.0/libexec/gcc/x86_64-pc-linux-gnu/6.1.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: ./configure --prefix=/gpfs/home/chaowei/software/gcc-6.1.0
Thread model: posix
gcc version 6.1.0 (GCC)
I wonder why I get the `zlib.h` error only when I build tensorflow with GPU
support.
|
### Environment info
Operating System: Red Hat Enterprise Linux Server release 7.2
Installed version of CUDA and cuDNN: CUDA 7, cuDNN 4
(please attach the output of `ls -l /path/to/cuda/lib/libcud*`):
-rw-r--r-- 1 root root 179466 Jan 26 22:19 /usr/local/cuda/lib/libcudadevrt.a
lrwxrwxrwx 1 root root 16 Jan 26 22:19 /usr/local/cuda/lib/libcudart.so ->
libcudart.so.7.0
lrwxrwxrwx 1 root root 19 Jan 26 22:19 /usr/local/cuda/lib/libcudart.so.7.0 ->
libcudart.so.7.0.28
-rwxr-xr-x 1 root root 303052 Jan 26 22:19 /usr/local/cuda/lib/libcudart.so.7.0.28
-rw-r--r-- 1 root root 546514 Jan 26 22:19 /usr/local/cuda/lib/libcudart_static.a
cuDNN is installed for the local user only.
If installed from sources, provide the commit hash:
`b289bc7`
### Steps to reproduce
I have followed the instructions at:
https://www.tensorflow.org/versions/r0.8/get_started/os_setup.html#requirements
Since I am not root, I had to install everything for the local user, but it
seems to have worked.
I get an error at this line:
bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
Saying:
ERROR:
[...]/vlad/.cache/bazel/_bazel_vlad/6607a39fc04ec931b523fac975ff3100/external/png_archive/BUILD:23:1:
Executing genrule @png_archive//:configure failed: bash failed: error
executing command /bin/bash -c ... (remaining 1 argument(s) skipped):
com.google.devtools.build.lib.shell.BadExitStatusException: Process exited
with status 1.
[...]/vlad/.cache/bazel/_bazel_vlad/6607a39fc04ec931b523fac975ff3100/tensorflow/external/png_archive/libpng-1.2.53
[...]/vlad/.cache/bazel/_bazel_vlad/6607a39fc04ec931b523fac975ff3100/tensorflow
/tmp/tmp.pCUaj9eIKr
[...]/vlad/.cache/bazel/_bazel_vlad/6607a39fc04ec931b523fac975ff3100/tensorflow/external/png_archive/libpng-1.2.53
[...]/vlad/.cache/bazel/_bazel_vlad/6607a39fc04ec931b523fac975ff3100/tensorflow
... a bunch more lines until:
checking for pow... no
checking for pow in -lm... yes
checking for zlibVersion in -lz... no
**configure: error: zlib not installed**
Target //tensorflow/cc:tutorials_example_trainer failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 57.196s, Critical Path: 22.12s
### What have you tried?
I installed zlib, and the following program compiles with g++
#include <cstdio>
#include <zlib.h>
int main()
{
printf("Hello world");
return 0;
}
I have the following in my .bashrc:
export LD_LIBRARY_PATH="$HOME/local/cuda/lib64:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="$HOME/bin/zlibdev/lib:$LD_LIBRARY_PATH"
export CPATH="$HOME/local/cuda/include:$CPATH"
export CPATH="$HOME/bin/zlibdev/include:$CPATH"
export LIBRARY_PATH="$HOME/local/cuda/lib64:$LIBRARY_PATH"
export PKG_CONFIG_PATH="$HOME/bin/zlibdev/lib/pkgconfig"
Why can't bazel / tensorflow find zlib.h? It's there and accessible.
| 1 |
### Preflight Checklist
* I have read the Contributing Guidelines for this project.
* I agree to follow the Code of Conduct that this project adheres to.
* I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:**
* 6.0.10
* **Operating System:**
* Windows 7 / macOS 10.13.6
* **Last Known Working Electron version:**
* 4.1.5
### Expected Behavior
Electron Main Process should receive incoming `messages` from another process.
### Actual Behavior
Electron Main Process can only send messages, but not receive
### To Reproduce
TBD
### Screenshots
### Additional Information
Is there a reason anyone suspects this wouldn't work? I reviewed the changelog
meticulously but I don't see anything that seems relevant in either 5.x or
6.x. I see the sandbox is enabled by default, but it seems to only apply to
the `renderer` processes and not the `main` as far as I can tell. Currently I
haven't a clue what the cause could be, but if any Electron Team dev or anyone
else could point me in the right direction I'd appreciate it hugely.
|
### Preflight Checklist
* I have read the Contributing Guidelines for this project.
* I agree to follow the Code of Conduct that this project adheres to.
* I have searched the issue tracker for an issue that matches the one I want to file, without success.
# Issue Details
* **Electron Version:** `6.0.10`
* **Operating System:** `Windows 7`
* **Last Known Working Electron version:** `4.1.5`
### Expected Behavior
Debugger does not bug out and freeze/eat CPU indefinitely
### Actual Behavior
Debugging innocuous and functional code that was debuggable in Electron 4 can
lead to freezes that require destroying the BrowserWindow or restarting
Electron entirely.
### To Reproduce
No idea :(
### Screenshots

### Additional Information
I upgraded and setting breakpoints in working code that was previously
debuggable in Electron 4 there are issues. There's been no changes to that
code and the debugger worked fine in Electron 4. I've been working on new code
and figured that the issue was caused by a bug in my newly written code until
just now.
It will stop properly at the breakpoints (in a number of places in my code),
and then CPU usage spikes and stays high, taking up 40-45% CPU on my dual core
machine. It appears to respond, but if you resume or step ahead nothing
happens. If you press Pause script execution, similarly nothing seems to
happen except the UI updates. CPU usage remains the same throughout. This code
also runs properly when not debugging and last time I ran it in React Native
and debugged with React Native Debugger (based on Electron 1.8 I think?)
Hitting Ctrl+P, the quick nav feature comes up as expected, but there's no
files there except the HTML file loaded. Any additional files no longer
appear.
Electron 6 is based on Chromium 76, and on some machines Chrome 76 (and 77)
shoots to using a full CPU core and doesn't stop until the process is ended. I
wonder if perhaps I'm seeing the same thing here.
| 1 |
My app doesn't need `werkzeug` directly except for type annotations of views
that contain a `redirect` call, even when I explicitly provide
`Response=flask.Response`.
flask/src/flask/helpers.py, lines 233 to 235 at 5cdfeae:

    def redirect(
        location: str, code: int = 302, Response: t.Optional[t.Type["BaseResponse"]] = None
    ) -> "BaseResponse":
I think this can be achieved in a similar way to `classmethod`s:
_BaseResponse = t.TypeVar("_BaseResponse", bound="BaseResponse")
def redirect(
location: str, code: int = 302, Response: t.Optional[t.Type[_BaseResponse]] = None
) -> _BaseResponse:
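A minimal runnable sketch of the proposed signature (the `BaseResponse` class below is a stand-in for werkzeug's, defined here only so the sketch is self-contained):

```python
import typing as t

class BaseResponse:  # stand-in for werkzeug's BaseResponse (illustration only)
    def __init__(self, location: str, code: int = 302) -> None:
        self.location, self.code = location, code

_BaseResponse = t.TypeVar("_BaseResponse", bound=BaseResponse)

def redirect(
    location: str,
    code: int = 302,
    Response: t.Optional[t.Type[_BaseResponse]] = None,
) -> _BaseResponse:
    # fall back to the default response class when none is given
    cls: t.Type[BaseResponse] = Response if Response is not None else BaseResponse
    return t.cast(_BaseResponse, cls(location, code))

class MyResponse(BaseResponse):
    pass

resp = redirect("/login", Response=MyResponse)
```

With the TypeVar, a type checker infers `resp` as `MyResponse` rather than `BaseResponse`, so the caller no longer needs a werkzeug import just for the annotation.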
|
I have been having issues for 3-4 days now trying to get a hello world up and
running on the remote server. I figured it would be nice if there was
documentation covering, from A to Z, how to get that done and working.
http://flask.pocoo.org/docs/deploying/mod_wsgi/
The current documentation does not provide an example what to put into an
exact python file, and what url to open up in the web browser to get the
desired python module running on the remote server. It is okay to have
references to other places if that is more logical, but the idea is to have a
self-contained page which I can start reading, and by the time I reach the end,
I will have a working remote hello world.
This would be well appreciated.
| 0 |
TensorFlow should have a Rust interface.
Original e-mail:
I'd like to write Rust bindings for TensorFlow, and I had a few questions.
First of all, is anyone already working on this, and if so, can I lend a hand?
If not, is this something the TensorFlow team would be interested in? I assume
that the TensorFlow team would not be willing to commit right now to
supporting Rust, so I thought a separate open source project (with the option
to fold into the main project later) would be the way to go.
|
### System information
* **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)** :
No
* **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)** :
Ubuntu 14.04
* **TensorFlow installed from (source or binary)** :
pip install
* **TensorFlow version (use command below)** :
1.2.1
* **Python version** :
3.6
* **Exact command to reproduce** :
`tensorboard --logdir=gs://mybucket `
### Describe the problem
When trying to run tensorboard from a google cloud storage bucket the
following error occurs:
`tensorflow.python.framework.errors_impl.UnimplementedError: File system
scheme gs not implemented `
Even after running gs authentication
`gcloud auth application-default login`
### Source code / logs
I was following this guide on training a pet object detector
| 0 |
Hard to reproduce. I attached an array where this happens:
import numpy as np
np.seterr(all='raise')
a = np.load('weird_array.npy')
print(a.shape, a.dtype)
for i, val in enumerate(a):
try:
np.isfinite(np.array(val, ndmin=1))
except:
strange_index = i
print(type(val))
print(val.__class__.__name__)
print(i)
print(val)
print(np.isfinite(a[strange_index]))
Results in:
<class 'numpy.float64'>
float64
1023450
nan
Traceback (most recent call last):
File "test.py", line 15, in <module>
print(np.isfinite(a[strange_index]))
FloatingPointError: invalid value encountered in isfinite
weird_array.zip
|
My code failed with a `FloatingPointError` because `isfinite` encountered an
invalid value. The offending value was... `nan`. Apparently, it was _the wrong
kind of nan_. I reproduced it as follows:
# (earlier: seterr(all='raise'))
In [204]: x = uint32(0x7f831681).view("<f4")
In [205]: print(x)
nan
In [206]: isnan(x)
---------------------------------------------------------------------------
FloatingPointError Traceback (most recent call last)
<ipython-input-206-b5e847e0f3bf> in <module>()
----> 1 isnan(x)
FloatingPointError: invalid value encountered in isnan
In [207]: isfinite(x)
---------------------------------------------------------------------------
FloatingPointError Traceback (most recent call last)
<ipython-input-207-3d4ef4d5266d> in <module>()
----> 1 isfinite(x)
FloatingPointError: invalid value encountered in isfinite
I'm not sure how this ended up in my data, but it was not on purpose.
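For reference, a small sketch (assuming NumPy is available) that rebuilds the same bit pattern and shows that suppressing the `invalid` floating-point error lets `isnan`/`isfinite` classify it normally:

```python
import numpy as np

# 0x7F831681: exponent bits all ones, mantissa non-zero with its top bit
# clear, i.e. a *signaling* NaN -- inspecting it can trip the FPU's
# "invalid" flag, which seterr(all='raise') turns into FloatingPointError
x = np.array([0x7F831681], dtype=np.uint32).view(np.float32)[0]

with np.errstate(invalid="ignore"):  # don't raise on the signaling NaN
    assert np.isnan(x)
    assert not np.isfinite(x)
```

The value still classifies as NaN either way; only the error-state handling differs between a quiet and a signaling NaN here.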
| 1 |


When creating an array from 0.2 to 0.8, the maximum value in the array is 0.8,
but when I run max on the whole array, I find the maximum value differs from
0.8: it has extra digits in the 15th and 16th decimal places.
When the same is done with 0.7 as the ending value, the maximum of the array
is 0.60000000009.. and the value of 0.7 is not included in the array.
PFA the screenshots.
|
If I do `np.arange(3.18,3.21,0.01)`, it gives `array([ 3.18, 3.19, 3.2 ])`.
However, if I do `np.arange(3.18,3.22,0.01)`, it gives `array([ 3.18, 3.19,
3.2 , 3.21, 3.22])`. This seems inconsistent. Why is this so? I am using numpy
1.11.2 on Python 2.7.13.
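As a side note, a sketch of the usual workaround: when the endpoint matters, `np.linspace` with an explicit count avoids the floating-point rounding that makes `arange`'s last element unpredictable (0.01 is not exactly representable in binary):

```python
import numpy as np

# arange accumulates a float step, so whether the endpoint appears depends
# on rounding; linspace takes the endpoints and a count explicitly
a = np.linspace(3.18, 3.21, num=4)

assert len(a) == 4
assert np.allclose(a, [3.18, 3.19, 3.20, 3.21])
```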
| 1 |
Been using React (Native) for half a year now, really enjoying it! I'm no
expert but have run up against what seems like a weakness in the framework
that I'd like to bring up.
**The problem**
_Sending one-off events down the chain (parent-to-child) in a way that works
with the component lifecycle._
The issue arises from the fact that props are semi-persistent values, which
differs in nature from one-time events. So for example if a deep-link URL was
received you want to say 'respond to this once when you're ready', not 'store
this URL'. The mechanism of caching a one-time event value breaks down if the
same URL is then sent again, which is a valid event case.
Children have an easy and elegant way to communicate back to parents via
callbacks, but there doesn't seem to be a way to do this same basic thing the
other direction.
**Example cases**
* A deep-link was received and an app wants to tell child pages to respond appropriately
* A tab navigator wants to tell a child to scroll to top on secondary tap
* A list view wants to trigger all of its list items to animate each time the page is shown
From everything I've read, the two normal ways to do this are 1) call a method
on a child directly using a ref, or 2) emit an event that children may listen
for. But those ignore the component lifecycle, so the child isn't ready to
receive a direct call or event yet.
These also feel clunky compared to the elegance of React's architecture. But
React is a one-way top-down model, so the idea of passing one-time events down
the component chain seems like it would fit nicely and be a real improvement.
**Best workarounds we've found**
* Add a 'trigger' state variable in the parent that is a number, and wire this to children. Children use a lifecycle method to sniff for a change to their trigger prop, and then do a known action. We've done this a bunch now to handle some of the cases listed above.
* (really tacky) Set and then clear a prop immediately after setting it. Yuck.
Is there is some React Way to solve this common need? If so, no one on our
team knows of one, and the few articles I've found on the web addressing
component communication only suggest dispatching events or calling methods
directly via refs. Thanks for the open discussion!
|
**Do you want to request a _feature_ or report a _bug_?**
Bug
**What is the current behavior?**
When an undefined object is assigned a property, the component in which this
is done re-renders.
**If the current behavior is a bug, please provide the steps to reproduce and
if possible a minimal demo of the problem. Your bug will get fixed much faster
if we can run your code and it doesn't have dependencies other than React.
Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or
CodeSandbox (https://codesandbox.io/s/new) example below:**
function ashwin() {
let obj = undefined;
obj["hasBasket"] = true;
}
call this function inside a functional component. It will result in the code
being rendered again before throwing an error.
https://codesandbox.io/s/priceless-wing-6xh8r
**What is the expected behavior?**
Shouldn't render again and just throw an error
**Which versions of React, and which browser / OS are affected by this issue?
Did this work in previous versions of React?**
React:- 16.10.2
Was able to reproduce it in chrome and safari.
Didn't test it in previous versions of React
| 0 |
Please add a standardized/easy way to implement dependent select form fields.
|
We long have the problem of creating fields that depend on the value of other
fields now. See also:
* #3767
* #3768
* #4548
I want to propose a solution that seems feasible from my current point of
view.
Currently, I can think of two different APIs:
##### API 1
<?php
$builder->addIf(function (FormInterface $form) {
return $form->get('field1')->getData() >= 1
&& !$form->get('field2')->getData();
}, 'myfield', 'text');
$builder->addUnless(function (FormInterface $form) {
return $form->get('field1')->getData() < 1
|| $form->get('field2')->getData();
}, 'myfield', 'text');
##### API 2
<?php
$builder
->_if(function (FormInterface $form) {
return $form->get('field1')->getData() >= 1
&& !$form->get('field2')->getData();
})
->add('myfield', 'text')
->add('myotherfield', 'text')
->_endif()
;
$builder
->_switch(function (FormInterface $form) {
return $form->get('field1')->getData();
})
->_case('foo')
->_case('bar')
->add('myfield', 'text', array('foo' => 'bar'))
->add('myotherfield', 'text')
->_case('baz')
->add('myfield', 'text', array('foo' => 'baz'))
->_default()
->add('myfield', 'text')
->_endswitch()
;
The second API obviously is a lot more expressive, but also a bit more
complicated than the first one.
Please give me your opinions on what API you prefer or whether you can think
of further limitations in these APIs.
##### Implementation
The issue of creating dependencies between fields can be solved by a lazy
dependency resolution graph like in the OptionsResolver.
During form prepopulation, the conditions are invoked with a
`FormPrepopulator` object implementing `FormInterface`. When
`FormPrepopulator::get('field')` is called, "field" is prepopulated. If
"field" is also dependent on some condition, that condition will be evaluated
now in order to construct "field". After evaluating the condition, fields are
added or removed accordingly.
During form binding, the conditions are invoked with a `FormBinder` object,
that also implements `FormInterface`. This object works like
`FormPrepopulator`, only that it binds the fields instead of filling them with
default data.
In both cases, circular dependencies can be detected and reported.
| 1 |
### Preflight Checklist
* I have read the Contributing Guidelines for this project.
* I agree to follow the Code of Conduct that this project adheres to.
* I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:**
* 9.0.0 and later
* **Operating System:**
* macOS 10.13.6
* **Last Known Working Electron version:**
* 8.3.3
### Expected Behavior
In a BrowserWindow set up to make HTTP CORS requests with
webPreferences: {
webSecurity: false
},
older Electron versions could make the CORS request.
### Actual Behavior
Access to XMLHttpRequest at 'https://xxx' from origin 'http://localhost:9080'
has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is
present on the requested resource.
|
### Preflight Checklist
* I have read the Contributing Guidelines for this project.
* I agree to follow the Code of Conduct that this project adheres to.
* I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:**
12.0.0-beta.21
* **Operating System:**
Windows 10
* **Last Known Working Electron version:**
12.0.0-beta.20
### Expected Behavior
No crash.
### Actual Behavior
Crash reported to Sentry using minidump uploader:
https://sentry.io/share/issue/70e843318a304dae9616805c3223a1cc/
### To Reproduce
This might be enough. Use Electron.WebRequest:
.onBeforeRequest(
{
urls: [],
},
({ url }, callback) => {
if (url.startsWith('https://test..com')) {
callback({ cancel: true });
} else {
callback({ cancel: false });
}
},
);
| 0 |
**Dave Syer** opened **SPR-5850** and commented
Provide a ContextLoader for WebApplicationApplicationContext: some components
(e.g. View implementations) are hard or impossible to test without an instance
of WebApplicationContext.
* * *
**Affects:** 3.0 M3
**Issue Links:**
* #9917 Support loading WebApplicationContexts with the TestContext Framework ("duplicates")
|
**Fritz Richter** opened **SPR-7784** and commented
In my current webapp project, I found out that if I post something to the
server in the form of ?list=1&list=2&list=3, and I have a mapping on my
controller which takes a `@RequestParam` List<Long> parameter, it
will contain String objects, and not Long objects.
* * *
**Affects:** 3.0.5
**Issue Links:**
* #12437 `@RequestParam` - wanting List<Long>, getting List<String> ("duplicates")
| 0 |
Is it possible to build a subset of babel to just use the `transform` function
for compiling es6/es7/jsx to js in the browser, does it generally have to be
this huge? Are there any tricks for using it with webpack?
`JSXTransformer` and `react-tools` are going away and babel seems to remain
the only available option for transpiling JSX in the browser, however it's 10
times bigger than those two which is often quite a problem.
|
The current size of `browser-polyfill.min.js` is ~80kb. It's possible to
seriously reduce it.
Browserify, by default, preserves module paths, which is not required.
The full shim version of `core-js` built with `browserify` (w/o `bundle-collapser`) is
65kb; with `webpack` it is 43kb.
We could use `webpack` or add `bundle-collapser`.
I think we could do the same with `browser.js`.
| 1 |
##### Issue Type:
Bug Report
##### Ansible Version:
Bug on current git devel, introduced with `eeb5973`
##### Environment:
Ubuntu 12.04 and 14.04
##### Summary:
When applying some filters on a list, when the resulting list is empty, it is
not recognised as a list anymore.
##### Steps To Reproduce:
---
- hosts: localhost
gather_facts: false
connection: local
tasks:
- command: cat /proc/cpuinfo
register: cpuinfo
- debug: var=cpuinfo.stdout_lines|difference(cpuinfo.stdout_lines)
- debug: var=item
with_items: cpuinfo.stdout_lines|difference(cpuinfo.stdout_lines)
##### Expected Results:
TASK: [debug var=cpuinfo.stdout_lines|difference(cpuinfo.stdout_lines)] *******
ok: [localhost] => {
"cpuinfo.stdout_lines|difference(cpuinfo.stdout_lines)": "set([])"
}
TASK: [debug var=item] ********************************************************
skipping: [localhost]
##### Actual Results:
TASK: [debug var=cpuinfo.stdout_lines|difference(cpuinfo.stdout_lines)] *******
ok: [localhost] => {
"cpuinfo.stdout_lines|difference(cpuinfo.stdout_lines)": "set([])"
}
TASK: [debug var=item] ********************************************************
fatal: [localhost] => with_items expects a list or a set
FATAL: all hosts have already failed -- aborting
|
##### Issue Type:
Bug Report
##### Ansible Version:
ansible 1.6.6
##### Environment:
What OS are you running Ansible from and what OS are you managing? Examples
include RHEL 5/6, Centos 5/6, Ubuntu 12.04/13.10, *BSD, Solaris. If this is a
generic feature request or it doesn't apply, just say “N/A”.
##### Summary:
I have an ansible task that looks like this:
- name: Distribute Jar files to all other cluster nodes
shell: rsync -avz {{ home }}/.m2 {{ user }}@{{ item }}:{{ home }}/
with_items: groups.hadoop_all | difference(inventory_hostname)
when: groups.hadoop_all | difference(inventory_hostname) | length > 1
ignore_errors: yes
sudo: no
tags:
- distribute_jars
which works fine on ansible 1.6.5 but fails on 1.6.6
##### Steps To Reproduce:
Working on a reduced test case now.
##### Expected Results:
I would expect the playbook to complete without errors.
##### Actual Results:
The actual result is that it fails in 1.6.6 with this output:
TASK: [dev | Distribute Jar files to all other cluster nodes] ***********
fatal: [dev] => with_items expects a list or a set
FATAL: all hosts have already failed -- aborting
I've stepped back through ansible versions on a machine, keeping everything
else the same and as soon as you upgrade to 1.6.6, it starts falling with this
error. I'm using ansible installed via pip, if that makes any difference.
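For context, a plain-Python sketch of what the output above suggests is happening (this is an assumption about Ansible's templating internals, not a confirmed trace): the `difference` filter returns a `set`, and when the result is rendered during templating its string representation is what `with_items` receives, which is neither a list nor a set:

```python
stdout_lines = ["processor : 0", "model name : X"]

# what the difference filter effectively computes: a set (here, empty)
result = set(stdout_lines) - set(stdout_lines)

# templating renders the value to its string repr ("set([])" on Python 2,
# "set()" on Python 3) -- a plain string, which with_items rejects
rendered = str(result)

assert rendered in ("set()", "set([])")
assert not isinstance(rendered, (list, set))

# casting back with Jinja's `| list` filter before with_items is the
# usual workaround for this class of error
assert isinstance(list(result), list)
```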
| 1 |
Today I've upgraded our codebase from r88 -> r91, everything seems to work
fine aside from the reflectors (code which has not significantly changed since
r88 from what I see). I've noticed that when checking individual upgrades, the
problem starts at r90.
It seems to get exponentially bad when our entire model + mirror are in the
frustum; when I'm right in front of it with little else of the model visible,
the performance is OK. Since I can't provide live examples, I was wondering if
someone could point me to where to look/investigate, or what could have
changed in that release that affects them?
##### Three.js version
* Dev
* r91
* r90
* r89
* r88
##### Browser
* All of them
* Chrome
* Firefox
* Internet Explorer
##### OS
* All of them
* Windows
* macOS
* Linux
* Android
* iOS
##### Hardware Requirements (graphics card, VR Device, ...)
|
Right now, there are two ways to adjust the positioning of a texture on an
object:
* UV coordinates in geometry
* `.offset` and `.repeat` properties on `Texture`
It would be useful to also have `.offset` and `.repeat` properties on
`Material` as well, so that these values could be varied on different objects
without having to allocate additional geometries or texture resources.
I'm considering an interesting use case: breaking down sky spheres/domes into
smaller components. Rather than create a single, large, solid sphere, imagine
a fraction of an icosahedron (subdivided) and repeated as multiple meshes to
construct a sphere out of pieces. It would cost a few extra draw calls, but
provide at least two benefits:
1. Significantly reduce the number of vertices computed by excluding the ones that are off camera, which is well more than half. Spheres can have hundreds of vertices, depending on the level of detail. This is not such a big deal on desktop, but I suspect it could make a big difference on mobile devices with tiled GPU architectures that would cause the vertex shader to be run many times.
2. Reduce overdraw by fixing z-sorting. If an entire sky sphere is positioned at [0, 0, 0], it will almost certainly be sorted incorrectly and drawn first every time, even though it should be drawn last. By positioning individual sphere components far away, they should be correctly sorted last.
Implementing this approach today would require duplicating either the geometry
(for different UV coordinates) or the texture (for different offsets) for each
piece of the sky. If we could set the offset on the material, each piece would
use the same geometry, shader and sampler, only varying the uniform values
between draw calls.
| 0 |
# Checklist
* I have verified that the issue exists against the `master` branch of Celery.
* This has already been asked to the discussion group first.
* I have read the relevant section in the
contribution guide
on reporting bugs.
* I have checked the issues list
for similar or identical bug reports.
* I have checked the pull requests list
for existing proposed fixes.
* I have checked the commit log
to find out if the bug was already fixed in the master branch.
* I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
* I have included the output of `celery -A proj report` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
* I have verified that the issue exists against the `master` branch of Celery.
* I have included the contents of `pip freeze` in the issue.
* I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
* I have tried reproducing the issue on more than one Python version
and/or implementation.
* I have tried reproducing the issue on more than one message broker and/or
result backend.
* I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
* I have tried reproducing the issue on more than one operating system.
* I have tried reproducing the issue on more than one workers pool.
* I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
* I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
#### Related Issues
* None
#### Possible Duplicates
* None
## Environment & Settings
**Celery version** :
**`celery report` Output:**
# Steps to Reproduce
## Required Dependencies
* **Minimal Python Version** : N/A or Unknown
* **Minimal Celery Version** : N/A or Unknown
* **Minimal Kombu Version** : N/A or Unknown
* **Minimal Broker Version** : N/A or Unknown
* **Minimal Result Backend Version** : N/A or Unknown
* **Minimal OS and/or Kernel Version** : N/A or Unknown
* **Minimal Broker Client Version** : N/A or Unknown
* **Minimal Result Backend Client Version** : N/A or Unknown
### Python Packages
**`pip freeze` Output:**
### Other Dependencies
N/A
## Minimally Reproducible Test Case
Swap the commented line setting `c` for expected behaviour.
import celery

app = celery.Celery(broker="redis://", backend="redis://")

@app.task
def nop(*_):
    pass

@app.task
def die(*_):
    raise RuntimeError

@app.task(bind=True)
def replace(self, with_):
    with_ = celery.Signature.from_dict(with_)
    raise self.replace(with_)

@app.task
def cb(*args):
    print("CALLBACK", *args)

#c = celery.chain(nop.s(), die.s())
c = celery.chain(nop.s(), replace.si(die.s()))
c.link_error(cb.s())
c.apply_async()
# Expected Behavior
`cb` should be called as a new-style errback because it accepts starargs
# Actual Behavior
`cb` is not called
|
when trying to start a worker.
[2013-06-28 12:28:07,258: ERROR/MainProcess] Unrecoverable error:
AttributeError("'Connection' object has no attribute 'setblocking'",)
Traceback (most recent call last): File
"/Users/yannick/.pythonbrew/pythons/Python-3.3.0/lib/python3.3/site-
packages/celery-3.1.0rc3-py3.3.egg/celery/worker/ **init**.py", line 189, in
start self.blueprint.start(self) File
"/Users/yannick/.pythonbrew/pythons/Python-3.3.0/lib/python3.3/site-
packages/celery-3.1.0rc3-py3.3.egg/celery/bootsteps.py", line 119, in start
step.start(parent) File
"/Users/yannick/.pythonbrew/pythons/Python-3.3.0/lib/python3.3/site-
packages/celery-3.1.0rc3-py3.3.egg/celery/bootsteps.py", line 352, in start
return self.obj.start() File
"/Users/yannick/.pythonbrew/pythons/Python-3.3.0/lib/python3.3/site-
packages/celery-3.1.0rc3-py3.3.egg/celery/concurrency/base.py", line 112, in
start self.on_start() File
"/Users/yannick/.pythonbrew/pythons/Python-3.3.0/lib/python3.3/site-
packages/celery-3.1.0rc3-py3.3.egg/celery/concurrency/processes.py", line 461,
in on_start **self.options) File
"/Users/yannick/.pythonbrew/pythons/Python-3.3.0/lib/python3.3/site-
packages/celery-3.1.0rc3-py3.3.egg/celery/concurrency/processes.py", line 236,
in **init** for _ in range(processes)) File
"/Users/yannick/.pythonbrew/pythons/Python-3.3.0/lib/python3.3/site-
packages/celery-3.1.0rc3-py3.3.egg/celery/concurrency/processes.py", line 236,
in for _ in range(processes)) File
"/Users/yannick/.pythonbrew/pythons/Python-3.3.0/lib/python3.3/site-
packages/celery-3.1.0rc3-py3.3.egg/celery/concurrency/processes.py", line 268,
in create_process_queues inq._writer.setblocking(0) AttributeError:
'Connection' object has no attribute 'setblocking'
| 0 |
Hello, folks.
Version 3.0.3 of bootstrap.css has ineffective rules for striped tables which
have rows with .danger or other contextual class(es), if _tbody_ is used
Given a table with the following structure...
<table class="table table-striped table-hover">
<tbody>
<tr class="danger">
<td>Row 1</td>
</tr>
<tr class="danger">
<td>Row 2</td>
</tr>
<tr class="danger">
<td>Row 3</td>
</tr>
</tbody>
</table>
...only Row 2 gets emphasized with the _.danger_ contextual class rules. On
hovering these rows, the normal emphasis returns, including the darker color
on the shaded rows.
In dist, the rule for _table-striped_ at bootstrap.css line 1712 overrides the
rule at line 1770, because the former contains a _tbody_ selector, which makes
it more specific than the latter, where that selector is missing.
This bug was probably introduced with the super-specific rules for the
.table-striped class in `224296f`/less/tables.less, intended to fix #7281.
|
I have a table with the class .table-striped. It has 3 rows with tr.danger.
Only the middle one is red; the other two are the default color.
When I remove .table-striped, it works correctly.
| 1 |
**Stephen Todd** opened **SPR-3450** and commented
Spring's current code makes it difficult to use Java collections as a command
in command controllers. Specifically, referencing elements in the collection
is prevented. Currently, elements in a collection are referenced using [].
Support needs to be added to allow paths to start with [index/key], as in
"[1].property".
The application for this is probably rare, which is probably why it hasn't
been brought up before (at least I couldn't find any similarly reported
issues). I use the functionality for executing multiple of the same commands.
I currently have a form that has multiple objects that can be selected. If the
user clicks "Delete" with multiple objects selected, I have an array of id's
that get passed to a delete form. This form creates a delete command for each
object specified. Details for the way the objects are deleted are stored in an
object, which is put in to a list. When the user submits the form, each
command is executed in succession (the list is actually passed to the business
layer and executed in a single transaction).
The current workaround is to create a subclass of the Java collection you
want and add a getter that returns this. Then you can reference the array
(using a getter getSelf()) as "self[1].property". Although this method works,
a method that doesn't require this extension would be preferable.
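The requested "[1].property"-style resolution can be sketched in a few lines of Python; the `resolve` helper and its behaviour are illustrative only, not Spring's BeanWrapper API:

```python
import re

def resolve(root, path):
    """Resolve a bean-style path such as '[1].name' against a bare
    collection or map, then follow dotted property names."""
    value = root
    for index, prop in re.findall(r"\[([^\]]+)\]|\.?(\w+)", path):
        if index:
            # A numeric token indexes a list; anything else is a map key.
            value = value[int(index)] if index.isdigit() else value[index]
        else:
            value = getattr(value, prop) if hasattr(value, prop) else value[prop]
    return value

commands = [{"name": "delete-a"}, {"name": "delete-b"}]
print(resolve(commands, "[1].name"))  # delete-b
```

With support like this, the selected delete commands could be bound directly as "[0].property", "[1].property", ... without the getSelf() workaround.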
* * *
**Affects:** 2.0.4
|
**Alex Antonov** opened **SPR-2058** and commented
When a BeanWrapper wraps an object that is a map or a collection of sorts, it
has trouble retrieving a value using a key property
i.e.
Person p = new Person("John");
Map map = new HashMap();
map.put("key", p);
BeanWrapperImpl wrapper = new BeanWrapperImpl(map);
String name = (String) wrapper.getPropertyValue("[key].name");
This kind of access is very possible when coming from a web layer using a
bind path of something like [key].name when the top-level object is itself a
map.
In this case, when calling errors.rejectValue("[key].name", ...) in the
validator, the call throws an exception due to the inability to find an
object referenced by [key].
The current workaround for this problem is to subclass a map and provide a
getter for every key you might have in the map, so that the path looks like
key.name, but this approach requires a lot of getter-creation overhead.
* * *
11 votes, 9 watchers
| 1 |
The first build against master where this happened is
https://ci.appveyor.com/project/StefanKarpinski/julia/build/1.0.5994/job/l01nwf0rwaedju0o,
for `6ec7c21`
I can reproduce locally, will try to see whether it's repeatable or
intermittent (once I walk over to stata)
|
I've gotten this to happen on 2 different Win64 computers, one Sandy Bridge,
one Haswell. Happens when running `runtests.jl all` in parallel, seemingly
more often the more cores I use for the tests. One of the workers gets stuck
on its first test - so usually linalg, but I just got it to happen even on the
strings test - while the rest of the workers happily finish everything else,
waiting right before running `parallel` at the very end like they're supposed
to.
This isn't just the usual linalg slowness, I've left these going on multiple
computers for half an hour or longer. The offending processes are stuck at
100% of a single core, but the memory consumption isn't changing at all.
This doesn't happen on Win32, or when `JULIA_CPU_CORES=1`. Any ideas how to
narrow this down? OpenBlas interaction? Win64 codegen problem? Something to do
with libuv and task spawning? Ignore it and hope it doesn't show up in normal
code?
| 1 |
There's now a number of issues open for thoughts about how to make the
printing of types more readable. Here's another notion: it would be nice if
printing could replace `Union`s with appropriate typealiases. For example:
julia> methods(permutedims)
#2 methods for generic function "permutedims":
permutedims(B::Union{Base.ReshapedArray{T<:Any,N<:Any,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N<:Any}}},DenseArray{T<:Any,N<:Any},SubArray{T<:Any,N<:Any,A<:Union{Base.ReshapedArray{T<:Any,N<:Any,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N<:Any}}},DenseArray},I<:Tuple{Vararg{Union{Base.AbstractCartesianIndex,Colon,Int64,Range{Int64}},N<:Any}},L<:Any}}, perm) at multidimensional.jl:959
permutedims{T,N}(A::AbstractArray{T,N}, perm) at permuteddimsarray.jl:47
But the first method is actually defined as `permutedims(B::StridedArray,
perm)` which is rather simpler to read.
This is easy to point out, but actually implementing it seems likely to be
hard starting from the `jl_uniontype_t` itself. Unless `Union` types get
modified to record whether they come from a `typealias`.
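One sketch of the bookkeeping (in Python, purely illustrative): keep a registry mapping expanded union strings back to alias names and rewrite longest-first, so longer unions collapse before shorter ones:

```python
# Hypothetical registry: expanded union -> alias name. The two entries
# mirror Base's BlasReal/BlasComplex aliases but are assumptions here.
ALIASES = {
    "Union{Float32,Float64}": "BlasReal",
    "Union{Complex{Float32},Complex{Float64}}": "BlasComplex",
}

def substitute_aliases(signature):
    # Longest expansion first avoids partially rewriting a longer union.
    for expanded in sorted(ALIASES, key=len, reverse=True):
        signature = signature.replace(expanded, ALIASES[expanded])
    return signature

sig = "eigfact!{T<:Union{Complex{Float32},Complex{Float64}}}(A)"
print(substitute_aliases(sig))  # eigfact!{T<:BlasComplex}(A)
```

String rewriting like this breaks as soon as printing order or whitespace differs, which is exactly why recording the alias on the `jl_uniontype_t` itself would be the more robust fix.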
|
I'd like to request some speculative brain cells spent on how we print method
signatures to the REPL.
Currently, we expand out all typealiases. While this policy has the advantage
of producing unambiguous output, the composition of type aliases through type
parameters leads to combinatorially long string representations.
A particularly egregious example in Base is `eigfact!`:
julia> methods(eigfact!)
#12 methods for generic function "eigfact!":
...
eigfact!{T<:Union{Complex{Float32},Complex{Float64}}}(A::Union{DenseArray{T,2},SubArray{T,2,A<:DenseArray{T<:Any,N<:Any},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD<:Any}}) at linalg/eigen.jl:50
eigfact!{T<:Union{Float32,Float64}}(A::Union{DenseArray{T,2},SubArray{T,2,A<:DenseArray{T<:Any,N<:Any},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD<:Any}}, B::Union{DenseArray{T,2},SubArray{T,2,A<:DenseArray{T<:Any,N<:Any},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD<:Any}}) at linalg/eigen.jl:121
eigfact!{T<:Union{Complex{Float32},Complex{Float64}}}(A::Union{DenseArray{T,2},SubArray{T,2,A<:DenseArray{T<:Any,N<:Any},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD<:Any}}, B::Union{DenseArray{T,2},SubArray{T,2,A<:DenseArray{T<:Any,N<:Any},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD<:Any}}) at linalg/eigen.jl:142
...
I estimate that only three people in the world can read the first method
signature and immediately recognize that the original calling signature was
eigfact!{T<:BlasComplex}(A::StridedMatrix{T})
and it would take an eagle eye to realize that the first method takes one
argument whereas the other two take two inputs each.
It would be much nicer to print something like the original method signature
instead of feeling overwhelmed by C++ expression template-like verbal
diarrhea. It's clear, though, that the additional bookkeeping of typealiases
and matching of typealiases to method signatures is a hard problem.
| 1 |
When a typeahead `process` call receives an array of objects instead of
strings, it passes "[object Object]" to `updater`. One would expect that
updater would receive the object, as `matcher`, `sorter`, and `highlighter`
work as expected.
The problem seems to be that the data is being stored by way of calls to
jQuery.attr.
One potential fix is changing
https://github.com/twitter/bootstrap/blob/master/js/bootstrap-typeahead.js#L47
and https://github.com/twitter/bootstrap/blob/master/js/bootstrap-
typeahead.js#L141 respectively as follows:
47 var val = this.$menu.find('.active').attr('data-value')
becomes
47 var val = this.$menu.find('.active').data('typeahead-value')
and
141 i = $(that.options.item).attr('data-value', item)
becomes
141 i = $(that.options.item).data('typeahead-value', item)
The underlying issue is jQuery's 'attr' function calls `toString` instead of
serializing, whereas 'data' serializes properly - preserving all the useful
bits of the object.
Note that I changed 'data-value' to 'typeahead-value' just to avoid any name
collisions (although I am sure they are pretty unlikely).
A workaround is to serialize with eg `JSON.stringify` the results of `sorter`,
and then `JSON.parse` the item passed to `updater`.
|
your current setup is this:
.clearfix() {
&:before,
&:after {
content: " "; // 1
display: table; // 2
}
&:after {
clear: both;
}
}
.clearfix {
.clearfix();
}
consider switching to this:
.clearfix {
&:before,
&:after {
content: " "; // 1
display: table; // 2
}
&:after {
clear: both;
}
}
.clearfix() {
&:extend(.clearfix all);
}
it will reduce the amount of resulting CSS when devs use your mixin in their
own Less files. There will still be some duplication, with .clearfix() calling
both the mixin .clearfix() and the selector .clearfix (a known issue with
less.js, and I do wish you would rename .clearfix() to .makeclearfix() or
.addclearfix() or whatever to alleviate this problem, but I know you won't).
When the good fellas over at LESS do fix that problem, this code (combined
with my suggested change)
.my-class {
.clearfix();
}
will result in:
.clearfix:before,
.clearfix:after,
.my-class:before,
.my-class:after {
content: " ";
display: table;
}
.clearfix:after,
.my-class:after {
clear: both;
}
.my-class {
color: blue;
}
and of course if a dev uses .clearfix() throughout their own Less files, the
resulting compiled CSS will see a pretty significant reduction in size
| 0 |
Google search (at the top right corner) is broken in both:
* http://scikit-learn.org/stable/
* http://scikit-learn.org/dev/
It is still working in:
* http://scikit-learn.org/0.17/
|
#### Description
I noticed problems with the search results on the scikit-learn website. I am
not sure if this problem just occurred today after 0.18 went live. Below is a
screenshot of how it looks like when I do a search on the main page; on
subpages the search does not seem to work altogether -- tested it on Chrome
and Safari.

#### Steps/Code to Reproduce
#### Expected Results
#### Actual Results
#### Versions
| 1 |
Sometimes, copying and pasting highlighted text from Atom to another app such
as a browser or Skype (or the reverse) causes Ubuntu 14.04 to freeze: the
keyboard stops responding, the mouse moves extremely slowly, and then the
whole system hangs, requiring me to hold the power button to turn it off.
|
1. Have two panes open
2. Visit the same file in both of 'em
3. Hit `M-b`
4. See two entries for the file
I expect to see one.
| 0 |
Hello,
I was trying to create a Czech lemmatisation analyzer using the stemmer
override filter and a Czech dictionary from aspell.
This dictionary contains around `300 000` words in base form plus some
suffix/prefix rules. After expansion, the file looks like this
Aakjaer Aakjaerech Aakjaery Aakjaerům Aakjaerů Aakjaerem Aakjaere Aakjaerovi Aakjaeru Aakjaera Aakjaerové
Aakjaerová Aakjaerovými Aakjaerovým Aakjaerových Aakjaerovou Aakjaerové
Aakjaerův Aakjaerovýma Aakjaerovými Aakjaerových Aakjaerovou Aakjaerovo Aakjaerovy Aakjaerovi Aakjaerovým
each line is one word with its forms.
Because of the rule format `form => lemma`, the final rule set is expanded
from `300 000` to `4 364 674` lines.
When I was trying to index Czech Wikipedia pages on my local machine (around
`400 000` documents), a `java.lang.OutOfMemoryError: Java heap space` error
occurred after approx 10 minutes of indexing (log file here)
I'm using snapshot build of elasticsearch
(54e7e309a5d407b2fb1123a79e6af9d62e41ea1e), `JAVA_OPTS -Xss200000 -Xms2g
-Xmx2g` with no other indices.
Index settings/mapping and river settings are in separate gist
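The blow-up from ~300 000 lemmas to ~4.3 million rules comes from inverting the compact one-line-per-lemma layout into one `form => lemma` rule per form, which can be sketched as:

```python
def expand_rules(lemma_to_forms):
    """Invert {lemma: [forms]} into the per-form 'form => lemma' lines
    expected by the stemmer override filter."""
    return [f"{form} => {lemma}"
            for lemma, forms in lemma_to_forms.items()
            for form in forms]

compact = {"Aakjaer": ["Aakjaerech", "Aakjaery", "Aakjaerem"]}
for rule in expand_rules(compact):
    print(rule)
# Aakjaerech => Aakjaer
# Aakjaery => Aakjaer
# Aakjaerem => Aakjaer
```

Every surface form becomes its own line, so the rule count is the total number of forms rather than the number of lemmas, hence the heap pressure when the whole set is loaded.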
I was trying to achieve this functionality using the synonym token filter,
because of the better format of synonym rules - `form1, form2, form3, form4
=> lemma` (so the number of rules is only about `300 000`).
But it's not the same: with the `stemmer override filter`, when a token is
not found in the rule set, the stemmer is used. I could probably replicate
that by adding a keyword marker and a stemmer to the filter chain, but I
don't think that is the right way to do it.
Please, is there some better 'compressed' format of `stemmer override filter`
rules? Any thoughts how to avoid `java.lang.OutOfMemoryError: Java heap space`
error?
|
**Elasticsearch version** : 5.0.0
**Plugins installed** : None
**JVM version** : 1.8.0_101
**OS version** : Ubuntu 16.04
**Description of the problem including expected versus actual behavior** :
I have added multiple context mappings to a completion suggester field. In
version 2.x and earlier, when filtering on those contexts, suggestions were
only returned if they matched all of the contexts. But that does not seem to
be the case with 5.0.0.
**Steps to reproduce** :
1. Mapping for my suggest field:
"suggest": {
  "type": "completion",
  "contexts": [
    {
      "name": "companyContext",
      "type": "category"
    },
    {
      "name": "usersContext",
      "type": "category"
    }
  ]
}
2. Index a document with suggest field contexts populated:
"suggest": {
  "input": "test team site",
  "contexts": {
    "companyContext": "vendor",
    "usersContext": ["chad", "fred"]
  }
}
3. Run a query supplying only one matching context:
"suggest": {
  "suggestions": {
    "text": "tes",
    "completion": {
      "field": "suggest",
      "contexts": {
        "companyContext": "vendor",
        "usersContext": "bill"
      }
    }
  }
}
I would expect that this would not return the indexed document, because the
usersContext is not a match. And that was the behavior in 2.x. But in 5.0.0,
the document that matches only the companyContext is returned.
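The 2.x behaviour being described (a document must match every supplied context filter) can be sketched as simple AND logic; the helper below illustrates the expected semantics and is not Elasticsearch's implementation:

```python
def matches_all_contexts(doc_contexts, query_contexts):
    """AND semantics: every queried context value must appear among the
    document's values for that context."""
    for name, wanted in query_contexts.items():
        values = doc_contexts.get(name, [])
        if isinstance(values, str):
            values = [values]
        if wanted not in values:
            return False
    return True

doc = {"companyContext": "vendor", "usersContext": ["chad", "fred"]}
query = {"companyContext": "vendor", "usersContext": "bill"}
print(matches_all_contexts(doc, query))  # False
```

Under these semantics the query above filters the document out; the 5.0.0 behaviour instead resembles OR across contexts.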
| 0 |
**I'm submitting a ...** (check one with "x")
[x] bug report => search github for a similar issue or PR before submitting
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
**Current behavior**
When using multiple @ HostListener decorator on same function, such as code
below
@HostListener('click', ['$event'])
@HostListener('mouseover', ['$event'])
@HostListener('focus',['$event'])
private eventRouter(e) {
console.log(e);
}
Only the first decorator is listened to, while the function receives the
event of the latest-applied decorator; the middle and last decorators are
never triggered.
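The expected behaviour (every stacked decorator registers its event) can be sketched with plain Python decorators; the registry here is a stand-in for Angular's event wiring, not its actual mechanism:

```python
LISTENERS = {}  # event name -> handler function

def host_listener(event):
    def register(fn):
        LISTENERS[event] = fn
        return fn  # returning fn unchanged lets the decorators stack
    return register

@host_listener("click")
@host_listener("mouseover")
@host_listener("focus")
def event_router(e):
    return f"handled {e}"

print(sorted(LISTENERS))  # ['click', 'focus', 'mouseover']
```

All three events end up registered against the same handler, which is what one would expect the three @HostListener decorators to do.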
**Expected behavior**
fix this bug
**Minimal reproduction of the problem with instructions**
**What is the motivation / use case for changing the behavior?**
**Please tell us about your environment:**
* **Angular version:** 2.3.0
* **Browser:** [all ]
* **Language:** [TypeScript 2.0.7]
* **Node (for AoT issues):** `node --version` =
|
**I'm submitting a ...** (check one with "x")
[ ] bug report => search github for a similar issue or PR before submitting
[x] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
**Current behavior**
Angular library authors should inline templates (html/css) to ensure overall
compatibility between different consumer types.
This can be done with a fairly simple script, but it has a few pitfalls:
* special care must be taken to maintain sourcemap lines (inline over TS sources before `ngc`).
* unit testing requires a separate module.id based setup to load templates into karma.
* no watch mode.
**Expected behavior**
Since inlining is heavily encouraged for library AOT compilation, it should be
included as an option for `ngc` (e.g.
`angularCompilerOptions.inlineTemplates`).
**Minimal reproduction of the problem with instructions**
An example library can be found at https://github.com/filipesilva/angular-
quickstart-lib.
**What is the motivation / use case for changing the behavior?**
Less work for everyone building and shipping components, one less point of
failure.
* **Angular version:** 4.x
* **Language:** [ TypeScript 2.x ]
/cc @IgorMinar @jasonaden
| 0 |
##### ISSUE TYPE
* Bug Report
##### COMPONENT NAME
include_role module
##### ANSIBLE VERSION
ansible 2.4.0.0
config file = /home/esio/work/ansible/ansible.cfg
configured module search path = [u'/home/esio/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.13 (default, Sep 5 2017, 08:53:59) [GCC 7.1.1 20170622 (Red Hat 7.1.1-3)]
##### CONFIGURATION
ANSIBLE_NOCOWS(/home/esio/work/ansible/ansible.cfg) = True
DEFAULT_BECOME_EXE(/home/esio/work/ansible/ansible.cfg) = sudo su -
DEFAULT_BECOME_METHOD(/home/esio/work/ansible/ansible.cfg) = su
DEFAULT_HASH_BEHAVIOUR(/home/esio/work/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/home/esio/work/ansible/ansible.cfg) = [u'/home/esio/work/ansible/inventories/test/hosts']
DEFAULT_REMOTE_TMP(/home/esio/work/ansible/ansible.cfg) = /tmp/.ansible-${USER}/tmp # workaround become permission issues
DEFAULT_ROLES_PATH(/home/esio/work/ansible/ansible.cfg) = [u'/home/esio/work/ansible/roles', u'/home/esio/work/ansible/vendor']
HOST_KEY_CHECKING(/home/esio/work/ansible/ansible.cfg) = False
RETRY_FILES_SAVE_PATH(/home/esio/work/ansible/ansible.cfg) = /home/esio/work/ansible/.retryfiles
##### OS / ENVIRONMENT
Fedora 26 x86_64
##### SUMMARY
When I use include_role, Ansible doesn't apply the role's default variables.
##### STEPS TO REPRODUCE
I have directories and files in my ansible config:
└── roles
├── r1
│ └── tasks
│ └── main.yml
└── r2
├── defaults
│ └── main.yml
└── tasks
└── main.yml
In roles/r1/tasks/main.yml
- name: Run role r2
  include_role:
    name: gpdw.deploy-hdfs-component
  vars:
    artifact_version: "{{ x[2] }}"
    artifact_id: "{{ x[1] }}"
  with_list: "{{ list }}"
  loop_control:
    loop_var: x
In roles/r2/tasks/main.yml
- name: debug
  debug:
    msg: "{{ artifact_extension }}"
In roles/r2/defaults/main.yml
artifact_extension: "tar.gz"
##### EXPECTED RESULTS
It should work and print tar.gz as debug.
##### ACTUAL RESULTS
Ansible fails with an error that the variable artifact_extension is
undefined. In my opinion, Ansible should use the variables from the included
role's defaults/main.yml file.
fatal: [ap-hdpen1t]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'artifact_extension' is undefined\n\nThe error appears to have been in '/home/esio/work/ansible/roles/gpdw.deploy-dq-all/tasks/main.yml': line 29, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: debug\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'artifact_extension' is undefined"
}
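The precedence the report expects (role defaults as the lowest layer, overridden by vars passed to include_role) can be sketched as a dict merge; this illustrates the expected lookup order only, not Ansible's internals:

```python
def effective_vars(role_defaults, passed_vars):
    """Defaults provide values only where the caller did not; vars
    passed into the role override them key by key."""
    merged = dict(role_defaults)
    merged.update(passed_vars)
    return merged

defaults = {"artifact_extension": "tar.gz"}
passed = {"artifact_id": "x", "artifact_version": "1.0"}
print(effective_vars(defaults, passed)["artifact_extension"])  # tar.gz
```

Under this model `artifact_extension` would resolve to "tar.gz" inside the included role, which is what the EXPECTED RESULTS section describes.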
|
##### ISSUE TYPE
* Feature Idea
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
ansible 2.2.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
None.
##### OS / ENVIRONMENT
OS: CentOS7, but shouldn't matter.
##### SUMMARY
When including a role using include_role its vars do not become available to
the rest of the playbook. This does work when using the regular roles
definition.
##### STEPS TO REPRODUCE
A minimal test-case lives here: https://github.com/wouterhund/demo-ansible-
include_role-bug/blob/master/bad-playbook.yml
https://github.com/wouterhund/demo-ansible-include_role-
bug/blob/master/roles/testrole/vars/main.yml
The actual code that triggers the issue:
---
- hosts: localhost
  tasks:
    - include_role:
        name: testrole
    - debug: msg="{{some_var}}"
`roles/testrole/vars/main.yml`
---
some_var: "Hello world!"
##### EXPECTED RESULTS
I would expect `good-playbook.yml` and `bad-playbook.yml` from the linked
github to work identically, however `bad-playbook.yml` fails. I expect
variables defined within an included role to be made available to the rest of
the playbook.
For the example above I'd expect it to pass and display "Hello world!"
##### ACTUAL RESULTS
» ansible-playbook -vvvv bad-playbook.yml 1 ↵
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: bad-playbook.yml *****************************************************
1 plays in bad-playbook.yml
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1487936470.27-110880382737009 `" && echo ansible-tmp-1487936470.27-110880382737009="` echo ~/.ansible/tmp/ansible-tmp-1487936470.27-110880382737009 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp017eF_ TO /home/vagrant/.ansible/tmp/ansible-tmp-1487936470.27-110880382737009/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1487936470.27-110880382737009/ /home/vagrant/.ansible/tmp/ansible-tmp-1487936470.27-110880382737009/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /home/vagrant/.ansible/tmp/ansible-tmp-1487936470.27-110880382737009/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1487936470.27-110880382737009/" > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [testrole : debug] ********************************************************
task path: /home/vagrant/demo-ansible-include_role-bug/roles/testrole/tasks/main.yml:2
ok: [localhost] => {
"msg": "Hello world!"
}
TASK [debug] *******************************************************************
task path: /home/vagrant/demo-ansible-include_role-bug/bad-playbook.yml:7
fatal: [localhost]: FAILED! => {
"failed": true,
"msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'some_var' is undefined\n\nThe error appears to have been in '/home/vagrant/demo-ansible-include_role-bug/bad-playbook.yml': line 7, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n name: testrole\n - debug: msg=\"{{some_var}}\"\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n"
}
to retry, use: --limit @/home/vagrant/demo-ansible-include_role-bug/bad-playbook.retry
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1
| 1 |
**Migrated issue, originally created by Michael Bayer (@zzzeek)**
stack trace for reserved bind names only on the second go, meaning the
insert() has changed state upon compile():
import sqlalchemy as sa
meta = sa.MetaData()
table = sa.Table('mytable', meta,
sa.Column('foo', sa.String),
sa.Column('bar', sa.String, default='baz'),
)
select = sa.select([table.c.foo])
insert = table.insert().from_select(['foo'], select)
print insert.compile()
print insert.compile()
|
**Migrated issue, originally created by Michael Bayer (@zzzeek)**
these may all be from test_baked not cleaning up connections
#!
_ ERROR at teardown of ResultTest_postgresql_psycopg2cffi.test_w_new_entities __
[gw0] linux2 -- Python 2.7.8 /var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/bin/python
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/testing/fixtures.py", line 273, in teardown_class
cls._teardown_once_metadata_bind()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/testing/fixtures.py", line 155, in _teardown_once_metadata_bind
drop_all_tables(cls.metadata, cls.bind)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/testing/engines.py", line 108, in drop_all_tables
metadata.drop_all(bind)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/sql/schema.py", line 3641, in drop_all
tables=tables)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 1850, in _run_visitor
with self._optional_conn_ctx_manager(connection) as conn:
File "/opt/pypy-python2.7/lib-python/2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 1843, in _optional_conn_ctx_manager
with self.contextual_connect() as conn:
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
return fn()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 376, in connect
return _ConnectionFairy._checkout(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 708, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 480, in checkout
rec = pool._do_get()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 1042, in _do_get
(self.size(), self.overflow(), self._timeout))
TimeoutError: QueuePool limit of size 5 overflow 0 reached, connection timed out, timeout 0
--------------------------- Captured stdout teardown ---------------------------
2015-05-01 01:44:56,618 ERROR sqlalchemy.pool.QueuePool Exception during reset or similar
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 631, in _finalize_fairy
fairy._reset(pool)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 765, in _reset
pool._dialect.do_rollback(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/default.py", line 412, in do_rollback
dbapi_connection.rollback()
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/connection.py", line 42, in check_closed_
raise exceptions.InterfaceError('connection already closed')
InterfaceError: connection already closed
2015-05-01 01:44:56,619 ERROR sqlalchemy.pool.QueuePool Exception during reset or similar
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 631, in _finalize_fairy
fairy._reset(pool)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 765, in _reset
pool._dialect.do_rollback(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/default.py", line 412, in do_rollback
dbapi_connection.rollback()
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/connection.py", line 42, in check_closed_
raise exceptions.InterfaceError('connection already closed')
InterfaceError: connection already closed
2015-05-01 01:44:56,620 ERROR sqlalchemy.pool.QueuePool Exception during reset or similar
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 631, in _finalize_fairy
fairy._reset(pool)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 765, in _reset
pool._dialect.do_rollback(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/default.py", line 412, in do_rollback
dbapi_connection.rollback()
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/connection.py", line 42, in check_closed_
raise exceptions.InterfaceError('connection already closed')
InterfaceError: connection already closed
2015-05-01 01:44:56,621 ERROR sqlalchemy.pool.QueuePool Exception during reset or similar
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 631, in _finalize_fairy
fairy._reset(pool)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 765, in _reset
pool._dialect.do_rollback(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/default.py", line 412, in do_rollback
dbapi_connection.rollback()
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/connection.py", line 42, in check_closed_
raise exceptions.InterfaceError('connection already closed')
InterfaceError: connection already closed
2015-05-01 01:44:56,621 ERROR sqlalchemy.pool.QueuePool Exception during reset or similar
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 631, in _finalize_fairy
fairy._reset(pool)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 765, in _reset
pool._dialect.do_rollback(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/default.py", line 412, in do_rollback
dbapi_connection.rollback()
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/connection.py", line 42, in check_closed_
raise exceptions.InterfaceError('connection already closed')
InterfaceError: connection already closed
=================================== FAILURES ===================================
________ ResultTest_postgresql_psycopg2cffi.test_spoiled_half_w_params _________
[gw0] linux2 -- Python 2.7.8 /var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/bin/python
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/test/ext/test_baked.py", line 341, in test_spoiled_half_w_params
bq.spoil().add_criteria(fn3)(sess).params(id=7).all(),
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/ext/baked.py", line 295, in all
return list(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/ext/baked.py", line 241, in __iter__
return iter(self._as_query())
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/query.py", line 2515, in __iter__
return self._execute_and_instances(context)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
close_with_result=True)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
**kw)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 882, in connection
execution_options=execution_options)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
engine, execution_options)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
conn = bind.contextual_connect()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
return fn()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 376, in connect
return _ConnectionFairy._checkout(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 708, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 480, in checkout
rec = pool._do_get()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 1042, in _do_get
(self.size(), self.overflow(), self._timeout))
TimeoutError: QueuePool limit of size 5 overflow 0 reached, connection timed out, timeout 0
________ ResultTest_postgresql_psycopg2cffi.test_subquery_eagerloading _________
[gw0] linux2 -- Python 2.7.8 /var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/bin/python
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/test/ext/test_baked.py", line 527, in test_subquery_eagerloading
self.assert_sql_count(testing.db, go, 2)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/testing/assertions.py", line 466, in assert_sql_count
db, callable_, assertsql.CountStatements(count))
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/testing/assertions.py", line 447, in assert_sql_execution
callable_()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/test/ext/test_baked.py", line 525, in go
result = bq(sess).all()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/ext/baked.py", line 295, in all
return list(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/ext/baked.py", line 257, in __iter__
with_session(self.session)._execute_and_instances(context)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
close_with_result=True)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
**kw)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 882, in connection
execution_options=execution_options)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
engine, execution_options)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
conn = bind.contextual_connect()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
return fn()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 376, in connect
return _ConnectionFairy._checkout(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 708, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 480, in checkout
rec = pool._do_get()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 1042, in _do_get
(self.size(), self.overflow(), self._timeout))
TimeoutError: QueuePool limit of size 5 overflow 0 reached, connection timed out, timeout 0
____________ ResultTest_postgresql_psycopg2cffi.test_w_new_entities ____________
[gw0] linux2 -- Python 2.7.8 /var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/bin/python
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/test/ext/test_baked.py", line 368, in test_w_new_entities
bq(session).all(),
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/ext/baked.py", line 295, in all
return list(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/ext/baked.py", line 257, in __iter__
with_session(self.session)._execute_and_instances(context)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
close_with_result=True)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
**kw)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 882, in connection
execution_options=execution_options)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
engine, execution_options)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
conn = bind.contextual_connect()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
return fn()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 376, in connect
return _ConnectionFairy._checkout(self)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 708, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 480, in checkout
rec = pool._do_get()
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/pool.py", line 1042, in _do_get
(self.size(), self.overflow(), self._timeout))
TimeoutError: QueuePool limit of size 5 overflow 0 reached, connection timed out, timeout 0
_________ ExecutionTest_postgresql_psycopg2cffi.test_parameter_execute _________
[gw0] linux2 -- Python 2.7.8 /var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/bin/python
Traceback (most recent call last):
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/test/orm/test_session.py", line 61, in test_parameter_execute
{"id": 8, "name": "u8"}
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/orm/session.py", line 1023, in execute
bind, close_with_result=True).execute(clause, params or {})
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 914, in execute
return meth(self, multiparams, params)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
compiled_sql, distilled_params
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
context)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 1339, in _handle_dbapi_exception
exc_info
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/base.py", line 1116, in _execute_context
context)
File "/var/jenkins/workspace/sqlalchemy-default-sqlite-pypy-2.7/.tox/full/site-packages/sqlalchemy/engine/default.py", line 439, in do_executemany
cursor.executemany(statement, parameters)
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 26, in check_closed_
return func(self, *args, **kwargs)
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 49, in check_async_
return func(self, *args, **kwargs)
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 286, in executemany
self.execute(query, params)
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 26, in check_closed_
return func(self, *args, **kwargs)
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 259, in execute
self._pq_execute(self._query, conn._async)
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 692, in _pq_execute
self._pq_fetch()
File "/opt/pypy-python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 753, in _pq_fetch
raise self._conn._create_exception(cursor=self)
IntegrityError: (psycopg2cffi._impl.exceptions.IntegrityError) duplicate key value violates unique constraint "users_pkey"
DETAIL: Key (id)=(7) already exists.
[SQL: u'INSERT INTO users (id, name) VALUES (%(id)s, %(name)s)'] [parameters: ({'id': 7, 'name': 'u7'}, {'id': 8, 'name': 'u8'})]
| 0 |
spec:
  selector:
    app: foobar-service
    track: prod
    track: post-prod-staging
I can't seem to find how to define _IN_ semantics for a svc's selector. I tried the above and it resulted in only the last one being used:
> kubectl describe svc svc-prod-svc
Name: svc-prod-svc
Namespace: default
Labels: <none>
Selector: app=foobar-service,track=post-prod-staging
kubectl should probably notify about the duplication and fail, or at least print a warning.
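For what it's worth, the last-one-wins behavior is plain mapping semantics: a duplicate key silently overwrites the earlier value before kubectl ever sees it. A JS object literal (a hypothetical stand-in for the YAML mapping, not anything kubectl runs) behaves the same way:

```javascript
// Duplicate keys in a mapping silently overwrite earlier values -
// only the last `track` survives, with no warning or error.
const selector = {
  app: 'foobar-service',
  track: 'prod',
  track: 'post-prod-staging', // last occurrence wins
};
```

Separately, a Service selector is an equality-based map (a logical AND of key=value pairs); set-based operators like `In` only exist in `matchExpressions`, which Services don't support.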
|
**Is this a request for help?** (If yes, you should use our troubleshooting
guide and community support channels, see
http://kubernetes.io/docs/troubleshooting/.):
**What keywords did you search in Kubernetes issues before filing this one?**
(If you have found any duplicates, you should instead reply there.):
* * *
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
**Kubernetes version** (use `kubectl version`):
**Environment** :
* **Cloud provider or hardware configuration** :
* **OS** (e.g. from /etc/os-release):
* **Kernel** (e.g. `uname -a`):
* **Install tools** :
* **Others** :
**What happened** :
**What you expected to happen** :
**How to reproduce it** (as minimally and precisely as possible):
**Anything else do we need to know** :
| 0 |
I have been unable to get past a certain line in some of the challenges. It literally just stops my cursor and I cannot go down any further. I had to spam refresh for it to work in 3 different browsers, and I have deleted my browser cache but the problem still persists.
Windows 10, chrome latest firefox latest and vivaldi latest.
|
Challenge Waypoint: Use a CSS Class to Style an Element has an issue.
User Agent is: `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36`.
Please describe how to reproduce this issue, and include links to screenshots
if possible.
My code:
<style>
  .red-text {
    color: red;
  }
</style>
<h2 class="red-text">CatPhotoApp</h2>
<p>Kitty ipsum dolor sit amet, shed everywhere shed everywhere stretching attack your ankles chase the red dot, hairball run catnip eat the grass sniff.</p>
| 0 |
It would be nice, after selecting a package in the settings view, to be able to move to the next/previous installed package using the up/down arrow keys.
I'm using Atom 136 on Windows 8.1 64 bit installed via chocolatey.
|
_(originally reported over atatom/atom#1635)_
support/f96903ba9fc811e38844f9159f8f1555
> Allow navigation of packages in "Settings" (the left pane with the list of
> packages) via up/down arrows.
>
> THE KING OF KEYBOARDS WILL THANK YOU
🙇 👑 🔢
| 1 |
I am using a modal with the remote option. I'd like to be able to execute some
code once the modal-body is loaded.
Currently the only events available are `show` and `shown`. I tried using `shown`, but loading the modal-body occurs after the shown event is fired.
How could I achieve this?
|
I think it would be a good idea to add a 'loaded' callback to modals. Something like this:
this.options.remote && this.$element.find('.modal-body').load(this.options.remote, $.proxy(function () {
  this.$element.trigger('loaded')
}, this))
Here's complete code https://gist.github.com/3486192
I can make pull request if this functionality is really needed.
| 1 |
I want to customize the frameless window, so I need to style the borders, but
at the current time, there is no way to do that.
|
Is it possible to set the window background to transparent and then allow the
transparency level to be controlled with CSS? Similar to TextMate?

/cc @zcbenz
| 1 |
## Bug Report
**Current Behavior**
In loose mode, `@babel/plugin-proposal-class-properties` emits a property
assignment for TypeScript definite assignments.
**Input Code**
REPL (I'm not sure if it's possible to turn on loose mode for the plugin)
TS Playground
// Like Object.assign but will not assign properties which are already defined
declare const assignDefaults: typeof Object.assign
class Foo {
  bar!: number;
  constructor(overrides: { bar: number }) {
    assignDefaults(this, overrides);
  }
}
**Expected behavior/code**
A definite assignment should not emit any JS.
// Like Object.assign but will not assign properties which are already defined
class Foo {
  constructor(overrides) {
    assignDefaults(this, overrides);
  }
}
**Babel Configuration (.babelrc, package.json, cli command)**
{
  "presets": ["@babel/preset-typescript"],
  "plugins": [["@babel/plugin-proposal-class-properties", { "loose": true }]]
}
**Environment**
* Babel version(s): 7.5.5
* Node/npm version: Node 12.6 / yarn 1.15.2
* OS: macOS 10.14.5
* Monorepo: no
* How you are using Babel: cli
**Additional context/Screenshots**
This was previously raised in #7997, but considered expected behavior. It was
mentioned that it could be reconsidered for `loose` mode, however.
I do feel it should be reconsidered one way or another; emitting assignments for these declarations is problematic in the same way emitting for a `declare` would be. Additionally, this syntax is not valid JS, so the only applicable spec compliance is that of TS.
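To make the cost concrete, here is a hedged sketch of why the emitted assignment breaks the pattern. The `assignDefaults` body below is a stand-in implementation matching the comment in the input code ("like Object.assign but will not assign properties which are already defined"); it is not from the issue itself.

```javascript
// Hypothetical stand-in: assign only keys not already defined on target.
function assignDefaults(target, overrides) {
  for (const key of Object.keys(overrides)) {
    if (!(key in target)) target[key] = overrides[key];
  }
  return target;
}

// What loose class-properties currently emits for `bar!: number`:
class FooLoose {
  constructor(overrides) {
    this.bar = void 0; // the emitted definite-assignment initializer
    assignDefaults(this, overrides); // skipped: `bar` already exists
  }
}

// What the issue asks for - no emit at all:
class FooExpected {
  constructor(overrides) {
    assignDefaults(this, overrides); // applied: `bar` is genuinely absent
  }
}

new FooLoose({ bar: 7 }).bar;    // undefined - the default is never applied
new FooExpected({ bar: 7 }).bar; // 7
```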
|
### Choose one: is this a bug report or feature request?
Bug report.
### Input Code
class Text extends TextRecord implements Types.TextNode {
  public readonly firstChild: Types.NestedNode
  public readonly nextSibling: Types.NestedNode
  public readonly type: Types.NodeType.Text
  public readonly uuid: string
  public readonly value: string
  public constructor(props: Types.TextNode) {
    super(props)
  }
}
### Babel/Babylon Configuration (.babelrc, package.json, cli command)
module.exports = {
  presets: [
    ['@babel/env', {
      exclude: [
        'transform-async-to-generator',
        'transform-regenerator',
      ],
      targets: {
        browsers: ['last 2 versions'],
      },
      modules: 'commonjs',
      loose: true,
      useBuiltIns: 'usage',
    }],
    '@babel/react',
    ['@babel/stage-0', {
      loose: true,
    }],
    '@babel/typescript',
  ],
  plugins: [
    'autobind-class-methods',
    ['module:fast-async', {
      compiler: {
        lazyThenables: true,
        parser: {
          sourceType: 'module',
        },
        promises: true,
        wrapAwait: true,
      },
      useRuntimeModule: true,
    }],
    'lodash',
    './src/vendor/babel-plugin-ramda',
    'react-loadable/babel',
    'reflective-bind/babel',
    ['styled-components', {
      displayName: true,
      ssr: true,
    }],
    'transform-promise-to-bluebird',
    ['@babel/transform-regenerator', {
      async: false,
      asyncGenerators: true,
      generators: true,
    }],
    '@babel/transform-runtime',
  ],
}
### Expected Behavior
Babel should strip the type annotations from the class altogether, à la
TypeScript's transpiled code:
var Text = /** @class */ (function (_super) {
  __extends(Text, _super);
  function Text(props) {
    return _super.call(this, props) || this;
  }
  return Text;
}(TextRecord));
### Current Behavior
Babel adds code to the constructor assigning `void 0` to the annotated class
properties:
var Text =
/*#__PURE__*/
function (_TextRecord) {
  (0, _inheritsLoose2.default)(Text, _TextRecord);
  function Text(props) {
    var _this12;
    _this12 = _TextRecord.call(this, props) || this;
    _this12.firstChild = void 0;
    _this12.nextSibling = void 0;
    _this12.type = void 0;
    _this12.uuid = void 0;
    _this12.value = void 0;
    return _this12;
  }
  return Text;
}(TextRecord);
### Possible Solution
It would be great if either the TypeScript transform specifically stripped
annotation-only class properties or the class properties transform generally
didn't add assignments for properties that aren't defined.
### Context
It's a common pattern when using Immutable.js with TypeScript to thread strong
typings through Immutable Records by creating classes that extend the records
and add type annotations to their properties. This works with the TypeScript
compiler because TypeScript completely strips the type annotations from the
class properties; however, it breaks with Babel because after the `super` call
Babel is effectively attempting to set new properties on an Immutable Record,
which throws a runtime error:
saga.ts:89 Error: Cannot set on an immutable record.
at invariant (immutable.es.js:1873)
at Text.set (immutable.es.js:5550)
at new Text (records.ts:251)
at Array.postReviver (lib.ts:33)
at fromJSWith (immutable.es.js:5682)
at immutable.es.js:5685
at immutable.es.js:1222
at ArraySeq.__iterate (immutable.es.js:432)
at IndexedCollection.mappedSequence.__iterateUncached (immutable.es.js:1221)
at IndexedCollection.__iterate (immutable.es.js:302)
at IndexedCollection.toArray (immutable.es.js:4575)
at List (immutable.es.js:3064)
at IndexedCollection.toList (immutable.es.js:4633)
at Object.postReviver (lib.ts:39)
at fromJSWith (immutable.es.js:5682)
at immutable.es.js:5685
I know it's a bit of a corner case, but it's making it pretty challenging to
enjoy the benefits of TypeScript with Immutable.
### Your Environment
software | version(s)
---|---
Babel | `v7.0.0-beta.42`
Babylon | N/A
node | `v9.8.0`
npm | `v5.7.1`
Operating System | macOS 10.13.4
| 1 |
**Rossen Stoyanchev** opened **SPR-9622** and commented
* * *
**Affects:** 3.1.2
This issue is a backport sub-task of #13856
**Issue Links:**
* #14302 Issue/Problem on redirect after migration project on RequestMappingHandlerMapping/RequestMappingHandlerAdapter ( _ **"is duplicated by"**_ )
1 votes, 2 watchers
|
**Rossen Stoyanchev** opened **SPR-8474** and commented
With suffix pattern matching "/users" also matches to "/users.*". This is
useful for content type negotiation - e.g. /users.xml, /users.pdf - but can
lead to ambiguity when extracting URI template variables.
For example given "/users/{user}":
1. "/users/1.json" should extract "1"
2. "/users/john.j.joe" should extract "john.j.joe"
Currently the above cannot be supported at the same time. You can only turn
suffix pattern matching on or off. A simple solution could look for a single
"." only but then this would be impossible:
"/users/john.j.joe.json" should extract "john.j.joe"
Ideally the PatternsRequestCondition should be able to decide if the suffix
represents a known file extension (.xml, .json) similar to how the
ContentNegotiatingViewResolver is configured today.
This should become possible as part of the content negotiation improvements
planned for Spring 3.2 (#13057).
* * *
**Affects:** 3.1 M2
This issue is a sub-task of #13057
**Issue Links:**
* #12288 Allow valid file extension paths for content negotiation to be specified ( _ **"duplicates"**_ )
* #14694 404 error when working with .htm servlet-mapping
**Referenced from:** commits `4fd7645`, `9cc4bd8`
2 votes, 3 watchers
| 0 |
HttpCache stores empty content when saving a StreamedResponse, because StreamedResponse::getContent always returns false.
|
This is my first Symfony bug report! I'm working with v2.3.4, and the relevant
code in master branch hasn't changed since that release.
Yesterday I was building a Silex controller that generates images, and I hoped
to use HttpCache so I could delay generating and storing images the right way
in the web root.
Using Symfony's built-in `HttpCache\Store` class, I found that the methods `Store::generateContentDigest()` and `Store::save()` were calling
`BinaryFileResponse::getContent()` which returns FALSE. (So does
StreamedResponse, so I assume this problem occurs in that case too.)
However, the content digest returned still has a value of `en`, which
determines the save location of the cached response. I think the checks in
`Store::save()` all pass because this directory exists if any prior responses
have been successfully cached.
When the application handles other requests for the same file, it considers it
a cache hit even though there is no saved content. I don't know if this is
related to the code comments in `Store::lookup()` about returning null and
purging the entry from the metadata store.
Ideally, the HttpCache would cache even streaming responses, but this may only
have an inelegant solution. Since actual reverse proxy caches like Varnish
would be able to cache the streamed response, I don't want to avoid setting
appropriate cache headers on the response in my application. I guess HttpCache
should refuse to store metadata when `Response::getContent()` returns false.
| 1 |
Referring to #10611, my issue just got closed without being solved. One admin
just 'assuming' something and then the thread gets closed? Great.
So back on this, the issue still remains: my content shifts a little when there are fewer than 4 thumbnails. Everything moves about 2 cm when there are fewer than 4 thumbnails.
I wrapped everything in a container, and we are not talking of a scrollbar. I
did try to use a clearfix after every 4 thumbnails (4 x 3col = 12col) and that
didn't work either.
Please don't just close this like you did last time.
|
On this page, click "Launch demo modal", then run this JS in the console:
$("#myModal").modal("hide").modal("show")
The dialog should hide and show again, which it does, but the background
overlay also turns completely black and opaque, which it shouldn't.
JsFiddle link: http://jsfiddle.net/fgVv3/12/
| 0 |
`Deno.readTextFile`'s error is not too descriptive, in my opinion:
error: Uncaught NotFound: No such file or directory (os error 2)
...
Maybe it could include a bit more information, like the path it was trying to
read?
error: Uncaught NotFound: No such file or directory (os error 2): "path/to/file.ts"
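Until then, a userland workaround is easy to sketch: wrap the call and rethrow with the path appended. `readWithPath` and `readFn` are made-up names; `readFn` stands in for any reader such as `Deno.readTextFile`, and the error is only assumed to be a standard `Error`.

```javascript
// Hedged sketch: rethrow read errors with the offending path appended.
async function readWithPath(readFn, path) {
  try {
    return await readFn(path);
  } catch (err) {
    err.message = `${err.message}: "${path}"`;
    throw err;
  }
}
```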
|
Hey,
Thinking about the I/O stuff made me realize that since we don't have `defer`
in JavaScript and we have exceptions so:
func main() {
    f, err := os.Open("/tmp/test.txt")
    if err != nil {
        return
    }
    defer f.Close()
    n, err := fmt.Fprintln(f, "data")
    // skip checking n and `err` for brevity but code should
}
Could be:
async function main() {
  const f = await createFile('/tmp/test.txt'); // ignore API bikeshed
  try {
    const n = await f.write(data);
  } finally {
    f.close();
  }
}
Which unlike `defer` doesn't really stack up too nicely with multiple files:
async function main() {
  const f = await createFile('/tmp/test.txt');
  try {
    let f2;
    try {
      f2 = await createFile('/tmp/test2.txt');
      await f.write('data');
      await f2.write('data'); // or Promise.all and do it concurrently
    } finally {
      if (f2) f2.close(); // by the way - is this synchronous?
    }
  } finally {
    f.close();
  }
}
This sort of code is very hard to write correctly - especially if resources are acquired concurrently (we don't wait for the first createFile to finish before doing the second) and some of them fail.
We can do other stuff instead for resource management:
* We can expose disposers for resource management.
* We can expose a `using` function from deno and have a "real" resource" story.
* Something else?
Here is what the above example looks like with exposing a `using`:
import { using } from 'deno'

async function main() {
  await using(createFile('/tmp/test.txt'), createFile('/tmp/test2.txt'), async (test1, test2) => {
    // when the promise this returns resolves - both files are closed
  }); // can .catch here to handle errors or wrap with try/catch
}
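For comparison, here is a minimal runnable sketch of such a helper. This is an assumption about the API shape, not Deno's (or bluebird's) actual implementation: resources are opened by zero-arg async "acquire" functions and are assumed to expose a `close()` method.

```javascript
// Minimal `using` sketch: acquire resources, run the body, and guarantee
// close() on everything that was actually opened - even if the body throws.
async function using(acquires, body) {
  const opened = [];
  try {
    // Acquire sequentially so a failure can't leak an untracked resource.
    for (const acquire of acquires) {
      opened.push(await acquire());
    }
    return await body(...opened);
  } finally {
    // Close in reverse order; swallow close() errors so they don't mask
    // the original exception from the body.
    for (const resource of opened.reverse()) {
      try { resource.close(); } catch (_) {}
    }
  }
}
```

Taking acquire *functions* rather than already-started promises is deliberate: if the second acquisition fails, a promise-based signature could still resolve the first resource with nobody left to close it.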
We wrote some prior art in bluebird in here \- C# also does this with `using`
(I asked before defining `using` in bluebird and got an interesting
perspective from Eric). Python has `with` and Java has `try with resource`.
Since this is a problem that "browser TypeScript" doesn't commonly have, I suspect it will be a few years before a solution comes from TC39, if at all - issues were opened in TypeScript but it was deemed out of scope. Some CCs to get the discussion started:
* @spion who worked on `defer` for bluebird coroutines and `using` with me.
* @littledan for a TC39 reference and knowing if there is interest in solving this at a language level.
* @1st1 who did the work on asynchronous contexts for Python
We also did some discussion in Node but there is no way I'm aware of for Node
to do this nicely that won't be super risky - Deno seems like the perfect
candidate for a safer API for resource management.
@ry if you prefer this as a PR with a concrete proposal for either approach
let me know. Alternatively if you prefer the discussion to happen at a later
future point let me know.
| 0 |
One of our application's users has a problem starting the electron executable. It even happens with the original executable available for download from the releases page, without additional modifications on our side. We've tested the following electron versions: 2.0.10 and 3.0.5, and all show the same behavior for this specific user.
* Operating System (Platform and Version): Windows 7
**Actual behavior**
electron.exe starts and fails silently not reaching our code.
**Additional Information**
Here is stack trace from crash dump for version 3.0.5. It was collected using
ProcDump (options -ma -e 1 -w electron) and WinDbg (with debug symbols for
version 3.0.5).
0:000> k
# ChildEBP RetAddr
00 002deaa8 7733874f ntdll!NtWaitForSingleObject+0x15
01 002deb2c 7733887d ntdll!RtlReportExceptionEx+0x14b
02 002deb84 77330f09 ntdll!RtlReportException+0x86
03 002deb9c 773110cd ntdll!LdrpInitializeProcessWrapperFilter+0x63
04 002deba8 77304fc4 ntdll!_LdrpInitialize+0xef
05 002debbc 77304e59 ntdll!_EH4_CallFilterFunc+0x12
06 002debe4 772f34c1 ntdll!_except_handler4+0x8e
07 002dec08 772f3493 ntdll!ExecuteHandler2+0x26
08 002dec2c 772f3434 ntdll!ExecuteHandler+0x24
09 002decb8 772a0163 ntdll!RtlDispatchException+0x127
0a 002decb8 7597338d ntdll!KiUserExceptionDispatcher+0xf
0b 002df180 0f7d22fa KERNELBASE!DebugBreak+0x2
0c 002df198 0f7bb7bd node!uv_fatal_error+0x8a [c:\projects\electron-39ng6\vendor\node\deps\uv\src\win\error.c @ 62]
0d 002df5c0 0f7d3413 node!uv_winsock_init+0x1dd [c:\projects\electron-39ng6\vendor\node\deps\uv\src\win\winsock.c @ 138]
0e 002df5e8 0f7c8e2b node!uv_init+0x43 [c:\projects\electron-39ng6\vendor\node\deps\uv\src\win\core.c @ 207]
0f (Inline) -------- node!uv__once_inner+0x32 [c:\projects\electron-39ng6\vendor\node\deps\uv\src\win\thread.c @ 65]
10 002df600 0f7bcb48 node!uv_once+0x4b [c:\projects\electron-39ng6\vendor\node\deps\uv\src\win\thread.c @ 87]
11 (Inline) -------- node!uv__once_init+0xf [c:\projects\electron-39ng6\vendor\node\deps\uv\src\win\core.c @ 296]
12 002df61c 0f6c1155 node!uv_hrtime+0x18 [c:\projects\electron-39ng6\vendor\node\deps\uv\src\win\util.c @ 473]
13 002df620 00509293 node!node::performance::`dynamic initializer for 'timeOrigin''+0x5 [c:\projects\electron-39ng6\vendor\node\src\node_perf.cc @ 36]
14 002df63c 0ff9e6f2 ucrtbase!_initterm+0x43
15 002df67c 0ff9e64b node!dllmain_crt_process_attach+0x8f [f:\dd\vctools\crt\vcstartup\src\startup\dll_dllmain.cpp @ 63]
16 002df68c 0ff9e83d node!dllmain_crt_dispatch+0x3b [f:\dd\vctools\crt\vcstartup\src\startup\dll_dllmain.cpp @ 137]
17 002df6cc 0ff9e92c node!dllmain_dispatch+0x59 [f:\dd\vctools\crt\vcstartup\src\startup\dll_dllmain.cpp @ 194]
18 002df6e0 772c9264 node!_DllMainCRTStartup+0x1c [f:\dd\vctools\crt\vcstartup\src\startup\dll_dllmain.cpp @ 252]
19 002df700 772cfe97 ntdll!LdrpCallInitRoutine+0x14
1a 002df7f4 772db454 ntdll!LdrpRunInitializeRoutines+0x26f
1b 002df974 772d9f11 ntdll!LdrpInitializeProcess+0x1402
1c 002df9c4 772c9789 ntdll!_LdrpInitialize+0x78
1d 002df9d4 00000000 ntdll!LdrInitializeThunk+0x10
|
I use libuv (version 1.9.0) as the networking framework in my Windows client,
but I have received some crash reports from the uv_winsock_init function. The
call stack is below; the printed error is socket error 10106. I have also
received another stack at winsock.c line 121 with socket error 10014. Is this
a bug?
STACK_TEXT:
07c3f478 037e879c 00000000 00000560 5a371f5e MainFrame!uv_fatal_error+0x80 [g:\mojo-release\mojo_v3.5.0\src\net\atlas\atlas\platform\win32\libuv\src\win\error.c @ 61]
07c3f8a0 037d38ad 00000560 07c3f8d4 07c3f8d4 MainFrame!uv_winsock_init+0x1fc [g:\mojo-release\mojo_v3.5.0\src\net\atlas\atlas\platform\win32\libuv\src\win\winsock.c @ 148]
07c3f8b4 03d43d44 07c3f8d4 037e83e6 078304e8 MainFrame!uv_init+0x2d [g:\mojo-release\mojo_v3.5.0\src\net\atlas\atlas\platform\win32\libuv\src\win\core.c @ 113]
07c3f8b8 07c3f8d4 037e83e6 078304e8 078305a8 MainFrame!uv_init_guard_+0x4
WARNING: Frame IP not in any known module. Following frames may be wrong.
07c3f8bc 037e83e6 078304e8 078305a8 596ced74 0x7c3f8d4
07c3f8d4 037d393d 078304e8 078305a8 596ced74 MainFrame!uv__once_inner+0x36 [g:\mojo-release\mojo_v3.5.0\src\net\atlas\atlas\platform\win32\libuv\src\win\thread.c @ 67]
07c3f8e4 037bbc57 d19ee234 596ced74 078304e8 MainFrame!uv_loop_init+0x1d [g:\mojo-release\mojo_v3.5.0\src\net\atlas\atlas\platform\win32\libuv\src\win\core.c @ 133]
07c3f9f4 03791318 d19ee18c 596ced74 07830228 MainFrame!wukong::net::EventLoop::EventLoop+0x167 [g:\mojo-release\mojo_v3.5.0\src\net\atlas\atlas\wukong\net\eventloop.cpp @ 153]
07c3fa4c 035cf7a2 d19ee1ac 007c7478 07830228 MainFrame!wukong::lwp::UserAgent::UserAgent+0x98 [g:\mojo-release\mojo_v3.5.0\src\net\atlas\atlas\wukong\lwp\useragent.cpp @ 21]
07c3fa6c 035d7374 d19ee0c4 596cee11 07c3fbb8 MainFrame!wukong::Singleton<wukong::lwp::UserAgent>::init+0x62 [g:\mojo-release\mojo_v3.5.0\src\net\atlas\atlas\wukong\common\singleton.hpp @ 23]
07c3fb04 035bd5fc 07c3fb30 07c3fb48 d19ee0d8 MainFrame!mj::AppSettings::Init+0xdd4 [g:\mojo-release\mojo_v3.5.0\src\apps\app_settings.cpp @ 240]
07c3fb6c 03b2c3fd 07c3fbe0 0397c593 07c3fbb8 MainFrame!mj::LaunchRemotePlatformApp+0x16c [g:\mojo-release\mojo_v3.5.0\src\apps\launcher.cpp @ 65]
07c3fb74 0397c593 07c3fbb8 07c3fbb8 07c3fb9c MainFrame!libm_sse2_pow_precise+0x595dd
07c3fbe0 03983a1e d19ee7fc 039839e0 077d1070 MainFrame!CLoginContent::LaunchPlatformApp+0x53 [g:\mojo-release\mojo_v3.5.0\win\source\app\frame\logincontent.cpp @ 223]
07c3fc3c 03985c01 077d1070 00000000 077d4398 MainFrame!CLoginContent::ThreadInit+0x3e [g:\mojo-release\mojo_v3.5.0\win\source\app\frame\logincontent.cpp @ 143]
07c3fc4c 5a37f33c d195a375 00000000 077d4398 MainFrame!std::_LaunchPad<std::_Bind<1,void,void (__cdecl*const)(CLoginContent *),CLoginContent *> >::_Go+0x11 [d:\dev\vs2013\vc\include\thr\xthread @ 187]
07c3fc74 596ec01d 0040e420 d195a3d1 00000000 msvcp120!_Call_func+0x17 [f:\dd\vctools\crt\crtw32\stdcpp\thr\threadcall.cpp @ 28]
07c3fcac 596ec001 00000000 07c3fcc4 7659337a msvcr120!_callthreadstartex+0x1b [f:\dd\vctools\crt\crtw32\startup\threadex.c @ 376]
07c3fcb8 7659337a 077d3fd0 07c3fd04 77979882 msvcr120!_threadstartex+0x7c [f:\dd\vctools\crt\crtw32\startup\threadex.c @ 354]
07c3fcc4 77979882 077d3fd0 7106bde2 00000000 kernel32+0x1337a
07c3fd04 77979855 596ebfb4 077d3fd0 00000000 ntdll+0x39882
07c3fd1c 00000000 596ebfb4 077d3fd0 00000000 ntdll+0x39855
| 1 |
I use examples/pytorch/translation/run_translation.py to fine-tune
mbart-large-cc25 on my datasets, but it automatically runs on the CPU. I have
2 GPUs, but only one is an Nvidia card (an RTX 2080 Super).
python main.py
--model_name_or_path facebook/mbart-large-cc25
--do_train
--do_eval
--source_lang en_XX
--target_lang zh_CN
--train_file /data/2WangHongyu/bioNMT_WHY/train.json
--validation_file /data/2WangHongyu/bioNMT_WHY/dev.json
--output_dir /output
--per_device_train_batch_size=4
--per_device_eval_batch_size=4
--overwrite_output_dir
--predict_with_generate
--cache_dir /model/2WangHongyu/mbart-large
|
* `transformers` version: 4.5.0
* Platform: linux
* Python version: 3.8
* PyTorch version (GPU?): 1.7.1
* Tensorflow version (GPU?):
* Using GPU in script?: yes
* Using distributed or parallel set-up in script?: yes
@patrickvonplaten, @patil-suraj
Models: mbart-large-50
| 1 |
**chris tam** opened **SPR-3250** and commented
The problem was reported by Mark Menard in the Spring support forum. I have
checked the source code and found that the problem comes from the
LangNamespaceHandler "init" method. Currently the "init" method contains the
following code:
...
public void init() {
registerScriptBeanDefinitionParser("groovy", GroovyScriptFactory.class);
registerScriptBeanDefinitionParser("jruby", JRubyScriptFactory.class);
registerScriptBeanDefinitionParser("bsh", BshScriptFactory.class);
}
...
The above code throws a ClassNotFoundException when any of the Groovy,
BeanShell, or JRuby libraries is missing from the classpath. The
DefaultNamespaceHandlerResolver will not register the "lang" namespace when
the "init" method throws that exception, so the lang namespace effectively
requires that the Groovy, BeanShell, and JRuby libraries all be on the
classpath. Is it possible to add some checking to the init method of
LangNamespaceHandler to avoid this dependency? For example:
...
public void init() {
if (checkGroovyExists()) {
registerScriptBeanDefinitionParser("groovy", GroovyScriptFactory.class);
}
if (checkJRubyExists()) {
registerScriptBeanDefinitionParser("jruby", JRubyScriptFactory.class);
}
if (checkBshExists()) {
registerScriptBeanDefinitionParser("bsh", BshScriptFactory.class);
}
// what to do if all the above jars are not found???
}
private boolean checkGroovyExists() {
try {
Class.forName("groovy.lang.GroovyObject") ;
return true ;
} catch(Exception e) {
return false ;
}
}
private boolean checkBshExists() {
try {
Class.forName("bsh.Interpreter") ;
return true ;
} catch(Exception e) {
return false ;
}
}
private boolean checkJRubyExists() {
try {
Class.forName("org.jruby.runtime.builtin.IRubyObject") ;
return true ;
} catch(Exception e) {
return false ;
}
}
...
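For what it's worth, the same defensive-registration pattern looks like this in Python (a cross-language sketch only; `importlib.util.find_spec` probes whether a module is importable without importing it, analogous to the `Class.forName` checks above, and the parser names are made up):

```python
import importlib.util

def module_present(name):
    """Report whether an optional dependency is importable,
    analogous to the Class.forName() probes above."""
    return importlib.util.find_spec(name) is not None

# Register a parser only when its backing library is actually available;
# the parser names here are illustrative, not real Spring classes.
parsers = {}
if module_present("json"):
    parsers["json"] = "JsonScriptFactory"
```

The point is the same in either language: each registration is guarded by a cheap presence check, so one missing optional library no longer disables the whole namespace.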
Thanks for all the help from spring team.
cheers
chris tam
xenium
* * *
**Affects:** 2.0.3
**Issue Links:**
* #7942 Cannot find "lang" namespace handler unless all library jars are present ( _ **"is duplicated by"**_ )
* #7958 NoClassDefFoundError when groovy isn't on classpath ( _ **"is duplicated by"**_ )
|
**Oliver Siegmar** opened **SPR-9067** and commented
Consider this configuration:
<mvc:resources mapping="/static/**" location="/WEB-INF/resources/immutable/"/>
The resource directory /WEB-INF/resources/immutable/ contains several files
and subdirectories. One subdirectory is called images.
A browser request to /static/nonexistingfile.png results in a 404 HTTP error.
But a request to /static/images (existing directory, without a file name
specified) results in a FileNotFoundException:
java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/resources/immutable/images]
at org.springframework.web.context.support.ServletContextResource.getInputStream(ServletContextResource.java:118) ~[spring-web-3.1.0.RELEASE.jar:3.1.0.RELEASE]
at org.springframework.web.servlet.resource.ResourceHttpRequestHandler.writeContent(ResourceHttpRequestHandler.java:240) ~[spring-webmvc-3.1.0.RELEASE.jar:3.1.0.RELEASE]
at org.springframework.web.servlet.resource.ResourceHttpRequestHandler.handleRequest(ResourceHttpRequestHandler.java:141) ~[spring-webmvc-3.1.0.RELEASE.jar:3.1.0.RELEASE]
at org.springframework.web.servlet.mvc.HttpRequestHandlerAdapter.handle(HttpRequestHandlerAdapter.java:49) ~[spring-webmvc-3.1.0.RELEASE.jar:3.1.0.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:900) ~[spring-webmvc-3.1.0.RELEASE.jar:3.1.0.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:827) ~[spring-webmvc-3.1.0.RELEASE.jar:3.1.0.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882) ~[spring-webmvc-3.1.0.RELEASE.jar:3.1.0.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:778) ~[spring-webmvc-3.1.0.RELEASE.jar:3.1.0.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:621) ~[servlet-api.jar:na]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:722) ~[servlet-api.jar:na]
I think the getResource(HttpServletRequest) method in
ResourceHttpRequestHandler should also check whether the requested resource is
a file. On a high-volume website this can cause tons of
FileNotFoundExceptions due to robot visits, which is why I classified this
issue as major priority.
* * *
**Affects:** 3.1 GA
**Issue Links:**
* #13712 Spring MVC resources handler generates a 500 internal error when accessing a directory resource ( _ **"is duplicated by"**_ )
**Referenced from:** commits `f8238f5`
| 0 |
PUT _template/test
{
"template": "i*",
"settings": {
"analysis": {
"analyzer": {
"custom": {
"type": "custom",
"tokenizer": "keyword"
}
}
}
}
}
PUT /_template/test1
{
"template": "index*",
"mappings": {
"type": {
"properties": {
"message": {
"type": "text",
"analyzer": "custom"
}
}
}
}
}
The first template, which can be thought of as the parent template, defines
something that is used by the second template: a custom analyzer. The template
validator appears to only consider the template as a standalone one, which
means that the analyzer does not exist, thus returning:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "analyzer [custom] not found for field [message]"
}
],
"type": "mapper_parsing_exception",
"reason": "Failed to parse mapping [type]: analyzer [custom] not found for field [message]",
"caused_by": {
"type": "mapper_parsing_exception",
"reason": "analyzer [custom] not found for field [message]"
}
},
"status": 400
}
Template validation should combine matching templates the way normal index
creation does, which would ensure that the _entire_ chain is valid or invalid
as a whole.
**Workaround**
There is a workaround, which is to manually apply the necessary pieces to the
sub-template (e.g., duplicating the analyzer in it, which will only appear
once in the resulting index).
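As a rough illustration of what "combine matching templates" means, here is a toy merge in Python (a sketch only; real Elasticsearch merging also honors template order and deep-merges nested mappings):

```python
import fnmatch

def merged_template(index_name, templates):
    """Merge the bodies of every template whose pattern matches index_name,
    so validation sees the combined settings and mappings."""
    merged = {}
    for tpl in templates:
        if fnmatch.fnmatch(index_name, tpl["template"]):
            for key, value in tpl.items():
                if key != "template":
                    merged.setdefault(key, {}).update(value)
    return merged

templates = [
    {"template": "i*",
     "settings": {"analysis": {"analyzer": {"custom": {}}}}},
    {"template": "index*",
     "mappings": {"type": {"properties": {"message": {"analyzer": "custom"}}}}},
]
# An index named "index-1" matches both patterns, so the validator would
# see the analyzer and the mapping that references it together.
combined = merged_template("index-1", templates)
```

Validating only the combined result, rather than each template in isolation, is exactly what would let the second template's mapping find the first template's analyzer.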
|
Given the following document
`{ ... "input" : ["foobar", "barfoo", "foo"] ... }`
and the query `foo`, the document will show up twice in the result set when
using completion suggester. I'm currently using a custom method to clean my
input data to make sure the same prefix is not being used twice for a given
document but it somehow feels wrong. With a regular search a document will
also not show multiple times if a keyword is contained more than once in the
document, is the completion suggester working this way by design?
| 0 |
* [√] I have searched the issues of this repository and believe that this is not a duplicate.
* [√] I have checked the FAQ of this repository and believe that this is not a duplicate.
### Environment
* Dubbo version: 2.7.6-2.7.8
* Operating System version: Centos 3.10.0-862.el7.x86_64
* Java version: 1.8.0
### Steps to reproduce this issue
1. Set up a virtual IP
For some reason, one machine has a virtual IP (10.19.15.115) configured on
eth0 in addition to its real IP (10.19.15.111).
* The output of `ip a` is as follows:
...
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether xx:xx:xx:xx:dc:cc brd ff:ff:ff:ff:ff:ff
inet 10.19.15.111/22 brd 10.19.15.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.19.15.115/32 scope global eth0
valid_lft forever preferred_lft forever
...
* The output of `ifconfig` is as follows:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.19.15.111 netmask 255.255.252.0 broadcast 10.19.15.255
ether xx:xx:xx:xx:dc:cc txqueuelen 1000 (Ethernet)
2. Write the following code
public static void main(String[] args) {
InetAddress localAddress = NetUtils.getLocalAddress();
System.out.println(localAddress);
}
3. Run the program
Pls. provide [GitHub address] to reproduce this issue.
### Expected Result
`/10.19.15.111`
### Actual Result
What actually happens?
`/10.19.15.115`
In 2.7.5 and earlier this code obtained the expected, correct IP. I therefore
believe #5795 introduced a breaking change: NetUtils no longer gets a chance
to call `java.net.InetAddress#getLocalHost()` to obtain the IP address from
the hostname. Its javadoc reads:
"Returns the address of the local host. This is achieved by retrieving the
name of the host from the system, then resolving that name into an
{@code InetAddress}."
For example, could it be changed as follows? (2.7.5 and earlier called
`InetAddress.getLocalHost()` first.)
private static InetAddress getLocalAddress0() {
InetAddress localAddress = null;
//to provide a chance to retrieve the IP from the hostname; maybe add a system property like 'dubbo.network.hostname.first'
if ("true".equals(System.getProperty("dubbo.network.hostname.first"))) {
try {
localAddress = InetAddress.getLocalHost();
Optional<InetAddress> addressOp = toValidAddress(localAddress);
if (addressOp.isPresent()) {
return addressOp.get();
}
} catch (Throwable e) {
logger.warn(e);
}
}
//If the code above were placed after the following block, it might never get a chance to execute.
// @since 2.7.6, choose the {@link NetworkInterface} first
try {
NetworkInterface networkInterface = findNetworkInterface();
Enumeration<InetAddress> addresses = networkInterface.getInetAddresses();
while (addresses.hasMoreElements()) {
Optional<InetAddress> addressOp = toValidAddress(addresses.nextElement());
if (addressOp.isPresent()) {
try {
if (addressOp.get().isReachable(100)) {
return addressOp.get();
}
} catch (IOException e) {
// ignore
}
}
}
} catch (Throwable e) {
logger.warn(e);
}
return localAddress;
}
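For illustration only, the two strategies this report contrasts look like this in Python: `gethostbyname(gethostname())` mirrors `InetAddress.getLocalHost()` (hostname resolution), whereas Dubbo 2.7.6+ enumerates interface addresses first, which is how the virtual 10.19.15.115 address can win.

```python
import socket

def local_ip_via_hostname():
    """Resolve our own hostname, like InetAddress.getLocalHost() does.

    On the machine described above this would return 10.19.15.111;
    interface enumeration may instead pick the virtual 10.19.15.115.
    """
    try:
        return socket.gethostbyname(socket.gethostname())
    except OSError:
        return "127.0.0.1"  # fallback when the hostname is not resolvable

ip = local_ip_via_hostname()
```

The proposed `dubbo.network.hostname.first` property would simply restore this hostname-resolution path as the first strategy tried.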
|
* I have searched the issues of this repository and believe that this is not a duplicate.
* I have checked the FAQ of this repository and believe that this is not a duplicate.
### Environment
* Dubbo version: 2.7.6-SNAPSHOT (from the Aliyun Maven mirror; also tested with the 2.7.6.release branch built from source)
* Operating System version: win10
* Java version: 1.8.0_171
### Steps to reproduce this issue
Test code: https://github.com/apache/dubbo-samples/tree/master/java/dubbo-samples-nacos/dubbo-samples-nacos-registry
with the Dubbo version changed to "2.7.6.SNAPSHOT".
Pls. provide [GitHub address] to reproduce this issue.
### Expected Result
### Actual Result
(The IP address has been manually changed to 127.0.0.1.)
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'annotatedConsumer': Injection of @Reference dependencies is failed; nested exception is java.lang.IllegalStateException: Failed to check the status of the service org.apache.dubbo.samples.api.GreetingService. No provider available for the service org.apache.dubbo.samples.api.GreetingService:1.0.0 from the url nacos://localhost:8848/org.apache.dubbo.registry.RegistryService?application=nacos-registry-demo-consumer&dubbo=2.0.2&init=false&interface=org.apache.dubbo.samples.api.GreetingService&methods=sayHello&pid=19028&register.ip=127.0.0.1&release=2.7.6-SNAPSHOT&revision=1.0.0&side=consumer&sticky=false&timeout=3000&timestamp=1584584637337&version=1.0.0 to the consumer 127.0.0.1 use dubbo version 2.7.6-SNAPSHOT
at com.alibaba.spring.beans.factory.annotation.AbstractAnnotationBeanPostProcessor.postProcessPropertyValues(AbstractAnnotationBeanPostProcessor.java:146)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1268)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
at org.springframework.context.annotation.AnnotationConfigApplicationContext.<init>(AnnotationConfigApplicationContext.java:84)
at org.apache.dubbo.samples.ConsumerBootstrap.main(ConsumerBootstrap.java:32)
Caused by: java.lang.IllegalStateException: Failed to check the status of the service org.apache.dubbo.samples.api.GreetingService. No provider available for the service org.apache.dubbo.samples.api.GreetingService:1.0.0 from the url nacos://localhost:8848/org.apache.dubbo.registry.RegistryService?application=nacos-registry-demo-consumer&dubbo=2.0.2&init=false&interface=org.apache.dubbo.samples.api.GreetingService&methods=sayHello&pid=19028&register.ip=127.0.0.1&release=2.7.6-SNAPSHOT&revision=1.0.0&side=consumer&sticky=false&timeout=3000&timestamp=1584584637337&version=1.0.0 to the consumer 127.0.0.1 use dubbo version 2.7.6-SNAPSHOT
at org.apache.dubbo.config.ReferenceConfig.createProxy(ReferenceConfig.java:349)
at org.apache.dubbo.config.ReferenceConfig.init(ReferenceConfig.java:258)
at org.apache.dubbo.config.ReferenceConfig.get(ReferenceConfig.java:158)
at org.apache.dubbo.config.spring.beans.factory.annotation.ReferenceAnnotationBeanPostProcessor.getOrCreateProxy(ReferenceAnnotationBeanPostProcessor.java:274)
at org.apache.dubbo.config.spring.beans.factory.annotation.ReferenceAnnotationBeanPostProcessor.doGetInjectedBean(ReferenceAnnotationBeanPostProcessor.java:143)
at com.alibaba.spring.beans.factory.annotation.AbstractAnnotationBeanPostProcessor.getInjectedObject(AbstractAnnotationBeanPostProcessor.java:359)
at com.alibaba.spring.beans.factory.annotation.AbstractAnnotationBeanPostProcessor$AnnotatedFieldElement.inject(AbstractAnnotationBeanPostProcessor.java:539)
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:88)
at com.alibaba.spring.beans.factory.annotation.AbstractAnnotationBeanPostProcessor.postProcessPropertyValues(AbstractAnnotationBeanPostProcessor.java:142)
... 12 more
### Notes
1. The provider-registered service already exists in Nacos: serviceName = "providers:org.apache.dubbo.samples.api.GreetingService:1.0.0:"
2. When the consumer starts, `NacosRegistry#doSubscribe(...)` creates two serviceNames for compatibility:
* providers:org.apache.dubbo.samples.api.GreetingService:1.0.0:
* providers:org.apache.dubbo.samples.api.GreetingService:1.0.0
and `NacosRegistry#subscribeEventListener(...)` listens for and subscribes to both in Nacos.
However, because the provider does not actually exist under
`providers:org.apache.dubbo.samples.api.GreetingService:1.0.0`,
the Nacos callback delivers instances = 0, and `NacosRegistry#notifySubscriber(URL,
NotifyListener, Collection<Instance>)` creates an empty protocol.
Also, since there is only one provider, the forbidden condition in
`RegistryDirectory#refreshInvoker()` is satisfied, so the invokers are destroyed.
| 0 |
Thank you for the amazing work.
I am using TensorBoard to visualize my training, and it works just fine.
But when I need to change the logging frequency of the loss scalars, I can't
find an API to do this. I want more scalar logs, perhaps at the end of every
batch; right now I only get one log per epoch. Thanks a lot.
tensorboard_callback = TensorBoard(log_dir = log_dir,
histogram_freq = 1,
write_graph = True,
write_images = False,
embeddings_freq = embeddings_freq,
embeddings_layer_names = None,
embeddings_metadata = "w2v_metadata.tsv")
* [x] Check that you are up-to-date with the master branch of Keras. You can update with:
pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
* [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.
* If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
* Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
|
Please make sure that the boxes below are checked before you submit your
issue. If your issue is an implementation question, please ask your question
on StackOverflow or join the Keras Slack channel and ask there instead of
filing a GitHub issue.
Thank you!
* Check that you are up-to-date with the master branch of Keras. You can update with:
pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
* If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
* Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
from keras.layers.wrappers import Bidirectional
from keras.layers import LSTM, Dense, Input
from keras.models import Model
import numpy as np
data=np.random.rand(1,5,10)
label=np.asarray([[1,0,0]])
inp = Input(shape=data.shape[1:])
x = Bidirectional(LSTM(units=32,recurrent_dropout=0.5))(inp)
x = Dense(3,activation='softmax')(x)
model = Model(input=inp, output=x)
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(data, label, epochs=1)
The above code gives the following error; if I remove `recurrent_dropout=0.5`
it trains without error:
MissingInputError: Input 0 of the graph (indices start from 0), used to compute if{}(keras_learning_phase, Elemwise{true_div,no_inplace}.0, Reshape{2}.0), was not provided and not given a value. Use the Theano flag exception_verbosity='high', for more information on this error.
Backtrace when that variable is created:
File "<ipython-input-1-aa27b7e1da0a>", line 1, in <module>
runfile('/home/bozkurt.a/DEJ/train_lstm_seq.py', wdir='/home/bozkurt.a/DEJ')
File "/home/bozkurt.a/miniconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "/home/bozkurt.a/miniconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
builtins.execfile(filename, *where)
File "/home/bozkurt.a/DEJ/train_lstm_seq.py", line 11, in <module>
from keras.models import Model
File "/home/bozkurt.a/miniconda2/lib/python2.7/site-packages/Keras-2.0.2-py2.7.egg/keras/__init__.py", line 3, in <module>
from . import activations
File "/home/bozkurt.a/miniconda2/lib/python2.7/site-packages/Keras-2.0.2-py2.7.egg/keras/activations.py", line 3, in <module>
from . import backend as K
File "/home/bozkurt.a/miniconda2/lib/python2.7/site-packages/Keras-2.0.2-py2.7.egg/keras/backend/__init__.py", line 61, in <module>
from .theano_backend import *
File "/home/bozkurt.a/miniconda2/lib/python2.7/site-packages/Keras-2.0.2-py2.7.egg/keras/backend/theano_backend.py", line 28, in <module>
_LEARNING_PHASE = T.scalar(dtype='uint8', name='keras_learning_phase') # 0 = test, 1 = train
| 0 |
It parses `•` in HTML as `ÔÇó`:
The following is from the output of a test that should pass:
Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
-'phpBB ÔÇó Free and Open Source Forum Software'
+'phpBB • Free and Open Source Forum Software'
|
Currently, adders and removers must be prefixed with "add" and "remove". When
applying DDD, this sometimes doesn't make sense, for example:
class Contact
{
public function addGroup(ContactGroup $group) { }
public function removeGroup(ContactGroup $group) { }
}
Here it would make much more sense to prefix the methods with "join" and
"leave":
class Contact
{
public function joinGroup(ContactGroup $group) { }
public function leaveGroup(ContactGroup $group) { }
}
The PropertyAccessor should provide a way to use these methods.
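A sketch of the kind of lookup this implies: try a configurable list of mutator prefixes when resolving a writer for a property (Python here purely for illustration; the function and its defaults are hypothetical, not the PropertyAccessor API):

```python
def find_mutator(obj, prop, prefixes=("add", "join")):
    """Return the first method named <prefix><Prop> that obj defines."""
    for prefix in prefixes:
        name = prefix + prop.capitalize()
        if callable(getattr(obj, name, None)):
            return getattr(obj, name)
    return None

class Contact:
    def joinGroup(self, group):
        return f"joined {group}"

# "addGroup" is absent, so the configurable prefix list
# falls back to "joinGroup":
mutator = find_mutator(Contact(), "group")
```

The design question is only where the prefix list comes from: hard-coded ("add"/"remove" today) versus supplied by the user, which is what this request asks for.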
| 0 |
I am getting error
> Could not find "store" in either the context or props of "Connect(Base)".
> Either wrap the root component in a , or explicitly pass "store" as a prop
> to "Connect(Base)".
while connecting a template component with Redux. This seems like it worked in
a previous version.
* I have searched the issues of this repository and believe that this is not a duplicate.
## Expected Behavior
Template component connects to Redux successfully.
## Current Behavior
Getting error: `Could not find "store" in either the context or props of
"Connect(Base)". Either wrap the root component in a <Provider>, or explicitly
pass "store" as a prop to "Connect(Base)".`
If I wrap the root component with `<Provider>` from 'react-redux', it throws
an error that it expects an object, not a function.
## Steps to Reproduce (for bugs)
store.js
const initStore = (initialState = {}) => {
createStore(
combineReducers({ someReducer, otherReducer }),
initialState,
composeWithDevTools(
applyMiddleware(thunkMiddleware),
))
}
template.js
class Base extends React.Component {
...
render(){
return (
<div>
{this.props.children}
</div>
)
}
}
...
export default withRedux(Store, mapStateToProps, mapDispatchToProps)(Base)
./pages/index.js
import Template from 'Templates/Base'
export default class Index extends React.Component {
render(){
return (
<Template>
{'index'}
</Template>
)
}
}
## Context
I am wrapping all pages in a template component which includes methods and
components that are reused across all pages.
## Your Environment
Tech | Version
---|---
next | "^3.0.1-beta.20"
node | 6.10.2
OS | Ubuntu/Linux
browser | Chrome 60
|
Regarding webpack allowing an array of configs as input: this `next.config.js`
crashes `yarn dev`.
module.exports = {
webpack: (config, { dev }) => {
return [config];
},
};
Console output:
DONE Compiled successfully in 3232ms 12:18:49
(node:4702) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): TypeError: Cannot read property 'chunks' of undefined
(node:4702) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
(node:4702) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): TypeError: Cannot read property 'chunks' of undefined
* I have searched the issues of this repository and believe that this is not a duplicate.
## Expected Behavior
Should work as normal
## Current Behavior
Throw an UnhandledPromiseRejectionWarning, the app doesn't work.
## Steps to Reproduce (for bugs)
1. Create a new next app.
2. Add `next.config.js` with following content:
module.exports = {
webpack: (config, { dev }) => {
return [config];
},
};
3. Run `yarn dev`
## Context
I was trying to transpile custom server code using next's webpack config.
## Your Environment
Tech | Version
---|---
next | 3.0.3
node | v8.1.0
OS | macOS Sierra 10.12.6 (16G29)
| 0 |
### Model description
LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with
unified text and image masking. The simple unified architecture and training
objectives make LayoutLMv3 a general-purpose pre-trained model. For example,
LayoutLMv3 can be fine-tuned for both text-centric tasks, including form
understanding, receipt understanding, and document visual question answering,
and image-centric tasks such as document image classification and document
layout analysis.
LayoutLMv3 greatly simplifies training and reduces the number of parameters
compared to v2, making it an important milestone in document understanding.
LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking
Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, Preprint 2022.
### Open source status
* The model implementation is available
* The model weights are available
### Provide useful links for the implementation
Huggingface Pretrained Download
|
## ❓ Questions & Help
The BertForQuestionAnswering sample code creates duplicate [CLS] tokens.
Wondering why:
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]"
input_ids = tokenizer.encode(input_text)
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
# a nice puppet
tokenizer.decode(input_ids)
#'[CLS] [CLS] who was jim henson? [SEP] jim henson was a nice puppet [SEP] [SEP]'
If I remove the extra [CLS], the extraction doesn't work. It's exactly two
tokens off:
input_ids = tokenizer.encode(input_text, add_special_tokens=False)
...rerun same code as above...
print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
# was a
What am I doing wrong? How can I get the extraction working without duplicate
[CLS] tokens? (and duplicate final [SEP] tokens BTW).
The sample code comes right from the docs:
https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering
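A plausible explanation, shown here with a toy model rather than the actual transformers API: `encode()` adds the special tokens itself by default, so wrapping the string in `[CLS]`/`[SEP]` by hand and then calling it with the default produces them twice.

```python
CLS, SEP = "[CLS]", "[SEP]"

def toy_encode(text, add_special_tokens=True):
    """Toy stand-in for tokenizer.encode(): with the default
    add_special_tokens=True, [CLS]/[SEP] are inserted automatically."""
    tokens = text.split()
    if add_special_tokens:
        tokens = [CLS] + tokens + [SEP]
    return tokens

# Manually wrapping the string AND keeping the default doubles the markers:
manual = f"{CLS} who was jim henson ? {SEP} jim henson was a nice puppet {SEP}"
tokens = toy_encode(manual)

# Leaving the markers out of the middle-of-string only, and letting the
# encoder add the outer pair, yields a single [CLS] and trailing [SEP]:
clean = toy_encode("who was jim henson ? " + SEP + " jim henson was a nice puppet")
```

If the real tokenizer behaves like this toy (its documentation describes `add_special_tokens=True` as the default), the fix is to let it insert the markers itself rather than writing them into the input string — and the two-token offset observed with `add_special_tokens=False` would then come from the model expecting the markers that are no longer present.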
| 0 |
Using a session object and setting the proxies dictionary does not affect the
proxies used during a request.
For example, this still tries to use the system proxy:
s = requests.Session()
s.proxies = {'http': None, 'https': None}
s.post(url='http://10.0.1.1', json={'test': 'data'})
Using proxies in each individual request works, but it would be great to set
them at the session level, especially if you have dozens of `.post()` or
`.get()` calls throughout your script.
Any thoughts?
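For what it's worth, a minimal sketch of session-level configuration (no request is actually sent here; note that environment `*_proxy` variables can still be consulted unless `trust_env` is disabled):

```python
import requests

s = requests.Session()
s.trust_env = False  # stop reading *_proxy variables from the environment
s.proxies.update({"http": None, "https": None})

# Per-request proxies merge with (and override) the session-level ones:
# s.post("http://10.0.1.1", json={"test": "data"}, proxies={"http": None})
```

Whether `s.proxies` alone wins over the environment is exactly the precedence question this issue raises; combining it with `trust_env = False` is the belt-and-suspenders workaround.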
|
`Session.trust_env = False` turns off the checking of environment variables
for options including proxy settings (`*_proxy`). But `urllib` picks up and
uses these environment proxy settings anyway. `requests` should pass the
`trust_env` setting on to `urllib`. (Although I'm not sure if `urllib` has a
similar override.)
(Proxy setting precedence should be sorted out here as well. The way it is
now, environment proxy settings will interfere with (rather than be
overridden by) the `proxies` argument in `Session.request` or
`requests.request` calls and the `Session.proxies` config, regardless of
`trust_env` settings.)
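The environment lookup that `urllib` performs can be observed directly with the standard library (the proxy URL below is made up):

```python
import os
import urllib.request

# urllib reads proxy settings straight from the environment,
# independent of any requests.Session configuration:
os.environ["http_proxy"] = "http://proxy.example:3128"
proxies = urllib.request.getproxies()
```

This is why `trust_env` needs to be honored down the stack: even with session proxies cleared, any code path that falls back to these `urllib` helpers picks the environment values right back up.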
| 1 |
I unzipped the ARM prebuilt on an ARM Linux system and ran all Math
functions; the results are shown here. The functions in red return incorrect
results. We've noticed this issue all the way back to 0.32.x; 0.29.2 does not
have these issues.

|
I'm using this electron:
https://github.com/atom/electron/releases/download/v0.35.4/electron-v0.35.4-linux-arm.zip
Long.js library returns different result on Raspberry Pi 2 for the following
js code:
Long.fromString("13370000000").toString()
I've created proof-of-concept repo here: https://github.com/bartekn/electron-
long-bug.
The problem seems to be connected with this line in Long.js:
return new Long((value % TWO_PWR_32_DBL) | 0, (value / TWO_PWR_32_DBL) | 0, unsigned);
When value of `value` variable is equal `13370000000`, the first value passed
to `Long` constructor (`(value % TWO_PWR_32_DBL) | 0`) is equal `0` on
Raspberry Pi, while it's equal `485098112` on Mac OS (the correct value).
When trying to calculate `13370000000%4294967296`
(`TWO_PWR_32_DBL`=4294967296) in the chrome dev tools console on Raspberry Pi
2 it returns correct result (`485098112`).
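The expected low word can be double-checked with plain integer arithmetic (Python used here only as a calculator; the `| 0` in Long.js merely truncates to a signed 32-bit value, and 485098112 already fits):

```python
TWO_PWR_32_DBL = 2 ** 32
value = 13_370_000_000

low = value % TWO_PWR_32_DBL    # fed to `(value % TWO_PWR_32_DBL) | 0`
high = value // TWO_PWR_32_DBL  # fed to `(value / TWO_PWR_32_DBL) | 0`
```

Since the correct remainder is below 2^31, the `| 0` truncation cannot change it, so a result of 0 on the Pi points at the floating-point modulo itself misbehaving on that platform.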
I'm not sure if it's a problem of electron or libchromiumcontent, sorry.
| 1 |
Piping ReadableStream through TextDecoderStream leaks resource as demonstrated
with following test:
Deno.test("leaks", async () => {
const res = await fetch(
"https://deno.land/std@0.186.0/json/testdata/test.jsonl"
)
const textStream = new TextDecoderStream()
const reader = res.body!.pipeThrough(textStream).getReader()
const t = await reader.read()
await reader!.cancel()
})
Run `deno test stream-test.ts` and you will get:
error: Leaking resources:
- A text decoder (rid 6) was created during the test, but not finished during the test. Close the text decoder by calling `textDecoder.decode('')` or `await textDecoderStream.readable.cancel()`.
Adding `await t.readable.cancel()` as the error message suggested does not help.
A following solution with manual decoding works properly:
Deno.test("working alternative", async () => {
const res = await fetch(
"https://deno.land/std@0.186.0/json/testdata/test.jsonl"
)
const textDecoder = new TextDecoder()
const reader = res.body!.getReader()
const b = await reader.read()
const t = await textDecoder.decode(b.value!)
await reader.cancel()
})
I spent the whole evening last night trying to make this work and ended up
with the latter piece. Shouldn't closing the initial stream also close all
subsequent ones? Either way, cancelling the TextDecoderStream's readable
doesn't work either.
Deno version: 1.33.2 and 1.33.1
|
When configuring a server with a cloudflare origin server certificate, it
works perfectly when you visit the site using the url with https:

But when directly visiting the server IP using https the process stops on the
server with the following error:

Would it be possible to ignore that error so that the server continues to run?
The connection may close or return an error message, but the server should
continue to listen for other requests.
I hope you have a solution! Regards
| 0 |
## Environment info
* `transformers` version: 4.2.2
* Platform: Linux-4.15.0-45-generic-x86_64-with-glibc2.10
* Python version: 3.8.3
* PyTorch version (GPU?): 1.6.0 (True)
* Tensorflow version (GPU?): not installed (NA)
* Using GPU in script?: No
* Using distributed or parallel set-up in script?: No
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* my own modified scripts: (give details below)
The tasks I am working on is:
* my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. The code
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
words = ['(cid:3)', '하셨습니까', '하다']
tokenizer.batch_encode_plus(
[words],
max_length=512,
truncation=True,
padding=True,
is_split_into_words=True,
return_offsets_mapping=True,
return_special_tokens_mask=True,
return_tensors="pt",
)
2. The `offset_mapping` in the output is
tensor([[[0, 0], # [CLS]
[0, 1], # for '('
[1, 4], # for 'cid'
[4, 5], # for ':'
[5, 6], # for '3'
[6, 7], # for ')'
[0, 5], # for '하셨습니까'
[0, 1], # for '하'
[0, 1], # for '하'
[1, 2], # for '다'
[1, 2], # for '다'
[0, 0]]])
3. As you can see, it generates four tokens for `하다`. The output is correct according to byte-pair encoding. However, it generates duplicated `[0,1]` and `[1,2]` entries, which changes the structure of the outputs (for a regular token there can only be one `[0,x]`, which can be used to project the encoded tokens back to their original positions). Therefore, we need an extra indicator for positions where byte-pair encoding is used.
## Expected behavior
1. An additional output showing the mapping for input_ids -> original_token_ids . In this case, it should be something like:
[0, 1, 1, 1, 1, 1, 2, 3, 3, 3, 3, 0]
Therefore, we could use this map to figure out that byte-pair encoding was
used for the 3rd token.
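To illustrate why the offsets alone are ambiguous, here is a pure-Python sketch using the offsets from step 2: the naive rule "a new word starts when the offset restarts at 0" over-counts exactly at the duplicated `[0, 1]` entry:

```python
# Offsets copied from step 2; the (0, 0) entries are [CLS]/[SEP].
offsets = [(0, 0), (0, 1), (1, 4), (4, 5), (5, 6), (6, 7),
           (0, 5), (0, 1), (0, 1), (1, 2), (1, 2), (0, 0)]

word_ids, word = [], -1
for start, end in offsets:
    if (start, end) == (0, 0):   # special token
        word_ids.append(None)
    elif start == 0:             # naive: offset restart => new word
        word += 1
        word_ids.append(word)
    else:                        # continuation of the current word
        word_ids.append(word)

print(word_ids)
# [None, 0, 0, 0, 0, 0, 1, 2, 3, 3, 3, None]
# '하다' (one input word) gets split across word ids 2 and 3 -- hence the
# request for an explicit input_ids -> original-token map.
```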
Updated - @n1t0
|
## Environment info
* `transformers` version: 4.12.2
* Platform: Linux-5.4.0-87-generic-x86_64-with-glibc2.10
* Python version: 3.8.8
* PyTorch version (GPU?): 1.9.1+cu111 (True)
* Tensorflow version (GPU?): not installed (NA)
* Flax version (CPU?/GPU?/TPU?): not installed (NA)
* Jax version: not installed
* JaxLib version: not installed
* Using GPU in script?: Yes
* Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): BertLMHeadModel
The problem arises when using:
* the official example scripts: (give details below)
* my own modified scripts: (give details below)
The tasks I am working on is:
* an official GLUE/SQUaD task: (give the name)
* my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Instantiate a BertLMHeadModel. The model contains, among others, a linear layer under the path `cls.predictions.decoder`. Specifically, it contains 2 parameters: `cls.predictions.decoder.weight` and `cls.predictions.decoder.bias`.
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("bert-base-uncased", is_decoder=True)
print(model.cls.predictions.decoder)
2. Get a list of model parameters.
parameter_keys = list(dict(model.named_parameters()).keys())
3. The decoder parameters are missing.
print("cls.predictions.decoder.weight" in parameter_keys)
print("cls.predictions.decoder.bias" in parameter_keys)
4. Note that the parameters appear under the cls module:
print(list(dict(model.cls.named_parameters()).keys()))
5. Note that the `cls.decoder` module appears to be registered.
print(list(dict(model.named_modules()).keys()))
## Expected behavior
Decoder parameters should be included in the model parameters.
| 0 |
# Summary of the new feature
I would like to customize PowerToys Run's shell (used for launching commands
starting with `> `).
For example I would like to use PowerShell Core instead of cmd. Here are many
more options that might be available:
* cmd
* PowerShell
* PowerShell Core
* WSL
* Custom shell (eg.: git, zsh, ...)
C:\Program Files(x86)\Myshell\customshell.exe --interactive --command "{0}"
> `{0}` would be replaced with the shell command.
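A sketch of how such a template would be filled in (the path and flags are the hypothetical ones from above, not a real shell):

```python
# Hypothetical shell template; "{0}" is replaced with the typed command.
template = r'C:\Program Files(x86)\Myshell\customshell.exe --interactive --command "{0}"'
print(template.format("ping github.com"))
# C:\Program Files(x86)\Myshell\customshell.exe --interactive --command "ping github.com"
```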
|
PowerToys Run is a convenient tool.
But it would be even better if it were possible to configure which
program executes `> ` commands.
Right now `> ping github.com` starts in a cmd window, but I, for example,
would prefer to use Terminal.
Is it possible?
| 1 |
**Symfony version(s) affected** : all
**Description**
When a form field is disabled but a value is submitted to it regardless, the
value is silently ignored. This can be confusing to the user: when he opened
the form the field was not disabled, but something changed in the system while
he was filling it in, so the field is now disabled. The value is then silently
ignored, and the user gets no notification of the problem.
**How to reproduce**
Create a form field that has the `'disabled'` option active conditionally.
Open the form in one window while the field is enabled, then do something in
another window causing it to be disabled and then submit the form in the first
window.
**Possible Solution**
Submitting a value into a disabled field should cause a validation error on
the form field in question so that the user knows about the change.
|
The validation of an image I have:
photo:
- Image: { maxSize: 2M, maxSizeMessage: "the size is bigger than {{ limit }}, please change the photo file." }
- NotBlank: { groups: [registration], message: "Please select an photo." }
everything works great except that the maxSizeMessage doesn't show; only the
default Symfony message appears.
I've removed the {{ limit }} to test and still it persists
| 0 |
Process for bringing up an etcd cluster in a PetSet (minimal requirements).
Some discussion of races and inconsistencies.
* https://coreos.com/etcd/docs/latest/clustering.html
* https://coreos.com/etcd/docs/latest/runtime-configuration.html
#### Initialization (option1)
1. Decide on a unique `--initial-cluster-token` and ensure that it is available to every pod as an ENV var (could be the petset UID or a simple parameter value)
2. Create all pods and unique PVs for each
3. Wait for all endpoints to be reachable and to reflect all pods (requires that we either know the quorum size, or know that the pet set has reached its desired size). Pods won't be ready, so we need to wait for ready+unready endpoints to == quorum size
4. etcd process starts with the initial bootstrap args (knowing what other members are in the set)
After this, no dynamic reconfiguration is allowed.
#### Initialization (option2)
1. Decide on a unique `--initial-cluster-token` and ensure that it is available to every pod as an ENV var (could be the petset UID or a simple parameter value)
2. Have the pod that gets identity "1" (or "master", or "first") create a new cluster
3. Have a control loop (babysitter / etc) running with pod identity 1 that tries to keep the cluster membership up to date with the endpoints list in etcd.
This allows dynamic reconfiguration, but is subject to duplicate instances of
the babysitter running (discussed below). It is possible to write the
babysitter so that cluster membership always moves forward, but not possible
to guarantee that two babysitters aren't running at the same time.
#### Dealing with duplicate members without PV to act as a lock
It is possible that a kubelet becomes isolated from the cluster while running
member 1 of a 3 pod set, and etcd is not configured to use PV (which acts as a
form of lock in some cases). The node controller detects that the kubelet is
dead and triggers deletion of those pods. After the graceful-deletion period
elapses, a new pod for member 1 will be created. At this point, there are
potentially two member-1's running.
In option2 above, there could be two control loops running at the same time,
and different pods could also see different endpoint versions (so the
separated pod could see the older endpoint list).
In option1 above, the kubelet could get isolated during initialization, and
two instances of member-1 could be started. Both would initialize, and the
control loops in each would try to acquire members. If sufficiently large
numbers of nodes experienced partition, both sides could think they had a
quorum.
The resolution is to require the babysitters to make a write to the master API
in a way that collapses one side or the other (such as writing to the
endpoints).
#### Upgrade (normal)
1. Are all members healthy with a quorum?
2. Backup each member PV
3. Ensure clean shutdown of members on old version up to `floor(N / 2)`, to leave a quorum running
4. Start a new pod with the same PV as the old pod
During upgrade the same membership problems become possible as during
initialization. As long as the set has a quorum, it can police its own
membership, but a disruptive event during upgrade that disables quorum means
that the babysitters would have to write to the master API in a similar way as
during initialization.
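The quorum arithmetic behind step 3 can be sketched directly (a standard majority quorum is assumed):

```python
def quorum(n):
    """Members needed for a majority in an n-member cluster."""
    return n // 2 + 1

def safe_to_stop(n):
    """Members that may be shut down while a quorum keeps running.

    Equals floor(n / 2) only for odd n; in general it is n - quorum(n).
    """
    return n - quorum(n)

for n in (3, 5, 7):
    print(n, quorum(n), safe_to_stop(n))
# 3 2 1
# 5 3 2
# 7 4 3
```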
#### Upgrade (disaster)
In a disaster scenario, a majority of the PVs in the etcd cluster are lost. No
quorum is possible, but again a babysitter can rely on the external master API
to resolve leadership for a period of time and reestablish quorum and ensure
that the new members join the appropriate quorum.
#### Thoughts
Even though etcd has its own consistent mechanism for managing membership,
that requires a quorum. Since the babysitter is itself potentially
distributed, it must obtain a lease / lock to change membership in a strongly
consistent fashion. Only membership changes (pod creation / deletion, endpoint
changes) require the babysitter to act in this fashion. The remainder of the
time, etcd can manage its own quorum state.
If membership changes are relatively rare, there could be advantages to
viewing the babysitter as a "cluster change hook". The babysitter could be a
run-once pod with a long duration grace period (lease, effectively) that is
invoked on cluster membership changes, and the name of the pod could act as
the lease (since writes to the master API are strongly consistent). If the
node running the babysitter goes down, it's possible to break the lock by
deleting the pod forcefully with the corresponding lack of control over the
outcome of any code still running in the pod.
A one shot pod on membership changes has some advantages - limited resource
consumption for the stability of the pod, easier potentially to reason about.
|
**Is this a request for help?** (If yes, you should use our troubleshooting
guide and community support channels, see
http://kubernetes.io/docs/troubleshooting/.):
No
**What keywords did you search in Kubernetes issues before filing this one?**
(If you have found any duplicates, you should instead reply there.):
dnsmasq log
* * *
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
FEATURE REQUEST
**Kubernetes version** (use `kubectl version`):
1.3.4
**Environment** :
* **Cloud provider or hardware configuration** : bare-metal
* **OS** (e.g. from /etc/os-release): CoreOS 1068.8.0
* **Kernel** (e.g. `uname -a`): Linux $REDACTED 4.6.3-coreos #2 SMP Mon Jul 18 06:10:39 UTC 2016 x86_64 Intel(R) Xeon(R) CPU E5530 @ 2.40GHz GenuineIntel GNU/Linux
* **Install tools** : bootcfg and ignition
* **Others** :
**What happened** :
1. I started kube-dns according to this example: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-rc.yaml.base
2. `kubectl logs --tail=100 -f kube-dns-v19-????? dnsmasq` did not show anything.
**What you expected to happen** :
`kubectl logs --tail=100 -f kube-dns-v19-????? dnsmasq` should have shown log
messages. There should have been at least a start-up message and it should
possibly also log DNS requests, as does the `kubedns` container.
**How to reproduce it** (as minimally and precisely as possible):
Take the example `kube-dns-rc.yaml` and create a resource controller from it.
**Anything else do we need to know** :
For my DHCP/TFTP dnsmasq pod these command line switches have the desired
effect: `--keep-in-foreground --log-facility=-`
| 0 |
* I have searched the issues of this repository and believe that this is not a duplicate.
## Expected Behavior
## Current Behavior
## Steps to Reproduce (for bugs)
1. 2. 3. 4.
## Context
## Your Environment
Tech | Version
---|---
Material-UI | 1.0.0-beta12
React | 16
browser | Electron
etc |
|
### For Fixing table Head in material 1.0.0 11 alpha
I have a table with scrolling rows but I am not able to fix the header.
Is there a property to do so? `fixedHeader` existed in material 0.15 and up,
but there doesn't seem to be anything similar in the 1.0.0 version.
### Versions
* Material-UI: 1.0.0-alpha 11
* React: 15.4.2
| 0 |
## Bug Report
The following code cannot be compiled when using `@babel/plugin-proposal-
private-methods`.
Output: `Duplicate declaration "i"`.
class A {
#a() {
const i = 9
}
#b() {
const i = 8
}
}
|
## Bug Report
**Current behavior**
Switch statements containing closures are generating broken code causing
adjacent cases to run. See this minimal reproduction with the default
settings:
https://babeljs.io/en/repl#?browsers=defaults&build=&builtIns=false&spec=false&loose=false&code_lz=GYewTgBAFAxiB2BnALhAhhEwIG0CMAugJQQDeAUBBIgO4CWyMAFtMmAK4CmJFVVMaRJwhsuALjKU-
_BCggAjCAF4IeANxTpUEkoB8CjdKp1sUUdwVhOaANaGjceMjrwu9qgF9NEAUIjA0ABshCV4jZCYwEBoIeE4YgFEwKLAoACJ2eCs0ZjR5QM40oncIL09yLyA&debug=false&forceAllTransforms=false&shippedProposals=false&circleciRepo=&evaluate=false&fileSize=false&timeTravel=false&sourceType=module&lineWrap=true&presets=env%2Creact%2Cstage-2%2Cenv&prettier=false&targets=&version=7.10.4&externalPlugins=
**Input Code**
for (const a of [1]) {
switch (true) {
case true: {
const b = 1;
() => b;
if (true) break;
continue;
}
case false: {
throw new Error("unreachable");
}
}
}
**Output Code**
for (var _i = 0, _arr = [1]; _i < _arr.length; _i++) {
var a = _arr[_i];
switch (true) {
case true:
{
var _ret = function () {
var b = 1;
(function () {
return b;
});
if (true) return "break";
return "continue";
}();
switch (_ret) {
case "break":
break;
case "continue":
continue;
}
}
case false:
{
throw new Error("unreachable");
}
}
}
**Expected behavior**
No error is thrown (the break statement functions as intended, and the false
case does not run).
**Babel Configuration (babel.config.js, .babelrc, package.json#babel, cli
command, .eslintrc)**
Defaults. See the REPL.
**Environment**
Defaults. See the REPL.
**Possible Solution**
If you want to use a `break` statement here, it needs to be a labeled break to
break the outer loop.
switch (_ret) {
case "break":
break outer;
case "continue":
continue;
}
Alternatively, use an if instead of a nested switch.
if (_ret === "break") break;
if (_ret === "continue") continue;
| 0 |
### Is there an existing issue for this?
* I have searched the existing issues
### Current Behavior
I run `npm version` and no commit or tag is created. I am not using `--no-git-
tag-version`; e.g., just `npm version major -m "..."`.
Under what conditions does `npm version` avoid creating a commit and tag?
### Expected Behavior
Always commit and tag unless I specify not to.
### Steps To Reproduce
git clone https://github.com/lume/lume.git
cd lume/packages/lume
npm version major -m "v%s" --ignore-scripts
After this, you will see that `package.json` was updated, but no commit or tag
was created.
### Environment
* OS: Linux
* Node: 14.16
* npm: 7.10
|
### Current Behavior:
`npm version <version>` is not committing the modified package.json or
package-lock.json; nor git-tagging.
We were using `npm version {major|minor|patch}` extensively (and
successfully). It stopped working once the package was moved out of the root
of the repo and into a subdirectory.
### Expected Behavior:
`npm version <version>` should continue to create a git-commit and git-tag as
indicated in the docs:
> If run in a git repo, it will also create a version commit and tag. This
> behavior is controlled by git-tag-version (see below), and can be disabled
> on the command line by running npm --no-git-tag-version version. It will
> fail if the working directory is not clean, unless the -f or --force flag is
> set.
### Steps To Reproduce:
1. initialize an npm package in the root of an initialized git repo
2. `npm version minor` successfully bumps the version, commits and tags
3. move the npm package into a subdirectory of the repo
4. `npm version minor` still bumps the version in package.json and package-lock.json, but git is not committed nor tagged.
### Environment:
* macOS 10.15.7
* node: v14.13.1
* npm: 6.14.8
* git: git version 2.28.0
This is apparently an existing bug going as far back as npm v3: npm/npm#18795
| 1 |
From version 0.9.0 to version 0.11.0 the "diagonal" on non-square pairplots
is empty.
Sample Code:
import seaborn as sns
import pandas as pd
from matplotlib import pyplot as plt
test_df = pd.DataFrame({"a": [1,2,3,4], "b": [3,4,5,6], "c": [5,3,6,1]})
sns.pairplot(test_df, x_vars=["a"], y_vars=["b", "c"])
plt.show()
The top graph is empty, although a scatter plot should be shown in its place,
as the two columns have matching indices.
|
Just updated to seaborn 0.11.0 and matplotlib 3.3.1.
Run this code:
iris = sns.load_dataset("iris")
sns.pairplot(data=iris, hue="species", y_vars='sepal_width');
sns.pairplot(data=iris, hue="species", y_vars='sepal_width', x_vars=['sepal_length', 'petal_length']);
And saw it:

And this problem appeared in all previously created pairplots that use the
y_vars parameter.
| 1 |
Hello,
I tried to use the clip function on a basic Panel but it seems that the axis
management is not working as expected. I believe this is an update to issue
#8344
import numpy as np
import pandas as pd
data = np.random.randn(3, 4, 5)
panel = pd.Panel(data)
panel.clip(0, 1)
If I run the script with pandas 0.16.2 on python 2.7.10 and 3.4.3, I'm getting
the following error:
.../pandas/core/generic.pyc in clip(self, lower, upper, out, axis)
3052 result = self
3053 if lower is not None:
-> 3054 result = result.clip_lower(lower, axis)
3055 if upper is not None:
3056 result = result.clip_upper(upper, axis)
.../pandas/core/generic.pyc in clip_lower(self, threshold, axis)
3103 raise ValueError("Cannot use an NA value as a clip threshold")
3104
-> 3105 subset = self.ge(threshold, axis=axis) | isnull(self)
3106 return self.where(subset, threshold, axis=axis)
3107
TypeError: f() got an unexpected keyword argument 'axis'
**Thanks a lot for all your efforts!**
|
Let's take the following example:
d = {'Item1' : pandas.DataFrame(numpy.random.randn(4, 3)),
'Item2' : pandas.DataFrame(numpy.random.randn(4, 2))}
p = pandas.Panel(d)
p.clip(0,1)
If I run the script with pandas 0.14.1 on both python 2.7.8 and 3.4.1 I'm
getting the following error:
Traceback (most recent call last):
File "Untitled.py", line 11, in <module>
p.clip(0,1)
File "/usr/local/lib/python2.7/site-packages/pandas/core/generic.py", line 2684, in clip
result = result.clip_lower(lower)
File "/usr/local/lib/python2.7/site-packages/pandas/core/generic.py", line 2722, in clip_lower
return self.where((self >= threshold) | isnull(self), threshold)
File "/usr/local/lib/python2.7/site-packages/pandas/core/ops.py", line 934, in f
self._constructor.__name__)
ValueError: Simple arithmetic with Panel can only be done with scalar values
Digging a little bit deeper showed that the following line (the | operator to
be precise) raises the error:
https://github.com/pydata/pandas/blob/1d65bc89d64c71f8d36f3ca92dd57db2efad7fdb/pandas/core/generic.py#L2796
| 1 |
### System information
* **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)** : NO
* **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)** : Windows 10 Build 16299.192 and Windows 7
* **TensorFlow installed from (source or binary)** : binary
* **TensorFlow version (use command below)** : 1.5.0rc1 and tf-nightly 1.6.0.dev20180124
* **Python version** : 3.6.2 and 3.5.2
* **Bazel version (if compiling from source)** :
* **GCC/Compiler version (if compiling from source)** :
* **CUDA/cuDNN version** : 8.0
* **GPU model and memory** : Nvidia GT 740M 2GB
* **Exact command to reproduce** : toco --help
### Describe the problem
I am trying to run the codelab tutorial of tensorflow lite. After installing
tf-nightly, when I try to run the command "toco --help", I get the error
ModuleNotFoundError: No module named 'tensorflow.contrib.lite.toco.python'.
I have tried this on 3 computers( all Windows) and the same problem persists.
### Source code / logs
C:\Users\HP\Downloads>toco --help
Traceback (most recent call last):
File "c:\programdata\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\programdata\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\Scripts\toco.exe\__main__.py", line 5, in <module>
ModuleNotFoundError: No module named 'tensorflow.contrib.lite.toco.python'
|
### System information
* **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)** : No
* **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)** : Windows 10
* **Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device** : n/a
* **TensorFlow installed from (source or binary)** : source, pip install --upgrade tf-nightly
* **TensorFlow version (use command below)** : GIT: 'v1.9.0-rc2-798-gc818bf016d', VERSION: '1.10.0-dev20180719'
* **Python version** : 3.6.6
* **Bazel version (if compiling from source)** :
* **GCC/Compiler version (if compiling from source)** :
* **CUDA/cuDNN version** :
* **GPU model and memory** :
* **Exact command to reproduce** :
### Python API
`import tensorflow as tf
converter = tf.contrib.lite.TocoConverter.from_keras_model_file("model.h5")
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)`
and
### Command-line
`toco`
You can collect some of this information using our environment capture script:
https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh
You can obtain the TensorFlow version with
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
### Describe the problem
I'm trying to convert a keras model to the tflite format, but the sample code
provided on the website doesn't seem to work. As advised in my other issue, I
installed tf-nightly. I used a venv to get a clean environment but for some
reason, other issues have arisen. In the case of the Python API, 'lite' seems
to be missing from 'tensorflow.contrib' whereas when I run 'toco' from the
command line, it raises a ModuleNotFoundError as shown below. Any help would
be greatly appreciated.
### Source code / logs
### Python API
`Traceback (most recent call last): File "sandbox/run.py", line 3, in <module>
converter = tf.contrib.lite.TocoConverter.from_keras_model_file("model.h5")
File "C:\beta\lib\site-packages\tensorflow\python\util\lazy_loader.py", line
54, in __getattr__ return getattr(module, item) AttributeError: module
'tensorflow.contrib' has no attribute 'lite'`
### Command-line (running toco)
`Traceback (most recent call last): File
"c:\users\user\appdata\local\programs\python\python36\Lib\runpy.py", line 193,
in _run_module_as_main "__main__", mod_spec) File
"c:\users\user\appdata\local\programs\python\python36\Lib\runpy.py", line 85,
in _run_code exec(code, run_globals) File
"C:\beta\Scripts\toco.exe\__main__.py", line 5, in <module>
ModuleNotFoundError: No module named
'tensorflow.contrib.lite.python.tflite_convert'`
| 1 |
pip install -U git+https://github.com/pydata/pandas.git
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-
prototypes -fPIC -I/local/lib/python2.7/site-packages/numpy/core/include
-I/usr/include/python2.7 -c pandas/src/generated.c -o
build/temp.linux-i686-2.7/pandas/src/generated.o
gcc: error: pandas/src/generated.c: No such file or directory
|
`pip install git+https://github.com/pydata/pandas.git`
does currently not work because the .pyx files are not being cythonized (not
sure why). An easy fix is to include .c files in the git repo which should
make it easier for people to deploy.
I used a simple try: import cython in setup.py that cythonizes if cython is
installed and uses the .c files otherwise:
https://github.com/hddm-devs/hddm/blob/develop/setup.py#L4
| 1 |
Continuing from this discussion: https://discuss.atom.io/t/double-icon-when-
pinned/17320/24
I've got Atom installed on Windows 10. I open the application, then right
click the icon in the taskbar, then click Pin to Taskbar. This creates a
duplicate (second) icon on the taskbar. The first icon is the actual instance
of the app running, and it goes away when the app is closed. The second icon
stays permanently. If I click the second icon while Atom is closed, it adds
the first icon back to my taskbar.
Any ideas as to what I can do to fix this?
|
* run latest Windows 10
* pin either electron or atom to the taskbar
* click it to open the app
=> you end up having 2 icons
=> somehow the app when started does not get associated to the pinned entry in
the taskbar
=> it does not reproduce on Windows 8.x
I wonder if your change in `fb6c80d` could have an impact here. Imho it is
used to find out if an application belongs to the same process group or not:
https://msdn.microsoft.com/en-
us/library/windows/desktop/dd378422(v=vs.85).aspx
| 1 |
**Do you want to request a _feature_ or report a _bug_?**
Not sure tbh.
**What is the current behavior?**
I have global handler for unhandled errors:
window.addEventListener('error', (evt) => ...)
Now when I use the componentDidCatch function it gets called correctly when
render throws an exception but that global error event is also triggered - and
before the componentDidCatch call.
**What is the expected behavior?**
Since componentDidCatch handles the error I'd prefer if the global event
wasn't triggered, same as with a usual try-catch block.
Or is there at least some way to figure out from the evt object in the event
handler that the exception is caught by react?
I hope this made sense...
|
I'm trying to make use of componentDidCatch in the React 16 beta. I already
had a global window error handler which was working fine, but it unexpectedly
catches errors that I would expect componentDidCatch to have handled. That is,
component-local errors are being treated as window-global errors in dev
builds.
The problem seems to stem from `invokeGuardedCallbackDev` in
`ReactErrorUtils.js`. I think that this entire `__DEV__` block of code is
problematic. The stated rational is:
// In DEV mode, we swap out invokeGuardedCallback for a special version
// that plays more nicely with the browser's DevTools. The idea is to preserve
// "Pause on exceptions" behavior. Because React wraps all user-provided
// functions in invokeGuardedCallback, and the production version of
// invokeGuardedCallback uses a try-catch, all user exceptions are treated
// like caught exceptions, and the DevTools won't pause unless the developer
// takes the extra step of enabling pause on caught exceptions. This is
// unintuitive, though, because even though React has caught the error, from
// the developer's perspective, the error is uncaught.
This is misguided because it's not about pausing on exceptions, it's about
"pause on _uncaught_ exceptions." However, `componentDidCatch` makes
exceptions _caught_!
Rather than switching on prod vs dev and using try/catch in prod and window's
error handler in dev, React should always use try/catch, but rethrow if you
reach the root without hitting a componentDidCatch handler. This would
preserve the correct "pause on uncaught exceptions" behavior without messing
with global error handlers.
| 1 |
_From @AlecBoutin on May 18, 2016 19:55_
* VSCode Version: 1.1
The .tsx/.jsx auto-formatter adds an unnecessary space on the end of certain
dynamic attributes.
E.g. `<button id={fn()} />` becomes `<button id={fn() } />` after the file is
auto-formatted. A space is inserted after the closing parenthesis of the fn()
call.
The problem appears to be related to having parentheses in the attribute.
E.g. `<button id={""} />` is left unchanged by auto-format. `<button id={("")}
/>` becomes `<button id={("") } />` (the space is inserted).
I have also observed that the auto-formatter preserves the spaces in the id
attribute of `<button id={ ("") } />`
_Copied from original issue:microsoft/vscode#6498_
|
When I format the document, a space is added before the closing parenthesis.

It should not be added.
| 1 |
See the example below
>>> double = np.array([0.0], dtype=np.float64)[0]
>>> float = 0.0
>>> a = Variable(torch.FloatTensor(1))
>>> a + float
Variable containing:
1.00000e-34 *
1.3192
[torch.FloatTensor of size 1]
>>> float + a
Variable containing:
1.00000e-34 *
1.3192
[torch.FloatTensor of size 1]
>>> a + double
Variable containing:
1.00000e-34 *
1.3192
[torch.FloatTensor of size 1]
>>> double + a
array([[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
1.00000e-34 *
1.3192
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], dtype=object)
|
Pytorch shows the following inconsistent behaviour.
import numpy as np
import torch
from torch.autograd import Variable
1.0 + Variable(torch.ones(1))
# returns as expected
# Variable containing:
# 2
# [torch.FloatTensor of size 1]
np.sum(1.0) + Variable(torch.ones(1))
# returns an unexpected (depth of array is 32)
# array([[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
# 2
# [torch.FloatTensor of size 1]
# ]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], dtype=object)
# Switching their order
Variable(torch.ones(1)) + np.sum(1.0)
# returns the expected
# Variable containing:
# 2
# [torch.FloatTensor of size 1]
| 1 |
## Bug Report
**Current Behavior**
I am using
require("@babel/register")({
babelrc: false, // Tell babel-register to ignore the .babelrc file
presets: ["babel-preset-env", "babel-preset-react"],
plugins: [
"babel-plugin-transform-class-properties",
"babel-plugin-transform-object-rest-spread",
[
"babel-plugin-transform-runtime",
{
helpers: false,
polyfill: false,
regenerator: true
}
]
]
});
and all I get is an error when requiring the next file which says:
ReferenceError: Unknown option: .caller. Check out http://babeljs.io/docs/usage/options/ for more information about options.
at buildUnknownError (/Users/andy/Development/oa-content/app/node_modules/@babel/core/lib/config/validation/options.js:98:11)
at /Users/andy/Development/oa-content/app/node_modules/@babel/core/lib/config/validation/options.js:84:57
at Array.forEach (<anonymous>)
at validate (/Users/andy/Development/oa-content/app/node_modules/@babel/core/lib/config/validation/options.js:62:21)
at loadPrivatePartialConfig (/Users/andy/Development/oa-content/app/node_modules/@babel/core/lib/config/partial.js:28:48)
at loadFullConfig (/Users/andy/Development/oa-content/app/node_modules/@babel/core/lib/config/full.js:33:37)
at loadOptions (/Users/andy/Development/oa-content/app/node_modules/@babel/core/lib/config/index.js:18:34)
at OptionManager.init (/Users/andy/Development/oa-content/app/node_modules/@babel/core/lib/config/index.js:28:12)
at compile (/Users/andy/Development/oa-content/app/node_modules/@babel/register/lib/node.js:61:42)
at compileHook (/Users/andy/Development/oa-content/app/node_modules/@babel/register/lib/node.js:102:12)
I checked through the code: `@babel/register` adds the "caller" option to the transformOpts, which are then passed to the transform function, so I really don't know what's going on here. For some reason I just can't get it to work, and it's not making a lot of sense.
**Expected behavior/code**
The next require statement should just work.
**Environment**
* Babel version(s): ```
├─ @babel/cli@7.0.0-rc.3
├─ @babel/code-frame@7.0.0-beta.42
├─ @babel/core@7.0.0-beta.42
├─ @babel/generator@7.0.0-beta.42
├─ @babel/helper-annotate-as-pure@7.0.0-beta.42
├─ @babel/helper-builder-binary-assignment-operator-visitor@7.0.0-beta.42
├─ @babel/helper-builder-react-jsx@7.0.0-beta.42
├─ @babel/helper-call-delegate@7.0.0-beta.42
├─ @babel/helper-define-map@7.0.0-beta.42
├─ @babel/helper-explode-assignable-expression@7.0.0-beta.42
├─ @babel/helper-function-name@7.0.0-beta.42
├─ @babel/helper-get-function-arity@7.0.0-beta.42
├─ @babel/helper-hoist-variables@7.0.0-beta.42
├─ @babel/helper-module-imports@7.0.0-beta.42
├─ @babel/helper-module-transforms@7.0.0-beta.42
├─ @babel/helper-optimise-call-expression@7.0.0-beta.42
├─ @babel/helper-plugin-utils@7.0.0-beta.42
├─ @babel/helper-regex@7.0.0-beta.42
├─ @babel/helper-remap-async-to-generator@7.0.0-beta.42
├─ @babel/helper-replace-supers@7.0.0-beta.42
├─ @babel/helper-simple-access@7.0.0-beta.42
├─ @babel/helper-split-export-declaration@7.0.0-beta.42
├─ @babel/helper-wrap-function@7.0.0-beta.42
├─ @babel/helpers@7.0.0-beta.42
├─ @babel/highlight@7.0.0-beta.42
├─ @babel/node@7.0.0-rc.3
├─ @babel/plugin-proposal-async-generator-functions@7.0.0-beta.42
├─ @babel/plugin-proposal-class-properties@7.0.0-beta.42
├─ @babel/plugin-proposal-object-rest-spread@7.0.0-beta.42
├─ @babel/plugin-proposal-optional-catch-binding@7.0.0-beta.42
├─ @babel/plugin-proposal-unicode-property-regex@7.0.0-beta.42
├─ @babel/plugin-syntax-async-generators@7.0.0-beta.42
├─ @babel/plugin-syntax-class-properties@7.0.0-beta.42
├─ @babel/plugin-syntax-dynamic-import@7.0.0-beta.42
├─ @babel/plugin-syntax-jsx@7.0.0-beta.42
├─ @babel/plugin-syntax-object-rest-spread@7.0.0-beta.42
├─ @babel/plugin-syntax-optional-catch-binding@7.0.0-beta.42
├─ @babel/plugin-transform-arrow-functions@7.0.0-beta.42
├─ @babel/plugin-transform-async-to-generator@7.0.0-beta.42
├─ @babel/plugin-transform-block-scoped-functions@7.0.0-beta.42
├─ @babel/plugin-transform-block-scoping@7.0.0-beta.42
├─ @babel/plugin-transform-classes@7.0.0-beta.42
├─ @babel/plugin-transform-computed-properties@7.0.0-beta.42
├─ @babel/plugin-transform-destructuring@7.0.0-beta.42
├─ @babel/plugin-transform-dotall-regex@7.0.0-beta.42
├─ @babel/plugin-transform-duplicate-keys@7.0.0-beta.42
├─ @babel/plugin-transform-exponentiation-operator@7.0.0-beta.42
├─ @babel/plugin-transform-for-of@7.0.0-beta.42
├─ @babel/plugin-transform-function-name@7.0.0-beta.42
├─ @babel/plugin-transform-literals@7.0.0-beta.42
├─ @babel/plugin-transform-modules-amd@7.0.0-beta.42
├─ @babel/plugin-transform-modules-commonjs@7.0.0-beta.42
├─ @babel/plugin-transform-modules-systemjs@7.0.0-beta.42
├─ @babel/plugin-transform-modules-umd@7.0.0-beta.42
├─ @babel/plugin-transform-new-target@7.0.0-beta.42
├─ @babel/plugin-transform-object-super@7.0.0-beta.42
├─ @babel/plugin-transform-parameters@7.0.0-beta.42
├─ @babel/plugin-transform-react-display-name@7.0.0-beta.42
├─ @babel/plugin-transform-react-jsx-self@7.0.0-beta.42
├─ @babel/plugin-transform-react-jsx-source@7.0.0-beta.42
├─ @babel/plugin-transform-react-jsx@7.0.0-beta.42
├─ @babel/plugin-transform-regenerator@7.0.0-beta.42
├─ @babel/plugin-transform-runtime@7.0.0-beta.42
├─ @babel/plugin-transform-shorthand-properties@7.0.0-beta.42
├─ @babel/plugin-transform-spread@7.0.0-beta.42
├─ @babel/plugin-transform-sticky-regex@7.0.0-beta.42
├─ @babel/plugin-transform-template-literals@7.0.0-beta.42
├─ @babel/plugin-transform-typeof-symbol@7.0.0-beta.42
├─ @babel/plugin-transform-unicode-regex@7.0.0-beta.42
├─ @babel/polyfill@7.0.0-rc.3
├─ @babel/preset-env@7.0.0-beta.42
├─ @babel/preset-react@7.0.0-beta.42
├─ @babel/register@7.0.0-rc.3
├─ @babel/runtime@7.0.0-beta.42
├─ @babel/template@7.0.0-beta.42
├─ @babel/traverse@7.0.0-beta.42
├─ @babel/types@7.0.0-beta.42
└─ babel-eslint@8.2.5
├─ @babel/code-frame@7.0.0-beta.44
├─ @babel/generator@7.0.0-beta.44
├─ @babel/helper-function-name@7.0.0-beta.44
├─ @babel/helper-get-function-arity@7.0.0-beta.44
├─ @babel/helper-split-export-declaration@7.0.0-beta.44
├─ @babel/highlight@7.0.0-beta.44
├─ @babel/template@7.0.0-beta.44
├─ @babel/traverse@7.0.0-beta.44
└─ @babel/types@7.0.0-beta.44
and
├─ babel-code-frame@6.26.0
├─ babel-core@7.0.0-bridge.0
├─ babel-eslint@8.2.5
├─ babel-generator@6.26.1
├─ babel-helper-builder-binary-assignment-operator-visitor@6.24.1
├─ babel-helper-builder-react-jsx@6.26.0
├─ babel-helper-call-delegate@6.24.1
├─ babel-helper-define-map@6.26.0
├─ babel-helper-explode-assignable-expression@6.24.1
├─ babel-helper-function-name@6.24.1
├─ babel-helper-get-function-arity@6.24.1
├─ babel-helper-hoist-variables@6.24.1
├─ babel-helper-optimise-call-expression@6.24.1
├─ babel-helper-regex@6.26.0
├─ babel-helper-remap-async-to-generator@6.24.1
├─ babel-helper-replace-supers@6.24.1
├─ babel-helpers@6.24.1
├─ babel-loader@8.0.0-beta.3
├─ babel-messages@6.23.0
├─ babel-plugin-check-es2015-constants@6.22.0
├─ babel-plugin-react-require@3.0.0
├─ babel-plugin-syntax-async-functions@6.13.0
├─ babel-plugin-syntax-class-properties@6.13.0
├─ babel-plugin-syntax-exponentiation-operator@6.13.0
├─ babel-plugin-syntax-flow@6.18.0
├─ babel-plugin-syntax-jsx@6.18.0
├─ babel-plugin-syntax-object-rest-spread@7.0.0-beta.3
├─ babel-plugin-syntax-trailing-function-commas@6.22.0
├─ babel-plugin-transform-async-to-generator@6.24.1
├─ babel-plugin-transform-class-properties@6.24.1
├─ babel-plugin-transform-es2015-arrow-functions@6.22.0
├─ babel-plugin-transform-es2015-block-scoped-functions@6.22.0
├─ babel-plugin-transform-es2015-block-scoping@6.26.0
├─ babel-plugin-transform-es2015-classes@6.24.1
├─ babel-plugin-transform-es2015-computed-properties@6.24.1
├─ babel-plugin-transform-es2015-destructuring@6.23.0
├─ babel-plugin-transform-es2015-duplicate-keys@6.24.1
├─ babel-plugin-transform-es2015-for-of@6.23.0
├─ babel-plugin-transform-es2015-function-name@6.24.1
├─ babel-plugin-transform-es2015-literals@6.22.0
├─ babel-plugin-transform-es2015-modules-amd@6.24.1
├─ babel-plugin-transform-es2015-modules-commonjs@6.26.2
├─ babel-plugin-transform-es2015-modules-systemjs@6.24.1
├─ babel-plugin-transform-es2015-modules-umd@6.24.1
├─ babel-plugin-transform-es2015-object-super@6.24.1
├─ babel-plugin-transform-es2015-parameters@6.24.1
├─ babel-plugin-transform-es2015-shorthand-properties@6.24.1
├─ babel-plugin-transform-es2015-spread@6.22.0
├─ babel-plugin-transform-es2015-sticky-regex@6.24.1
├─ babel-plugin-transform-es2015-template-literals@6.22.0
├─ babel-plugin-transform-es2015-typeof-symbol@6.23.0
├─ babel-plugin-transform-es2015-unicode-regex@6.24.1
├─ babel-plugin-transform-exponentiation-operator@6.24.1
├─ babel-plugin-transform-flow-strip-types@6.22.0
├─ babel-plugin-transform-object-assign@6.22.0
├─ babel-plugin-transform-object-rest-spread@7.0.0-beta.3
├─ babel-plugin-transform-react-display-name@6.25.0
├─ babel-plugin-transform-react-jsx-self@6.22.0
├─ babel-plugin-transform-react-jsx-source@6.22.0
├─ babel-plugin-transform-react-jsx@6.24.1
├─ babel-plugin-transform-react-remove-prop-types@0.4.13
├─ babel-plugin-transform-regenerator@6.26.0
├─ babel-plugin-transform-strict-mode@6.24.1
├─ babel-preset-env@1.7.0
├─ babel-preset-flow@6.23.0
├─ babel-preset-react@6.24.1
├─ babel-register@6.26.0
│ └─ babel-core@6.26.3
├─ babel-runtime@6.26.0
├─ babel-template@6.26.0
├─ babel-traverse@6.26.0
├─ babel-types@6.26.0
└─ netlify-lambda@0.4.0
├─ babel-core@6.26.3
└─ babel-loader@7.1.4
Parts of my project use ```@babel/*``` and parts of it use ```babel-*```.
- Node/npm version: 9.11.1
- OS: Mac OS X 10.13.6 (17G65)
- Monorepo no
- How you are using Babel: register
**Possible Solution**
I guess option validation should ignore "caller", since `@babel/register` adds it internally.
|
## Bug Report
* I would like to work on a fix!
**Current behavior**
Code
declare global { namespace globalThis { var i18n: any; } }
export class i18n {}
triggers a scope error:
TypeError: Duplicate declaration "i18n"
1 | declare global { namespace globalThis { var i18n: any; } }
2 |
> 3 | export class i18n {}
| ^^^^
249 | }
250 |
> 251 | return new Error(msg);
| ^
252 | }
253 |
254 | }
at File.buildCodeFrameError (packages/babel-core/lib/transformation/file/file.js:251:12)
at Scope.checkBlockScopedCollisions (packages/babel-traverse/lib/scope/index.js:422:22)
at Scope.registerBinding (packages/babel-traverse/lib/scope/index.js:582:16)
at Scope.registerDeclaration (packages/babel-traverse/lib/scope/index.js:527:12)
at Object.BlockScoped (packages/babel-traverse/lib/scope/index.js:250:12)
at Object.newFn (packages/babel-traverse/lib/visitors.js:216:17)
at NodePath._call (packages/babel-traverse/lib/path/context.js:55:20)
at NodePath.call (packages/babel-traverse/lib/path/context.js:38:14)
at NodePath.visit (packages/babel-traverse/lib/path/context.js:90:31)
at TraversalContext.visitQueue (packages/babel-traverse/lib/context.js:116:16)
* REPL
**Input Code**
As described above.
**Expected behavior**
This piece of code is valid in TypeScript.
**Babel Configuration (babel.config.js, .babelrc, package.json#babel, cli
command, .eslintrc)**
* Filename: `babel.config.js`
{
"presets": ["@babel/preset-typescript"]
}
**Environment**
* Babel version(s): [e.g. v7.12.0]
* Node/npm version: [e.g. Node 12/npm 7]
* OS: [e.g. macOS 10.15.4, Windows 10]
* Monorepo: [e.g. yes/no/Lerna]
* How you are using Babel: [e.g. `webpack`, `rollup`, `parcel`, `babel-register`]
**Possible Solution**
**Additional context**
| 0 |
Describe what you were doing when the bug occurred:
1. Opened profiler
2. Recorded
3. Switched to waterfall
* * *
## Please do not remove the text below this line
DevTools version: 4.0.6-a39d9c3
Call stack: at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:11:11442
at Map.forEach ()
at commitIndex (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:11:11388)
at e.getRankedChartData (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:11:11921)
at xi (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:56:277807)
at Ha (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:55891)
at Xl (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:98281)
at Hl (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:84256)
at Fl (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:81286)
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:25364
Component stack: in xi
in div
in div
in div
in Ir
in Unknown
in n
in Unknown
in div
in div
in Wa
in ce
in be
in So
in Vl
|
PLEASE INCLUDE REPRO INSTRUCTIONS AND EXAMPLE CODE
I got this error when I click 'Ranked'.
* * *
## Please do not remove the text below this line
DevTools version: 4.0.4-3c6a219
Call stack: at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:11:11441
at Map.forEach ()
at commitIndex (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:11:11387)
at e.getRankedChartData (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:11:11920)
at _i (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:56:277123)
at Ha (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:55890)
at Xl (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:98280)
at Hl (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:84255)
at Fl (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:81285)
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:43:25363
Component stack: in _i
in div
in div
in div
in Or
in Unknown
in n
in Unknown
in div
in div
in Ha
in le
in ve
in ko
in Ul
| 1 |
"name": "doctrine/orm",
"version": "v2.4.8",
"source": {
"type": "git",
"url": "https://github.com/doctrine/doctrine2.git",
"reference": "464b5fdbfbbeb4a65465ac173c4c5d90960f41ff"
}
When the doctrine/ORM bundle is downloaded by composer (URL:
https://api.github.com/repos/doctrine/doctrine2/zipball/464b5fdbfbbeb4a65465ac173c4c5d90960f41ff)
the default (i.e. latest) version is always returned instead of the referenced one. I don't know whether other references are affected, too.
| Q | A
---|---
Bug report? | yes
Feature request? | no
BC Break report? | no
Hello the community,
A hunter from our Bug Bounty Program reported that when an input is manually
changed from text to an array on the form payload, he receives a 500 instead
of a proper violation.
For example, if `email=bob@lamouche.com` is changed to
`email[]=bob@lamouche.com` on an email field having an EmailValidator, we end
up with this exception thrown:
src/Symfony/Component/Validator/Constraints/EmailValidator.php, lines 49 to 51 in 3e4f978:
    if (!is_scalar($value) && !(is_object($value) && method_exists($value, '__toString'))) {
        throw new UnexpectedTypeException($value, 'string');
    }
This is obvious, and not really an issue since the field was manually changed, but it might create false positives in error logs / monitoring if abused.
Wouldn't it be cleaner to add a violation to the field, simply saying that the value is not a valid email?
| 0 |
I am upgrading Pandas from 0.8.1 to 0.10.1.dev-f7f7e13. My environment is Windows XP with: Python 2.7.3, NumPy 1.6.2, MPL 1.1.1, Pandas 0.10.1.dev-f7f7e13.
An application that worked fine on 0.8.1 now fails. I traced the root cause to filtering the duplicated index of a Series. Details in:
http://stackoverflow.com/questions/14395678/how-to-drop-extra-copy-of-duplicate-index-of-pandas-series
Simply put, the snippet below has two issues:
import pandas as pd
idx_tp = [('600809', '20061231'), ('600809', '20070331'),
          ('600809', '20070630'), ('600809', '20070331')]
dt = ['demo', 'demo', 'demo', 'demo']
idx = pd.MultiIndex.from_tuples(idx_tp, names=['STK_ID', 'RPT_Date'])
s = pd.Series(dt, index=idx)
# Issue 1: s[s.index.unique()] works well on 0.8.1 but not 0.10.1
# Issue 2: s.groupby(s.index).first() will crash on my machine
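On current pandas versions, one way to keep a single copy of each duplicated index entry is `Index.duplicated()` (a sketch of a workaround, not a fix for the 0.10.1 regression itself):

```python
import pandas as pd

idx_tp = [('600809', '20061231'), ('600809', '20070331'),
          ('600809', '20070630'), ('600809', '20070331')]
idx = pd.MultiIndex.from_tuples(idx_tp, names=['STK_ID', 'RPT_Date'])
s = pd.Series(['demo'] * 4, index=idx)

# Boolean-mask away every repeat after the first occurrence.
deduped = s[~s.index.duplicated(keep='first')]
assert deduped.index.is_unique
assert len(deduped) == 3
```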
|
#### Code:
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32
>>> pd.__version__
'0.20.1'
>>> import platform
>>> platform.platform()
'Windows-7-6.1.7601-SP1'
>>> import pandas as pd
>>> df = pd.read_csv(r'c:\tmp\中文.csv')
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-6-0cd6317422e5>", line 1, in <module>
df = pd.read_csv(r'c:\tmp\中文.csv')
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 655, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 405, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 762, in __init__
self._make_engine(self.engine)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 966, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 1582, in __init__
self._reader = parsers.TextReader(src, **kwds)
File "pandas\_libs\parsers.pyx", line 394, in pandas._libs.parsers.TextReader.__cinit__ (pandas\_libs\parsers.c:4209)
File "pandas\_libs\parsers.pyx", line 712, in pandas._libs.parsers.TextReader._setup_parser_source (pandas\_libs\parsers.c:8895)
OSError: Initializing from file failed
#### Problem description
Python 3.6 changed `sys.getfilesystemencoding()` to return "utf-8" instead of "mbcs"; see PEP 529.
#### How to fix
Here is the problem: parsers.pyx
if isinstance(source, basestring):
if not isinstance(source, bytes):
source = source.encode(sys.getfilesystemencoding() or 'utf-8')
The `source` parameter is our filename; on Python 3.6 it is encoded to 'utf-8' rather than the legacy 'mbcs', and is finally passed to open() in io.c:new_file_source, where it is interpreted as an mbcs string. So the "File not found" error is no surprise.
Maybe it should be Cython's responsibility on Python 3.6 to handle this by using the Unicode versions of the Windows APIs, but for now we can just replace sys.getfilesystemencoding() with "mbcs".
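A user-side workaround sketch (assuming the failure is only in the C parser's filename handling, not in reading the stream): open the file in Python, which uses the Unicode Windows APIs, and hand the open buffer to `read_csv` so the C parser never sees the path.

```python
import pandas as pd

def read_csv_unicode_path(path, **kwds):
    """Open the file in Python first so the C parser never sees the path."""
    with open(path, 'rb') as f:
        return pd.read_csv(f, **kwds)
```

With this helper, `read_csv_unicode_path(r'c:\tmp\中文.csv')` reads the file even when the path contains non-ASCII characters.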
| 0 |
For example:
var geometry = new THREE.BoxBufferGeometry( 1, 1, 1 );
var material = new THREE.MeshNormalMaterial();
var mesh = new THREE.Mesh( geometry, material );
var m = new THREE.Mesh().copy( mesh );
console.log( m.geometry ); // BufferGeometry with no attributes
console.log( m.material ); // MeshBasicMaterial
Likewise for `Line`, `LineSegments`, and `Point`.
`Sprite` does not copy the material.
|
Hi Friends!
SpriteMaterial.sizeAttenuation = false
does not seem to work as expected with `Raycaster`.
I suspect the sprite size (dimension) estimation may be a little off; the ray has a very hard time hitting sprites after `scale()` is applied, especially when scaling down.
I can upload simplified code that reproduces this problem a bit later.
* r96
##### Browser
* All of them
Thanks!
| 0 |
# I encountered an animation error
Here is the situation: I exported a glTF model with animation through the 3ds Max export plugin provided by Babylon.js. It plays correctly in the 3D viewer that comes with Windows and in the online glTF viewer provided by Babylon.js, but when I use three.js it plays incorrectly. It looks like a matrix transformation error. I have really tried to fix it; I don't know whether anyone has encountered a similar problem.
## In the three.js

## In the windows 3D viewer

## In the babylon.js sandbox

|
##### Description of the problem
I want to highlight the outline of my OBJ, and I based my code on https://threejs.org/examples/webgl_postprocessing_outline.html
The problem is: I need the background to stay transparent, but transparency is lost while the outline effect is active.
Before the outline effect, the background is transparent and I can see the red body background:

after outline effect

##### Three.js version
| 0 |
# Environment
Windows build number: not relevant
PowerToys version: not relevant
PowerToy module for which you are reporting the bug (if applicable): not relevant
# Steps to reproduce
Checkout PowerToys repo into the folder where you have nuget.config file (or
any parent of this folder contains that file).
Nuget.config file contains:
<config>
<add key="repositorypath" value="c:\CxCache" />
</config>
# Expected behavior
Nuget packages are resolved correctly and the build succeeds.
# Actual behavior
Packages are restored incorrectly and the build fails.
NuGet restore tries to use the configured cache path for packages and adds duplicate references to every project file.

Build will then complain that packages are missing

# Screenshots
|
Hi,
I want to be able to save split-window layouts, or even custom positions for windows, so that after a computer restart all those windows can be reopened together from a saved configuration.
It should be possible to retrieve both a long-term version of these layouts from user-specified files and the positions used the last time the windows were open together; long-term versions could obviously be deleted if the user wants.
If these could be accessed with some neat shortcuts it would be great. And if a user wants to open a single window independently, without using the saved layout, it should open as it already does (like full screen).
Thanks
P.S. I don't know if this is already possible somehow.
| 0 |
# Summary of the new feature/enhancement
I would like to be able to maximize windows to span across multiple monitors
and not just carve up each monitor with its own zones.
# Proposed technical implementation details (optional)
Ideally it would be great if I could just click the maximize button and have the window maximize across more than one screen.
For me I have 3 screens but I only want to have 2 screens linked to be 1
virtual big screen.
It would be great if we could create a virtual screen linking more than one
monitor and then be able to carve that up into zones.
|
# Support zones which span several monitors
Could you please add the capability to define zones across several monitors?
For instance I would like to define one single zone spanning two monitors
(this is my current setup with ultramon for instance).
Thanks very much
| 1 |


I have only added a space before the `<` on line 5.
Is this incorrect highlighting? I have removed all extensions, reinstalled VSCode, and set `javascript.validate.enable` to false; it didn't work.
Besides, I had installed `jshint` and `eslint` globally via `npm`, but the highlighting didn't change after I removed both of them.
VSCode version :

|
Ported from microsoft/TypeScript-Sublime-Plugin#285
Related to microsoft/TypeScript-Sublime-Plugin#265.
Issue:

Correct:

| 1 |
I'm doing some work in filtering and having a bit of a rough time. I went looking for some examples and I found this:
http://azitech.wordpress.com/2011/03/15/designing-a-butterworth-low-pass-filter-with-scipy/
It looks to me like a perfectly valid set of examples, and he's even generated
some example output. It looks reasonable.
So I download the code and try running it on my machine. I see this error:
/usr/lib/python2.7/dist-packages/scipy/signal/filter_design.py:288:
BadCoefficients: Badly conditioned filter coefficients (numerator): the
results may be meaningless
"results may be meaningless", BadCoefficients)
b=[ 1.50249921e-18], a=[ 1. -9.78671066 43.10310124 -112.50164481 192.7084543
-226.36430741 184.66074807 -103.30140929 37.92528936 -8.25143211
0.80791132]
And then the graph looks substantially different:

The thing that strikes me is that in the filter passbands are HUGELY
different. The example output he shows has a passband gain of 1. My passband
gain is 10^-5 so it's all attenuation.
I found this ticket #2140 and it mentioned problems with the "small enough to
zero out" number. I'm running a newer version of scipy which already has the
1e-14 threshold so that doesn't seem to be the problem.
This is really puzzling me.
|
The `iirfilter()` function internally generates filter prototypes in zpk
format, transforms them to tf format, and then back to zpk format:
z, p, k = typefunc(N)
b, a = zpk2tf(z, p, k)
b, a = lp2lp(b, a, wo=warped)
b, a = bilinear(b, a, fs=fs)
return tf2zpk(b, a)
But conversion to tf format introduces numerical errors, causing higher-order
filters to fail, even though the higher-order prototypes work fine. It should
use zpk format (or state-space format?) throughout, and these functions should
be changed to accept zpk format or be replaced by ones that do.
I started to implement this, but then realized it involves lots of changes to
lots of functions, some of which I'm not sure the best way to do, so
registering this issue in case others want to work on it, too.
For instance, matlab's bilinear function accepts tf, zpk, and ss formats, by
varying the number of input parameters, which I don't think is "Pythonic"
(though `lti()` does it). SciPy's bilinear function accepts `b` and `a` as
separate parameters, and then `fs`, so it can't be modified to use variable
number of input variables anyway, without breaking compatibility. A new
function could be created that takes zpk, or the existing function could take
a list as its first parameter instead, which could be zpk, tf, or ss, or
something else.
For comparison, Octave uses sftrans instead of lp2lp, and only accepts zpk
format.
| 1 |
I'm behind a proxy at work, and I'm having issues getting requests to use the
proxy. urllib2 is able to use it just fine, but requests fails. I've tried
both setting an environment variable (both HTTPS_PROXY and https_proxy) and
passing in a dict, but neither work.
I'm on OSX 10.7.5 using Python 2.7.3 and requests 1.1.0 installed in a
virtualenv via pip.
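For what it's worth, proxies can also be configured explicitly on a session instead of via environment variables; a minimal sketch (the proxy URL below is a placeholder):

```python
import requests

def build_session(proxy_url):
    """Return a Session that routes HTTP and HTTPS traffic through proxy_url."""
    s = requests.Session()
    s.proxies.update({'http': proxy_url, 'https': proxy_url})
    return s

session = build_session('https://proxy.com:8080')
# session.get('https://google.com') would now go through the proxy.
```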
(osx)gfairchild@stueyemac ~> set | grep -i proxy
HTTPS_PROXY=https://proxy.com:8080
https_proxy=https://proxy.com:8080
(osx)gfairchild@stueyemac ~> python
Python 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2
>>> r = urllib2.urlopen('https://google.com')
>>> print r.read()
<!doctype html><html itemscope="itemscope" itemtype="http://schema.org/WebPage"><head> ... (rest of the Google homepage HTML omitted; the point is that urllib2 retrieves the page through the proxy without a problem)
for(var l=c=0,m;l<b;++l)m=k[l],m.complete||"string"!=typeof m.src||!m.src?++c:m.addEventListener?(m.addEventListener("load",h,!1),m.addEventListener("error",h,!1)):(m.attachEvent("onload",h),m.attachEvent("onerror",h));d=b-c;
function n(){if(google.timers.load.t){google.timers.load.t.ol=(new Date).getTime();google.timers.load.t.iml=e;google.kCSI.imc=c;google.kCSI.imn=b;google.kCSI.imp=d;void 0!==google.stt&&(google.kCSI.stt=google.stt);google.csiReport&&google.csiReport()}}window.addEventListener?window.addEventListener("load",n,!1):window.attachEvent&&window.attachEvent("onload",n);google.timers.load.t.prt=e=(new Date).getTime();})();
</script></body></html>
>>> import requests
>>> r = requests.get('https://google.com')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/sessions.py", line 279, in request
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/sessions.py", line 374, in send
r = adapter.send(request, **kwargs)
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/adapters.py", line 209, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='proxy.com', port=8080): Max retries exceeded with url: https://google.com/ (Caused by <class 'socket.error'>: [Errno 54] Connection reset by peer)
>>> p = {'https': 'https://proxy.com:8080',}
>>> r = requests.get('https://google.com', proxies=p)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/sessions.py", line 279, in request
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/sessions.py", line 374, in send
r = adapter.send(request, **kwargs)
File "/Users/gfairchild/Documents/work/compepi/Twitter/lib/osx/lib/python2.7/site-packages/requests/adapters.py", line 209, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='proxy.com', port=8080): Max retries exceeded with url: https://google.com/ (Caused by <class 'socket.error'>: [Errno 54] Connection reset by peer)
|
Sending a POST request with an empty body read from an empty file and a
`Content-Length: 0` header times out after a long time instead of doing normal
processing.
## Expected Result
Sending an empty body from a file with `Content-Length: 0` should send the
request and return a result in a similar amount of time as a non-empty
request, or a request without `Content-Length: 0`, or an empty body provided
as a string.
For example:
$ cat repro2.py
import requests, tempfile
with tempfile.NamedTemporaryFile() as tf:
print requests.Session().send(requests.Request('POST', 'https://www.google.com', data=open(tf.name)).prepare())
$ python repro2.py
<Response [405]>
Or using a string body:
$ cat repro3.py
import requests, tempfile
print requests.Session().send(requests.Request('POST', 'https://www.google.com', headers={'Content-Length': u'0'}, data='').prepare())
$ python repro3.py
<Response [405]>
Or a non-empty file:
$ cat repro5.py
import requests, tempfile
with tempfile.NamedTemporaryFile() as tf:
tf.write('x')
tf.flush()
print requests.post('https://www.google.com', headers={'Content-Length': u'0'}, data=open(tf.name))
$ python repro5.py
<Response [405]>
## Actual Result
However, with the specific combination of a `Content-Length: 0` header and an
empty open file descriptor, the request times out after 4 minutes with an
error.
$ pip freeze | grep requests
requests==2.18.2
$ cat repro1.py
import requests, tempfile
with tempfile.NamedTemporaryFile() as tf:
requests.Session().send(requests.Request('POST', 'https://www.google.com', headers={'Content-Length': u'0'}, data=open(tf.name)).prepare())
$ time python repro1.py
Traceback (most recent call last):
File "repro1.py", line 4, in <module>
requests.Session().send(requests.Request('POST', 'https://www.google.com', headers={'Content-Length': u'0'}, data=open(tf.name)).prepare())
File "/home/ubuntu/env/lib/python2.7/site-packages/requests/sessions.py", line 612, in send
r = adapter.send(request, **kwargs)
File "/home/ubuntu/env/lib/python2.7/site-packages/requests/adapters.py", line 490, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''",))
real 4m0.480s
user 0m0.296s
sys 0m0.040s
$ cat repro4.py
import requests, tempfile
with tempfile.NamedTemporaryFile() as tf:
print requests.post('https://www.google.com', headers={'Content-Length': u'0'}, data=open(tf.name))
$ time python repro4.py
Traceback (most recent call last):
File "repro4.py", line 4, in <module>
print requests.post('https://www.google.com', headers={'Content-Length': u'0'}, data=open(tf.name))
File "/home/ubuntu/env/lib/python2.7/site-packages/requests/api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "/home/ubuntu/env/lib/python2.7/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/home/ubuntu/env/lib/python2.7/site-packages/requests/sessions.py", line 502, in request
resp = self.send(prep, **send_kwargs)
File "/home/ubuntu/env/lib/python2.7/site-packages/requests/sessions.py", line 612, in send
r = adapter.send(request, **kwargs)
File "/home/ubuntu/env/lib/python2.7/site-packages/requests/adapters.py", line 490, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''",))
real 4m5.474s
user 0m0.288s
sys 0m0.036s
## Reproduction Steps
import requests, tempfile
with tempfile.NamedTemporaryFile() as tf:
print requests.post('https://www.google.com', headers={'Content-Length': u'0'}, data=open(tf.name))
## System Information
$ python -m requests.help
{
"chardet": {
"version": "3.0.4"
},
"cryptography": {
"version": "1.5.2"
},
"implementation": {
"name": "CPython",
"version": "2.7.13"
},
"platform": {
"release": "4.4.0-78-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "1000207f",
"version": "16.1.0"
},
"requests": {
"version": "2.18.2"
},
"system_ssl": {
"version": "1000207f"
},
"urllib3": {
"version": "1.22"
},
"using_pyopenssl": true
}
## Workaround
Check whether the file is empty before sending and, if it is, pass an empty
string instead of the file object.
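The workaround above can be sketched like this (`body_or_empty` is a hypothetical helper name, not part of requests):

```python
import os

def body_or_empty(f):
    # If the open file is empty, return '' so requests sends a plain
    # zero-length body instead of streaming from the file object.
    if os.fstat(f.fileno()).st_size == 0:
        return ''
    return f
```

Then `requests.post(url, data=body_or_empty(open(path)))` behaves like the empty-string case above.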
| 0 |
Clicking the "close map aside" button does not collapse the map aside; it
stays open on the side of the screen, overlapping all other pages.
|
#### Issue Description
When a user signs in alternately with email and GitHub, the activity and
successful challenges are not linked between the two, although the user is the
same.
| 0 |
##### Issue Type:
* Bug Report
##### Ansible Version:
ansible 1.9.2
configured module search path = None
##### Ansible Configuration:
# inventory
127.0.0.1
##### Environment:
CentOS 6.6
##### Summary:
this issue is similar to #11695
I have a case where ansible wants to use an ssh connection if I run ansible-
playbook with the argument `"--connection=local"` or `"-c local"`.
We have this problem too if I set `"connection: local"` in the playbook file.
The only way it works is to set `"ansible_connection=local"` in the inventory
file, or to pass an extra argument to the ansible-playbook command:
`"-e ansible_connection=local"`.
##### Steps To Reproduce:
---
- name: Test ping on localhost
hosts: localhost
connection: local
tasks:
- name: Ping!
ping:
run: `ansible-playbook -v test.yml`
Remove the line connection: `local from test.yml`
run: `ansible-playbook -v test.yml --connection=local`
##### Expected Results:
Ansible being able to run locally without the need to have ssh installed.
##### Actual Results:
Ansible doesn't require ssh when ansible_connection=local is defined in the
hosts file.
Ansible requires ssh for a playbook when run with flag --connection=local.
Ansible requires ssh for a playbook when declared connection: local in the
playbook.
|
##### Issue Type:
* Bug Report
##### Ansible Version:
ansible 1.9.2
configured module search path = None
##### Ansible Configuration:
# /etc/ansible/hosts
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python2
##### Environment:
Archlinux (running ansible locally)
##### Summary:
Ansible can't run playbooks locally without ssh if `ansible_connection=local`
is defined in the hosts file, although it can run playbooks locally without
ssh with `connection: local` in the playbook or with flag
`--connection=local`.
##### Steps To Reproduce:
* Create `test.yml`:
---
- name: Test ping on localhost
hosts: localhost
connection: local
tasks:
- name: Ping!
ping:
* run: `ansible-playbook -v test.yml`
* Remove the line `connection: local` from `test.yml`
* run: `ansible-playbook -v test.yml --connection=local`
Notice that until now it works perfectly fine.
* Run `ansible-playbook -v test.yml` and notice output:
Traceback (most recent call last):
File "/usr/bin/ansible", line 197, in <module>
(runner, results) = cli.run(options, args)
File "/usr/bin/ansible", line 163, in run
extra_vars=extra_vars,
File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 233, in __init__
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
* Install `ssh` (e.g. on archlinux: `pacman -S openssh`) and run again the command `ansible-playbook -v test.yml`. Notice it works again.
##### Expected Results:
Ansible being able to run locally without the need to have ssh installed.
##### Actual Results:
Ansible **requires ssh** when `ansible_connection=local` is defined in the
hosts file.
Ansible **doesn't require ssh** for a playbook when run with flag
`--connection=local`.
Ansible **doesn't require ssh** for a playbook when declared `connection:
local` in the playbook.
| 1 |
Not sure if this is supposed to be like this.
original issue here: pandas-dev/pandas#7332
cross post to numexpr here:
https://code.google.com/p/numexpr/issues/detail?id=126
essentially, in pandas we're looking up a `dtype.type` in a cython dictionary;
turns out that for `int64` (and `int32`, but NOT `int64` on 32-bit platforms),
the hashes are DIFFERENT, but the same for other dtypes (including `float64`).
Is this maybe an implementation detail on `numexpr` and/or incorrect usage of
`dtype.type` and/or invalid guarantees on this object?
FYI, we switched to using `dtype.name` for the lookup and have had no issues.
In [20]: import numexpr as ne
In [21]: import numpy as np
In [22]: ne.__version__
Out[22]: '2.4'
In [23]: np.__version__
Out[23]: '1.8.1'
In [24]: a = np.arange(10,dtype='int64')
In [25]: b = np.arange(10,dtype='int64')
In [26]: result_ne = ne.evaluate('a+b')
In [27]: result_numpy = a+b
In [28]: (result_ne == result_numpy).all()
Out[28]: True
In [29]: result_ne.dtype.type
Out[29]: numpy.int64
In [30]: result_numpy.dtype.type
Out[30]: numpy.int64
In [31]: hash(result_ne.dtype.type)
Out[31]: 8768103730016
In [32]: hash(result_numpy.dtype.type)
Out[32]: 8768103729990
For floats, though, the hashes are the same:
In [1]: a = np.arange(10.)
In [2]: b = np.arange(10.)
In [4]: hash(ne.evaluate('a+b').dtype.type)
Out[4]: 8751212391216
In [5]: hash((a+b).dtype.type)
Out[5]: 8751212391216
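The `dtype.name` workaround mentioned above can be sketched like this; keying the lookup by the plain string name sidesteps the hash/identity of the scalar type objects (the `handlers` dict is purely illustrative):

```python
import numpy as np

# dtype.name is a plain string, so it hashes consistently no matter
# which library (numpy, numexpr, ...) produced the result array.
handlers = {'int64': 'int64-handler', 'float64': 'float64-handler'}

a = np.arange(10, dtype='int64')
b = np.arange(10, dtype='int64')
key = (a + b).dtype.name  # the string 'int64'
assert handlers[key] == 'int64-handler'
```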
|
The current documentation of enumerated types makes me think there are 8
values possible for integers ((signed, unsigned) * (8, 16, 32, 64) bits).
However, if I dump the values of the enumeration I have results that are not
consistent with the documentation:
Windows x64 (with `sizeof(long) = 4`)
NPY_BYTE = 1
NPY_UBYTE = 2
NPY_INT8 = 1
NPY_UINT8 = 2
NPY_SHORT = 3
NPY_USHORT = 4
NPY_INT16 = 3
NPY_UINT16 = 4
NPY_INT = 5
NPY_UINT = 6
NPY_INT32 = 7
NPY_UINT32 = 8
NPY_LONG = 7
NPY_ULONG = 8
NPY_INT64 = 9
NPY_UINT64 = 10
NPY_LONGLONG = 9
NPY_ULONGLONG = 10
`NPY_INT` and `NPY_INT32` differ (the same for `NPY_UINT` and `NPY_UINT32`).
Linux x64 (with `sizeof(long) = 8`)
NPY_BYTE = 1
NPY_UBYTE = 2
NPY_INT8 = 1
NPY_UINT8 = 2
NPY_SHORT = 3
NPY_USHORT = 4
NPY_INT16 = 3
NPY_UINT16 = 4
NPY_INT = 5
NPY_UINT = 6
NPY_INT32 = 5
NPY_UINT32 = 6
NPY_LONG = 7
NPY_ULONG = 8
NPY_INT64 = 7
NPY_UINT64 = 8
NPY_LONGLONG = 9
NPY_ULONGLONG = 10
Here, `NPY_INT64` and `NPY_LONGLONG` differ (the same for `NPY_UINT64` and
`NPY_ULONGLONG`).
In both cases (Windows and Linux) we have 10 values for integer instead of 8.
Is that expected? I'm using numpy v1.13.3.
cc @wolfv @SylvainCorlay
| 1 |
Hey,
I had to describe the issue in one line; here is how to reproduce it:
DELETE campaigns
PUT campaigns
{
"mappings" : {
"campaign" : {
"properties" : {
"location" : {
"type": "geo_shape",
"tree": "quadtree"
}
}
}
}
}
POST /campaigns/campaign/1
{
"location" : {
"type" : "circle",
"coordinates" : [45.01, 2.26],
"radius" : "9000m"
}
}
POST campaigns/campaign/
{
"_id": { "c_id": "1891"},
"location" : {
"type" : "circle",
"coordinates" : [45.01, 2.26],
"radius" : "9000m"
}
}
GET campaigns/campaign/_search
GET campaigns/campaign/_search
{
"query":{
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"geo_shape": {
"location": {
"relation": "intersects",
"shape": {
"type": "circle",
"coordinates" : [45.01001, 2.26],
"radius":"1m"
}
}
}
}
}
}
}
You can see the difference between match all and the geo shape query, even
though the points are the same. If you remove the `_id` field from the
document, everything works, so maybe the IdFieldMapper throws an Exception
(rightfully, as an id cannot be an object) and then the rest of the document
is not indexed.
If you replace the `_id` object with a string, you get an exception saying
that the ID values don't match, as one ID is autogenerated. Not sure if this
is the desired behaviour.
|
Currently routing and ID values can be passed in the query string and url
respectively, but they can also be extracted from fields within the document.
This has a significant performance impact because each doc needs to be parsed
on the node which receives the index/get/update/delete request in order to
know which node should handle the request.
On top of that, there are clashes between (eg) routing values set in fields
and parent settings.
We should deprecate the functionality to extract these values from fields, and
make it the responsibility of the client code instead.
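For illustration, client code that takes on this responsibility just puts the routing value in the query string instead of relying on the server to extract it from a field (the index name and routing value here are made up):

```python
import json

def index_request(index, doc_type, doc_id, body, routing):
    # The coordinating node can route on the query string alone,
    # without having to parse the document body.
    url = '/{0}/{1}/{2}?routing={3}'.format(index, doc_type, doc_id, routing)
    return url, json.dumps(body)

url, payload = index_request('campaigns', 'campaign', '1',
                             {'user': 'kimchy'}, routing='kimchy')
```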
It should still be possible to set `_routing` to required. Perhaps we should
set this automatically if the user ever passes in a routing or parent value at
index time?
Relates to #8870
| 1 |
EDIT: Summary: `B = A; (t->B()).(spzeros(10))` constructs a sparse vector of
eltype `Any`, while `(t->A()).(spzeros(10))` correctly constructs a sparse
vector of eltype `A`.
I hope this is not a duplicate.
julia> struct A end
julia> function test(x)
B = A
(t->B()).(x)
end
julia> test(zeros(1))
1-element Array{A,1}:
A()
all seems ok. However,
julia> using SparseArrays
julia> test(spzeros(1))
1-element SparseVector{Any,Int64} with 1 stored entry:
[1] = A()
So the eltype could not be inferred. Note that
julia> (t->A()).(spzeros(1))
1-element SparseVector{A,Int64} with 1 stored entry:
[1] = A()
has no problem whatsoever.
|
I could produce the following segfault with fresh Julia 1.7.0 installs on two
different HPC clusters (redhat enterprise) and a regular desktop machine
(ubuntu). Note that while on one machine it would already segfault for `N=5`
on a different machine I had to set `N>20`. So try varying this if you can't
reproduce.
(Julia started with multiple threads, e.g. `julia -t 8`)
julia> n = 1000;
julia> N = 20;
julia> Threads.@threads for i in 1:N
A = randn(Float64, n, n); inv(A);
end
signal (11): Segmentation fault
in expression starting at REPL[2]:1
dgetrf_parallel at /cm/shared/apps/pc2/EB-SW/software/Julia/1.7.0-linux-x86_64/bin/../lib/julia/libopenblas64_.so (unknown line)
dgetrf_parallel at /cm/shared/apps/pc2/EB-SW/software/Julia/1.7.0-linux-x86_64/bin/../lib/julia/libopenblas64_.so (unknown line)
dgetrf_parallel at /cm/shared/apps/pc2/EB-SW/software/Julia/1.7.0-linux-x86_64/bin/../lib/julia/libopenblas64_.so (unknown line)
dgetrf_parallel at /cm/shared/apps/pc2/EB-SW/software/Julia/1.7.0-linux-x86_64/bin/../lib/julia/libopenblas64_.so (unknown line)
dgetrf_parallel at /cm/shared/apps/pc2/EB-SW/software/Julia/1.7.0-linux-x86_64/bin/../lib/julia/libopenblas64_.so (unknown line)
dgetrf_64_ at /cm/shared/apps/pc2/EB-SW/software/Julia/1.7.0-linux-x86_64/bin/../lib/julia/libopenblas64_.so (unknown line)
getrf! at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/lapack.jl:575
#lu!#146 at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/lu.jl:81 [inlined]
lu!##kw at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/lu.jl:81 [inlined]
#lu#153 at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/lu.jl:279 [inlined]
lu at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/lu.jl:278 [inlined]
lu at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/lu.jl:278 [inlined]
inv at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/dense.jl:876
macro expansion at ./REPL[2]:2 [inlined]
#40#threadsfor_fun at ./threadingconstructs.jl:85
#40#threadsfor_fun at ./threadingconstructs.jl:52
unknown function (ip: 0x1554f0112d5f)
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1788 [inlined]
start_task at /buildworker/worker/package_linux64/build/src/task.c:877
Allocations: 4321396 (Pool: 4319432; Big: 1964); GC: 5
Segmentation fault (core dumped)
The segfault disappears when one sets `BLAS.set_num_threads(1)`.
Likely related to #43301 and #43008 (caused by the same issue?). However, not
really a duplicate because these issues are all about macOS and also `Float64`
seems to be fine there.
(Let me note that this is the result of me trying to create a MWE. Originally
I encountered this segfault and `StackOverflowError`s (as in the issues linked
above) as part of CI testing a private package with 1.6.4 and 1.7.0. See
https://discourse.julialang.org/t/inv-causes-stack-overflow-on-
julia-1-7-0-and-mac-os/72411/10). When switching back to 1.6.3 or 1.7.0-rc1
the issues went away.)
| 0 |
**TypeScript Version:**
1.7.5 / 1.8.0-beta / nightly (1.9.0-dev.20160217)
**Code**
// A self-contained demonstration of the problem follows...
switch (true) {
case true:
let foo = true;
}
switch (true) {
case true:
let foo = false;
}
console.log(foo);
// Static type checking correctly says, "Cannot find name 'foo'"
// but compiler transforms it anyway and does not perform hygiene
// on each foo variable.
// Should be Uncaught ReferenceError: foo is not defined
**Expected behavior:**
Identifier hygiene is performed on each lexically bound `let foo` to their
parent `SwitchBlock`.
Output example of expected:
switch (true) {
case true:
var foo_1 = true;
}
switch (true) {
case true:
var foo_2 = false;
}
console.log(foo);
// Uncaught ReferenceError: foo is not defined
**Actual behavior:**
Static type checker sees issue, but transform does not perform hygiene so no
`ReferenceError` happens and instead `console.log(foo) === false`.
The same problem also applies even if you provide explicit blocks inside each
`CaseClause|DefaultClause`, which is even more unexpected:
switch (true) {
case true: {
let foo = true;
}
}
switch (true) {
case true: {
let foo = false;
}
}
console.log(foo);
// false but should be Uncaught ReferenceError: foo is not defined
One thing to note is that if the variable declaration inside the `SwitchBlock`
shadows, then it _will_ correctly perform hygiene:
let foo = 'this works correctly';
switch (true) {
case true:
let foo = true;
}
switch (true) {
case true:
let foo = false;
}
console.log(foo);
// "this works correctly"
|
**TypeScript Version:**
1.8.7
**Code**
// A self-contained demonstration of the problem follows...
class A {
public propA: any = {};
private propB: number = 1;
}
class B implements A {
public propA: any;
private propB: number; // here
}
applyMixin(A, [B]);
**Expected behavior:**
Compile success
**Actual behavior:**
Compile failed.
When `private propB: number;` exists, `tsc` fails with:
error TS2420: Class 'A' incorrectly implements interface 'B'.
Types have separate declarations of a private property 'propB'.
When `private propB: number;` is removed, `tsc` fails with:
error TS2420: Class 'A' incorrectly implements interface 'B'.
Property 'propB' is missing in Type 'A'.
| 0 |
## Steps to Reproduce
1. Create new Android Studio Flutter project
2. Run it in simulator to check that it's fine.
3. Create a duplicate of `lib/main.dart`. Let's call it `lib/main_alternate.dart`.
4. Create a new Run/Debug configuration in IntelliJ

5. Run `main_alternate.dart`
You'll see that this confuses Flutter. I've had instances when this worked
well, but other instances when `main.dart` was clearly running instead of
`main_alternate.dart`. For example, breakpoints in `main_alternate.dart`
weren't hit while breakpoints in `main.dart` were. Despite this, Flutter
console said `Launching lib/main_alternate.dart`.
## Flutter Doctor
Paste the output of running `flutter doctor -v` here.
$ flutter doctor -v
[✓] Flutter (Channel master, v0.3.3-pre.8, on Mac OS X 10.13.3 17D102, locale en-US)
• Flutter version 0.3.3-pre.8 at /Users/filiph/dev/flutter
• Framework revision 36cf1158ec (2 hours ago), 2018-04-20 17:39:32 -0700
• Engine revision 232060828a
• Dart version 2.0.0-dev.48.0.flutter-fe606f890b
[✓] Android toolchain - develop for Android devices (Android SDK 27.0.2)
• Android SDK at /Users/filiph/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-27, build-tools 27.0.2
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 9.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 9.2, Build version 9C40b
• ios-deploy 1.9.2
• CocoaPods version 1.4.0
[✓] Android Studio (version 3.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 23.2.2
• Dart plugin version 173.4700
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
[✓] IntelliJ IDEA Community Edition (version 2017.3.3)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 20.0.3
• Dart plugin version 173.4127.31
[✓] VS Code (version 1.22.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Dart Code extension version 2.9.2
[✓] Connected devices (1 available)
• iPhone X • 9072CF25-2137-4EB4-984F-EB3DC9A3F418 • ios • iOS 11.2 (simulator)
• No issues found!
|
Well, I think that's not my last issue here, but I have one more 😅
The problem is that I cannot use Firebase Auth normally. When I tap the
button in my UI that should let me choose a Google account, I get an error.
**Before all: I have two different methods to sign in with Google. One I got
from the firebase_auth page on the Dart site and another from the example.**
## My Code:
import 'package:flutter/material.dart';
import 'package:firebase_auth/firebase_auth.dart';
import 'package:google_sign_in/google_sign_in.dart';
import 'dart:async';
final FirebaseAuth _auth = FirebaseAuth.instance;
final GoogleSignIn _googleSignIn = new GoogleSignIn();
Future<FirebaseUser> _handleSignIn() async {
GoogleSignInAccount googleUser = await _googleSignIn.signIn();
GoogleSignInAuthentication googleAuth = await googleUser.authentication;
FirebaseUser user = await _auth.signInWithGoogle(
accessToken: googleAuth.accessToken,
idToken: googleAuth.idToken,
);
print("signed in " + user.displayName);
return user;
}
Future<String> _signInWithGoogle() async {
final GoogleSignInAccount googleUser = await _googleSignIn.signIn();
final GoogleSignInAuthentication googleAuth =
await googleUser.authentication;
final FirebaseUser user = await _auth.signInWithGoogle(
accessToken: googleAuth.accessToken,
idToken: googleAuth.idToken,
);
assert(user.email != null);
assert(user.displayName != null);
assert(!user.isAnonymous);
assert(await user.getIdToken() != null);
final FirebaseUser currentUser = await _auth.currentUser();
assert(user.uid == currentUser.uid);
return 'signInWithGoogle succeeded: $user';
}
class _LoginPageState extends State<LoginPage> {
Future<String> _message = new Future<String>.value('');
@override
Widget build(BuildContext context) {
return new Scaffold(
body: RaisedButton(
child: Text('NEXT'),
textColor: Colors.white,
onPressed: () {
setState(() {
_message = _signInWithGoogle();
});
if (_auth.currentUser() != null) { Navigator.pop(context); }
}
)
);
}
}
class LoginPage extends StatefulWidget {
@override
_LoginPageState createState() => new _LoginPageState();
}
## Log
[{"event":"app.progress","params":{"appId":"something tells me that i shouldn't paste it here","id":"6","progressId":"hot.restart","message":"Performing hot restart..."}}]Performing hot restart...
E/flutter (28769): [ERROR:topaz/lib/tonic/logging/dart_error.cc(16)] Unhandled exception:
E/flutter (28769): PlatformException(sign_in_failed, Status{statusCode=DEVELOPER_ERROR, resolution=null}, null)
E/flutter (28769): #0 StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:547:7)
E/flutter (28769): #1 MethodChannel.invokeMethod (package:flutter/src/services/platform_channel.dart:279:18)
E/flutter (28769): <asynchronous suspension>
E/flutter (28769): #2 GoogleSignIn._callMethod (package:google_sign_in/google_sign_in.dart:185:58)
E/flutter (28769): <asynchronous suspension>
E/flutter (28769): #3 GoogleSignIn._addMethodCall (package:google_sign_in/google_sign_in.dart:224:20)
E/flutter (28769): #4 GoogleSignIn.signIn (package:google_sign_in/google_sign_in.dart:295:48)
E/flutter (28769): #5 _handleSignIn (file:///C:/Users/Arsen/AndroidStudioProjects/travel_met/lib/login.dart:15:56)
E/flutter (28769): <asynchronous suspension>
E/flutter (28769): #6 _LoginPageState.build.<anonymous closure> (file:///C:/Users/Arsen/AndroidStudioProjects/travel_met/lib/login.dart:97:21)
E/flutter (28769): #7 _InkResponseState._handleTap (package:flutter/src/material/ink_well.dart:494:14)
E/flutter (28769): #8 _InkResponseState.build.<anonymous closure> (package:flutter/src/material/ink_well.dart:549:30)
E/flutter (28769): #9 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:102:24)
E/flutter (28769): #10 TapGestureRecognizer._checkUp (package:flutter/src/gestures/tap.dart:161:9)
E/flutter (28769): #11 TapGestureRecognizer.acceptGesture (package:flutter/src/gestures/tap.dart:123:7)
E/flutter (28769): #12 GestureArenaManager.sweep (package:flutter/src/gestures/arena.dart:156:27)
E/flutter (28769): #13 _WidgetsFlutterBinding&BindingBase&GestureBinding.handleEvent (package:flutter/src/gestures/binding.dart:147:20)
E/flutter (28769): #14 _WidgetsFlutterBinding&BindingBase&GestureBinding.dispatchEvent (package:flutter/src/gestures/binding.dart:121:22)
E/flutter (28769): #15 _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerEvent (package:flutter/src/gestures/binding.dart:101:7)
E/flutter (28769): #16 _WidgetsFlutterBinding&BindingBase&GestureBinding._flushPointerEventQueue (package:flutter/src/gestures/binding.dart:64:7)
E/flutter (28769): #17 _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerDataPacket (package:flutter/src/gestures/binding.dart:48:7)
E/flutter (28769): #18 _invoke1 (dart:ui/hooks.dart:134:13)
E/flutter (28769): #19 _dispatchPointerDataPacket (dart:ui/hooks.dart:91:5)
Restarted app in 4 492ms.
D/ViewRootImpl@7dec195[MainActivity](28769): ViewPostIme pointer 0
D/ViewRootImpl@7dec195[MainActivity](28769): ViewPostIme pointer 1
D/ViewRootImpl@7dec195[MainActivity](28769): MSG_WINDOW_FOCUS_CHANGED 0
D/ViewRootImpl@48809d0[SignInHubActivity](28769): setView = DecorView@7c9bdc9[SignInHubActivity] TM=true MM=false
D/ViewRootImpl@48809d0[SignInHubActivity](28769): dispatchAttachedToWindow
V/Surface (28769): sf_framedrop debug : 0x4f4c, game : false, logging : 0
D/ViewRootImpl@48809d0[SignInHubActivity](28769): Relayout returned: old=[0,0][0,0] new=[0,0][1080,1920] result=0x7 surface={valid=true 484893175808} changed=true
D/mali_winsys(28769): EGLint new_window_surface(egl_winsys_display *, void *, EGLSurface, EGLConfig, egl_winsys_surface **, egl_color_buffer_format *, EGLBoolean) returns 0x3000, [1080x1920]-format:1
D/OpenGLRenderer(28769): eglCreateWindowSurface = 0x70e8d08440
D/ViewRootImpl@48809d0[SignInHubActivity](28769): MSG_RESIZED_REPORT: frame=Rect(0, 0 - 1080, 1920) ci=Rect(0, 72 - 0, 0) vi=Rect(0, 72 - 0, 0) or=1
D/ViewRootImpl@48809d0[SignInHubActivity](28769): MSG_WINDOW_FOCUS_CHANGED 1
V/InputMethodManager(28769): Starting input: tba=android.view.inputmethod.EditorInfo@bc342e8 nm : com.arsnyan.some ic=null
I/InputMethodManager(28769): startInputInner - mService.startInputOrWindowGainedFocus
D/ViewRootImpl@48809d0[SignInHubActivity](28769): MSG_WINDOW_FOCUS_CHANGED 0
D/ViewRootImpl@7dec195[MainActivity](28769): MSG_WINDOW_FOCUS_CHANGED 1
V/InputMethodManager(28769): Starting input: tba=android.view.inputmethod.EditorInfo@7f65d01 nm : com.arsnyan.some ic=null
I/InputMethodManager(28769): startInputInner - mService.startInputOrWindowGainedFocus
I/FlutterActivityDelegate(28769): onResume setting current activity to this
D/OpenGLRenderer(28769): eglDestroySurface = 0x70e8d08440
D/ViewRootImpl@48809d0[SignInHubActivity](28769): Relayout returned: old=[0,0][1080,1920] new=[0,0][1080,1920] result=0x5 surface={valid=false 0} changed=true
D/ViewRootImpl@48809d0[SignInHubActivity](28769): dispatchDetachedFromWindow
D/InputEventReceiver(28769): channel '5d77edb com.arsnyan.some/com.google.android.gms.auth.api.signin.internal.SignInHubActivity (client)' ~ Disposing input event receiver.
D/InputEventReceiver(28769): channel '5d77edb com.arsnyan.some/com.google.android.gms.auth.api.signin.internal.SignInHubActivity (client)' ~NativeInputEventReceiver.
## Debugger says:
Picture 1
Picture 2
## Even when I changed something and used another method, I got the same log:
RaisedButton(
  child: Text('NEXT'),
  textColor: Colors.white,
  onPressed: () {
    _handleSignIn()
        .then((FirebaseUser user) {
          if (user.displayName != null) { Navigator.pop(context); }
        })
        .catchError((e) => print(e));
  },
),
Thanks in advance.
| 0 |
One thing that always bugs me when using `var_dump` (and now, with the great
VarDumper component, `dump()`) is removing all the calls after debugging. I
always have a hard time finding them and end up running `grep var_dump`, which
isn't very quick on a Windows PC in a project with many vendor files.
It would be very awesome if we could show the location of the dump call,
somewhere small in the top right corner of the dump box. I don't know if this
is possible; it's just something I would like to see.
|
Hi,
What about adding stack trace information when dumping?
I think it may help to see which were the steps to get there.
| 1 |
After installing the latest update:
Version 0.10.1
Commit `df35236`
Shell 0.34.1
Render 45.0.2454.85
Node 4.1.1
Just pasting text into a new file freezes the entire screen; only the menu
bar responds, but it doesn't actually work.
Operating system: Windows 8.1.
|
Sometimes the editor just freezes, and only killing the task helps. I have
this problem both at work and at home.
I noticed that this started after the latest update (0.10.1).
I'm working in PHP files.
Windows 10, 64-bit.
| 1 |
It would be nice if you could select, for example, a different font for
comments than for code, just for better readability. It's not a pressing
matter, but nice to have :)
|
I have a problem matcher that works on the output of the Delphi compiler.
Sometimes it gives relative and absolute file names at the same time. When I
set `"fileLocation": ["absolute"]`, the messages with relative paths can't be
opened. When I set `"fileLocation": ["relative", "${workspaceRoot}"]`, the
messages with absolute paths can't be opened anymore.
Is it possible to define a problem matcher that uses relative paths as long
as the mentioned file exists and falls back to absolute paths otherwise?
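As a stopgap, tasks.json accepts an array of problem matchers, so the same pattern can be registered once with absolute and once with relative file locations. This is a sketch, not a confirmed fallback mechanism: the Delphi message regex below is an assumption, and since both matchers run on every output line this may yield duplicate (or unopenable) problem entries rather than a true "try relative first" fallback.

```javascript
// tasks.json fragment, written as a JS object so it can be annotated.
// The Delphi message format matched here is an assumption.
const pattern = {
  regexp: '^(.*)\\((\\d+)\\)\\s+(Warning|Error):\\s+(.*)$',
  file: 1,
  line: 2,
  severity: 3,
  message: 4,
};

const task = {
  label: 'build',
  type: 'shell',
  command: 'dcc32 project.dpr',
  // problemMatcher may be an array; each entry keeps its own fileLocation.
  problemMatcher: [
    { owner: 'delphi', fileLocation: ['absolute'], pattern },
    { owner: 'delphi', fileLocation: ['relative', '${workspaceRoot}'], pattern },
  ],
};
```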
| 0 |
I have a Next.js app with a custom Node/Express server, running behind nginx.
I launch it on port 3000. Nginx redirects requests for myserver.com/mynextapp/
to port 3000, but this is not working.
I can see the server-rendered page, but the JS doesn't work. In the browser's
network tool I get a 404 response for app.js and page/, because the request is
myserver.com/_next/228ef92e59d376f055fc2c6d01c93b82/app.js instead of
myserver.com/mynextapp/_next/228ef92e59d376f055fc2c6d01c93b82/app.js
It worked well when I simply opened port 3000 on myserver.com, but I'm trying
to avoid that.
Any idea?
Thanks
|
Next uses routes like `/_next` and `/static` by default. If you wanted to run
multiple different Next apps on the same domain, they would collide with each
other. Can these paths be made configurable?
| 1 |
From @alextricity25 on 2016-04-08T15:37:25Z
##### ISSUE TYPE
* Feature Idea
##### COMPONENT NAME
os_router
##### ANSIBLE VERSION
ansible 2.0.1.0
config file = /root/setup-infra/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
No changes made to ansible.cfg
##### OS / ENVIRONMENT
I'm running Ubuntu 14.04, but I don't think this module is platform-specific.
##### SUMMARY
os_router can't take in a port ID as an internal interface, only a subnet.
See:
https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L321
The neutron CLI allows you to specify a port ID as an interface, and therefore
lets you assign an arbitrary IP to that interface. It would be nice if the
Ansible os_router module allowed you to do that.
##### STEPS TO REPRODUCE
This added feature would allow you to do something like:
- name: Create port for my_net
  os_port:
    state: present
    name: "my_net_port"
    network: "my_net"
    fixed_ips:
      - ip_address: "192.168.100.50"
  register: my_net_port_results

- name: Create my router
  os_router:
    name: my_router
    state: present
    network: "ext-net"
    interfaces:
      - port: "{{ my_net_port_results.id }}"
      - "some_other_priv_subnet"
This would allow the user to specify either a subnet or a port for a router
internal interface.
##### EXPECTED RESULTS
The router would have two interfaces with the example playbook shown above. It
would have the default gateway of "some_other_priv_subnet", and it would have
the ip assigned to "my_net_port".
This would allow subnets to be attached to multiple routers, which currently
isn't doable through the os_router module.
##### ACTUAL RESULTS
TBD
Copied from original issue: ansible/ansible-modules-core#3390
|
##### ISSUE TYPE
* Feature Idea
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
Future
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Local roles may depend on galaxy roles defined in `meta/main.yml`.
`ansible-galaxy install -r` does not support this file format, although the
value of the `dependencies` key appears to match that in a `requirements.yml` file.
##### STEPS TO REPRODUCE
$ cat meta/main.yml
galaxy_info:
  author: ome-devel@lists.openmicroscopy.org.uk
dependencies:
  - role: openmicroscopy.basedeps

$ ansible-galaxy install -r meta/main.yml
- downloading role 'dependencies', owned by
[WARNING]: - dependencies was NOT installed successfully: Role has no field named u'owner'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
Proposed behaviour: Install all roles listed in the `dependencies` section of
`meta/main.yml`, in this case `openmicroscopy.basedeps`
##### BACKGROUND
* Galaxy dependencies file can be the role meta file
This is the original motivation for this feature suggestion. When using
molecule for testing Ansible roles dependencies need to be in a requirements
file. If this is a deliberate design decision it would be useful if the docs
could be updated with an official description of the `meta/main.yml` file.
* Add ability to automatically download roles when running play
This is a closely related feature request.
| 0 |
Hello everyone,
How do I install Bootstrap 3.1.1? Please tell me. I tried this
https://github.com/KKBOX/FireApp/wiki/Use-compass-extensions#windows but was
not able to make it work. Please guide me!
Thanks!
|
Hello everyone,
How do I install Bootstrap 3.1.1? Please tell me. I tried this
https://github.com/KKBOX/FireApp/wiki/Use-compass-extensions#windows but was
not able to make it work. Please guide me!
Thanks!
| 1 |
A minor observation about an otherwise great feature! Using the Insiders
build, the following Lua code causes weird folding:
function someFunc()
  local longString = [[this is a long
string that spans multiple lines
and has some weird folding]]
  print(longString)
end
You'll notice that a + shows up on the same "function someFunc()" line _and_
on the last line of the longString declaration. The first + folds up to the
declaration line and the second + folds the rest of the function.
|
The current implementation of folding uses an indentation based folding
strategy that is unaware of the language it works on. Knowledge on the
underlying language allows us to solve the following requests:
* [folding] Cannot code fold empty functions #3349
* [folding] Collapse ending brace to the same line #3352
* [folding] Folded block comments should show */ #3350
* [folding] should not fold whitespace after function #3353
* [folding] Add code folding for markdown based on heading level #3347
* [folding] [lua] Weird code folding with Lua #3602
* [folding] Collapse code block ignore comments inside this block #3957
* [folding] Code Folding: support golang multiline strings #5994
* [folding] Optionally fold chained method calls #6991
| 1 |
Hello, this is a step-by-step account of what I remember happening leading up
to the duplication of the file.
Opened a Wordpress theme package in Atom, when I was finished I closed that
window (I did not close atom).
Selected 'Open...' from the menu and opened a different project.
Opened a few pages and made some edits.
Opened settings and saw that a theme was out of date so I updated it.
Pasted some info from a website into a file and saved it.
It was when hitting 'control s' that I noticed a new file appear (page.php):

This file was from the previous Wordpress project and it was now in the new
project.
I can't recall if I had been editing that file or if it was open when I closed
the previous project.
I have since upgraded to version 0.132.0 (the request to upgrade was in the
menu), so I guess the version was the one before this.
The theme I updated was Monokai (https://atom.io/packages/monokai).
|
Halp ticket:
* support/95b88d8ac5d011e388dbf5393005f408
> Please make Atom ask before downloading an update or make the auto-update
> configurable.
>
> I am working on the road a lot and it ruins my mobile data package.
So, asking before downloading an update or providing an option to turn off
auto-updating might be helpful in these cases, I suppose.
| 0 |
http://getbootstrap.com/customize/
I get this error while compiling a custom pack:
Ruh roh! Could not parse less files.
.btn-group-xs > .btn { .btn-xs(); }
.btn-group-sm > .btn { .btn-sm(); }
I tried to create a pack with only:
> Grid system, Forms, Button groups, Input groups, Labels, Alerts
|
Could not parse less files.
.btn-group-xs > .btn { .btn-xs(); }
.btn-group-sm > .btn { .btn-sm(); }
| 1 |