qid int64 46k–74.7M | question stringlengths 54–37.8k | date stringlengths 10–10 | metadata sequencelengths 3–3 | response_j stringlengths 17–26k | response_k stringlengths 26–26k |
---|---|---|---|---|---
820,671 | I want to create an object in python that has a few attributes and I want to protect myself from accidentally using the wrong attribute name. The code is as follows:
```
class MyClass( object ) :
m = None # my attribute
__slots__ = ( "m" ) # ensure that object has no _m etc
a = MyClass() # create one
a.m = "?" # here is a PROBLEM
```
But after running this simple code, I get a very strange error:
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
a.m = "?"
AttributeError: 'test' object attribute 'm' is read-only
```
Is there any wise programmer who can spare a bit of their time and enlighten me about "read-only" errors? | 2009/05/04 | [
"https://Stackoverflow.com/questions/820671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69882/"
] | `__slots__` works with instance variables, whereas what you have there is a class variable. This is how you should be doing it:
```
class MyClass( object ) :
__slots__ = ( "m", )
def __init__(self):
self.m = None
a = MyClass()
a.m = "?" # No error
``` | ```
class MyClass( object ) :
m = None # my attribute
```
The `m` here is a class attribute, rather than an instance attribute. You need to attach it to your instance via `self` in `__init__`. |
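The class-versus-instance distinction in the answers above can be sketched quickly (illustrative names):

```python
class Shared(object):
    m = None              # class attribute: one binding, shared via the class

class PerInstance(object):
    __slots__ = ("m",)    # note the comma: a 1-tuple of slot names
    def __init__(self):
        self.m = None     # instance attribute: one binding per object

a, b = PerInstance(), PerInstance()
a.m = "?"
print(a.m, b.m)  # ? None -- each instance has its own slot
```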
820,671 | I want to create an object in python that has a few attributes and I want to protect myself from accidentally using the wrong attribute name. The code is as follows:
```
class MyClass( object ) :
m = None # my attribute
__slots__ = ( "m" ) # ensure that object has no _m etc
a = MyClass() # create one
a.m = "?" # here is a PROBLEM
```
But after running this simple code, I get a very strange error:
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
a.m = "?"
AttributeError: 'test' object attribute 'm' is read-only
```
Is there any wise programmer who can spare a bit of their time and enlighten me about "read-only" errors? | 2009/05/04 | [
"https://Stackoverflow.com/questions/820671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69882/"
] | When you declare instance variables using `__slots__`, Python creates a [descriptor object](https://docs.python.org/2/howto/descriptor.html) as a class variable with the same name. In your case, this descriptor is overwritten by the class variable `m` that you are defining at the following line:
```
m = None # my attribute
```
Here is what you need to do: Do not define a class variable called `m`, and initialize the instance variable `m` in the `__init__` method.
```
class MyClass(object):
__slots__ = ("m",)
def __init__(self):
self.m = None
a = MyClass()
a.m = "?"
```
As a side note, a tuple with a single element needs a comma after the element: `("m",)`. Your `("m")` is just the string `"m"`; both happen to work here because `__slots__` accepts either a single string or an iterable/sequence of strings. In general, to define a tuple containing the element `1`, write `(1,)` or `1,`, not `(1)`. | Consider this.
```
class SuperSafe( object ):
allowed= ( "this", "that" )
def __init__( self ):
self.this= None
self.that= None
def __setattr__( self, attr, value ):
if attr not in self.allowed:
raise Exception( "No such attribute: %s" % (attr,) )
super( SuperSafe, self ).__setattr__( attr, value )
```
This approach adds a fair amount of run-time overhead, so a better approach is to use unit tests for this kind of checking. |
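For illustration, here is the `__setattr__` whitelist approach restated as a self-contained sketch, showing a typo being caught at assignment time:

```python
class SuperSafe(object):
    allowed = ("this", "that")

    def __init__(self):
        self.this = None
        self.that = None

    def __setattr__(self, attr, value):
        # Every assignment funnels through here, including those in __init__.
        if attr not in self.allowed:
            raise Exception("No such attribute: %s" % (attr,))
        super(SuperSafe, self).__setattr__(attr, value)

s = SuperSafe()
s.this = 42        # an allowed attribute: assignment succeeds
try:
    s.thsi = 42    # a typo: rejected at assignment time
except Exception as exc:
    print(exc)     # No such attribute: thsi
```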
820,671 | I want to create an object in python that has a few attributes and I want to protect myself from accidentally using the wrong attribute name. The code is as follows:
```
class MyClass( object ) :
m = None # my attribute
__slots__ = ( "m" ) # ensure that object has no _m etc
a = MyClass() # create one
a.m = "?" # here is a PROBLEM
```
But after running this simple code, I get a very strange error:
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
a.m = "?"
AttributeError: 'test' object attribute 'm' is read-only
```
Is there any wise programmer who can spare a bit of their time and enlighten me about "read-only" errors? | 2009/05/04 | [
"https://Stackoverflow.com/questions/820671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69882/"
] | When you declare instance variables using `__slots__`, Python creates a [descriptor object](https://docs.python.org/2/howto/descriptor.html) as a class variable with the same name. In your case, this descriptor is overwritten by the class variable `m` that you are defining at the following line:
```
m = None # my attribute
```
Here is what you need to do: Do not define a class variable called `m`, and initialize the instance variable `m` in the `__init__` method.
```
class MyClass(object):
__slots__ = ("m",)
def __init__(self):
self.m = None
a = MyClass()
a.m = "?"
```
As a side note, a tuple with a single element needs a comma after the element: `("m",)`. Your `("m")` is just the string `"m"`; both happen to work here because `__slots__` accepts either a single string or an iterable/sequence of strings. In general, to define a tuple containing the element `1`, write `(1,)` or `1,`, not `(1)`. | You are completely misusing `__slots__`. It prevents the creation of `__dict__` for the instances. This only makes sense if you run into memory problems with many small objects, because getting rid of `__dict__` can reduce the footprint. This is a hardcore optimization that is not needed in 99.9% of all cases.
If you need the kind of safety you described, then Python really is the wrong language. Better to use something strict like Java (instead of trying to write Java in Python).
If you couldn't figure out yourself why the class attributes caused these problems in your code then maybe you should think twice about introducing language hacks like this. It would probably be wiser to become more familiar with the language first.
Just for completeness, here is the [documentation link for slots](https://docs.python.org/2/reference/datamodel.html#slots). |
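The footprint point can be observed directly: slotted instances carry no per-instance `__dict__`. A quick sketch:

```python
class Plain(object):
    pass

class Slotted(object):
    __slots__ = ("m",)

p, s = Plain(), Slotted()
p.anything = 1                  # goes into p.__dict__
print(hasattr(p, "__dict__"))   # True
print(hasattr(s, "__dict__"))   # False -- no dict, only the "m" slot
```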
820,671 | I want to create an object in python that has a few attributes and I want to protect myself from accidentally using the wrong attribute name. The code is as follows:
```
class MyClass( object ) :
m = None # my attribute
__slots__ = ( "m" ) # ensure that object has no _m etc
a = MyClass() # create one
a.m = "?" # here is a PROBLEM
```
But after running this simple code, I get a very strange error:
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
a.m = "?"
AttributeError: 'test' object attribute 'm' is read-only
```
Is there any wise programmer who can spare a bit of their time and enlighten me about "read-only" errors? | 2009/05/04 | [
"https://Stackoverflow.com/questions/820671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69882/"
] | When you declare instance variables using `__slots__`, Python creates a [descriptor object](https://docs.python.org/2/howto/descriptor.html) as a class variable with the same name. In your case, this descriptor is overwritten by the class variable `m` that you are defining at the following line:
```
m = None # my attribute
```
Here is what you need to do: Do not define a class variable called `m`, and initialize the instance variable `m` in the `__init__` method.
```
class MyClass(object):
__slots__ = ("m",)
def __init__(self):
self.m = None
a = MyClass()
a.m = "?"
```
As a side note, a tuple with a single element needs a comma after the element: `("m",)`. Your `("m")` is just the string `"m"`; both happen to work here because `__slots__` accepts either a single string or an iterable/sequence of strings. In general, to define a tuple containing the element `1`, write `(1,)` or `1,`, not `(1)`. | ```
class MyClass( object ) :
m = None # my attribute
```
The `m` here is a class attribute, rather than an instance attribute. You need to attach it to your instance via `self` in `__init__`. |
820,671 | I want to create an object in python that has a few attributes and I want to protect myself from accidentally using the wrong attribute name. The code is as follows:
```
class MyClass( object ) :
m = None # my attribute
__slots__ = ( "m" ) # ensure that object has no _m etc
a = MyClass() # create one
a.m = "?" # here is a PROBLEM
```
But after running this simple code, I get a very strange error:
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
a.m = "?"
AttributeError: 'test' object attribute 'm' is read-only
```
Is there any wise programmer who can spare a bit of their time and enlighten me about "read-only" errors? | 2009/05/04 | [
"https://Stackoverflow.com/questions/820671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69882/"
] | Consider this.
```
class SuperSafe( object ):
allowed= ( "this", "that" )
def __init__( self ):
self.this= None
self.that= None
def __setattr__( self, attr, value ):
if attr not in self.allowed:
raise Exception( "No such attribute: %s" % (attr,) )
super( SuperSafe, self ).__setattr__( attr, value )
```
This approach adds a fair amount of run-time overhead, so a better approach is to use unit tests for this kind of checking. | ```
class MyClass( object ) :
m = None # my attribute
```
The `m` here is a class attribute, rather than an instance attribute. You need to attach it to your instance via `self` in `__init__`. |
820,671 | I want to create an object in python that has a few attributes and I want to protect myself from accidentally using the wrong attribute name. The code is as follows:
```
class MyClass( object ) :
m = None # my attribute
__slots__ = ( "m" ) # ensure that object has no _m etc
a = MyClass() # create one
a.m = "?" # here is a PROBLEM
```
But after running this simple code, I get a very strange error:
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
a.m = "?"
AttributeError: 'test' object attribute 'm' is read-only
```
Is there any wise programmer who can spare a bit of their time and enlighten me about "read-only" errors? | 2009/05/04 | [
"https://Stackoverflow.com/questions/820671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69882/"
] | You are completely misusing `__slots__`. It prevents the creation of `__dict__` for the instances. This only makes sense if you run into memory problems with many small objects, because getting rid of `__dict__` can reduce the footprint. This is a hardcore optimization that is not needed in 99.9% of all cases.
If you need the kind of safety you described, then Python really is the wrong language. Better to use something strict like Java (instead of trying to write Java in Python).
If you couldn't figure out yourself why the class attributes caused these problems in your code then maybe you should think twice about introducing language hacks like this. It would probably be wiser to become more familiar with the language first.
Just for completeness, here is the [documentation link for slots](https://docs.python.org/2/reference/datamodel.html#slots). | ```
class MyClass( object ) :
m = None # my attribute
```
The `m` here is a class attribute, rather than an instance attribute. You need to attach it to your instance via `self` in `__init__`. |
30,252,726 | I am generating a PDF from an HTML template with python's `pisa.CreatePDF` API.
It works well with small HTML, but with huge HTML it takes a lot of time. Is there any alternative? | 2015/05/15 | [
"https://Stackoverflow.com/questions/30252726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2373367/"
] | I made a few changes in the HTML which made `pisa.CreatePDF` work fast for me.
I am using HTML of almost **2 MB**, containing a single table with more than **10,000 rows**. So I broke it into multiple tables and tried again. It surprised me: initially, with a single table, it took almost **40 minutes (2590 seconds)** to generate the **PDF**, while with multiple tables it took only **80 seconds**. | You can try [pdfkit](https://pypi.python.org/pypi/pdfkit):
```
import pdfkit
pdfkit.from_file('test.html', 'out.pdf')
```
Also see [this question](https://stackoverflow.com/q/23359083/3489230) which describes solutions using PyQt. |
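The table-splitting idea from the first answer can be sketched generically; the helper name and chunk size below are illustrative, not part of pisa's API:

```python
def chunked_tables(rows, chunk_size=1000):
    """Render one <table> per chunk instead of a single huge table.

    Illustrative helper: xhtml2pdf/pisa reportedly handles many small
    tables much faster than one very large one.
    """
    tables = []
    for start in range(0, len(rows), chunk_size):
        body = "".join(
            "<tr>" + "".join("<td>%s</td>" % cell for cell in row) + "</tr>"
            for row in rows[start:start + chunk_size]
        )
        tables.append("<table>%s</table>" % body)
    return "\n".join(tables)

html = chunked_tables([("a", 1), ("b", 2), ("c", 3)], chunk_size=2)
print(html.count("<table>"))  # 2
```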
51,271,225 | header
output:
```
array(['Subject_ID', 'tube_label', 'sample_#', 'Relabel',
'sample_ID','cortisol_value', 'Group'], dtype='<U14')
```
body
output:
```
array([['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC'],], dtype=object)
testing = np.concatenate((header, body), axis=0)
```
```none
ValueError Traceback (most recent call last) <ipython-input-302-efb002602b4b> in <module>()
1 # Merge names and the rest of the data in np array
2
----> 3 testing = np.concatenate((header, body), axis=0)
ValueError: all the input arrays must have same number of dimensions
```
Might someone be able to troubleshoot this?
I have tried different commands to merge the two (including stack) and am getting the same error. The dimensions (columns) do seem to be the same though. | 2018/07/10 | [
"https://Stackoverflow.com/questions/51271225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10029062/"
] | You need to align array dimensions first. You are currently trying to combine 1-dimensional and 2-dimensional arrays. After alignment, you can use [`numpy.vstack`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html).
Note `np.array([A]).shape` returns `(1, 7)`, while `B.shape` returns `(2, 7)`. A more efficient alternative would be to use `A[None, :]`.
Also note your array will become of dtype `object`, as this will accept arbitrary / mixed types.
```
A = np.array(['Subject_ID', 'tube_label', 'sample_#', 'Relabel',
'sample_ID','cortisol_value', 'Group'], dtype='<U14')
B = np.array([['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC'],], dtype=object)
res = np.vstack((np.array([A]), B))
print(res)
array([['Subject_ID', 'tube_label', 'sample_#', 'Relabel', 'sample_ID',
'cortisol_value', 'Group'],
['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC']], dtype=object)
``` | Look at numpy.vstack and hstack, as well as the axis argument in np.append. Here it looks like you want vstack (i.e. the output array will have 3 rows, each with the same number of columns). You can also look into numpy.reshape, to change the shape of the input arrays so you can concatenate them. |
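The vstack suggestion can be sketched with small arrays (illustrative data):

```python
import numpy as np

header = np.array(["col_a", "col_b"])             # shape (2,)
body = np.array([[1, 2], [3, 4]], dtype=object)   # shape (2, 2)

# Promote the 1-D header to shape (1, 2), then stack row-wise.
stacked = np.vstack((header.reshape(1, -1), body))
print(stacked.shape)  # (3, 2)
```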
51,271,225 | header
output:
```
array(['Subject_ID', 'tube_label', 'sample_#', 'Relabel',
'sample_ID','cortisol_value', 'Group'], dtype='<U14')
```
body
output:
```
array([['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC'],], dtype=object)
testing = np.concatenate((header, body), axis=0)
```
```none
ValueError Traceback (most recent call last) <ipython-input-302-efb002602b4b> in <module>()
1 # Merge names and the rest of the data in np array
2
----> 3 testing = np.concatenate((header, body), axis=0)
ValueError: all the input arrays must have same number of dimensions
```
Might someone be able to troubleshoot this?
I have tried different commands to merge the two (including stack) and am getting the same error. The dimensions (columns) do seem to be the same though. | 2018/07/10 | [
"https://Stackoverflow.com/questions/51271225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10029062/"
] | You're right in trying to use [`numpy.concatenate()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html) but you've to *promote* the first array to 2D before concatenating. Here's a simple example:
```
In [1]: import numpy as np
In [2]: arr1 = np.array(['Subject_ID', 'tube_label', 'sample_#', 'Relabel',
...: 'sample_ID','cortisol_value', 'Group'], dtype='<U14')
...:
In [3]: arr2 = np.array([['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
...: ['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC'],], dtype=object)
...:
In [4]: arr1.shape
Out[4]: (7,)
In [5]: arr2.shape
Out[5]: (2, 7)
In [8]: concatenated = np.concatenate((arr1[None, :], arr2), axis=0)
In [9]: concatenated.shape
Out[9]: (3, 7)
```
And the resultant concatenated array would look like:
```
In [10]: concatenated
Out[10]:
array([['Subject_ID', 'tube_label', 'sample_#', 'Relabel', 'sample_ID',
'cortisol_value', 'Group'],
['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC']], dtype=object)
```
---
### Explanation:
The reason you were getting the `ValueError` is that one of the arrays is 1D while the other is 2D, and `numpy.concatenate` expects the arrays to have the same number of dimensions in this case. That's why we *promoted* the array dimension of `arr1` using `None`. You can also use [`numpy.newaxis`](https://stackoverflow.com/questions/29241056/how-does-numpy-newaxis-work-and-when-to-use-it) in place of `None`. | You need to align array dimensions first. You are currently trying to combine 1-dimensional and 2-dimensional arrays. After alignment, you can use [`numpy.vstack`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html).
Note `np.array([A]).shape` returns `(1, 7)`, while `B.shape` returns `(2, 7)`. A more efficient alternative would be to use `A[None, :]`.
Also note your array will become of dtype `object`, as this will accept arbitrary / mixed types.
```
A = np.array(['Subject_ID', 'tube_label', 'sample_#', 'Relabel',
'sample_ID','cortisol_value', 'Group'], dtype='<U14')
B = np.array([['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC'],], dtype=object)
res = np.vstack((np.array([A]), B))
print(res)
array([['Subject_ID', 'tube_label', 'sample_#', 'Relabel', 'sample_ID',
'cortisol_value', 'Group'],
['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC']], dtype=object)
``` |
51,271,225 | header
output:
```
array(['Subject_ID', 'tube_label', 'sample_#', 'Relabel',
'sample_ID','cortisol_value', 'Group'], dtype='<U14')
```
body
output:
```
array([['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC'],], dtype=object)
testing = np.concatenate((header, body), axis=0)
```
```none
ValueError Traceback (most recent call last) <ipython-input-302-efb002602b4b> in <module>()
1 # Merge names and the rest of the data in np array
2
----> 3 testing = np.concatenate((header, body), axis=0)
ValueError: all the input arrays must have same number of dimensions
```
Might someone be able to troubleshoot this?
I have tried different commands to merge the two (including stack) and am getting the same error. The dimensions (columns) do seem to be the same though. | 2018/07/10 | [
"https://Stackoverflow.com/questions/51271225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10029062/"
] | You're right in trying to use [`numpy.concatenate()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html) but you've to *promote* the first array to 2D before concatenating. Here's a simple example:
```
In [1]: import numpy as np
In [2]: arr1 = np.array(['Subject_ID', 'tube_label', 'sample_#', 'Relabel',
...: 'sample_ID','cortisol_value', 'Group'], dtype='<U14')
...:
In [3]: arr2 = np.array([['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
...: ['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC'],], dtype=object)
...:
In [4]: arr1.shape
Out[4]: (7,)
In [5]: arr2.shape
Out[5]: (2, 7)
In [8]: concatenated = np.concatenate((arr1[None, :], arr2), axis=0)
In [9]: concatenated.shape
Out[9]: (3, 7)
```
And the resultant concatenated array would look like:
```
In [10]: concatenated
Out[10]:
array([['Subject_ID', 'tube_label', 'sample_#', 'Relabel', 'sample_ID',
'cortisol_value', 'Group'],
['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC']], dtype=object)
```
---
### Explanation:
The reason you were getting the `ValueError` is that one of the arrays is 1D while the other is 2D, and `numpy.concatenate` expects the arrays to have the same number of dimensions in this case. That's why we *promoted* the array dimension of `arr1` using `None`. You can also use [`numpy.newaxis`](https://stackoverflow.com/questions/29241056/how-does-numpy-newaxis-work-and-when-to-use-it) in place of `None`. | Look at numpy.vstack and hstack, as well as the axis argument in np.append. Here it looks like you want vstack (i.e. the output array will have 3 rows, each with the same number of columns). You can also look into numpy.reshape, to change the shape of the input arrays so you can concatenate them. |
67,044,398 | to import the absolute path from my laptop I type:
==================================================
```
import os
print(os.getcwd())
```
It gives me the path with no problem, but when I create a document "ayoub.txt" in that absolute path and open this document with:
===============================================================================================================================
```
file = open("ayoub.txt")
# I get an error:
#Traceback (most recent enter code herecall last):
#File "C:\Users\HPPRO~1\AppData\Local\Temp\tempCodeRunnerFile.python", line 4, in <module>
f#ile = open("ayoub.txt")
#FileNotFoundError: [Errno 2] No such file or directory: 'ayoub.txt'
#PS C:\Users\HP PRO>
``` | 2021/04/11 | [
"https://Stackoverflow.com/questions/67044398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14819475/"
] | I ran your code with some dummy data, and the `cast<String>` for `categories` works for me. However, you have not added the cast to `skills` and `otherLanguages`. Have you checked the line number of the error? If the problem is definitely with `categories`, could you please add some sample data to the question? | Try replacing `List<String> categories, skills, otherLanguages;` with a dynamic list and removing the cast:
`List<dynamic> categories, skills, otherLanguages;` |
43,716,699 | ```
python manage.py runserver
Performing system checks...
Unhandled exception in thread started by <function wrapper at 0x03BBC1F0>
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\django\utils\autoreload.py", line 227, in wrapper
fn(*args, **kwargs)
File "C:\Python27\lib\site-packages\django\core\management\commands\runserver.py", line 125, in inner_run
self.check(display_num_errors=True)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 359, in check
include_deployment_checks=include_deployment_checks,
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 346, in _run_checks
return checks.run_checks(**kwargs)
File "C:\Python27\lib\site-packages\django\core\checks\registry.py", line 81, in run_checks
new_errors = check(app_configs=app_configs)
File "C:\Python27\lib\site-packages\django\core\checks\urls.py", line 16, in check_url_config
return check_resolver(resolver)
File "C:\Python27\lib\site-packages\django\core\checks\urls.py", line 26, in check_resolver
return check_method()
File "C:\Python27\lib\site-packages\django\urls\resolvers.py", line 254, in check
for pattern in self.url_patterns:
File "C:\Python27\lib\site-packages\django\utils\functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Python27\lib\site-packages\django\urls\resolvers.py", line 405, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Python27\lib\site-packages\django\utils\functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Python27\lib\site-packages\django\urls\resolvers.py", line 398, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "C:\Users\Kaidi\Desktop\CM2\CM\CM\urls.py", line 18, in <module>
from mysite import views
File "C:\Users\Kaidi\Desktop\CM2\CM\mysite\views.py", line 2, in <module>
from rest_framework import viewsets, permissions, status
File "C:\Python27\lib\site-packages\rest_framework\viewsets.py", line 26, in <module>
from rest_framework import generics, mixins, views
File "C:\Python27\lib\site-packages\rest_framework\generics.py", line 10, in <module>
from rest_framework import mixins, views
File "C:\Python27\lib\site-packages\rest_framework\views.py", line 98, in <module>
class APIView(View):
File "C:\Python27\lib\site-packages\rest_framework\views.py", line 103, in APIView
authentication_classes = api_settings.DEFAULT_AUTHENTICATION_CLASSES
File "C:\Python27\lib\site-packages\rest_framework\settings.py", line 220, in __getattr__
val = perform_import(val, attr)
File "C:\Python27\lib\site-packages\rest_framework\settings.py", line 165, in perform_import
return [import_from_string(item, setting_name) for item in val]
File "C:\Python27\lib\site-packages\rest_framework\settings.py", line 177, in import_from_string
module = import_module(module_path)
File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\rest_framework_jwt\authentication.py", line 1, in <module>
import jwt
File "C:\Python27\lib\site-packages\jwt\__init__.py", line 17, in <module>
from .jwk import (
File "C:\Python27\lib\site-packages\jwt\jwk.py", line 60
def is_sign_key(self) -> bool:
^
SyntaxError: invalid syntax
``` | 2017/05/01 | [
"https://Stackoverflow.com/questions/43716699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7946534/"
] | You seem to have installed the JWT package, which is only compatible with Python 3.4+. The rest-framework-jwt app is trying to import that rather than PyJWT which is compatible with 2.7.
Remove that installation with `pip uninstall jwt`. Once removed you'll want to install PyJWT like so:
```
pip install PyJWT
``` | No need to uninstall `jwt`. Just upgrade your PyJWT:
```
pip install PyJWT --upgrade
``` |
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceeded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | [@nosklo's twisted-based solution](https://stackoverflow.com/a/1507370/8053001) is elegant and workable, but if you want to avoid the dependency on twisted, the task is still doable, e.g:
```
import multiprocessing
def query_with_timeout(dbc, timeout, query, *a, **k):
conn1, conn2 = multiprocessing.Pipe(False)
subproc = multiprocessing.Process(target=do_query,
args=(dbc, query, conn2)+a,
kwargs=k)
subproc.start()
subproc.join(timeout)
if conn1.poll():
return conn1.recv()
subproc.terminate()
raise TimeoutError("Query %r ran for >%r" % (query, timeout))
def do_query(dbc, query, conn, *a, **k):
cu = dbc.cursor()
cu.execute(query, *a, **k)
conn.send(cu.fetchall())  # ship the result back through the pipe
``` | Use [adbapi](http://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.html). It allows you to do a db call asynchronously.
```
from twisted.internet import reactor
from twisted.enterprise import adbapi
def bogusQuery():
return dbpool.runQuery("SELECT SLEEP(10)")
def printResult(l):
# function that would be called if it didn't time out
for item in l:
print item
def handle_timeout():
# function that will be called when it timeout
reactor.stop()
dbpool = adbapi.ConnectionPool("MySQLdb", user="me", password="myself", host="localhost", database="async")
bogusQuery().addCallback(printResult)
reactor.callLater(4, handle_timeout)
reactor.run()
``` |
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceeded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | [@nosklo's twisted-based solution](https://stackoverflow.com/a/1507370/8053001) is elegant and workable, but if you want to avoid the dependency on twisted, the task is still doable, e.g:
```
import multiprocessing
def query_with_timeout(dbc, timeout, query, *a, **k):
conn1, conn2 = multiprocessing.Pipe(False)
subproc = multiprocessing.Process(target=do_query,
args=(dbc, query, conn2)+a,
kwargs=k)
subproc.start()
subproc.join(timeout)
if conn1.poll():
return conn1.recv()
subproc.terminate()
raise TimeoutError("Query %r ran for >%r" % (query, timeout))
def do_query(dbc, query, conn, *a, **k):
cu = dbc.cursor()
cu.execute(query, *a, **k)
conn.send(cu.fetchall())  # ship the result back through the pipe
``` | ### Generic notes
I've experienced the same issue lately with several conditions I had to meet:
* the solution must be thread-safe
* multiple connections to the database from the same machine may be active at the same time; kill exactly the one offending connection/query
* the application contains connections to many different databases - a portable handler for each DB host
We had following class layout (*unfortunately I cannot post real sources*):
```python
class AbstractModel: pass
class FirstDatabaseModel(AbstractModel): pass # Connection to one DB host
class SecondDatabaseModel(AbstractModel): pass # Connection to one DB host
```
And created several threads for each model.
---
### Solution Python 3.2
In our application *one model = one database*. So I've created a "*service connection*" for each model (so we could execute `KILL` over a parallel connection). Therefore, if one instance of `FirstDatabaseModel` was created, 2 database connections were created; if 5 instances were created, only 6 connections were used:
```python
class AbstractModel:
_service_connection = None # Formal declaration
def __init__(self):
''' Somehow load config and create connection
'''
self.config = # ...
self.connection = MySQLFromConfig(self.config)
self._init_service_connection()
# Get connection ID (pseudocode)
self.connection_id = self.connection.FetchOneCol('SELECT CONNECTION_ID()')
def _init_service_connection(self):
''' Initialize one singleton connection for model
'''
cls = type(self)
if cls._service_connection is not None:
return
cls._service_connection = MySQLFromConfig(self.config)
```
Now we need a killer:
```python
def _kill_connection(self):
# Add your own mysql data escaping
sql = 'KILL CONNECTION {}'.format(self.connection_id)
# Do your own connection check and renewal
type(self)._service_connection.execute(sql)
```
*Note: `connection.execute` = create cursor, execute, close cursor.*
And make killer thread safe using `threading.Lock`:
```python
def _init_service_connection(self):
''' Initialize one singleton connection for model
'''
cls = type(self)
if cls._service_connection is not None:
return
cls._service_connection = MySQLFromConfig(self.config)
cls._service_connection_lock = threading.Lock()
def _kill_connection(self):
# Add your own mysql data escaping
sql = 'KILL CONNECTION {}'.format(self.connection_id)
cls = type(self)
# Do your own connection check and renewal
try:
cls._service_connection_lock.acquire()
cls._service_connection.execute(sql)
finally:
cls._service_connection_lock.release()
```
And finally add timed execution method using `threading.Timer`:
```python
def timed_query(self, sql, timeout=5):
kill_query_timer = threading.Timer(timeout, self._kill_connection)
kill_query_timer.start()
try:
self.connection.long_query()
finally:
kill_query_timer.cancel()
``` |
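The control flow of the `threading.Timer` watchdog above can be sanity-checked without a live MySQL connection; in this sketch `FakeConnection` and its kill flag are hypothetical stand-ins for the real connection and the server-side `KILL`:

```python
import threading
import time

class FakeConnection(object):
    """Stand-in for a DB connection whose running query can be 'killed'."""
    def __init__(self):
        self._killed = threading.Event()

    def long_query(self, duration):
        # Poll the kill flag, the way a server-side KILL aborts a real query.
        deadline = time.time() + duration
        while time.time() < deadline:
            if self._killed.is_set():
                raise RuntimeError("query killed")
            time.sleep(0.01)
        return "done"

    def kill(self):
        self._killed.set()

def timed_query(conn, duration, timeout):
    """Run conn.long_query(duration), killing it after `timeout` seconds."""
    killer = threading.Timer(timeout, conn.kill)
    killer.start()
    try:
        return conn.long_query(duration)
    finally:
        killer.cancel()
```

A fast query completes and the timer is cancelled; a slow one is aborted by the watchdog thread.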
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()'
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | Use [adbapi](http://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.html). It allows you to do a db call asynchronously.
```
from twisted.internet import reactor
from twisted.enterprise import adbapi
def bogusQuery():
return dbpool.runQuery("SELECT SLEEP(10)")
def printResult(l):
# function that would be called if it didn't time out
for item in l:
print item
def handle_timeout():
# function that will be called when it timeout
reactor.stop()
dbpool = adbapi.ConnectionPool("MySQLdb", user="me", password="myself", host="localhost", database="async")
bogusQuery().addCallback(printResult)
reactor.callLater(4, handle_timeout)
reactor.run()
``` | >
> Why do I not get the signal until after execute finishes?
>
>
>
The process waiting for network I/O is in an uninterruptible state (UNIX thing, not related to Python or MySQL). It gets the signal after the system call finishes (probably as `EINTR` error code, although I am not sure).
>
> Is there another reliable way to limit query execution time?
>
>
>
I think that it is usually done by an external tool like `mkill` that monitors MySQL for long running queries and kills them. |
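The behaviour described above can be demonstrated the other way around: a *Python-level* blocking call such as `time.sleep` **is** interrupted as soon as the alarm handler raises, which shows the problem lies in the uninterruptible C-level database call rather than in the signal machinery. A minimal, Unix-only sketch (not part of the original answer; `SIGALRM` does not exist on Windows):

```python
import signal
import time

class Timeout(Exception):
    """Raised by the alarm handler."""

def _alarm_handler(signum, frame):
    raise Timeout

signal.signal(signal.SIGALRM, _alarm_handler)
signal.setitimer(signal.ITIMER_REAL, 0.1)  # fire the alarm in 0.1 s

start = time.time()
try:
    time.sleep(5)        # pure-Python blocking call: interruptible
    interrupted = False
except Timeout:
    interrupted = True
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)  # disarm the timer

elapsed = time.time() - start  # ~0.1 s, not 5 s
```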
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()'
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | Use [adbapi](http://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.html). It allows you to do a db call asynchronously.
```
from twisted.internet import reactor
from twisted.enterprise import adbapi
def bogusQuery():
return dbpool.runQuery("SELECT SLEEP(10)")
def printResult(l):
# function that would be called if it didn't time out
for item in l:
print item
def handle_timeout():
# function that will be called when it timeout
reactor.stop()
dbpool = adbapi.ConnectionPool("MySQLdb", user="me", password="myself", host="localhost", database="async")
bogusQuery().addCallback(printResult)
reactor.callLater(4, handle_timeout)
reactor.run()
``` | >
> Why do I not get the signal until after execute finishes?
>
>
>
The query is executed through a C function, which blocks the Python VM from executing until it returns.
>
> Is there another reliable way to limit query execution time?
>
>
>
This is (IMO) a really ugly solution, but it *does* work. You could run the query in a separate process (either via `fork()` or the [`multiprocessing` module](http://docs.python.org/library/multiprocessing.html)). Run the alarm timer in your main process, and when you receive it, send a `SIGINT` or `SIGKILL` to the child process. If you use `multiprocessing`, you can use the `Process.terminate()` method. |
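A minimal, database-free sketch of the fork-and-terminate approach described above (the worker simulates the slow query with `time.sleep`; all names are illustrative, and the explicit `fork` context keeps the sketch Unix-only and self-contained):

```python
import multiprocessing
import time

# "fork" avoids the child re-importing this module (Unix-only sketch).
_ctx = multiprocessing.get_context("fork")

def _worker(q, duration):
    """Stand-in for the slow query, running in a child process."""
    time.sleep(duration)
    q.put("rows")

def query_in_subprocess(duration, timeout):
    """Run _worker in a child process; terminate it if it exceeds `timeout` seconds."""
    q = _ctx.Queue()
    p = _ctx.Process(target=_worker, args=(q, duration))
    p.start()
    p.join(timeout)
    if p.is_alive():
        p.terminate()   # the hard-kill escape hatch described above
        p.join()
        return None
    return q.get()
```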
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()'
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | [@nosklo's twisted-based solution](https://stackoverflow.com/a/1507370/8053001) is elegant and workable, but if you want to avoid the dependency on twisted, the task is still doable, e.g:
```
import multiprocessing
def query_with_timeout(dbc, timeout, query, *a, **k):
conn1, conn2 = multiprocessing.Pipe(False)
subproc = multiprocessing.Process(target=do_query,
args=(dbc, query, conn2)+a,
kwargs=k)
subproc.start()
subproc.join(timeout)
if conn1.poll():
return conn1.recv()
subproc.terminate()
raise TimeoutError("Query %r ran for >%r" % (query, timeout))
def do_query(dbc, query, conn, *a, **k):
cu = dbc.cursor()
cu.execute(query, *a, **k)
    conn.send(cu.fetchall())  # send the result back through the pipe so conn1.recv() gets it
``` | >
> I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
>
>
>
The MySQL client library handles interrupted system calls internally, so you won't see side effects of SIGALRM until after the API call completes (short of killing the current thread or process).
You can try patching MySQL-Python and use MYSQL\_OPT\_READ\_TIMEOUT option (added in mysql 5.0.25) |
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()'
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | >
> I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
>
>
>
The MySQL client library handles interrupted system calls internally, so you won't see side effects of SIGALRM until after the API call completes (short of killing the current thread or process).
You can try patching MySQL-Python and use MYSQL\_OPT\_READ\_TIMEOUT option (added in mysql 5.0.25) | >
> Why do I not get the signal until after execute finishes?
>
>
>
The query is executed through a C function, which blocks the Python VM from executing until it returns.
>
> Is there another reliable way to limit query execution time?
>
>
>
This is (IMO) a really ugly solution, but it *does* work. You could run the query in a separate process (either via `fork()` or the [`multiprocessing` module](http://docs.python.org/library/multiprocessing.html)). Run the alarm timer in your main process, and when you receive it, send a `SIGINT` or `SIGKILL` to the child process. If you use `multiprocessing`, you can use the `Process.terminate()` method. |
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()'
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | >
> I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
>
>
>
The MySQL client library handles interrupted system calls internally, so you won't see side effects of SIGALRM until after the API call completes (short of killing the current thread or process).
You can try patching MySQL-Python and use MYSQL\_OPT\_READ\_TIMEOUT option (added in mysql 5.0.25) | ### Generic notes
I've experienced the same issue lately, with several conditions I had to meet:
* solution must be thread safe
* multiple connections to the database from the same machine may be active at the same time; kill exactly the intended connection/query
* application contains connections to many different databases - portable handler for each DB host
We had the following class layout (*unfortunately I cannot post real sources*):
```python
class AbstractModel: pass
class FirstDatabaseModel(AbstractModel): pass # Connection to one DB host
class SecondDatabaseModel(AbstractModel): pass # Connection to one DB host
```
And created several threads for each model.
---
### Solution Python 3.2
In our application *one model = one database*. So I've created a "*service connection*" for each model (so we could execute `KILL` over a parallel connection). Therefore if one instance of `FirstDatabaseModel` was created, 2 database connections were created; if 5 instances were created, only 6 connections were used:
```python
class AbstractModel:
_service_connection = None # Formal declaration
def __init__(self):
''' Somehow load config and create connection
'''
self.config = # ...
self.connection = MySQLFromConfig(self.config)
self._init_service_connection()
# Get connection ID (pseudocode)
self.connection_id = self.connection.FetchOneCol('SELECT CONNECTION_ID()')
def _init_service_connection(self):
''' Initialize one singleton connection for model
'''
cls = type(self)
if cls._service_connection is not None:
return
cls._service_connection = MySQLFromConfig(self.config)
```
Now we need a killer:
```python
def _kill_connection(self):
# Add your own mysql data escaping
sql = 'KILL CONNECTION {}'.format(self.connection_id)
# Do your own connection check and renewal
type(self)._service_connection.execute(sql)
```
*Note: `connection.execute` = create cursor, execute, close cursor.*
And make killer thread safe using `threading.Lock`:
```python
def _init_service_connection(self):
''' Initialize one singleton connection for model
'''
cls = type(self)
if cls._service_connection is not None:
return
cls._service_connection = MySQLFromConfig(self.config)
cls._service_connection_lock = threading.Lock()
def _kill_connection(self):
# Add your own mysql data escaping
sql = 'KILL CONNECTION {}'.format(self.connection_id)
cls = type(self)
# Do your own connection check and renewal
try:
cls._service_connection_lock.acquire()
cls._service_connection.execute(sql)
finally:
cls._service_connection_lock.release()
```
And finally add timed execution method using `threading.Timer`:
```python
def timed_query(self, sql, timeout=5):
kill_query_timer = threading.Timer(timeout, self._kill_connection)
kill_query_timer.start()
try:
self.connection.long_query()
finally:
kill_query_timer.cancel()
``` |
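One subtlety worth flagging in the answer above: `_init_service_connection` is itself an unlocked check-then-set, so two threads racing through `__init__` could each create a service connection. Taking the lock around the initialisation closes that window. An illustrative sketch, with `object()` standing in for the real `MySQLFromConfig`:

```python
import threading

class Model(object):
    _service_connection = None
    _init_lock = threading.Lock()
    created = 0  # counts how many "connections" were actually built

    @classmethod
    def _init_service_connection(cls):
        # Double-checked locking: cheap test first, then re-check under the lock.
        if cls._service_connection is not None:
            return
        with cls._init_lock:
            if cls._service_connection is None:
                cls.created += 1
                cls._service_connection = object()  # stand-in for MySQLFromConfig

def _hammer():
    for _ in range(100):
        Model._init_service_connection()

threads = [threading.Thread(target=_hammer) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even with eight threads hammering the initialiser, only one "connection" is ever created.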
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()'
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | ### Generic notes
I've experienced the same issue lately, with several conditions I had to meet:
* solution must be thread safe
* multiple connections to the database from the same machine may be active at the same time; kill exactly the intended connection/query
* application contains connections to many different databases - portable handler for each DB host
We had the following class layout (*unfortunately I cannot post real sources*):
```python
class AbstractModel: pass
class FirstDatabaseModel(AbstractModel): pass # Connection to one DB host
class SecondDatabaseModel(AbstractModel): pass # Connection to one DB host
```
And created several threads for each model.
---
### Solution Python 3.2
In our application *one model = one database*. So I've created a "*service connection*" for each model (so we could execute `KILL` over a parallel connection). Therefore if one instance of `FirstDatabaseModel` was created, 2 database connections were created; if 5 instances were created, only 6 connections were used:
```python
class AbstractModel:
_service_connection = None # Formal declaration
def __init__(self):
''' Somehow load config and create connection
'''
self.config = # ...
self.connection = MySQLFromConfig(self.config)
self._init_service_connection()
# Get connection ID (pseudocode)
self.connection_id = self.connection.FetchOneCol('SELECT CONNECTION_ID()')
def _init_service_connection(self):
''' Initialize one singleton connection for model
'''
cls = type(self)
if cls._service_connection is not None:
return
cls._service_connection = MySQLFromConfig(self.config)
```
Now we need a killer:
```python
def _kill_connection(self):
# Add your own mysql data escaping
sql = 'KILL CONNECTION {}'.format(self.connection_id)
# Do your own connection check and renewal
type(self)._service_connection.execute(sql)
```
*Note: `connection.execute` = create cursor, execute, close cursor.*
And make killer thread safe using `threading.Lock`:
```python
def _init_service_connection(self):
''' Initialize one singleton connection for model
'''
cls = type(self)
if cls._service_connection is not None:
return
cls._service_connection = MySQLFromConfig(self.config)
cls._service_connection_lock = threading.Lock()
def _kill_connection(self):
# Add your own mysql data escaping
sql = 'KILL CONNECTION {}'.format(self.connection_id)
cls = type(self)
# Do your own connection check and renewal
try:
cls._service_connection_lock.acquire()
cls._service_connection.execute(sql)
finally:
cls._service_connection_lock.release()
```
And finally add timed execution method using `threading.Timer`:
```python
def timed_query(self, sql, timeout=5):
kill_query_timer = threading.Timer(timeout, self._kill_connection)
kill_query_timer.start()
try:
self.connection.long_query()
finally:
kill_query_timer.cancel()
``` | >
> Why do I not get the signal until after execute finishes?
>
>
>
The process waiting for network I/O is in an uninterruptible state (UNIX thing, not related to Python or MySQL). It gets the signal after the system call finishes (probably as `EINTR` error code, although I am not sure).
>
> Is there another reliable way to limit query execution time?
>
>
>
I think that it is usually done by an external tool like `mkill` that monitors MySQL for long running queries and kills them. |
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()'
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | >
> Why do I not get the signal until after execute finishes?
>
>
>
The query is executed through a C function, which blocks the Python VM from executing until it returns.
>
> Is there another reliable way to limit query execution time?
>
>
>
This is (IMO) a really ugly solution, but it *does* work. You could run the query in a separate process (either via `fork()` or the [`multiprocessing` module](http://docs.python.org/library/multiprocessing.html)). Run the alarm timer in your main process, and when you receive it, send a `SIGINT` or `SIGKILL` to the child process. If you use `multiprocessing`, you can use the `Process.terminate()` method. | >
> Why do I not get the signal until after execute finishes?
>
>
>
The process waiting for network I/O is in an uninterruptible state (UNIX thing, not related to Python or MySQL). It gets the signal after the system call finishes (probably as `EINTR` error code, although I am not sure).
>
> Is there another reliable way to limit query execution time?
>
>
>
I think that it is usually done by an external tool like `mkill` that monitors MySQL for long running queries and kills them. |
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()'
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | [@nosklo's twisted-based solution](https://stackoverflow.com/a/1507370/8053001) is elegant and workable, but if you want to avoid the dependency on twisted, the task is still doable, e.g:
```
import multiprocessing
def query_with_timeout(dbc, timeout, query, *a, **k):
conn1, conn2 = multiprocessing.Pipe(False)
subproc = multiprocessing.Process(target=do_query,
args=(dbc, query, conn2)+a,
kwargs=k)
subproc.start()
subproc.join(timeout)
if conn1.poll():
return conn1.recv()
subproc.terminate()
raise TimeoutError("Query %r ran for >%r" % (query, timeout))
def do_query(dbc, query, conn, *a, **k):
cu = dbc.cursor()
cu.execute(query, *a, **k)
    conn.send(cu.fetchall())  # send the result back through the pipe so conn1.recv() gets it
``` | >
> Why do I not get the signal until after execute finishes?
>
>
>
The process waiting for network I/O is in an uninterruptible state (UNIX thing, not related to Python or MySQL). It gets the signal after the system call finishes (probably as `EINTR` error code, although I am not sure).
>
> Is there another reliable way to limit query execution time?
>
>
>
I think that it is usually done by an external tool like `mkill` that monitors MySQL for long running queries and kills them. |
17,213,455 | I'm kind of new to Python. I'm trying to remove the first sentence from a string, using the full stop as the delimiter. Is split the right method to be using in this instance? I'm not getting the desired result...
```
def get_summary(self):
if self.description:
s2 = self.description.split('.', 1)[1]
return s2
else:
return None
``` | 2013/06/20 | [
"https://Stackoverflow.com/questions/17213455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2342568/"
] | You can use [`String.split`](http://docs.oracle.com/javase/6/docs/api/java/lang/String.html#split%28java.lang.String%29):
```
String cmd = "command atr1 art22 atr333 art4444";
String[] parts = cmd.split(" ");
```
The split method permits using a regular expression. This is useful for example if the amount of whitespace varies:
```
String cmd = "command atr1 art22 atr333 art4444";
String[] parts = cmd.split(" +"); // split by spans of one or more spaces
``` | Here a few options, sorted from easy/annoying-in-the-end to powerful/hard-to-learn
* "your command pattern".split( " " ) gives you an array of strings
* [`java.util.Scanner`](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Scanner.html) lets you take out one token after the other, and it has some handy helpers for parsing like `nextInt()` or `nextFloat()`
* a command line parser library, like [commons cli](http://commons.apache.org/proper/commons-cli/). those are a bit of work to learn, but they have the upside of solving some other problems you will be facing shortly :)
p.s. to generally find more help on the internet, the search term you are looking for is "java parsing command line arguments"; that's pretty much what you're trying to do, in case you didn't know :) |
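For what it's worth, the question in this row is actually about Python, where `str.split` (or `str.partition`) does the same job; a minimal sketch of dropping the first sentence (`drop_first_sentence` is a hypothetical helper, not from the answers above):

```python
def drop_first_sentence(text):
    """Return everything after the first full stop, stripped of leading space."""
    head, sep, rest = text.partition('.')
    if not sep:          # no full stop at all: nothing to drop
        return text
    return rest.lstrip()
```

Unlike `split('.', 1)[1]`, `partition` never raises an `IndexError` when the delimiter is absent.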
17,213,455 | I'm kind of new to Python. I'm trying to remove the first sentence from a string, using the full stop as the delimiter. Is split the right method to be using in this instance? I'm not getting the desired result...
```
def get_summary(self):
if self.description:
s2 = self.description.split('.', 1)[1]
return s2
else:
return None
``` | 2013/06/20 | [
"https://Stackoverflow.com/questions/17213455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2342568/"
] | You can use [`String.split`](http://docs.oracle.com/javase/6/docs/api/java/lang/String.html#split%28java.lang.String%29):
```
String cmd = "command atr1 art22 atr333 art4444";
String[] parts = cmd.split(" ");
```
The split method permits using a regular expression. This is useful for example if the amount of whitespace varies:
```
String cmd = "command atr1 art22 atr333 art4444";
String[] parts = cmd.split(" +"); // split by spans of one or more spaces
``` | Or try [this](http://docs.oracle.com/javase/1.4.2/docs/api/java/util/StringTokenizer.html) StringTokenizer class
```
StringTokenizer st = new StringTokenizer("this is a test");
while (st.hasMoreTokens()) {
System.out.println(st.nextToken());
}
``` |
17,213,455 | I'm kind of new to Python. I'm trying to remove the first sentence from a string, using the full stop as the delimiter. Is split the right method to be using in this instance? I'm not getting the desired result...
```
def get_summary(self):
if self.description:
s2 = self.description.split('.', 1)[1]
return s2
else:
return None
``` | 2013/06/20 | [
"https://Stackoverflow.com/questions/17213455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2342568/"
] | Here a few options, sorted from easy/annoying-in-the-end to powerful/hard-to-learn
* "your command pattern".split( " " ) gives you an array of strings
* [`java.util.Scanner`](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Scanner.html) lets you take out one token after the other, and it has some handy helpers for parsing like `nextInt()` or `nextFloat()`
* a command line parser library, like [commons cli](http://commons.apache.org/proper/commons-cli/). those are a bit of work to learn, but they have the upside of solving some other problems you will be facing shortly :)
p.s. to generally find more help on the internet, the search term you are looking for is "java parsing command line arguments"; that's pretty much what you're trying to do, in case you didn't know :) | Or try [this](http://docs.oracle.com/javase/1.4.2/docs/api/java/util/StringTokenizer.html) StringTokenizer class
```
StringTokenizer st = new StringTokenizer("this is a test");
while (st.hasMoreTokens()) {
System.out.println(st.nextToken());
}
``` |
44,486,483 | So I've begun working on this little translator program that translates English to German with an input. However, when I enter more than one word I get the words I've entered, followed by the correct translation.
This is what I have so far:
```
data = [input()]
dictionary = {'i':'ich', 'am':'bin', 'a':'ein', 'student':'schueler', 'of the':'der', 'german':'deutschen', 'language': 'sprache'}
from itertools import takewhile
def find_suffix(s):
return ''.join(takewhile(str.isalpha, s[::-1]))[::-1]
for d in data:
sfx = find_suffix(d)
print (d.replace(sfx, dictionary.get(sfx, sfx)))
```
I'm trying to get the following output:
```
"i am a student of the german sprache"
```
as opposed to:
```
"ich bin ein schueler der deutschen spracher"
```
I'm quite new to python so any help would be greatly appreciated | 2017/06/11 | [
"https://Stackoverflow.com/questions/44486483",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8144709/"
] | Changing your code to this should provide a first step to what you're looking for.
```
data = raw_input()
dictionary = {'i':'ich', 'am':'bin', 'a':'ein', 'student':'schueler', 'of':'der', 'german':'deutschen', 'language': 'sprache'}
from itertools import takewhile
def find_suffix(s):
return ''.join(takewhile(str.isalpha, s[::-1]))[::-1]
for d in data.split():
sfx = find_suffix(d)
print (d.replace(sfx, dictionary.get(sfx,''))),
```
What you have right now does not consider each word separately: `data` is not a list of words, as you intended, but a list holding a single string, the input you provided. Try print-debugging your snippet to see what I am talking about.
Notice that with such logic, corner cases appear in your project. Taking each word and translating it with its German counterpart prohibits dictionary entries longer than 1 word, such as `'of the':'der'`. For demo purposes I chose to keep a dictionary with keys of length 1, so the above key:value pair becomes `'of':'der'`, which is not correct, as German grammar is a little more complicated than that.
You now have more problems than what you started with, which is what toy projects are for. If I was you, I'd look into how open source projects deal with such cases and try to see what fits. Good luck with your project. | ```
data = [input()]
dictionary = {'i':'ich', 'am':'bin', 'a':'ein', 'student':'schueler', 'of the':'der', 'german':'deutschen', 'language': 'sprache'}
for word in data:
if word in dictionary:
print dictionary[word],
```
Explanation:
for every word in your input, if that word is present in your dictionary,
it will print the value associated with that word; the trailing comma (,) suppresses the newline character. |
44,486,483 | So I've begun working on this little translator program that translates English to German with an input. However, when I enter more than one word I get the words I've entered, followed by the correct translation.
This is what I have so far:
```
data = [input()]
dictionary = {'i':'ich', 'am':'bin', 'a':'ein', 'student':'schueler', 'of the':'der', 'german':'deutschen', 'language': 'sprache'}
from itertools import takewhile
def find_suffix(s):
return ''.join(takewhile(str.isalpha, s[::-1]))[::-1]
for d in data:
sfx = find_suffix(d)
print (d.replace(sfx, dictionary.get(sfx, sfx)))
```
I'm trying to get the following output:
```
"i am a student of the german sprache"
```
as opposed to:
```
"ich bin ein schueler der deutschen spracher"
```
I'm quite new to python so any help would be greatly appreciated | 2017/06/11 | [
"https://Stackoverflow.com/questions/44486483",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8144709/"
] | Changing your code to this should provide a first step to what you're looking for.
```
data = raw_input()
dictionary = {'i':'ich', 'am':'bin', 'a':'ein', 'student':'schueler', 'of':'der', 'german':'deutschen', 'language': 'sprache'}
from itertools import takewhile
def find_suffix(s):
return ''.join(takewhile(str.isalpha, s[::-1]))[::-1]
for d in data.split():
sfx = find_suffix(d)
print (d.replace(sfx, dictionary.get(sfx,''))),
```
What you have right now does not take every separate word into consideration as data is not a list of words as you intended but a list holding one string, the input you provided. Try print-debugging your snippet to see what I am talking about.
Notice that with such logic corner cases in your project appear. Taking each word and translating it with its German counterpart prohibits dictionary entries longer than 1 word, such as `'of the':'der'`. For demo purposes I chose to keep a dictionary with keys of length 1, so the above key:value pair becomes `'of':'der'` which is not correct, as German grammar is a little more complicated than that.
You now have more problems than you started with, which is what toy projects are for. If I were you, I'd look into how open source projects deal with such cases and see what fits. Good luck with your project. | I have noticed two things in your `input`. The first is that two words may need to be translated into one (a two-word `key` in the `dictionary`), and the other is that the `input` can contain German words that shouldn't be translated. Given those two conditions, I think the best approach is to `split()` the `input` and loop through it to check the words. Follow the comments in the code below:
```
dictionary = {'i': 'ich', 'am': 'bin', 'a': 'ein', 'student': 'schueler', 'of the': 'der', 'german': 'deutschen', 'language': 'sprache'}
data = "i am a student of the german sprache"
lst = data.split()
result = ''
i = 0
while i < len(lst):
# try/except to see if the key is one word or two words
try:
if lst[i] in dictionary.values(): # Check if the word is german
result += lst[i] + ' '
i += 1
else:
result += dictionary[lst[i]] + ' ' # get the word from the dictionary
i += 1
except KeyError:
result += dictionary[lst[i] + ' ' + lst[i+1]] + ' ' # if the word is not german and not in dictionary, add the 2nd word and get from dictionary
i += 2
print result
```
output:
```
ich bin ein schueler der deutschen sprache
```
This will also fail if you have a 3 word `key` for example, but if you only have two words max then it should be fine. |
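A compact variant of the same idea, sketched as a reusable function: try a two-word key first, fall back to a one-word key, and keep unknown (already-German) words unchanged. This is an illustrative sketch, not a drop-in fix; the dictionary is copied from the question:

```python
dictionary = {'i': 'ich', 'am': 'bin', 'a': 'ein', 'student': 'schueler',
              'of the': 'der', 'german': 'deutschen', 'language': 'sprache'}

def translate(sentence):
    words = sentence.split()
    out, i = [], 0
    while i < len(words):
        two = ' '.join(words[i:i + 2])        # try a two-word key first
        if two in dictionary:
            out.append(dictionary[two])
            i += 2
        elif words[i] in dictionary:          # then a one-word key
            out.append(dictionary[words[i]])
            i += 1
        else:                                 # unknown word: keep as-is
            out.append(words[i])
            i += 1
    return ' '.join(out)

print(translate("i am a student of the german sprache"))
# ich bin ein schueler der deutschen sprache
```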
63,826,975 | I get the following error when I want to import matplotlib.pyplot on the Visual Studio's jupyter-notebook.
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
----> 1 import matplotlib.pyplot as plt
~/miniconda3/envs/firstSteps/lib/python3.8/site-packages/matplotlib/__init__.py in
903 # triggering resolution of _auto_backend_sentinel.
904 rcParamsDefault = _rc_params_in_file(
--> 905 cbook._get_data_path("matplotlibrc"),
906 # Strip leading comment.
907 transform=lambda line: line[1:] if line.startswith("#") else line,
~/.local/lib/python3.8/site-packages/matplotlib/cbook/__init__.py in _get_data_path(*args)
AttributeError: module 'matplotlib' has no attribute 'get_data_path'
```
But I don't have this error if I try the same code on the navigator's jupyter-notebook.
So I don't understand why I get this error since both notebook are running under the same kernel which have the matplotlib 3.3.1 version installed on.
I would be grateful if someone can give me any enlightenment. :) | 2020/09/10 | [
"https://Stackoverflow.com/questions/63826975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12409079/"
] | If you want all Gold customers, then `Customers` should be the first table in the `LEFT JOIN`. There is also no need for a subquery on `customers`. However, MS Access does want one on `Transactions`:
```
SELECT c.CustId, NZ(SUM(t.Value)) AS Total
FROM Customers as c LEFT JOIN
(SELECT t.*
FROM Transactions as t
WHERE t.xDate BETWEEN #2020/01/03# AND #2020/01/04#
) as t
ON t.CustId = c.CustId
WHERE c.CustType = 'Gold'
GROUP BY c.CustId;
``` | ***Edit:*** Simplified query
```
SELECT Customers.CustID, Sum(Transactions.tValue) AS Total
FROM Customers LEFT JOIN Transactions ON Customers.CustID = Transactions.CustID
WHERE (Transactions.xDate BETWEEN #2020/01/03# AND #2020/01/04#) AND (Customers.CustType='Gold')
GROUP BY Customers.CustID;
```
You can sum the total of the union query result, grouped by customer id. Try the query below.
I assume your `CustID` field is of the `Number` data type. If it is a string data type, then you need to change the `DLookup()` function's criteria part, e.g. `DLookup("CustType","Customers","CustID='" & t.CustID & "'")`
```
SELECT ut.CustID, Sum(ut.Total) AS Total
FROM (SELECT c.CustId, 0 as Total, c.CustType
FROM Customers AS c
GROUP BY c.CustId,c.CustType
UNION
SELECT t.CustId, SUM(t.tValue) AS Total, DLookup("CustType","Customers","CustID=" & t.CustID ) as CustType
FROM Transactions AS t
WHERE t.xDate BETWEEN #2020/01/03# AND #2020/01/04# GROUP BY t.CustId) AS ut GROUP BY ut.CustID, ut.CustType
HAVING (((ut.CustType)='gold'));
``` |
63,826,975 | I get the following error when I want to import matplotlib.pyplot on the Visual Studio's jupyter-notebook.
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
----> 1 import matplotlib.pyplot as plt
~/miniconda3/envs/firstSteps/lib/python3.8/site-packages/matplotlib/__init__.py in
903 # triggering resolution of _auto_backend_sentinel.
904 rcParamsDefault = _rc_params_in_file(
--> 905 cbook._get_data_path("matplotlibrc"),
906 # Strip leading comment.
907 transform=lambda line: line[1:] if line.startswith("#") else line,
~/.local/lib/python3.8/site-packages/matplotlib/cbook/__init__.py in _get_data_path(*args)
AttributeError: module 'matplotlib' has no attribute 'get_data_path'
```
But I don't have this error if I try the same code on the navigator's jupyter-notebook.
So I don't understand why I get this error since both notebook are running under the same kernel which have the matplotlib 3.3.1 version installed on.
I would be grateful if someone can give me any enlightenment. :) | 2020/09/10 | [
"https://Stackoverflow.com/questions/63826975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12409079/"
] | If you want all Gold customers, then `Customers` should be the first table in the `LEFT JOIN`. There is also no need for a subquery on `customers`. However, MS Access does want one on `Transactions`:
```
SELECT c.CustId, NZ(SUM(t.Value)) AS Total
FROM Customers as c LEFT JOIN
(SELECT t.*
FROM Transactions as t
WHERE t.xDate BETWEEN #2020/01/03# AND #2020/01/04#
) as t
ON t.CustId = c.CustId
WHERE c.CustType = 'Gold'
GROUP BY c.CustId;
``` | Consider:
Query1:
```
SELECT CustId, SUM(Value) AS Total
FROM Transactions
WHERE xDate BETWEEN #2020/02/01# AND #2020/03/01#
GROUP BY CustId;
```
Query2:
```
SELECT Customers.CustID, Query1.Total
FROM Customers LEFT JOIN Query1 ON Customers.CustID = Query1.CustId
WHERE (((Customers.[CustType])="Gold"));
``` |
63,826,975 | I get the following error when I want to import matplotlib.pyplot on the Visual Studio's jupyter-notebook.
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
----> 1 import matplotlib.pyplot as plt
~/miniconda3/envs/firstSteps/lib/python3.8/site-packages/matplotlib/__init__.py in
903 # triggering resolution of _auto_backend_sentinel.
904 rcParamsDefault = _rc_params_in_file(
--> 905 cbook._get_data_path("matplotlibrc"),
906 # Strip leading comment.
907 transform=lambda line: line[1:] if line.startswith("#") else line,
~/.local/lib/python3.8/site-packages/matplotlib/cbook/__init__.py in _get_data_path(*args)
AttributeError: module 'matplotlib' has no attribute 'get_data_path'
```
But I don't have this error if I try the same code on the navigator's jupyter-notebook.
So I don't understand why I get this error since both notebook are running under the same kernel which have the matplotlib 3.3.1 version installed on.
I would be grateful if someone can give me any enlightenment. :) | 2020/09/10 | [
"https://Stackoverflow.com/questions/63826975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12409079/"
] | If you want all Gold customers, then `Customers` should be the first table in the `LEFT JOIN`. There is also no need for a subquery on `customers`. However, MS Access does want one on `Transactions`:
```
SELECT c.CustId, NZ(SUM(t.Value)) AS Total
FROM Customers as c LEFT JOIN
(SELECT t.*
FROM Transactions as t
WHERE t.xDate BETWEEN #2020/01/03# AND #2020/01/04#
) as t
ON t.CustId = c.CustId
WHERE c.CustType = 'Gold'
GROUP BY c.CustId;
``` | `LEFT OUTER JOIN` is just called `LEFT JOIN` with Access-SQL, and works generally as expected.
See [here](https://support.microsoft.com/en-us/office/left-join-right-join-operations-ebb18b36-7976-4c6e-9ea1-c701e9f7f5fb#:%7E:text=Use%20a%20LEFT%20JOIN%20operation,create%20a%20right%20outer%20join.) for more information on its usage and limitations. |
34,783,867 | I have two pandas series like the following.
```
bulk_order_id
Out[283]:
3 523
Name: order_id, dtype: object
```
and
```
luster_6_loc
Out[285]:
3 Cluster 3
Name: Clusters, dtype: object
```
Now I want a new series which would look like this.
```
Cluster 3 523
```
I am doing the following in python
```
cluster_final = pd.Series()
for i in range(len(cluster_6_loc)):
cluster_final.append(pd.Series(bulk_order_id.values[i], index =
cluster_6_loc.iloc[i]))
```
Which gives me an error saying
```
TypeError: Index(...) must be called with a collection of some kind, 'Cluster 3' was passed
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34783867",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2927983/"
] | You could pass to `pd.Series` values of `luster_6_loc` as index and values of `bulk_order_id` as values:
```
bulk_order_id = pd.Series(523, index=[3])
cluster_6_loc= pd.Series('Cluster 3', index=[3])
cluster_final = pd.Series(bulk_order_id.values, cluster_6_loc.values)
In [149]: cluster_final
Out[149]:
Cluster 3 523
dtype: int64
```
**EDIT**
Note that `append` on a `Series` does not modify it in place; it returns a new `Series`, so the original stays empty (shown here on version `0.17.1`):
```
s = pd.Series()
In [199]: s.append(pd.Series(1, index=[0]))
Out[199]:
0 1
dtype: int64
In [200]: s
Out[200]: Series([], dtype: float64)
```
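The returned object has to be captured; the same point holds for `pd.concat`, which replaces `Series.append` on current pandas versions. A minimal sketch:

```python
import pandas as pd

s = pd.Series(dtype=float)
s2 = pd.concat([s, pd.Series([1], index=[0])])  # capture the result

print(len(s))    # 0 -> the original Series is untouched
print(len(s2))   # 1 -> the combined data lives in the returned object
```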
Btw, for your case you could do `set_value`:
```
cluster_final = pd.Series()
for i in range(len(cluster_6_loc)):
cluster_final.set_value(cluster_6_loc.iloc[i], bulk_order_id.values[i])
In [209]: cluster_final
Out[209]:
Cluster 3 523
dtype: int64
``` | Not sure whether I'm understanding your question correctly, but what's wrong with `pd.concat()` ([see docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html)):
```
s1 = pd.Series(data=['523'], index=[3])
3 523
dtype: object
s2 = pd.Series(data=['Cluster 3'], index=[3])
3 Cluster 3
dtype: object
```
and using `pd.concat()`, which would also work for several values:
```
pd.concat([s1, s2], axis=1)
0 1
3 523 Cluster 3
```
resulting in a `DataFrame` which is what you'll probably need anyway when combining `Series` with several values. You can move any of the `values` to the `index` using `.set_index()`, or add `.squeeze()` to get a `Series` instead.
So `pd.concat([s1, s2], axis=1).set_index(1)` gives:
```
0
1
Cluster 3 523
``` |
34,783,867 | I have two pandas series like the following.
```
bulk_order_id
Out[283]:
3 523
Name: order_id, dtype: object
```
and
```
luster_6_loc
Out[285]:
3 Cluster 3
Name: Clusters, dtype: object
```
Now I want a new series which would look like this.
```
Cluster 3 523
```
I am doing the following in python
```
cluster_final = pd.Series()
for i in range(len(cluster_6_loc)):
cluster_final.append(pd.Series(bulk_order_id.values[i], index =
cluster_6_loc.iloc[i]))
```
Which gives me an error saying
```
TypeError: Index(...) must be called with a collection of some kind, 'Cluster 3' was passed
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34783867",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2927983/"
] | Maybe better is use [`concat`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html) and [`set_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html):
```
print bulk_order_id
1 523
2 528
3 527
4 573
Name: order_id, dtype: object
print cluster_6_loc
1 Cluster 1
2 Cluster 2
3 Cluster 3
4 Cluster 4
Name: Clusters, dtype: object
cluster_final = pd.concat([bulk_order_id, cluster_6_loc], axis=1).set_index('Clusters')
#reset index name
cluster_final.index.name = ''
print cluster_final.ix[:,0]
Cluster 1 523
Cluster 2 528
Cluster 3 527
Cluster 4 573
Name: order_id, dtype: object
``` | Not sure whether I'm understanding your question correctly, but what's wrong with `pd.concat()` ([see docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html)):
```
s1 = pd.Series(data=['523'], index=[3])
3 523
dtype: object
s2 = pd.Series(data=['Cluster 3'], index=[3])
3 Cluster 3
dtype: object
```
and using `pd.concat()`, which would also work for several values:
```
pd.concat([s1, s2], axis=1)
0 1
3 523 Cluster 3
```
resulting in a `DataFrame` which is what you'll probably need anyway when combining `Series` with several values. You can move any of the `values` to the `index` using `.set_index()`, or add `.squeeze()` to get a `Series` instead.
So `pd.concat([s1, s2], axis=1).set_index(1)` gives:
```
0
1
Cluster 3 523
``` |
34,783,867 | I have two pandas series like the following.
```
bulk_order_id
Out[283]:
3 523
Name: order_id, dtype: object
```
and
```
luster_6_loc
Out[285]:
3 Cluster 3
Name: Clusters, dtype: object
```
Now I want a new series which would look like this.
```
Cluster 3 523
```
I am doing the following in python
```
cluster_final = pd.Series()
for i in range(len(cluster_6_loc)):
cluster_final.append(pd.Series(bulk_order_id.values[i], index =
cluster_6_loc.iloc[i]))
```
Which gives me an error saying
```
TypeError: Index(...) must be called with a collection of some kind, 'Cluster 3' was passed
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34783867",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2927983/"
] | Maybe better is use [`concat`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html) and [`set_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html):
```
print bulk_order_id
1 523
2 528
3 527
4 573
Name: order_id, dtype: object
print cluster_6_loc
1 Cluster 1
2 Cluster 2
3 Cluster 3
4 Cluster 4
Name: Clusters, dtype: object
cluster_final = pd.concat([bulk_order_id, cluster_6_loc], axis=1).set_index('Clusters')
#reset index name
cluster_final.index.name = ''
print cluster_final.ix[:,0]
Cluster 1 523
Cluster 2 528
Cluster 3 527
Cluster 4 573
Name: order_id, dtype: object
``` | You could pass to `pd.Series` values of `luster_6_loc` as index and values of `bulk_order_id` as values:
```
bulk_order_id = pd.Series(523, index=[3])
cluster_6_loc= pd.Series('Cluster 3', index=[3])
cluster_final = pd.Series(bulk_order_id.values, cluster_6_loc.values)
In [149]: cluster_final
Out[149]:
Cluster 3 523
dtype: int64
```
**EDIT**
Note that `append` on a `Series` does not modify it in place; it returns a new `Series`, so the original stays empty (shown here on version `0.17.1`):
```
s = pd.Series()
In [199]: s.append(pd.Series(1, index=[0]))
Out[199]:
0 1
dtype: int64
In [200]: s
Out[200]: Series([], dtype: float64)
```
Btw, for your case you could do `set_value`:
```
cluster_final = pd.Series()
for i in range(len(cluster_6_loc)):
cluster_final.set_value(cluster_6_loc.iloc[i], bulk_order_id.values[i])
In [209]: cluster_final
Out[209]:
Cluster 3 523
dtype: int64
``` |
67,434,998 | I'm new to python / pandas. I've got multiple csv files in a directory. I want to remove duplicates in all the files and save new files to another directory.
Below is what I've tried:
```
import pandas as pd
import glob
list_files = (glob.glob("directory path/*.csv"))
for file in list_files:
df = pd.read_csv(file)
df_new = df.drop_duplicates()
df_new.to_csv(file)
```
This code runs but doesn't yield expected results. A couple of issues.
1. files are overwritten in the existing directory.
2. there is an additional index column being added which is not required.
what changes need to be done in the code to get the same set of files with the same file names without duplicate rows to another directory? | 2021/05/07 | [
"https://Stackoverflow.com/questions/67434998",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15862144/"
] | The problem is caused by the `{}` around your file: pandas thinks the first level of the JSON holds the columns, and thus it uses just Browser History as a column. You can use this code to solve your problem:
```
import pandas as pd
df = pd.DataFrame(json.load(open('BrowserHistory.json', encoding='cp850'))['Browser History'])
print(df)
``` | Because your objects are in a list at the second level down of your JSON, you can't read it directly into a dataframe using `read_json`. Instead, you could read the json into a variable, and then create the dataframe from that:
```py
import pandas as pd
import json
f = open("BrowserHistory.json")
js = json.load(f)
df = pd.DataFrame(js['Browser History'])
df
# favicon_url page_transition ... client_id time_usec
# 0 https://www.google.com/favicon.ico LINK ... cliendid 1620386529857946
# 1 https://www.google.com/favicon.ico LINK ... cliendid 1620386514845201
# 2 https://www.google.com/favicon.ico LINK ... cliendid 1620386499014063
# 3 https://ssl.gstatic.com/ui/v1/icons/mail/rfr/g... LINK ... cliendid 1620386492788783
```
Note you may need to specify the file encoding on the `open` call e.g.
```py
f = open("BrowserHistory.json", encoding="utf8")
``` |
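The shape issue is visible with the standard-library `json` module alone: the top level is a dict with a single key, and the list of records sits one level down (the sample data below is made up for illustration):

```python
import json

raw = '{"Browser History": [{"title": "a", "time_usec": 1}, {"title": "b", "time_usec": 2}]}'
js = json.loads(raw)

print(list(js.keys()))        # ['Browser History'] -> only one top-level key
records = js['Browser History']
print(len(records))           # 2 -> these dicts are the rows you actually want
```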
52,949,128 | I'm doing a project that involves analyzing WhatsApp log data.
After preprocessing the log file I have a table that looks like this:
```
DD/MM/YY | hh:mm | name | text |
```
Using a chat with a friend of mine, I could build a graph of the number of texts per month and the mean number of words per month, but I have some problems:
* If in a month we didn't exchange any texts, the algorithm doesn't count that month, so in the graph I want to see that month with 0 messages
* is there a better way to handle dates and times in Python? Using them as strings isn't very intuitive, but I didn't find anything useful online.
[this is the GitLab page of my project.](https://gitlab.com/GiuseppeMinardi/whatsgraph)
```
def wapp_split(line):
splitted = line.split(',')
Data['date'].append(splitted[0])
splitted = splitted[1].split(' - ')
Data['time'].append(splitted[0])
splitted = splitted[1].split(':')
Data['name'].append(splitted[0])
Data['msg'].append(splitted[1][0:-1])
def wapp_parsing(file):
with open(file) as f:
data = f.readlines()
for line in data:
if (line[17:].find(':')!= -1):
if (line[0] in numbers) and (line[1]in numbers):
prev = line[0:35]
wapp_split(line)
else:
line = prev + line
wapp_split(line)
```
Those are the main function of the script. The WhatsApp log is formatted like so:
```
DD/MM/YY, hh:mm - Name Surname: This is a text sent using WhatsApp
```
The parsing function just takes the file and sends each line to the split *function*. The `if` checks in the parsing function just keep WhatsApp's own system messages, as opposed to messages from the people in the chat, from being parsed. | 2018/10/23 | [
"https://Stackoverflow.com/questions/52949128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8425613/"
] | Suppose that the table you have is a .csv file that looks like this (call it msgs.csv):
```
date;time;name;text
22/10/2018;11:30;Maria;Hello how are you
23/10/2018;11:30;Justin;Check this
23/10/2018;11:31;Justin;link
22/11/2018;11:30;Maria;Hello how are you
23/11/2018;11:30;Justin;Check this
23/12/2018;11:31;Justin;link
22/12/2018;11:30;Maria;Hello how are you
23/12/2018;11:30;Justin;Check this
23/01/2019;11:31;Justin;link
23/04/2019;11:30;Justin;Check this
23/07/2019;11:31;Justin;link
```
Now you can use pandas to import this csv in a table format that recognises date and time together as a single timestamp object; then, for your calculations, you can group the data by month.
```
import pandas as pd
dateparse = lambda x: pd.datetime.strptime(x, '%d/%m/%Y %H:%M')
df = pd.read_csv('msgs.csv', delimiter=';', parse_dates=[['date', 'time']], date_parser=dateparse)
per = df.date_time.dt.to_period("M")
g = df.groupby(per)
for i in g:
print('#######')
print('year: {year} ; month: {month} ; number of messages: {n_msgs}'
.format(year=i[0].year, month=i[0].month, n_msgs=len(i[1])))
```
EDIT - no information about specific month = 0 messages:
========================================================
In order to get a 0 for months in which no messages were sent, you can do it like this (it looks better than the version above, too):
```
import pandas as pd
dateparse = lambda x: pd.datetime.strptime(x, '%d/%m/%Y %H:%M')
df = pd.read_csv('msgs.csv', delimiter=';', parse_dates=[['date', 'time']], date_parser=dateparse)
# create date range from oldest message to newest message
dates = pd.date_range(*(pd.to_datetime([df.date_time.min(), df.date_time.max()]) + pd.offsets.MonthEnd()), freq='M')
for i in dates:
df_aux = df[(df.date_time.dt.month == i.month) & (df.date_time.dt.year == i.year)]
print('year: {year} ; month: {month} ; number of messages: {n_msgs}'
.format(year=i.year, month=i.month, n_msgs=len(df_aux)))
```
EDIT 2: parse logs into a pandas dataframe:
===========================================
```
df = pd.DataFrame({'logs':['DD/MM/YY, hh:mm - Name Surname: This is a text sent using WhatsApp',
'DD/MM/YY, hh:mm - Name Surname: This is a text sent using WhatsApp']})
pat = re.compile("(?P<date>.*?), (?P<time>.*?) - (?P<name>.*?): (?P<message>.*)")
df_parsed = df.logs.str.extractall(pat)
``` | It's best to convert the strings into datetime objects
```
from datetime import datetime
datetime_object = datetime.strptime('22/10/18', '%d/%m/%y')
```
When converting from a string, remember to use the correct separators (i.e. "-" or "/") to match the string, and make sure the letters in the format template match the corresponding parts of the date string as well. Full details on the meaning of the format letters can be found at [Python strptime() Method](https://www.tutorialspoint.com/python/time_strptime.htm) |
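For example, the format string has to mirror the input's separators exactly; a mismatched pattern raises `ValueError` (the sample timestamp below is made up):

```python
from datetime import datetime

stamp = '22/10/18, 14:05'
dt = datetime.strptime(stamp, '%d/%m/%y, %H:%M')  # separators match the string
print(dt.year, dt.month)  # 2018 10

try:
    datetime.strptime(stamp, '%d-%m-%y, %H:%M')   # wrong separator: '-' vs '/'
except ValueError as exc:
    print('mismatch:', exc)
```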
52,949,128 | I'm doing a project that involves analyzing WhatsApp log data.
After preprocessing the log file I have a table that looks like this:
```
DD/MM/YY | hh:mm | name | text |
```
Using a chat with a friend of mine, I could build a graph of the number of texts per month and the mean number of words per month, but I have some problems:
* If in a month we didn't exchange any texts, the algorithm doesn't count that month, so in the graph I want to see that month with 0 messages
* is there a better way to handle dates and times in Python? Using them as strings isn't very intuitive, but I didn't find anything useful online.
[this is the GitLab page of my project.](https://gitlab.com/GiuseppeMinardi/whatsgraph)
```
def wapp_split(line):
splitted = line.split(',')
Data['date'].append(splitted[0])
splitted = splitted[1].split(' - ')
Data['time'].append(splitted[0])
splitted = splitted[1].split(':')
Data['name'].append(splitted[0])
Data['msg'].append(splitted[1][0:-1])
def wapp_parsing(file):
with open(file) as f:
data = f.readlines()
for line in data:
if (line[17:].find(':')!= -1):
if (line[0] in numbers) and (line[1]in numbers):
prev = line[0:35]
wapp_split(line)
else:
line = prev + line
wapp_split(line)
```
Those are the main function of the script. The WhatsApp log is formatted like so:
```
DD/MM/YY, hh:mm - Name Surname: This is a text sent using WhatsApp
```
The parsing function just takes the file and sends each line to the split *function*. The `if` checks in the parsing function just keep WhatsApp's own system messages, as opposed to messages from the people in the chat, from being parsed. | 2018/10/23 | [
"https://Stackoverflow.com/questions/52949128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8425613/"
] | Suppose that the table you have is a .csv file that looks like this (call it msgs.csv):
```
date;time;name;text
22/10/2018;11:30;Maria;Hello how are you
23/10/2018;11:30;Justin;Check this
23/10/2018;11:31;Justin;link
22/11/2018;11:30;Maria;Hello how are you
23/11/2018;11:30;Justin;Check this
23/12/2018;11:31;Justin;link
22/12/2018;11:30;Maria;Hello how are you
23/12/2018;11:30;Justin;Check this
23/01/2019;11:31;Justin;link
23/04/2019;11:30;Justin;Check this
23/07/2019;11:31;Justin;link
```
Now you can use pandas to import this csv in a table format that recognises date and time together as a single timestamp object; then, for your calculations, you can group the data by month.
```
import pandas as pd
dateparse = lambda x: pd.datetime.strptime(x, '%d/%m/%Y %H:%M')
df = pd.read_csv('msgs.csv', delimiter=';', parse_dates=[['date', 'time']], date_parser=dateparse)
per = df.date_time.dt.to_period("M")
g = df.groupby(per)
for i in g:
print('#######')
print('year: {year} ; month: {month} ; number of messages: {n_msgs}'
.format(year=i[0].year, month=i[0].month, n_msgs=len(i[1])))
```
EDIT - no information about specific month = 0 messages:
========================================================
In order to get a 0 for months in which no messages were sent, you can do it like this (it looks better than the version above, too):
```
import pandas as pd
dateparse = lambda x: pd.datetime.strptime(x, '%d/%m/%Y %H:%M')
df = pd.read_csv('msgs.csv', delimiter=';', parse_dates=[['date', 'time']], date_parser=dateparse)
# create date range from oldest message to newest message
dates = pd.date_range(*(pd.to_datetime([df.date_time.min(), df.date_time.max()]) + pd.offsets.MonthEnd()), freq='M')
for i in dates:
df_aux = df[(df.date_time.dt.month == i.month) & (df.date_time.dt.year == i.year)]
print('year: {year} ; month: {month} ; number of messages: {n_msgs}'
.format(year=i.year, month=i.month, n_msgs=len(df_aux)))
```
EDIT 2: parse logs into a pandas dataframe:
===========================================
```
df = pd.DataFrame({'logs':['DD/MM/YY, hh:mm - Name Surname: This is a text sent using WhatsApp',
'DD/MM/YY, hh:mm - Name Surname: This is a text sent using WhatsApp']})
pat = re.compile("(?P<date>.*?), (?P<time>.*?) - (?P<name>.*?): (?P<message>.*)")
df_parsed = df.logs.str.extractall(pat)
``` | A simple solution for adding missing dates and plotting the mean value of msg\_len is to create a date range your interested in then reindex:
```
df.set_index('date', inplace=True)
df1 = df[['msg_len','year']]
df1.index = df1.index.to_period('m')
msg_len year
date
2016-08 11 2016
2016-08 4 2016
2016-08 3 2016
2016-08 4 2016
2016-08 15 2016
2016-10 10 2016
# look for date range between 7/2016 and 11/2016
idx = pd.date_range('7-01-2016','12-01-2016',freq='M').to_period('m')
new_df = pd.DataFrame(df1.groupby(df1.index)['msg_len'].mean()).reindex(idx, fill_value=0)
new_df.plot()
msg_len
2016-07 0.0
2016-08 7.4
2016-09 0.0
2016-10 10.0
2016-11 0.0
```
You can change `mean` to anything else, e.g. a count, if you want the number of messages for a given month. |
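The zero-filling can also be sketched with the standard library alone: count messages per (year, month) and walk every month between the first and last timestamp, defaulting missing months to 0 (the sample timestamps are invented):

```python
from collections import Counter
from datetime import datetime

msgs = ['22/10/18, 11:30', '25/10/18, 09:12', '03/12/18, 18:01']
stamps = sorted(datetime.strptime(m, '%d/%m/%y, %H:%M') for m in msgs)
counts = Counter((d.year, d.month) for d in stamps)

y, m = stamps[0].year, stamps[0].month
last = (stamps[-1].year, stamps[-1].month)
while (y, m) <= last:                         # tuple comparison: (year, month)
    print(f'{y}-{m:02d}: {counts.get((y, m), 0)} messages')
    m += 1
    if m == 13:                               # roll over into the next year
        y, m = y + 1, 1
# 2018-10: 2 messages
# 2018-11: 0 messages
# 2018-12: 1 messages
```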
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data-set such as this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and
in the name of a city (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | In the **Language** menu select the corresponding language, for example **H** and then **html** | 1. Check if you have saved the documents as .HTML and not as .txt
2. In the menu, choose Settings > Style Configurator...
and in the list in the left pane select HTML; check whether the colors for different tags are shown in the color blocks. If yes, choose a font, then save and exit.
3. Check whether it is working only after you have saved the document as .html. |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data-set such as this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and
in the name of a city (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code is heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
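For completeness, `f1` can also be sped up without abandoning regular expressions: compile one word-boundary alternation over all the cities and scan the question a single time instead of once per city. A hedged sketch in Python 3 with made-up sample data; `re.escape` protects names like "St. Louis".

```python
import re

cities = ["Erie", "Boston", "St. Louis"]  # illustrative subset
question = "A series of storms hit St. Louis today."

# Longest names first, so "St. Louis" wins over any shorter alternative.
alternation = "|".join(re.escape(c) for c in sorted(cities, key=len, reverse=True))
pattern = re.compile(r"\b(?:%s)\b" % alternation, re.IGNORECASE)

found = pattern.findall(question)
print(found)  # ['St. Louis']
```

The word boundaries still block the "Erie" in "series" false positive, while the single compiled pattern removes the per-city loop.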
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | In the **Language** menu select your corresponding language. For example **H** and then **html** | The language setting solved the issue for (all) 3 Javascript files (.js) which suffered from it, which previously were all recognized correctly as Javascript. For some reason it forgot they were Javascript files apparently!? |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data set like this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and in
the names of cities (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code is heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
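As a side note on the `f3` approach timed above, its chain of `str.replace` calls can be collapsed into a single `str.translate` pass that maps every punctuation character to a space in one C-level sweep. A hedged Python 3 sketch with made-up sample data (Python 2 would need `string.maketrans` instead); it shares `f3`'s caveat that city names containing punctuation need the same treatment.

```python
import string

cities = ["Erie", "Boston"]  # illustrative subset
question = 'Did the storm (last week) hit "Erie", or somewhere else?\n'

# One table lookup per character instead of one .replace() call per symbol.
table = str.maketrans({ch: " " for ch in string.punctuation + "\n\t"})
Q = " " + question.upper().translate(table) + " "

found = [c for c in cities if " " + c.upper() + " " in Q]
print(found)  # ['Erie']
```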
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | I had the same problem and discovered it was because I had enabled global foreground color under Global Styles. | 1. Check if you have saved the documents as .html and not as .txt.
2. In the menu, choose Settings > Style Configurator...,
and in the list in the left pane select HTML; check whether the colors for different tags are shown in the color blocks. If so, choose a font, then save and exit.
3. Check whether it is working only after you have saved the document as .html. |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data set like this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and in
the names of cities (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code is heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
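The bug recorded in the `f0`/`f2` docstrings above is easy to demonstrate in isolation: plain substring containment has no notion of word boundaries, which is exactly how "Erie" surfaces inside "series". A minimal Python 3 check:

```python
import re

text = "A SERIES OF EVENTS"

# Plain containment reports a false positive: "ERIE" sits inside "SERIES".
substring_hit = "ERIE" in text
# A word-boundary regex correctly finds nothing.
boundary_hit = re.search(r"\bERIE\b", text) is not None

print(substring_hit, boundary_hit)  # True False
```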
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | I had the same problem (I Googled "notepad++ file coloring quit" to find this discussion). In my case the coloring quit mid-file in a single file. I finally realized that adjacent string literals, one of them a macro, were fooling Notepad++.
My code that broke it read:
Write\_Supplemental\_Configuration(privateData->new\_config, FTP\_ROOT\_DIR"/lists.csv");
and the fix was to add a space after the macro:
Write\_Supplemental\_Configuration(privateData->new\_config, FTP\_ROOT\_DIR "/lists.csv");
I tried replacing the macro FTP\_ROOT\_DIR with "foo" and the problem went away.
So in my case it was a macro that fooled the Notepad++ coloring. | If the coloring only stopped working for one file, you should check the extension name of your file. You might have accidentally saved the file as .txt |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data set like this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and in
the names of cities (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code is heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | I had the same problem and discovered it was because I had enabled global foreground color under Global Styles. | First type anything and save the file in the format you are working with (e.g. .cpp for C++, .js for JavaScript, etc.).
Also make sure global foreground color is disabled.
It should then work fine. |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data set like this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and in
the names of cities (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code is heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | In the **Language** menu select your corresponding language. For example **H** and then **html** | First type anything and save the file in the format you are working with (e.g. .cpp for C++, .js for JavaScript, etc.).
Also make sure global foreground color is disabled.
It should then work fine. |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data set like this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and in
the names of cities (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code is heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | Make sure that when you save the file it's saved as **.html** instead of **.txt**. This makes a difference because **.html** allows you to see the different colour codes whereas **.txt** doesn't. | If the coloring only stopped working for one file, you should check the extension name of your file. You might have accidentally saved the file as .txt |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data set like this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and in
the names of cities (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code is heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | Try out the following:
1. Select a language manually from the "Languages" menu.
2. In Settings/Preferences, check the File Associations.
3. In the worst case, reinstall. | Go to Settings -> Style Configurator and remove the global style checkbox
[](https://i.stack.imgur.com/JjGhl.png) |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data set like this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and in
the names of cities (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code is heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | I had the same problem (I Googled "notepad++ file coloring quit" to find this discussion). In my case the coloring quit mid-file in a single file. I finally realized that adjacent string literals, one of them a macro, were fooling Notepad++.
My code that broke it read:
Write\_Supplemental\_Configuration(privateData->new\_config, FTP\_ROOT\_DIR"/lists.csv");
and the fix was to add a space after the macro:
Write\_Supplemental\_Configuration(privateData->new\_config, FTP\_ROOT\_DIR "/lists.csv");
I tried replacing the macro FTP\_ROOT\_DIR with "foo" and the problem went away.
So in my case it was a macro that fooled the Notepad++ coloring. | If you want to display text in SQL format, then in the menu select Language => S => SQL. |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data set like this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question and in
the names of cities (e.g. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code is heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | I had the same problem and discovered it was because I had enabled global foreground color under Global Styles. | Have a look in Settings -> Style Configurator. Maybe your styles got messed up somehow. You could try changing the selected style to see if it makes a difference.
I think the saved styles are stored in the "themes" directory under your Notepad++ installation directory, so you could also check that the files have not become corrupted in some way. |
63,781,794 | I got this error message when I was installing python-binance.
Error message is in the link below please check
<https://docs.google.com/document/d/1VE0Ux_ji9RoK0NIrPD3BSbs60sTaxThk3boxsvh051c/edit>
Anyone knows how to fix it? | 2020/09/07 | [
"https://Stackoverflow.com/questions/63781794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14236836/"
] | You're trying to install [`email` from PyPI](https://pypi.org/project/email/), which is a very old, outdated, Python 2-only package.
`email` is now [a module in the stdlib](https://docs.python.org/3/library/email.html). You don't need to install it; it is always available. Just import and use. | You might have outdated setuptools, try:
```
pip install --upgrade setuptools
```
Then continue trying to install the module you want.
Usually these kinds of problems can be solved by googling the error: in this case you should try searching with "python setup.py egg\_info".
Also, try to give a more descriptive title for your problems in the future. "Python installing package with pip failed" is too broad. |
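To make the stdlib point in the answer above concrete: `email` ships with Python itself, so there is nothing to `pip install`. A quick sketch (the addresses are made up):

```python
from email.message import EmailMessage

# No installation step: the stdlib `email` package is always importable.
msg = EmailMessage()
msg["Subject"] = "Hello"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg.set_content("It just imports.")

print(msg["Subject"])  # Hello
```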
63,781,794 | I got this error message when I was installing python-binance.
Error message is in the link below please check
<https://docs.google.com/document/d/1VE0Ux_ji9RoK0NIrPD3BSbs60sTaxThk3boxsvh051c/edit>
Anyone knows how to fix it? | 2020/09/07 | [
"https://Stackoverflow.com/questions/63781794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14236836/"
] | You're trying to install [`email` from PyPI](https://pypi.org/project/email/), which is a very old, outdated, Python 2-only package.
`email` is now [a module in the stdlib](https://docs.python.org/3/library/email.html). You don't need to install it; it is always available. Just import and use. | Thanks guys, I already figured out what the error message means: I downloaded Twisted from a third-party website and installed it, and it worked. |
16,375,251 | This is part of a project I am working on for work.
I want to automate a Sharepoint site, specifically to pull data out of a database that I and my coworkers only have front-end access to.
I FINALLY managed to get mechanize (in Python) to accomplish this using Python-NTLM, and by patching part of its source code to fix a recurring error.
Now, I am at what I would hope is my final roadblock: Part of the form I need to submit seems to be output of a JavaScript function :| and lo and behold... Mechanize does not support javascript. I don't want to emulate the javascript functionality myself in python because I would ideally like a reusable solution...
So, does **anyone** know how I could evaluate the javascript on the local html I download from sharepoint? I just want to run the javascript somehow (to complete the loading of the page), but without a browser.
I have already looked into selenium, but it's pretty slow for the amount of work I need to get done... I am currently looking into PyV8 to *try* and evaluate the javascript myself... but surely there must be an app or library (or **anything**) that can do this?? | 2013/05/04 | [
"https://Stackoverflow.com/questions/16375251",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/629404/"
] | Well, in the end I came down to the following possible solutions:
* **Run Chrome headless** and collect the html output (thanks to koenp for the link!)
* **Run PhantomJS**, a headless browser with a javascript api
* **Run HTMLUnit**; same thing but for Java
* **Use Ghost.py**, a **python-based** headless browser (that I haven't seen suggested anyyyywhere for some reason!)
* Write a DOM-based javascript interpreter based on Pyv8 (Google v8 javascript engine) and add this to my current "half-solution" with mechanize.
For now, I have decided to either use Ghost.py or my own modification of the PySide/PyQt WebKit (how Ghost works) to evaluate the javascript, as apparently they can run quite fast if you optimize them to not download images and disable the GUI.
Hopefully others will find this list useful! | Well, you will need something that both understands the DOM and understands JavaScript, so that comes down to a headless browser of some sort. Maybe you can take a look at the [selenium webdriver](http://docs.seleniumhq.org/docs/03_webdriver.jsp), but I guess you already did that. I don't think there is an easy way of doing this without running the stuff in an actual browser engine. |
59,591,862 | Essentially I'm trying to do something that is stated here [Changing variables in multiple Python instances](https://stackoverflow.com/questions/9302789/changing-variables-in-multiple-python-instances)
but in java.
I want to reset a variable in all instances of a certain class so something like:
```
public class NewClass{
int variable = 1;
}
```
then:
```
NewClass one = new NewClass();
NewClass two = new NewClass();
NewClass three = new NewClass();
NewClass.variable = 2;
System.out.println(one.variable);
System.out.println(two.variable);
System.out.println(three.variable);
```
output would be:
```
2
2
2
```
is there a way to do that? | 2020/01/04 | [
"https://Stackoverflow.com/questions/59591862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12652681/"
] | This is probably far from a Todo but it'll give you some clarification of what to do.
```js
const box = document.querySelector('.box');
let inputTodo = document.getElementById('inputTodo');
const inputTodoHandler = (event) => {
if(event.which == 13 || event.keyCode == 13) {
addTodo(event.target.value);
event.target.value = '';
return false;
}
}
const addTodo = (todo) => {
const p = document.createElement('p');
p.textContent = todo;
box.appendChild( p );
}
inputTodo.addEventListener('keydown', inputTodoHandler );
```
```css
* {
margin: 0;
background-color: rgb(27, 27, 27);
font-family: 'Indie Flower', cursive;
}
h1 {
font-size: 5.5vw;
color: rgb(241, 240, 240);
display: flex;
justify-content: center;
margin-top: 50px;
letter-spacing: 1px;
}
.main {
margin-left: 100px;
margin-right: 100px;
margin-top: 50px;
display: flex;
flex-direction: row;
justify-content: space-evenly;
font-size: 1vw;
color: rgb(241, 240, 240);
letter-spacing: 2px;
}
.left {
display: flex;
flex-direction: column;
}
.left h2 {
padding-bottom: 50px;
}
.left form {
border: 4px solid rgb(102, 181, 255);
border-radius: 5px;
}
.left form input {
height: 30px;
width: 100%;
color: rgb(241, 240, 240);
font-size: 24px;
letter-spacing: 1px;
border: none;
}
.box {
border: 5px solid black;
border-radius: 7px;
background-color: rgb(255, 234, 176);
width: 600px;
height: 68vh;
}
```
```html
<!DOCTYPE html>
<html lang="en">
<head>
<link rel="stylesheet" href="style.css">
<link rel="text/javascript" href="javascript.js">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<link href="https://fonts.googleapis.com/css?family=Indie+Flower&display=swap" rel="stylesheet">
<title>My to do list</title>
</head>
<body>
<header>
<h1>To do list</h1>
</header>
<section class="main">
<div class="left">
<h2>Please Enter your things to do here..</h2>
<form action="" onsubmit="return false;">
<input type="text" id="inputTodo">
</form>
</div>
<div class="box">
</div>
</section>
</body>
</html>
``` | Use a database system in your website, with SQL or any server that stores the info in the cloud, so you can store, edit, delete, and access the data from your desired location. |
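The question above is really about Java's `static` keyword: a `static` field lives on the class rather than on each instance, so a single assignment through the class is visible from every instance. A minimal, self-contained sketch (the class name mirrors the question; the `readThrough` helper is my own addition for illustration):

```java
// A static field is stored once, on the class, so every instance
// observes the same value after a single write.
public class NewClass {
    static int variable = 1;

    // helper that reads the shared value "through" an instance;
    // the access still resolves to NewClass.variable
    static int readThrough(NewClass instance) {
        return instance.variable;
    }

    public static void main(String[] args) {
        NewClass one = new NewClass();
        NewClass two = new NewClass();
        NewClass three = new NewClass();

        NewClass.variable = 2; // one write on the class...

        System.out.println(one.variable);   // prints 2
        System.out.println(two.variable);   // prints 2
        System.out.println(three.variable); // prints 2
    }
}
```

Note that `one.variable` compiles, but it is clearer style to write `NewClass.variable`, since the field belongs to the class, not the instance.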
59,591,862 | Essentially I'm trying to do something that is stated here [Changing variables in multiple Python instances](https://stackoverflow.com/questions/9302789/changing-variables-in-multiple-python-instances)
but in java.
I want to reset a variable in all instances of a certain class so something like:
```
public class NewClass{
int variable = 1;
}
```
then:
```
NewClass one = new NewClass();
NewClass two = new NewClass();
NewClass three = new NewClass();
Newclass.variable = 2;
System.out.println(one.variable);
System.out.println(two.variable);
System.out.println(three.variable);
```
output would be:
```
2
2
2
```
is there a way to do that? | 2020/01/04 | [
"https://Stackoverflow.com/questions/59591862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12652681/"
] | There are a few concepts you should look up:
* how to assign a DOM element to a variable with [`querySelector`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector) or other methods:
```
const input = document.querySelector('input')
const div = document.querySelector('.box')
```
* events and event listeners. In your case you are going to have to listen to the changes made to your `<input>` element with [`addEventListener`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener):
```
input.addEventListener('change', callback)
```
* how to modify the content of a DOM element with [`innerText`](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/innerText):
```
div.innerText = input.value
```
---
And then, putting it all together
```
const input = document.querySelector('input')
const div = document.querySelector('.box')
input.addEventListener('change', () => {
div.innerText = input.value
})
``` | Use a database system in your website, with SQL or any server that stores the info in the cloud, so you can store, edit, delete, and access the data from your desired location. |
59,591,862 | Essentially I'm trying to do something that is stated here [Changing variables in multiple Python instances](https://stackoverflow.com/questions/9302789/changing-variables-in-multiple-python-instances)
but in java.
I want to reset a variable in all instances of a certain class so something like:
```
public class NewClass{
int variable = 1;
}
```
then:
```
NewClass one = new NewClass();
NewClass two = new NewClass();
NewClass three = new NewClass();
Newclass.variable = 2;
System.out.println(one.variable);
System.out.println(two.variable);
System.out.println(three.variable);
```
output would be:
```
2
2
2
```
is there a way to do that? | 2020/01/04 | [
"https://Stackoverflow.com/questions/59591862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12652681/"
] | The shortest way to point you in the right direction is to use an event listener for key presses on the input field.
Also, since you want to create some kind of list, you need to append the input's new value to the destination content once the Enter key is pressed, dividing lines with an end-of-line character.
Note: You don't need the form tag here, as a form is usually used to exchange data with server-side scripts.
```js
var el = document.getElementById('src_txt');
var dest = document.getElementsByClassName('box')[0];
el.addEventListener('keypress', function(ev) {
if (ev.keyCode == 13) {
dest.innerText += (el.value + "\n");
}
});
```
```css
* {
margin: 0;
background-color: rgb(27, 27, 27);
font-family: 'Indie Flower', cursive;
}
h1 {
font-size: 5.5vw;
color: rgb(241, 240, 240);
display: flex;
justify-content: center;
margin-top: 50px;
letter-spacing: 1px;
}
.main {
margin-left: 100px;
margin-right: 100px;
margin-top: 50px;
display: flex;
flex-direction: row;
justify-content: space-evenly;
font-size: 1vw;
color: rgb(241, 240, 240);
letter-spacing: 2px;
}
.left {
display: flex;
flex-direction: column;
}
.left h2 {
padding-bottom: 50px;
}
#src_txt {
height: 30px;
width: 100%;
color: rgb(241, 240, 240);
font-size: 24px;
letter-spacing: 1px;
border: 4px solid rgb(102, 181, 255);
border-radius: 5px;
}
.box {
border: 5px solid black;
border-radius: 7px;
background-color: rgb(255, 234, 176);
width: 600px;
height: 68vh;
color: #000;
font-size: 12px;
}
```
```html
<!DOCTYPE html>
<html lang="en">
<head>
<link rel="stylesheet" href="style.css">
<link rel="text/javascript" href="javascript.js">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<link href="https://fonts.googleapis.com/css?family=Indie+Flower&display=swap" rel="stylesheet">
<title>My to do list</title>
</head>
<body>
<header>
<h1>To do list</h1>
</header>
<section class="main">
<div class="left">
<h2>Please Enter your things to do here..</h2>
<input type="text" id='src_txt'>
</div>
<div class="box">
</div>
</section>
</body>
</html>
``` | Use a database system in your website, with SQL or any server that stores the info in the cloud, so you can store, edit, delete, and access the data from your desired location. |
59,591,862 | Essentially I'm trying to do something that is stated here [Changing variables in multiple Python instances](https://stackoverflow.com/questions/9302789/changing-variables-in-multiple-python-instances)
but in java.
I want to reset a variable in all instances of a certain class so something like:
```
public class NewClass{
int variable = 1;
}
```
then:
```
NewClass one = new NewClass();
NewClass two = new NewClass();
NewClass three = new NewClass();
Newclass.variable = 2;
System.out.println(one.variable);
System.out.println(two.variable);
System.out.println(three.variable);
```
output would be:
```
2
2
2
```
is there a way to do that? | 2020/01/04 | [
"https://Stackoverflow.com/questions/59591862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12652681/"
] | Something like this should do the trick. The `event.preventDefault()` call stops the form from being submitted to your server.
```js
function moveit(){
event.preventDefault();
var inputs = document.getElementsByTagName('input');
var input = inputs[0].value;
var box = document.getElementById('box');
var before = box.innerHTML
var after = before + '<br/>'+ input;
box.innerHTML=after;
}
```
```css
* {
margin: 0;
background-color: pink;
}
h1 {
font-size: 5.5vw;
color: rgb(241, 240, 240);
display: flex;
justify-content: center;
margin-top: 50px;
letter-spacing: 1px;
}
.main {
margin-left: 100px;
margin-right: 100px;
margin-top: 50px;
display: flex;
flex-direction: row;
justify-content: space-evenly;
font-size: 1vw;
color: rgb(241, 240, 240);
letter-spacing: 2px;
}
.left {
display: flex;
flex-direction: column;
}
.left h2 {
padding-bottom: 50px;
}
.left form {
border: 4px solid rgb(102, 181, 255);
border-radius: 5px;
}
.left form input {
height: 30px;
width: 100%;
color: rgb(241, 240, 240);
font-size: 24px;
letter-spacing: 1px;
border: none;
}
#box {
border: 5px solid black;
border-radius: 7px;
background-color: rgb(255, 234, 176);
width: 600px;
height: 68vh;
color:black;
}
```
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>My to do list</title>
</head>
<body>
<header>
<h1>To do list</h1>
</header>
<section class="main">
<div class="left">
<h2>Please Enter your things to do here..</h2>
<form onsubmit="moveit();">
<input type="text" ><br/>
<input type='submit' value='submit' >
</form>
</div>
<div id="box">
</div>
</section>
</body>
</html>
``` | Use a database system in your website, with SQL or any server that stores the info in the cloud, so you can store, edit, delete, and access the data from your desired location. |
17,960,696 | I was trying to install a package using easy\_install, errors happened "processing dependencies", looks like it cannot locate a package, here's the error I got
---
```
Processing dependencies for python-pack==1.5.0beta2
Searching for python-pack==1.5.0beta2
Reading http://pypi.python.org/simple/python-pack/
Couldn't find index page for 'python-pack' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading http://pypi.python.org/simple/
No local packages or download links found for python-pack==1.5.0beta2
Best match: None
```
---
The package to be installed is actually for Ubuntu, and my system is Debian. But I didn't expect errors at this stage.
Could anyone please help me out?
Thanks,
Zhihui | 2013/07/31 | [
"https://Stackoverflow.com/questions/17960696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2636377/"
] | It turned out that the jar files in ~/.m2/repository had been corrupted. The issue was solved by deleting everything in the repository and running:
>
> mvn clean install
>
>
>
All the classes can be resolved now. | The answer is more likely that you need to add the dependency to your pom.xml file:
```
<dependency>
<groupId>io.dropwizard</groupId>
<artifactId>dropwizard-hibernate</artifactId>
<version>${dropwizard.version}</version>
</dependency>
``` |
33,324,083 | I am having trouble learning to plot a function in python. For example I want to create a graph with these two functions:
```
y=10x
y=5x+20
```
The only way I found was to use the following code
```
import matplotlib.pyplot as plt
plt.plot([points go here], [points go here])
plt.plot([points go here], [points go here])
plt.ylabel('some numbers')
plt.show()
```
and to manually enter data points, but I have some tougher problems coming up so that would be really difficult.
Is there a way to just put in what function I need a plot for and have python create the graph for me? | 2015/10/24 | [
"https://Stackoverflow.com/questions/33324083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5484597/"
] | There are quite a lot of answers here which explain that, but let me give you another one.
A string is interned into the String literal pool only in two situations: when a class is loaded and the String was a literal or compile time constant, or when you call `.intern()` on a String, in which case a copy of this string is listed in the pool and returned. All other string creations will not be interned. String concatenation (`+`) produces new instances as long as it is not a compile time constant expression\*.
First of all: never ever use it. If you do not understand it you should not use it. Use `.equals()`. Interning strings for the sake of comparison might be slower than you think and unnecessarily fills the hashtable. Especially for strings with highly different content.
1. s3 is a string literal from the constant pool and therefore interned. s4 is an expression not producing an interned constant.
2. when you intern s4 it has the same content as s3 and is therefore the same instance.
3. same as s4, expression not a constant
4. if you intern s1+s2 you get the instance of s3, but s4 is still not s3
5. if you intern s4 it is the same instance as s3
Some more questions:
```
System.out.println(s3 == s3.intern()); // is true
System.out.println(s4 == s4.intern()); // is false
System.out.println(s1 == "abc"); // is true
System.out.println(s1 == new String("abc")); // is false
```
\* Compile time constants can be expressions with literals on both sides of the concatenation (like `"a" + "bc"`) but also final String variables initialized from constants or literals:
```
final String a = "a";
final String b = "b";
final String ab = a + b;
final String ab2 = "a" + b;
final String ab3 = "a" + new String("b");
System.out.println("ab == ab2 should be true: " + (ab == ab2));
System.out.println("a+b == ab should be true: " + (a+b == ab));
System.out.println("ab == ab3 should be false: " + (ab == ab3));
``` | One thing you have to know is that Strings are objects in Java. The variables s1 - s4 do not point directly to the text you stored. Each is simply a pointer which says where to find the text within your RAM.
1. It is false because you compare the pointers, not the actual text. The text is the same, but these two Strings are 2 completely different objects, which means they have different pointers. Try printing s1 and s2 on the console and you will see.
2. It's true, because Java does some optimizing concerning Strings. If the JVM detects that two different Strings share the same text, they will be put in something called the "String Literal Pool". Since s3 and s4 share the same text they will also share the same slot in the "String Literal Pool". The intern()-method gets the reference to the String in the Literal Pool.
3. Same as 1. You compare two pointers, not the text content.
4. As far as I know, added values do not get stored in the pool.
5. Same as 2. They contain the same text, so they get stored in the String Literal Pool and therefore share the same slot. |
33,324,083 | I am having trouble learning to plot a function in python. For example I want to create a graph with these two functions:
```
y=10x
y=5x+20
```
The only way I found was to use the following code
```
import matplotlib.pyplot as plt
plt.plot([points go here], [points go here])
plt.plot([points go here], [points go here])
plt.ylabel('some numbers')
plt.show()
```
and to manually enter data points, but I have some tougher problems coming up so that would be really difficult.
Is there a way to just put in what function I need a plot for and have python create the graph for me? | 2015/10/24 | [
"https://Stackoverflow.com/questions/33324083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5484597/"
] | There are quite a lot of answers here which explain that, but let me give you another one.
A string is interned into the String literal pool only in two situations: when a class is loaded and the String was a literal or compile time constant, or when you call `.intern()` on a String, in which case a copy of this string is listed in the pool and returned. All other string creations will not be interned. String concatenation (`+`) produces new instances as long as it is not a compile time constant expression\*.
First of all: never ever use it. If you do not understand it you should not use it. Use `.equals()`. Interning strings for the sake of comparison might be slower than you think and unnecessarily fills the hashtable. Especially for strings with highly different content.
1. s3 is a string literal from the constant pool and therefore interned. s4 is an expression not producing an interned constant.
2. when you intern s4 it has the same content as s3 and is therefore the same instance.
3. same as s4, expression not a constant
4. if you intern s1+s2 you get the instance of s3, but s4 is still not s3
5. if you intern s4 it is the same instance as s3
Some more questions:
```
System.out.println(s3 == s3.intern()); // is true
System.out.println(s4 == s4.intern()); // is false
System.out.println(s1 == "abc"); // is true
System.out.println(s1 == new String("abc")); // is false
```
\* Compile time constants can be expressions with literals on both sides of the concatenation (like `"a" + "bc"`) but also final String variables initialized from constants or literals:
```
final String a = "a";
final String b = "b";
final String ab = a + b;
final String ab2 = "a" + b;
final String ab3 = "a" + new String("b");
System.out.println("ab == ab2 should be true: " + (ab == ab2));
System.out.println("a+b == ab should be true: " + (a+b == ab));
System.out.println("ab == ab3 should be false: " + (ab == ab3));
``` | To start off with, s1, s2, and s3 are in the intern pool when they are declared, because they are declared with a literal; s4 is not. This is what the intern pool might look like initially:
```
"abc" (s1, s2)
"abcabc" (s3)
```
1. s4 does not match s3 because s3 is in the intern pool, but s4 is not.
2. intern() is called on s4, so it looks in the pool for other strings equaling "abcabc" and makes them one object. Therefore, s3 and s4.intern() point to the same object.
3. Again, intern() is not called when adding two strings, so it does not match from the intern() pool.
4. s4 is not in the intern pool so it does not match objects with (s1 + s2).intern().
5. These are both interned, so they both look in the intern pool and find each other. |
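The interning behaviour described in both answers above can be verified with a short program. A minimal sketch (the variable names mirror the discussion; the guarantees used here, literal interning and runtime concatenation producing a fresh object, are standard Java semantics):

```java
public class InternDemo {
    public static void main(String[] args) {
        String s1 = "abc";
        String s2 = "abc";
        String s3 = "abcabc";   // literal: interned when the class is loaded
        String s4 = s1 + s2;    // runtime concatenation: a brand new object

        System.out.println(s1 == s2);          // true  (same pooled literal)
        System.out.println(s3 == s4);          // false (s4 is not interned)
        System.out.println(s3 == s4.intern()); // true  (intern() returns the pooled "abcabc")
        System.out.println(s3.equals(s4));     // true  (equals compares content)
    }
}
```

As the first answer stresses, `==` on Strings compares references; use `.equals()` for content comparison in real code.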
46,132,556 | When I try to install python3-tk for python3.5 on ubuntu 16.04 I get the following error, what should I do?
python3-tk : Depends: python3 (< 3.5) but 3.5.1-3 is to be installed | 2017/09/09 | [
"https://Stackoverflow.com/questions/46132556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3604079/"
] | Activity transition is always expensive, and we should switch from one activity to another only when we are switching the context. A `fragment` is a portion of UI in an activity. The same fragment can be used with multiple activities. Just like an activity, a fragment has its own lifecycle, and it can inflate a different layout resource in its `onCreateView` (a fragment does not call `setContentView(int layoutResID)` itself).
This [link](https://stackoverflow.com/questions/20306091/dilemma-when-to-use-fragments-vs-activities) explains more on when to use activity or fragment.
[Android developer guide on Fragments](https://developer.android.com/guide/components/fragments.html)
[Code path tutorial](https://guides.codepath.com/android/Bottom-Navigation-Views) on bottom navigation views. | Please refer to :-
<https://github.com/waleedsarwar86/BottomNavigationDemo>
and complete explanation in
<http://waleedsarwar.com/posts/2016-05-21-three-tabs-bottom-navigation/>
You will get a running code with the explanation here. |
46,132,556 | When I try to install python3-tk for python3.5 on ubuntu 16.04 I get the following error, what should I do?
python3-tk : Depends: python3 (< 3.5) but 3.5.1-3 is to be installed | 2017/09/09 | [
"https://Stackoverflow.com/questions/46132556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3604079/"
] | Please refer to :-
<https://github.com/waleedsarwar86/BottomNavigationDemo>
and complete explanation in
<http://waleedsarwar.com/posts/2016-05-21-three-tabs-bottom-navigation/>
You will get a running code with the explanation here. | ```
bottomNavigationView.setOnNavigationItemSelectedListener
(new BottomNavigationView.OnNavigationItemSelectedListener() {
@Override
public boolean onNavigationItemSelected(@NonNull MenuItem item) {
Fragment selectedFragment = null;
switch (item.getItemId()) {
case R.id.action_item1:
selectedFragment = ItemOneFragment.newInstance();
FragmentTransaction transaction = getSupportFragmentManager().beginTransaction();
transaction.replace(R.id.frame_layout, selectedFragment);
transaction.commit();
// selectedFragment.getChildFragmentManager().beginTransaction();
break;
case R.id.action_item2:
selectedFragment = ItemTwoFragment.newInstance();
FragmentTransaction transactiona = getSupportFragmentManager().beginTransaction();
transactiona.replace(R.id.frame_layout, selectedFragment);
transactiona.commit();
// selectedFragment = ItemThreeFragment.newInstance();
break;
case R.id.action_item3:
// selectedFragment = ItemOneFragment.newInstance();
Intent intent=new Intent(MainView.this, YoutActivityLive.class);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(intent);
// selectedFragment = ItemTwoFragment.newInstance();
break;
case R.id.action_item5:
selectedFragment = ItemOneFragment.newInstance();
FragmentTransaction transactionb = getSupportFragmentManager().beginTransaction();
transactionb.replace(R.id.frame_layout, selectedFragment);
transactionb.commit();
// selectedFragment = ItemFiveFragment.newInstance();
break;
}
return true;
}
});
``` |
46,132,556 | When I try to install python3-tk for python3.5 on ubuntu 16.04 I get the following error, what should I do?
python3-tk : Depends: python3 (< 3.5) but 3.5.1-3 is to be installed | 2017/09/09 | [
"https://Stackoverflow.com/questions/46132556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3604079/"
] | Activity transition is always expensive, and we should switch from one activity to another only when we are switching the context. A `fragment` is a portion of UI in an activity. The same fragment can be used with multiple activities. Just like an activity, a fragment has its own lifecycle, and it can inflate a different layout resource in its `onCreateView` (a fragment does not call `setContentView(int layoutResID)` itself).
This [link](https://stackoverflow.com/questions/20306091/dilemma-when-to-use-fragments-vs-activities) explains more on when to use activity or fragment.
[Android developer guide on Fragments](https://developer.android.com/guide/components/fragments.html)
[Code path tutorial](https://guides.codepath.com/android/Bottom-Navigation-Views) on bottom navigation views. | Bottom Navigation View is a navigation bar introduced in the Android library to make it easy to switch between views with a single tap. Although it can be used for almost any purpose, it is most commonly used to switch between fragments with a single tap. Its use for opening activities is somewhat absurd, since it ignores its most important functionality of **switching the views with a single tap**. There are many good articles and blogs out there in this regard, one of which is:
<https://medium.com/@hitherejoe/exploring-the-android-design-support-library-bottom-navigation-drawer-548de699e8e0>
Hope this solves your doubt. |
46,132,556 | When I try to install python3-tk for python3.5 on ubuntu 16.04 I get the following error, what should I do?
python3-tk : Depends: python3 (< 3.5) but 3.5.1-3 is to be installed | 2017/09/09 | [
"https://Stackoverflow.com/questions/46132556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3604079/"
] | Bottom Navigation View is a navigation bar introduced in the Android library to make it easy to switch between views with a single tap. Although it can be used for almost any purpose, it is most commonly used to switch between fragments with a single tap. Its use for opening activities is somewhat absurd, since it ignores its most important functionality of **switching the views with a single tap**. There are many good articles and blogs out there in this regard, one of which is:
<https://medium.com/@hitherejoe/exploring-the-android-design-support-library-bottom-navigation-drawer-548de699e8e0>
Hope this solves your doubt. | ```
bottomNavigationView.setOnNavigationItemSelectedListener
(new BottomNavigationView.OnNavigationItemSelectedListener() {
@Override
public boolean onNavigationItemSelected(@NonNull MenuItem item) {
Fragment selectedFragment = null;
switch (item.getItemId()) {
case R.id.action_item1:
selectedFragment = ItemOneFragment.newInstance();
FragmentTransaction transaction = getSupportFragmentManager().beginTransaction();
transaction.replace(R.id.frame_layout, selectedFragment);
transaction.commit();
// selectedFragment.getChildFragmentManager().beginTransaction();
break;
case R.id.action_item2:
selectedFragment = ItemTwoFragment.newInstance();
FragmentTransaction transactiona = getSupportFragmentManager().beginTransaction();
transactiona.replace(R.id.frame_layout, selectedFragment);
transactiona.commit();
// selectedFragment = ItemThreeFragment.newInstance();
break;
case R.id.action_item3:
// selectedFragment = ItemOneFragment.newInstance();
Intent intent=new Intent(MainView.this, YoutActivityLive.class);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(intent);
// selectedFragment = ItemTwoFragment.newInstance();
break;
case R.id.action_item5:
selectedFragment = ItemOneFragment.newInstance();
FragmentTransaction transactionb = getSupportFragmentManager().beginTransaction();
transactionb.replace(R.id.frame_layout, selectedFragment);
transactionb.commit();
// selectedFragment = ItemFiveFragment.newInstance();
break;
}
return true;
}
});
``` |
46,132,556 | When I try to install python3-tk for python3.5 on ubuntu 16.04 I get the following error, what should I do?
python3-tk : Depends: python3 (< 3.5) but 3.5.1-3 is to be installed | 2017/09/09 | [
"https://Stackoverflow.com/questions/46132556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3604079/"
] | Activity transition is always expensive, and we should switch from one activity to another only when we are switching the context. A `fragment` is a portion of UI in an activity. The same fragment can be used with multiple activities. Just like an activity, a fragment has its own lifecycle, and it can inflate a different layout resource in its `onCreateView` (a fragment does not call `setContentView(int layoutResID)` itself).
This [link](https://stackoverflow.com/questions/20306091/dilemma-when-to-use-fragments-vs-activities) explains more on when to use activity or fragment.
[Android developer guide on Fragments](https://developer.android.com/guide/components/fragments.html)
[Code path tutorial](https://guides.codepath.com/android/Bottom-Navigation-Views) on bottom navigation views. | ```
bottomNavigationView.setOnNavigationItemSelectedListener
(new BottomNavigationView.OnNavigationItemSelectedListener() {
@Override
public boolean onNavigationItemSelected(@NonNull MenuItem item) {
Fragment selectedFragment = null;
switch (item.getItemId()) {
case R.id.action_item1:
selectedFragment = ItemOneFragment.newInstance();
FragmentTransaction transaction = getSupportFragmentManager().beginTransaction();
transaction.replace(R.id.frame_layout, selectedFragment);
transaction.commit();
// selectedFragment.getChildFragmentManager().beginTransaction();
break;
case R.id.action_item2:
selectedFragment = ItemTwoFragment.newInstance();
FragmentTransaction transactiona = getSupportFragmentManager().beginTransaction();
transactiona.replace(R.id.frame_layout, selectedFragment);
transactiona.commit();
// selectedFragment = ItemThreeFragment.newInstance();
break;
case R.id.action_item3:
// selectedFragment = ItemOneFragment.newInstance();
Intent intent=new Intent(MainView.this, YoutActivityLive.class);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(intent);
// selectedFragment = ItemTwoFragment.newInstance();
break;
case R.id.action_item5:
selectedFragment = ItemOneFragment.newInstance();
FragmentTransaction transactionb = getSupportFragmentManager().beginTransaction();
transactionb.replace(R.id.frame_layout, selectedFragment);
transactionb.commit();
// selectedFragment = ItemFiveFragment.newInstance();
break;
}
return true;
}
});
``` |
27,713,681 | ```
10:01:36 adcli
10:01:36 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 adcli
10:01:37 runma
10:01:37 runma
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 roots
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
```
Here is my approach (I know it's not complete, but):
```
import re
i="sshd"
j="apached"
k="wexd"
count_a=0;
count_b=0;
count_c=0;
file=open("hex01.txt","r")
for line in file:
for datestamp in line[0:5]
if line.match("datestamp"):
print datestamp,m=line.count("sshd"),n=line.count("apached"),0=line.count ("wexd"),t=m+n+0
```
This is the sample input data I am trying to process in Python. I know it's reasonably easy to get the output using bash, but I am learning Python and I feel it's reasonably tough to get the desired output. Any help will be appreciated; I don't even need perfect code, the algorithm and the appropriate Python libraries are enough. The output should be
aprocess\_count, bprocess\_count, Totals
ex: `10:01:37 10,2,1,13` - meaning 10 sshd, 2 adcli and 1 roots from the above log file | 2014/12/30 | [
"https://Stackoverflow.com/questions/27713681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4406840/"
] | As Respawned alluded to, there is no easy answer that will work in all cases. That being said, here are two approaches which seem to work fairly well. Both having upsides and downsides.
Approach 1
==========
Internally, the `getTextContent` method uses what's called an `EvaluatorPreprocessor` to parse the PDF operators and maintain the graphic state. So what we can do is implement a custom `EvaluatorPreprocessor`, override the `preprocessCommand` method, and use it to add the current text color to the graphic state. Once this is in place, anytime a new text chunk is created, we can add a color attribute and set it to the current color state.
The downsides to this approach are:
1. Requires modifying the PDFJS source code. It also depends heavily on
the current implementation of PDFJS, and could break if this is
changed.
2. It will fail in cases where the text is used as a path to be filled with an image. In some PDF creators (such as Photoshop), colored text is created by first building a clipping path from all the given text characters and then painting a solid image over the path. So the only way to deduce the fill-color is by reading the pixel values from the image, which would require painting it to a canvas. Even hooking into `paintChar` won't be of much help here, since the fill color will only emerge at a later time.
The upside is, its fairly robust and works irrespective of the page background. It also does not require rendering anything to canvas, so it can be done entirely in the background thread.
**Code**
All the modifications are made in the `core/evaluator.js` file.
First you must define the custom evaluator, after the [EvaluatorPreprocessor definition](https://github.com/mozilla/pdf.js/blob/eac168f3cc4df2e8f3f7790c6ca60426b96dcc54/src/core/evaluator.js#L2242).
```
var CustomEvaluatorPreprocessor = (function() {
function CustomEvaluatorPreprocessor(stream, xref, stateManager, resources) {
EvaluatorPreprocessor.call(this, stream, xref, stateManager);
this.resources = resources;
this.xref = xref;
// set initial color state
var state = this.stateManager.state;
state.textRenderingMode = TextRenderingMode.FILL;
state.fillColorSpace = ColorSpace.singletons.gray;
state.fillColor = [0,0,0];
}
CustomEvaluatorPreprocessor.prototype = Object.create(EvaluatorPreprocessor.prototype);
CustomEvaluatorPreprocessor.prototype.preprocessCommand = function(fn, args) {
EvaluatorPreprocessor.prototype.preprocessCommand.call(this, fn, args);
var state = this.stateManager.state;
switch(fn) {
case OPS.setFillColorSpace:
state.fillColorSpace = ColorSpace.parse(args[0], this.xref, this.resources);
break;
case OPS.setFillColor:
var cs = state.fillColorSpace;
state.fillColor = cs.getRgb(args, 0);
break;
case OPS.setFillGray:
state.fillColorSpace = ColorSpace.singletons.gray;
state.fillColor = ColorSpace.singletons.gray.getRgb(args, 0);
break;
case OPS.setFillCMYKColor:
state.fillColorSpace = ColorSpace.singletons.cmyk;
state.fillColor = ColorSpace.singletons.cmyk.getRgb(args, 0);
break;
case OPS.setFillRGBColor:
state.fillColorSpace = ColorSpace.singletons.rgb;
state.fillColor = ColorSpace.singletons.rgb.getRgb(args, 0);
break;
}
};
return CustomEvaluatorPreprocessor;
})();
```
Next, you need to modify the [getTextContent method](https://github.com/mozilla/pdf.js/blob/eac168f3cc4df2e8f3f7790c6ca60426b96dcc54/src/core/evaluator.js#L908) to use the new evaluator:
```
var preprocessor = new CustomEvaluatorPreprocessor(stream, xref, stateManager, resources);
```
And lastly, in the [newTextChunk](https://github.com/mozilla/pdf.js/blob/eac168f3cc4df2e8f3f7790c6ca60426b96dcc54/src/core/evaluator.js#L922) method, add a color attribute:
```
color: stateManager.state.fillColor
```
Approach 2
==========
Another approach would be to extract the text bounding boxes via `getTextContent`, render the page, and for each text, get the pixel values which reside within its bounds, and take that to be the fill color.
The downsides to this approach are:
1. The computed text bounding boxes are not always correct, and in some cases may even be off completely (e.g. rotated text). If the bounding box does not at least partially cover the actual text on canvas, then this method will fail. We can recover from complete failures by checking that the text pixels have a color variance greater than a threshold. The rationale being: if the bounding box is completely background, it will have little variance, in which case we can fall back to a default text color (or maybe even the color of the k nearest neighbors).
2. The method assumes the text is darker than the background. Otherwise, the background could be mistaken for the fill color. This won't be a problem in most cases, as most docs have white backgrounds.
The upside is, it's simple, and does not require messing with the PDFJS source code. Also, it will work in cases where the text is used as a clipping path and filled with an image. Though this can become hazy when you have complex image fills, in which case the choice of text color becomes ambiguous.
**Demo**
<http://jsfiddle.net/x2rajt5g/>
Sample PDF's to test:
* <https://www.dropbox.com/s/0t5vtu6qqsdm1d4/color-test.pdf?dl=1>
* <https://www.dropbox.com/s/cq0067u80o79o7x/testTextColour.pdf?dl=1>
**Code**
```
function parseColors(canvasImgData, texts) {
var data = canvasImgData.data,
width = canvasImgData.width,
height = canvasImgData.height,
defaultColor = [0, 0, 0],
minVariance = 20;
texts.forEach(function (t) {
var left = Math.floor(t.transform[4]),
w = Math.round(t.width),
h = Math.round(t.height),
bottom = Math.round(height - t.transform[5]),
top = bottom - h,
start = (left + (top * width)) * 4,
color = [],
best = Infinity,
stat = new ImageStats();
for (var i, v, row = 0; row < h; row++) {
i = start + (row * width * 4);
for (var col = 0; col < w; col++) {
if ((v = data[i] + data[i + 1] + data[i + 2]) < best) { // the darker the "better"
best = v;
color[0] = data[i];
color[1] = data[i + 1];
color[2] = data[i + 2];
}
stat.addPixel(data[i], data[i+1], data[i+2]);
i += 4;
}
}
var stdDev = stat.getStdDev();
t.color = stdDev < minVariance ? defaultColor : color;
});
}
function ImageStats() {
this.pixelCount = 0;
this.pixels = [];
this.rgb = [];
this.mean = 0;
this.stdDev = 0;
}
ImageStats.prototype = {
addPixel: function (r, g, b) {
if (!this.rgb.length) {
this.rgb[0] = r;
this.rgb[1] = g;
this.rgb[2] = b;
} else {
this.rgb[0] += r;
this.rgb[1] += g;
this.rgb[2] += b;
}
this.pixelCount++;
this.pixels.push([r,g,b]);
},
getStdDev: function() {
var mean = [
this.rgb[0] / this.pixelCount,
this.rgb[1] / this.pixelCount,
this.rgb[2] / this.pixelCount
];
var diff = [0,0,0];
this.pixels.forEach(function(p) {
diff[0] += Math.pow(mean[0] - p[0], 2);
diff[1] += Math.pow(mean[1] - p[1], 2);
diff[2] += Math.pow(mean[2] - p[2], 2);
});
diff[0] = Math.sqrt(diff[0] / this.pixelCount);
diff[1] = Math.sqrt(diff[1] / this.pixelCount);
diff[2] = Math.sqrt(diff[2] / this.pixelCount);
return diff[0] + diff[1] + diff[2];
}
};
``` | This question is actually extremely hard if you want to do it to perfection... or it can be relatively easy if you can live with solutions that work only some of the time.
First of all, realize that `getTextContent` is intended for searchable text extraction and that's all it's intended to do.
It's been suggested in the comments above that you use `page.getOperatorList()`, but that's basically re-implementing the whole PDF drawing model in your code... which is basically silly because the largest chunk of PDFJS does exactly that... except not for the purpose of text extraction but for the purpose of rendering to canvas. So what you want to do is to hack [canvas.js](https://github.com/mozilla/pdf.js/blob/master/src/display/canvas.js) so that instead of just setting its internal knobs it also does some callbacks to your code. Alas, if you go this way, you won't be able to use stock PDFJS, and I rather doubt that your goal of color extraction will be seen as very useful for PDFJS' main purpose, so your changes are likely not going to get accepted upstream, so you'll likely have to maintain your own fork of PDFJS.
After this dire warning, what you'd need to minimally change are the functions where PDFJS parses the PDF color operators and sets its own canvas painting color. That happens around line 1566 (of canvas.js) in [function setFillColorN](https://github.com/mozilla/pdf.js/blob/e7cddcce283c5e76bc042747b5588a6d250c25e1/src/display/canvas.js#L1664). You'll also need to hook the text render... which is rather a character renderer at the canvas.js level, namely [CanvasGraphics\_paintChar](https://github.com/mozilla/pdf.js/blob/e7cddcce283c5e76bc042747b5588a6d250c25e1/src/display/canvas.js#L1350) around line 1270. With these two hooked, you'll get a stream of callbacks for color changes interspersed between character drawing sequences. So you can reconstruct the color of character sequences reasonably easily from this... in the simple color cases.
And now I'm getting to the really ugly part: the fact that PDF has an extremely complex color model. First there are two colors for drawing anything, including text: a fill color and a stroke (outline) color. So far not too scary, but the color is an index into a ColorSpace... of which there are several, RGB being only one possibility. Then there are also alpha and compositing modes, so layers (of various alphas) can result in a different final color depending on the compositing mode. And PDFJS has no single place where it accumulates color from layers... it simply [over]paints them as they come. So if you only extract the fill color changes and ignore alpha, compositing etc., it will work, but not for complex documents.
Hope this helps. |
27,713,681 | ```
10:01:36 adcli
10:01:36 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 adcli
10:01:37 runma
10:01:37 runma
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 roots
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
```
Here is my approach (I know it's not complete yet):
```
import re
i="sshd"
j="apached"
k="wexd"
count_a=0;
count_b=0;
count_c=0;
file=open("hex01.txt","r")
for line in file:
for datestamp in line[0:5]
if line.match("datestamp"):
print datestamp,m=line.count("sshd"),n=line.count("apached"),0=line.count ("wexd"),t=m+n+0
```
This is the sample input data I am trying to process in Python. I know it's reasonably easy to get the output using bash, but I am learning Python and I feel it's reasonably tough to get the desired output. Any help will be appreciated; I don't even need perfect code, the algorithm and the appropriate Python libraries are enough. The output should be
aprocess\_count, bprocesscount, Totals
ex: `10:01:37 10,2,1,13` - meaning 10 sshd, 2 adcli and 1 roots entries from the above log file | 2014/12/30 | [
"https://Stackoverflow.com/questions/27713681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4406840/"
] | This question is actually extremely hard if you want to do it to perfection... or it can be relatively easy if you can live with solutions that work only some of the time.
First of all, realize that `getTextContent` is intended for searchable text extraction and that's all it's intended to do.
It's been suggested in the comments above that you use `page.getOperatorList()`, but that's basically re-implementing the whole PDF drawing model in your code... which is basically silly because the largest chunk of PDFJS does exactly that... except not for the purpose of text extraction but for the purpose of rendering to canvas. So what you want to do is to hack [canvas.js](https://github.com/mozilla/pdf.js/blob/master/src/display/canvas.js) so that instead of just setting its internal knobs it also does some callbacks to your code. Alas, if you go this way, you won't be able to use stock PDFJS, and I rather doubt that your goal of color extraction will be seen as very useful for PDFJS' main purpose, so your changes are likely not going to get accepted upstream, so you'll likely have to maintain your own fork of PDFJS.
After this dire warning, what you'd need to minimally change are the functions where PDFJS parses the PDF color operators and sets its own canvas painting color. That happens around line 1566 (of canvas.js) in [function setFillColorN](https://github.com/mozilla/pdf.js/blob/e7cddcce283c5e76bc042747b5588a6d250c25e1/src/display/canvas.js#L1664). You'll also need to hook the text render... which is rather a character renderer at the canvas.js level, namely [CanvasGraphics\_paintChar](https://github.com/mozilla/pdf.js/blob/e7cddcce283c5e76bc042747b5588a6d250c25e1/src/display/canvas.js#L1350) around line 1270. With these two hooked, you'll get a stream of callbacks for color changes interspersed between character drawing sequences. So you can reconstruct the color of character sequences reasonably easily from this... in the simple color cases.
And now I'm getting to the really ugly part: the fact that PDF has an extremely complex color model. First there are two colors for drawing anything, including text: a fill color and a stroke (outline) color. So far not too scary, but the color is an index into a ColorSpace... of which there are several, RGB being only one possibility. Then there are also alpha and compositing modes, so layers (of various alphas) can result in a different final color depending on the compositing mode. And PDFJS has no single place where it accumulates color from layers... it simply [over]paints them as they come. So if you only extract the fill color changes and ignore alpha, compositing etc., it will work, but not for complex documents.
Hope this helps. | There's no need to patch pdfjs, the transform property gives the x and y, so you can go through the operator list and find the setFillColor op that precedes the text op at that point. |
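For the log-counting question itself, the task reduces to grouping lines by timestamp and counting process names. A minimal Python sketch using `collections.Counter`; the whitespace-separated field layout and the process list are assumptions based on the sample data:

```python
from collections import Counter, defaultdict

def summarize(lines, procs=("sshd", "adcli", "roots")):
    """Count per-timestamp occurrences of each process, plus a total."""
    per_ts = defaultdict(Counter)
    for line in lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        ts, proc = parts[0], parts[1].rstrip("[")  # "sshd[" -> "sshd"
        per_ts[ts][proc] += 1
    result = {}
    for ts, counts in sorted(per_ts.items()):
        per_proc = [counts[p] for p in procs]
        result[ts] = per_proc + [sum(per_proc)]
    return result

log = ["10:01:36 adcli", "10:01:36 sshd[",
       "10:01:37 sshd[", "10:01:37 roots", "10:01:37 adcli"]
for ts, row in summarize(log).items():
    print(ts, ",".join(map(str, row)))
```

Reading from a file is just `summarize(open("hex01.txt"))`, since iterating a file object yields its lines.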
27,713,681 | ```
10:01:36 adcli
10:01:36 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 adcli
10:01:37 runma
10:01:37 runma
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 roots
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
```
Here is my approach (I know it's not complete yet):
```
import re
i="sshd"
j="apached"
k="wexd"
count_a=0;
count_b=0;
count_c=0;
file=open("hex01.txt","r")
for line in file:
for datestamp in line[0:5]
if line.match("datestamp"):
print datestamp,m=line.count("sshd"),n=line.count("apached"),0=line.count ("wexd"),t=m+n+0
```
This is the sample input data I am trying to process in Python. I know it's reasonably easy to get the output using bash, but I am learning Python and I feel it's reasonably tough to get the desired output. Any help will be appreciated; I don't even need perfect code, the algorithm and the appropriate Python libraries are enough. The output should be
aprocess\_count, bprocesscount, Totals
ex: `10:01:37 10,2,1,13` - meaning 10 sshd, 2 adcli and 1 roots entries from the above log file | 2014/12/30 | [
"https://Stackoverflow.com/questions/27713681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4406840/"
] | As Respawned alluded to, there is no easy answer that will work in all cases. That being said, here are two approaches which seem to work fairly well. Both have upsides and downsides.
Approach 1
==========
Internally, the `getTextContent` method uses whats called an `EvaluatorPreprocessor` to parse the PDF operators, and maintain the graphic state. So what we can do is, implement a custom `EvaluatorPreprocessor`, overwrite the `preprocessCommand` method, and use it to add the current text color to the graphic state. Once this is in place, anytime a new text chunk is created, we can add a color attribute, and set it to the current color state.
The downsides to this approach are:
1. Requires modifying the PDFJS source code. It also depends heavily on
the current implementation of PDFJS, and could break if this is
changed.
2. It will fail in cases where the text is used as a path to be filled with an image. In some PDF creators (such as Photoshop), colored text is created by first building a clipping path from all the given text characters and then painting a solid image over the path. So the only way to deduce the fill color is by reading the pixel values from the image, which would require painting it to a canvas. Even hooking into `paintChar` won't be of much help here, since the fill color will only emerge at a later time.
The upside is, it's fairly robust and works irrespective of the page background. It also does not require rendering anything to canvas, so it can be done entirely in the background thread.
**Code**
All the modifications are made in the `core/evaluator.js` file.
First you must define the custom evaluator, after the [EvaluatorPreprocessor definition](https://github.com/mozilla/pdf.js/blob/eac168f3cc4df2e8f3f7790c6ca60426b96dcc54/src/core/evaluator.js#L2242).
```
var CustomEvaluatorPreprocessor = (function() {
function CustomEvaluatorPreprocessor(stream, xref, stateManager, resources) {
EvaluatorPreprocessor.call(this, stream, xref, stateManager);
this.resources = resources;
this.xref = xref;
// set initial color state
var state = this.stateManager.state;
state.textRenderingMode = TextRenderingMode.FILL;
state.fillColorSpace = ColorSpace.singletons.gray;
state.fillColor = [0,0,0];
}
CustomEvaluatorPreprocessor.prototype = Object.create(EvaluatorPreprocessor.prototype);
CustomEvaluatorPreprocessor.prototype.preprocessCommand = function(fn, args) {
EvaluatorPreprocessor.prototype.preprocessCommand.call(this, fn, args);
var state = this.stateManager.state;
switch(fn) {
case OPS.setFillColorSpace:
state.fillColorSpace = ColorSpace.parse(args[0], this.xref, this.resources);
break;
case OPS.setFillColor:
var cs = state.fillColorSpace;
state.fillColor = cs.getRgb(args, 0);
break;
case OPS.setFillGray:
state.fillColorSpace = ColorSpace.singletons.gray;
state.fillColor = ColorSpace.singletons.gray.getRgb(args, 0);
break;
case OPS.setFillCMYKColor:
state.fillColorSpace = ColorSpace.singletons.cmyk;
state.fillColor = ColorSpace.singletons.cmyk.getRgb(args, 0);
break;
case OPS.setFillRGBColor:
state.fillColorSpace = ColorSpace.singletons.rgb;
state.fillColor = ColorSpace.singletons.rgb.getRgb(args, 0);
break;
}
};
return CustomEvaluatorPreprocessor;
})();
```
Next, you need to modify the [getTextContent method](https://github.com/mozilla/pdf.js/blob/eac168f3cc4df2e8f3f7790c6ca60426b96dcc54/src/core/evaluator.js#L908) to use the new evaluator:
```
var preprocessor = new CustomEvaluatorPreprocessor(stream, xref, stateManager, resources);
```
And lastly, in the [newTextChunk](https://github.com/mozilla/pdf.js/blob/eac168f3cc4df2e8f3f7790c6ca60426b96dcc54/src/core/evaluator.js#L922) method, add a color attribute:
```
color: stateManager.state.fillColor
```
Approach 2
==========
Another approach would be to extract the text bounding boxes via `getTextContent`, render the page, and for each text, get the pixel values which reside within its bounds, and take that to be the fill color.
The downsides to this approach are:
1. The computed text bounding boxes are not always correct, and in some cases may even be off completely (e.g. rotated text). If the bounding box does not at least partially cover the actual text on canvas, then this method will fail. We can recover from complete failures by checking that the text pixels have a color variance greater than a threshold. The rationale being: if the bounding box is completely background, it will have little variance, in which case we can fall back to a default text color (or maybe even the color of the k nearest neighbors).
2. The method assumes the text is darker than the background. Otherwise, the background could be mistaken for the fill color. This won't be a problem in most cases, as most docs have white backgrounds.
The upside is, it's simple, and does not require messing with the PDFJS source code. Also, it will work in cases where the text is used as a clipping path and filled with an image. Though this can become hazy when you have complex image fills, in which case the choice of text color becomes ambiguous.
**Demo**
<http://jsfiddle.net/x2rajt5g/>
Sample PDF's to test:
* <https://www.dropbox.com/s/0t5vtu6qqsdm1d4/color-test.pdf?dl=1>
* <https://www.dropbox.com/s/cq0067u80o79o7x/testTextColour.pdf?dl=1>
**Code**
```
function parseColors(canvasImgData, texts) {
var data = canvasImgData.data,
width = canvasImgData.width,
height = canvasImgData.height,
defaultColor = [0, 0, 0],
minVariance = 20;
texts.forEach(function (t) {
var left = Math.floor(t.transform[4]),
w = Math.round(t.width),
h = Math.round(t.height),
bottom = Math.round(height - t.transform[5]),
top = bottom - h,
start = (left + (top * width)) * 4,
color = [],
best = Infinity,
stat = new ImageStats();
for (var i, v, row = 0; row < h; row++) {
i = start + (row * width * 4);
for (var col = 0; col < w; col++) {
if ((v = data[i] + data[i + 1] + data[i + 2]) < best) { // the darker the "better"
best = v;
color[0] = data[i];
color[1] = data[i + 1];
color[2] = data[i + 2];
}
stat.addPixel(data[i], data[i+1], data[i+2]);
i += 4;
}
}
var stdDev = stat.getStdDev();
t.color = stdDev < minVariance ? defaultColor : color;
});
}
function ImageStats() {
this.pixelCount = 0;
this.pixels = [];
this.rgb = [];
this.mean = 0;
this.stdDev = 0;
}
ImageStats.prototype = {
addPixel: function (r, g, b) {
if (!this.rgb.length) {
this.rgb[0] = r;
this.rgb[1] = g;
this.rgb[2] = b;
} else {
this.rgb[0] += r;
this.rgb[1] += g;
this.rgb[2] += b;
}
this.pixelCount++;
this.pixels.push([r,g,b]);
},
getStdDev: function() {
var mean = [
this.rgb[0] / this.pixelCount,
this.rgb[1] / this.pixelCount,
this.rgb[2] / this.pixelCount
];
var diff = [0,0,0];
this.pixels.forEach(function(p) {
diff[0] += Math.pow(mean[0] - p[0], 2);
diff[1] += Math.pow(mean[1] - p[1], 2);
diff[2] += Math.pow(mean[2] - p[2], 2);
});
diff[0] = Math.sqrt(diff[0] / this.pixelCount);
diff[1] = Math.sqrt(diff[1] / this.pixelCount);
diff[2] = Math.sqrt(diff[2] / this.pixelCount);
return diff[0] + diff[1] + diff[2];
}
};
``` | There's no need to patch pdfjs, the transform property gives the x and y, so you can go through the operator list and find the setFillColor op that precedes the text op at that point. |
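The `getStdDev` computation in the answer above is language-agnostic; as a quick cross-check of the math, here is the same per-channel standard deviation (summed over R, G, B) in Python, with illustrative pixel values:

```python
import math

def std_dev_sum(pixels):
    """Sum of per-channel standard deviations, mirroring ImageStats.getStdDev."""
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    return sum(
        math.sqrt(sum((mean[c] - p[c]) ** 2 for p in pixels) / n)
        for c in range(3)
    )

uniform = [(10, 20, 30)] * 4            # no variance: likely all background
mixed = [(0, 0, 0), (200, 200, 200)]    # high variance: text plus background
print(std_dev_sum(uniform))   # 0.0
print(std_dev_sum(mixed))     # 300.0
```

A bounding box whose value falls below the `minVariance` threshold would be treated as all background and assigned the default color, exactly as in the JS version.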
55,482,197 | I am starting to learn the Django framework, so I need to install the latest python, pip, virtualenv and django packages on my mac.
I tried to do it with brew, but got some strange behavior.
At first, python3 installed not in /usr/bin/ but in /Library/Frameworks/Python.framework directory:
```
$ which python
/usr/bin/python
$ which python3
/Library/Frameworks/Python.framework/Versions/3.7/bin/python3
```
This is strange to me, because every tutorial talks about /usr/bin/python37 and says nothing about /Library/Frameworks/Python.framework
Is this okay?
After that I ran `sudo pip3 install virtualenv` and got this answer:
```
The directory '/Users/user/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/user/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
```
Okay, I uninstalled and reinstalled with sudo's -H flag:
```
Installing collected packages: virtualenv
Successfully installed virtualenv-16.4.3
```
But when I try to make a virtual environment, I got
```
$ virtualenv venv
-bash: /usr/local/bin/virtualenv: No such file or directory
```
Checking virtualenv location:
```
$ which virtualenv
/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv
```
Why /Library/Frameworks/Python.framework/?
And why does it search for virtualenv in /usr/local/bin/virtualenv?
Is coding on macOS always this painful? | 2019/04/02 | [
"https://Stackoverflow.com/questions/55482197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11301741/"
] | Instead of using brew you can simply use "venv".
To create a virtual environment you can run:
```
python3 -m venv environment_name
```
Example: if you want to create a virtual environment for django named django\_env
```
python3 -m venv django_env
```
The "-m" flag makes Python search sys.path for the named module (here, the stdlib venv module) and execute it as the main module.
**Activation of Virtual Environment :**
```
source django_env/bin/activate
```
**Deactivation :**
```
deactivate
``` | ### Python3 Virtualenv Setup
Requirements:
* Python3
* Pip3
```sh
$ brew install python3 #upgrade
```
Pip3 is installed with Python3
**Installation**
To install virtualenv via pip run:
```sh
$ pip3 install virtualenv
```
**Usage**
Creation of virtualenv:
```sh
$ virtualenv -p python3 <desired-path>
```
Activate the virtualenv:
```sh
$ source <desired-path>/bin/activate
```
Deactivate the virtualenv:
```sh
$ deactivate
```
---
You can read more about Homebrew on the [official page](https://brew.sh).
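Both answers above drive environment creation from the shell; the same thing can also be done from Python itself with the standard-library `venv` module, which sidesteps any PATH confusion. A minimal sketch (the temp-directory target is illustrative; `with_pip=False` keeps it fast and offline):

```python
import tempfile
import venv
from pathlib import Path

# Build a throwaway environment; pass with_pip=True if you need pip inside it.
target = Path(tempfile.mkdtemp()) / "demo_env"
venv.EnvBuilder(with_pip=False).create(target)

# Every venv carries a pyvenv.cfg marker file at its root.
print((target / "pyvenv.cfg").read_text())
```

Activation is still a shell-level concern (`source <env>/bin/activate`), but creation from Python is handy in setup scripts.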
55,482,197 | I am starting to learn the Django framework, so I need to install the latest python, pip, virtualenv and django packages on my mac.
I tried to do it with brew, but got some strange behavior.
At first, python3 installed not in /usr/bin/ but in /Library/Frameworks/Python.framework directory:
```
$ which python
/usr/bin/python
$ which python3
/Library/Frameworks/Python.framework/Versions/3.7/bin/python3
```
This is strange to me, because every tutorial talks about /usr/bin/python37 and says nothing about /Library/Frameworks/Python.framework
Is this okay?
After that I ran `sudo pip3 install virtualenv` and got this answer:
```
The directory '/Users/user/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/user/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
```
Okay, I uninstalled and reinstalled with sudo's -H flag:
```
Installing collected packages: virtualenv
Successfully installed virtualenv-16.4.3
```
But when I try to make a virtual environment, I got
```
$ virtualenv venv
-bash: /usr/local/bin/virtualenv: No such file or directory
```
Checking virtualenv location:
```
$ which virtualenv
/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv
```
Why /Library/Frameworks/Python.framework/?
And why does it search for virtualenv in /usr/local/bin/virtualenv?
Is coding on macOS always this painful? | 2019/04/02 | [
"https://Stackoverflow.com/questions/55482197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11301741/"
] | Instead of using brew you can simply use "venv".
To create a virtual environment you can run:
```
python3 -m venv environment_name
```
Example: if you want to create a virtual environment for django named django\_env
```
python3 -m venv django_env
```
The "-m" flag makes Python search sys.path for the named module (here, the stdlib venv module) and execute it as the main module.
**Activation of Virtual Environment :**
```
source django_env/bin/activate
```
**Deactivation :**
```
deactivate
``` | Just follow the steps below:
1. $ pip install virtualenv
Once installed, you can create a virtual environment with:
2. $ virtualenv [directory]
On MacOS, we activate our virtual environment with the source command. If you created your venv in the myvenv directory, the command would be
3. $ source myvenv/bin/activate |
55,482,197 | I am starting to learn the Django framework, so I need to install the latest python, pip, virtualenv and django packages on my mac.
I tried to do it with brew, but got some strange behavior.
At first, python3 installed not in /usr/bin/ but in /Library/Frameworks/Python.framework directory:
```
$ which python
/usr/bin/python
$ which python3
/Library/Frameworks/Python.framework/Versions/3.7/bin/python3
```
This is strange to me, because every tutorial talks about /usr/bin/python37 and says nothing about /Library/Frameworks/Python.framework
Is this okay?
After that I ran `sudo pip3 install virtualenv` and got this answer:
```
The directory '/Users/user/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/user/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
```
Okay, I uninstalled and reinstalled with sudo's -H flag:
```
Installing collected packages: virtualenv
Successfully installed virtualenv-16.4.3
```
But when I try to make a virtual environment, I got
```
$ virtualenv venv
-bash: /usr/local/bin/virtualenv: No such file or directory
```
Checking virtualenv location:
```
$ which virtualenv
/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv
```
Why /Library/Frameworks/Python.framework/?
And why does it search for virtualenv in /usr/local/bin/virtualenv?
Is coding on macOS always this painful? | 2019/04/02 | [
"https://Stackoverflow.com/questions/55482197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11301741/"
] | ### Python3 Virtualenv Setup
Requirements:
* Python3
* Pip3
```sh
$ brew install python3 #upgrade
```
Pip3 is installed with Python3
**Installation**
To install virtualenv via pip run:
```sh
$ pip3 install virtualenv
```
**Usage**
Creation of virtualenv:
```sh
$ virtualenv -p python3 <desired-path>
```
Activate the virtualenv:
```sh
$ source <desired-path>/bin/activate
```
Deactivate the virtualenv:
```sh
$ deactivate
```
---
You can read more about Homebrew on the [official page](https://brew.sh). | Just follow the steps below:
1. $ pip install virtualenv
Once installed, you can create a virtual environment with:
2. $ virtualenv [directory]
On MacOS, we activate our virtual environment with the source command. If you created your venv in the myvenv directory, the command would be
3. $ source myvenv/bin/activate |
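Once an environment is activated, you can confirm from Python which interpreter you are running: inside a venv, `sys.prefix` points at the environment directory while the original installation is kept in `sys.base_prefix`. A quick check (older virtualenv releases recorded it in `sys.real_prefix` instead, hence the hedged lookup):

```python
import sys

def in_virtualenv():
    # venv sets sys.prefix to the env dir and keeps the original in base_prefix;
    # legacy virtualenv used real_prefix for the same purpose.
    base = getattr(sys, "real_prefix", None) or getattr(sys, "base_prefix", sys.prefix)
    return sys.prefix != base

print(in_virtualenv())
```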
71,020,555 | Like in other programming languages - python or JS, when we create a REST API, specifically a POST, for the request body we accept some JSON object
EX:
url: .../employee (Post)
request body: {option: {filter: "suman"}}
In Python or JS we can just do request\_body.option.filter and get the data
How can I achieve the same with Java?
Do I need to create a class for the request\_body and for option and make an instance object request\_body | 2022/02/07 | [
"https://Stackoverflow.com/questions/71020555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9661967/"
] | What about this?
```
table1 %>%
left_join(cbind(table2, n = 1)) %>%
group_by(Col1, Col2, Col3) %>%
mutate(n = sum(n, na.rm = TRUE))
```
and we will see
```
Col1 Col2 Col3 n
<chr> <chr> <chr> <dbl>
1 Al F C 1
2 Al UF UC 1
3 Al P < 0
4 Cu F C 0
5 Cu UF UC 0
6 Cu P < 0
7 Pb F C 1
8 Pb UF UC 1
9 Pb P < 1
``` | **1)** Append an n=1 column to table2 and an n=0 column to table 1 and then sum n by group.
```
table2 %>%
mutate(n = 1L) %>%
bind_rows(table1 %>% mutate(n = 0L)) %>%
group_by(Col1, Col2, Col3) %>%
summarize(n = sum(n), .groups = "drop")
```
giving:
```
# A tibble: 10 x 4
Col1 Col2 Col3 n
<chr> <chr> <chr> <int>
1 Al F C 1
2 Al P < 0
3 Al UF UC 1
4 Cu F < 1
5 Cu F C 0
6 Cu P < 0
7 Cu UF UC 0
8 Pb F C 1
9 Pb P < 1
10 Pb UF UC 1
```
**2)** This variation gives the same result.
```
list(table1, table2) %>%
bind_rows(.id = "id") %>%
group_by(Col1, Col2, Col3) %>%
summarize(n = sum(id == 2L), .groups = "drop")
```
**3)** This is a data.table only solution.
```
rbindlist(list(table1, table2), idcol = TRUE)[,
.(n = sum(.id == 2L)), by = .(Col1, Col2, Col3)]
```
**4)** This is a base R solution.
```
both <- rbind(transform(table1, n = 0), transform(table2, n = 1))
aggregate(n ~., both, sum)
```
**5)** This uses SQL.
```
library(sqldf)
sqldf("with both as (
select *, 0 as n from table1
union all
select *, 1 as n from table2
)
select Col1, Col2, Col3, sum(n) as n
from both
group by Col1, Col2, Col3
")
``` |
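The join-and-count idea in the R answers above translates to plain Python as a `Counter` keyed on the grouping columns: count each key of table2, then look those counts up for every distinct key of table1. A minimal sketch (the rows are made up for illustration; the answers' actual tables are not shown in this excerpt):

```python
from collections import Counter

table1 = [("Al", "F", "C"), ("Al", "UF", "UC"), ("Al", "P", "<"),
          ("Cu", "F", "C"), ("Pb", "F", "C"), ("Pb", "P", "<")]
table2 = [("Al", "F", "C"), ("Pb", "F", "C"), ("Pb", "P", "<")]

matches = Counter(table2)  # occurrences of each (Col1, Col2, Col3) key in table2
result = {key: matches[key] for key in dict.fromkeys(table1)}  # keep table1 order
for (c1, c2, c3), n in result.items():
    print(c1, c2, c3, n)
```

`dict.fromkeys` deduplicates table1's keys while preserving their first-seen order, mirroring `distinct_all(table1)` in the dplyr version.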
71,020,555 | Like in other programming languages - python or JS, when we create a REST API, specifically a POST, for the request body we accept some JSON object
EX:
url: .../employee (Post)
request body: {option: {filter: "suman"}}
In Python or JS we can just do request\_body.option.filter and get the data
How can I achieve the same with Java?
Do I need to create a class for the request\_body and for option and make an instance object request\_body | 2022/02/07 | [
"https://Stackoverflow.com/questions/71020555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9661967/"
] | You can try `complete`
```
library(tidyverse)
table2 %>%
count(Col1, Col2, Col3, name = "sum") %>%
complete(distinct_all(table1), fill = list(sum=0))
# A tibble: 10 x 4
Col1 Col2 Col3 sum
<chr> <chr> <chr> <dbl>
1 Al F C 1
2 Al P < 0
3 Al UF UC 1
4 Cu F C 0
5 Cu P < 0
6 Cu UF UC 0
7 Pb F C 1
8 Pb P < 1
9 Pb UF UC 1
10 Cu F < 1
```
Or a full\_join
```
table2 %>%
count(Col1, Col2, Col3, name = "sum") %>%
full_join(distinct_all(table1)) %>%
mutate(sum=replace_na(sum, 0))
``` | **1)** Append an n=1 column to table2 and an n=0 column to table 1 and then sum n by group.
```
table2 %>%
mutate(n = 1L) %>%
bind_rows(table1 %>% mutate(n = 0L)) %>%
group_by(Col1, Col2, Col3) %>%
summarize(n = sum(n), .groups = "drop")
```
giving:
```
# A tibble: 10 x 4
Col1 Col2 Col3 n
<chr> <chr> <chr> <int>
1 Al F C 1
2 Al P < 0
3 Al UF UC 1
4 Cu F < 1
5 Cu F C 0
6 Cu P < 0
7 Cu UF UC 0
8 Pb F C 1
9 Pb P < 1
10 Pb UF UC 1
```
**2)** This variation gives the same result.
```
list(table1, table2) %>%
bind_rows(.id = "id") %>%
group_by(Col1, Col2, Col3) %>%
summarize(n = sum(id == 2L), .groups = "drop")
```
**3)** This is a data.table only solution.
```
rbindlist(list(table1, table2), idcol = TRUE)[,
.(n = sum(.id == 2L)), by = .(Col1, Col2, Col3)]
```
**4)** This is a base R solution.
```
both <- rbind(transform(table1, n = 0), transform(table2, n = 1))
aggregate(n ~., both, sum)
```
**5)** This uses SQL.
```
library(sqldf)
sqldf("with both as (
select *, 0 as n from table1
union all
select *, 1 as n from table2
)
select Col1, Col2, Col3, sum(n) as n
from both
group by Col1, Col2, Col3
")
``` |
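Approach 5's SQL runs essentially unchanged against Python's standard-library `sqlite3`, which is handy for checking the query outside R. A sketch with made-up rows (the original tables are not shown in this excerpt):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (c1, c2, c3)")
con.execute("CREATE TABLE t2 (c1, c2, c3)")
con.executemany("INSERT INTO t1 VALUES (?, ?, ?)",
                [("Al", "F", "C"), ("Al", "P", "<"), ("Pb", "F", "C")])
con.executemany("INSERT INTO t2 VALUES (?, ?, ?)",
                [("Al", "F", "C"), ("Pb", "F", "C"), ("Pb", "F", "C")])

# Same shape as the sqldf query: tag rows 0/1, union, then sum per group.
rows = con.execute("""
    WITH both AS (
        SELECT *, 0 AS n FROM t1
        UNION ALL
        SELECT *, 1 AS n FROM t2
    )
    SELECT c1, c2, c3, SUM(n) AS n
    FROM both
    GROUP BY c1, c2, c3
    ORDER BY c1, c2, c3
""").fetchall()
print(rows)  # [('Al', 'F', 'C', 1), ('Al', 'P', '<', 0), ('Pb', 'F', 'C', 2)]
```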
17,818,502 | Consider this sample python code. It reads from stdin and writes to a file.
```
import sys
arg1 = sys.argv[1]
f = open(arg1,'w')
f.write('<html><head><title></title></head><body>')
for line in sys.stdin:
f.write("<p>")
f.write(line)
f.write("</p>")
f.write("</body></html>")
f.close()
```
Suppose I want to modify this same program to write to stdout instead. Then, I'll have to replace each instance of `f.write()` with `sys.stdout.write()`. But that would be too tedious. I want to know if there is a way to specify `f` as an alias for `sys.stdout`, so that `f.write()` is treated as `sys.stdout.write()`. | 2013/07/23 | [
"https://Stackoverflow.com/questions/17818502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1199882/"
] | Just do
```
>>> import sys
>>> f = sys.stdout
>>> f.write('abc')
abc
```
Now you just need to do `f = sys.stdout` instead of `f = open(fileName)`. (And remove `f.close()`)
**Also**, please consider using the following syntax for files.
```
with open(fileName, 'r') as f:
# Do Something
```
The file automatically gets closed for you this way. | Yes, in python, you can alias every class / function / method etc. Just assign the value you want to use to another variable:
```
import sys
f = sys.stdout
```
Now every method you call on `f` will get called on `sys.stdout`. You can do this with whatever you like, for example also with `i = sys.stdin` etc. |
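Because every file-like object shares the same `write()` interface, the aliasing above generalizes: a small sketch (function name and HTML here are illustrative) that takes the destination as a parameter, so the same code can target `sys.stdout`, an open file, or an in-memory buffer:

```python
import io
import sys

def render(lines, out=sys.stdout):
    # `out` may be sys.stdout, an open file, or any object with .write()
    out.write("<html><body>")
    for line in lines:
        out.write("<p>" + line + "</p>")
    out.write("</body></html>")

# write into a StringIO buffer instead of stdout, just to show the idea
buf = io.StringIO()
render(["hello"], out=buf)
```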
17,818,502 | Consider this sample python code. It reads from stdin and writes to a file.
```
import sys
arg1 = sys.argv[1]
f = open(arg1,'w')
f.write('<html><head><title></title></head><body>')
for line in sys.stdin:
f.write("<p>")
f.write(line)
f.write("</p>")
f.write("</body></html>")
f.close()
```
Suppose I want to modify this same program to write to stdout instead. Then, I'll have to replace each instance of `f.write()` with `sys.stdout.write()`. But that would be too tedious. I want to know if there is a way to specify `f` as an alias for `sys.stdout`, so that `f.write()` is treated as `sys.stdout.write()`. | 2013/07/23 | [
"https://Stackoverflow.com/questions/17818502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1199882/"
] | Just do
```
>>> import sys
>>> f = sys.stdout
>>> f.write('abc')
abc
```
Now you just need to do `f = sys.stdout` instead of `f = open(fileName)`. (And remove `f.close()`)
**Also**, please consider using the following syntax for files.
```
with open(fileName, 'r') as f:
# Do Something
```
The file automatically gets closed for you this way. | Names in Python are just bindings. Therefore:
```
f = sys.stdout
```
Just binds the name `f` to *the object* that's *also bound* to `sys.stdout`...
Note that since they're both the same object, any changes you make to `f` or `sys.stdout` at this point will affect *both*... So don't do `f.close()` as you normally wouldn't want to do `sys.stdout.close()`... |
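The "same object" point is easy to verify directly with an identity check; a tiny sketch:

```python
import sys

f = sys.stdout
# `f` and `sys.stdout` are two names bound to one object, not a copy
same_object = f is sys.stdout
```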
17,818,502 | Consider this sample python code. It reads from stdin and writes to a file.
```
import sys
arg1 = sys.argv[1]
f = open(arg1,'w')
f.write('<html><head><title></title></head><body>')
for line in sys.stdin:
f.write("<p>")
f.write(line)
f.write("</p>")
f.write("</body></html>")
f.close()
```
Suppose I want to modify this same program to write to stdout instead. Then, I'll have to replace each instance of `f.write()` with `sys.stdout.write()`. But that would be too tedious. I want to know if there is a way to specify `f` as an alias for `sys.stdout`, so that `f.write()` is treated as `sys.stdout.write()`. | 2013/07/23 | [
"https://Stackoverflow.com/questions/17818502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1199882/"
] | Just do
```
>>> import sys
>>> f = sys.stdout
>>> f.write('abc')
abc
```
Now you just need to do `f = sys.stdout` instead of `f = open(fileName)`. (And remove `f.close()`)
**Also**, please consider using the following syntax for files.
```
with open(fileName, 'r') as f:
# Do Something
```
The file automatically gets closed for you this way. | This is properly the job of a shell anyway; just send it to standard out and redirect it to a file using `>filename` when you invoke the script. |
17,818,502 | Consider this sample python code. It reads from stdin and writes to a file.
```
import sys
arg1 = sys.argv[1]
f = open(arg1,'w')
f.write('<html><head><title></title></head><body>')
for line in sys.stdin:
f.write("<p>")
f.write(line)
f.write("</p>")
f.write("</body></html>")
f.close()
```
Suppose I want to modify this same program to write to stdout instead. Then, I'll have to replace each instance of `f.write()` with `sys.stdout.write()`. But that would be too tedious. I want to know if there is a way to specify `f` as an alias for `sys.stdout`, so that `f.write()` is treated as `sys.stdout.write()`. | 2013/07/23 | [
"https://Stackoverflow.com/questions/17818502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1199882/"
] | Names in Python are just bindings. Therefore:
```
f = sys.stdout
```
Just binds the name `f` to *the object* that's *also bound* to `sys.stdout`...
Note that since they're both the same object, any changes you make to `f` or `sys.stdout` at this point will affect *both*... So don't do `f.close()` as you normally wouldn't want to do `sys.stdout.close()`... | Yes, in python, you can alias every class / function / method etc. Just assign the value you want to use to another variable:
```
import sys
f = sys.stdout
```
Now every method you call on `f` will get called on `sys.stdout`. You can do this with whatever you like, for example also with `i = sys.stdin` etc. |
17,818,502 | Consider this sample python code. It reads from stdin and writes to a file.
```
import sys
arg1 = sys.argv[1]
f = open(arg1,'w')
f.write('<html><head><title></title></head><body>')
for line in sys.stdin:
f.write("<p>")
f.write(line)
f.write("</p>")
f.write("</body></html>")
f.close()
```
Suppose I want to modify this same program to write to stdout instead. Then, I'll have to replace each instance of `f.write()` with `sys.stdout.write()`. But that would be too tedious. I want to know if there is a way to specify `f` as an alias for `sys.stdout`, so that `f.write()` is treated as `sys.stdout.write()`. | 2013/07/23 | [
"https://Stackoverflow.com/questions/17818502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1199882/"
] | Yes, in python, you can alias every class / function / method etc. Just assign the value you want to use to another variable:
```
import sys
f = sys.stdout
```
Now every method you call on `f` will get called on `sys.stdout`. You can do this with whatever you like, for example also with `i = sys.stdin` etc. | This is properly the job of a shell anyway; just send it to standard out and redirect it to a file using `>filename` when you invoke the script. |
17,818,502 | Consider this sample python code. It reads from stdin and writes to a file.
```
import sys
arg1 = sys.argv[1]
f = open(arg1,'w')
f.write('<html><head><title></title></head><body>')
for line in sys.stdin:
f.write("<p>")
f.write(line)
f.write("</p>")
f.write("</body></html>")
f.close()
```
Suppose I want to modify this same program to write to stdout instead. Then, I'll have to replace each instance of `f.write()` with `sys.stdout.write()`. But that would be too tedious. I want to know if there is a way to specify `f` as an alias for `sys.stdout`, so that `f.write()` is treated as `sys.stdout.write()`. | 2013/07/23 | [
"https://Stackoverflow.com/questions/17818502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1199882/"
] | Names in Python are just bindings. Therefore:
```
f = sys.stdout
```
Just binds the name `f` to *the object* that's *also bound* to `sys.stdout`...
Note that since they're both the same object, any changes you make to `f` or `sys.stdout` at this point will affect *both*... So don't do `f.close()` as you normally wouldn't want to do `sys.stdout.close()`... | This is properly the job of a shell anyway; just send it to standard out and redirect it to a file using `>filename` when you invoke the script. |
18,263,733 | I am new to Python and Django; I am creating the first tutorial app.
I created the app files using the following command:
```
C:\Python27\Scripts\django-admin.py startproject mysite
```
After that it successfully created the files in the directory.
But when I run `python manage.py runserver` I get the error `not recognized as an internal or external command`. | 2013/08/15 | [
"https://Stackoverflow.com/questions/18263733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2582761/"
] | You just need to `cd` into mysite from there.
Use `cd mysite` from the command line. Then run `python manage.py runserver` and the dev server will start up in the current (or a new one, if there isn't a current) browser window.
To visualize this for you:
```
current_dir/ <-- you are here now
    mysite/ <-- use cd mysite to get here!
        manage.py <-- and use this
        mysite/
            __init__.py
            urls.py
            settings.py
            etc.
```
current\_dir is where you initially created the project.
**Pro tip**: you always have to come back to this exact dir to use manage.py, so if you get that error again while you're making the polls app, you are probably just in the wrong directory. | You need to go to the directory that the app you created resides in, then run the command `manage.py runserver` on Windows or `python manage.py runserver` in a Unix terminal.
It is typical to create a separate directory for your Django projects. A typical directory would be:
```
C:\DjangoProjects\
```
You would then put the location of `django-admin.py` on your `PYTHONPATH` in your command shell and run the startproject command and the new project would be created in the current directory that you are in. If you have already created the project, you could also just cut and paste it to a different directory that way your Django projects are not in the same directory as your Python / Django source code.
Either way, in the end go the directory for the app you created, so:
```
C:\DjangoProjects\mysite\
```
and from that directory run the `manage.py runserver` command and this will start the `app` running on your local machine. |
27,692,051 | **Is there any way to disable the syntax highlighting in SublimeREPL-tabs when a script is running?**
Please see this question for context: [Red lines coming up after strings in SublimeREPL (python)?](https://stackoverflow.com/q/25693151/1426065)
For example, when python-scripts run in Sublime REPL, apostrophes (') in the output-text get highlighted as syntax.
Because of this, the last part of the line is highlighted as if the string **(which in fact is text-output and not actual code)** was not closed properly.
This is what the output looks like:

The highlighting is useful when Sublime REPL is running the interactive python shell, but when it just should run a script, I would like to get the text output without highlighting, like in any commandline-interface.
Of course I could just run the scripts in the commandline, but it would be nice to keep all work focused in just one program.
Maybe there are settings for the different kinds of Sublime REPL-enveronments (Interactive, run from script, etc.) that could change this behaviour?
Thanks for any help! :) | 2014/12/29 | [
"https://Stackoverflow.com/questions/27692051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4183985/"
] | Go to
Sublime Text > Preferences > Package Settings > SublimeREPL > Settings - User
(If your 'Settings - User' is empty, first copy in the contents of 'Settings - Default')
under "repl\_view\_settings": add:
```
,
"syntax": "Packages/Text/Plain text.tmLanguage"
```
so mine is now:
```
// standard sublime view settings that will be overwritten on each repl view
// this has to be customized as a whole dictionary
"repl_view_settings": {
"translate_tabs_to_spaces": false,
"auto_indent": false,
"smart_indent": false,
"spell_check": false,
"indent_subsequent_lines": false,
"detect_indentation": false,
"auto_complete": true,
"line_numbers": false,
"gutter": false,
"syntax": "Packages/Text/Plain text.tmLanguage"
},
``` | As @joe.dawley wrote in the comments to the original question, there is a way to manually disable syntax highlighting in SublimeREPL: use the go-to-anything command **(Ctrl + Shift + P)** and enter **"sspl"** to set the syntax to plain text.
44,549,369 | I am trying to calculate the Kullback-Leibler divergence from Gaussian#1 to Gaussian#2
I have the mean and the standard deviation for both Gaussians
I tried this code from <http://www.cs.cmu.edu/~chanwook/MySoftware/rm1_Spk-by-Spk_MLLR/rm1_PNCC_MLLR_1/rm1/python/sphinx/divergence.py>
```
def gau_kl(pm, pv, qm, qv):
"""
Kullback-Leibler divergence from Gaussian pm,pv to Gaussian qm,qv.
Also computes KL divergence from a single Gaussian pm,pv to a set
of Gaussians qm,qv.
Diagonal covariances are assumed. Divergence is expressed in nats.
"""
if (len(qm.shape) == 2):
axis = 1
else:
axis = 0
# Determinants of diagonal covariances pv, qv
dpv = pv.prod()
dqv = qv.prod(axis)
# Inverse of diagonal covariance qv
iqv = 1./qv
# Difference between means pm, qm
diff = qm - pm
return (0.5 *
(numpy.log(dqv / dpv) # log |\Sigma_q| / |\Sigma_p|
+ (iqv * pv).sum(axis) # + tr(\Sigma_q^{-1} * \Sigma_p)
+ (diff * iqv * diff).sum(axis) # + (\mu_q-\mu_p)^T\Sigma_q^{-1}(\mu_q-\mu_p)
- len(pm))) # - N
```
I use the mean and the standard deviation as input, but the last line of the code (`len(pm)`) causes an error because the mean is one number, and I don't understand the len function here.
Note: the two sets (i.e., Gaussians) are not equal; that's why I couldn't use scipy.stats.entropy. | 2017/06/14 | [
"https://Stackoverflow.com/questions/44549369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7879074/"
] | The following function computes the KL divergence between any two multivariate normal distributions (no need for the covariance matrices to be diagonal; `numpy` is imported as `np`):
```
def kl_mvn(m0, S0, m1, S1):
"""
Kullback-Leibler divergence from Gaussian pm,pv to Gaussian qm,qv.
Also computes KL divergence from a single Gaussian pm,pv to a set
of Gaussians qm,qv.
From wikipedia
KL( (m0, S0) || (m1, S1))
= .5 * ( tr(S1^{-1} S0) + log |S1|/|S0| +
(m1 - m0)^T S1^{-1} (m1 - m0) - N )
"""
# store inv diag covariance of S1 and diff between means
N = m0.shape[0]
iS1 = np.linalg.inv(S1)
diff = m1 - m0
# kl is made of three terms
tr_term = np.trace(iS1 @ S0)
det_term = np.log(np.linalg.det(S1)/np.linalg.det(S0)) #np.sum(np.log(S1)) - np.sum(np.log(S0))
quad_term = diff.T @ np.linalg.inv(S1) @ diff #np.sum( (diff*diff) * iS1, axis=1)
#print(tr_term,det_term,quad_term)
return .5 * (tr_term + det_term + quad_term - N)
``` | If you are still interested ...
That function expects the diagonal entries of the covariance matrices of the multivariate Gaussians, not standard deviations as you mention. If your inputs are univariate Gaussians, then both `pv` and `qv` are vectors of length 1 holding the variances of the corresponding Gaussians.
Besides, `len(pm)` corresponds to the dimension of the mean vectors. It is indeed **k** in the *Multivariate normal distributions* section [here](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence). For univariate Gaussians, **k** is 1, for bivariate ones **k** is 2, and so on. |
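For the univariate case the expression computed by `gau_kl` collapses to a closed form; a small self-contained sketch (note it takes variances, not standard deviations, in line with the point above):

```python
import math

def kl_univariate(m0, v0, m1, v1):
    # KL( N(m0, v0) || N(m1, v1) ) with variances v0, v1 (not std devs), in nats
    return 0.5 * (math.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

identical = kl_univariate(0.0, 1.0, 0.0, 1.0)  # same Gaussian -> 0
shifted = kl_univariate(1.0, 1.0, 0.0, 1.0)    # means differ by 1 -> 0.5
```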
55,537,213 | I'm following Adrian Rosebrock's tutorial on recognising digits on an RPi, so no tesseract or whatever:
<https://www.pyimagesearch.com/2017/02/13/recognizing-digits-with-opencv-and-python/>
But it doesn't recognise decimal points, so I've been trying really hard to create a part that would help to do that. I think I've gotten close, but I'm not sure what I've done wrong.
This is my image after preprocessing
[](https://i.stack.imgur.com/fGfOw.png)
and this is what happens after the attempted recognising part
[](https://i.stack.imgur.com/1tCQi.png)
As you can see, I'm doing something wrong somewhere. Already tried tuning param1 and param2 in the houghCircles
More examples:
[](https://i.stack.imgur.com/iAFAP.png)
[](https://i.stack.imgur.com/gUjUF.png)
Can anyone guide me on what I should do? I'm really lost here
================================================================
The images i'm using
[](https://i.stack.imgur.com/byjzJ.jpg)
[](https://i.stack.imgur.com/cZFZQ.jpg)
The code I'm using
```
from imutils.perspective import four_point_transform
from imutils import contours
import imutils
import cv2
import numpy
DIGITS_LOOKUP = {
# Old Library
#(1, 1, 1, 0, 1, 1, 1): 0, # same as new 8
(0, 0, 1, 0, 0, 1, 0): 1,
(1, 0, 1, 1, 1, 1, 0): 2,
(1, 0, 1, 1, 0, 1, 1): 3,
(0, 1, 1, 1, 0, 1, 0): 4,
(1, 1, 0, 1, 0, 1, 1): 5,
#(1, 1, 0, 1, 1, 1, 1): 6,
(1, 0, 1, 0, 0, 1, 0): 7,
(1, 1, 1, 1, 1, 1, 1): 8,
(1, 1, 1, 1, 0, 1, 1): 9,
# New Digital Library
(0, 0, 1, 1, 1, 0, 1): 0,
(1, 0, 1, 0, 0, 1, 1): 2,
(0, 0, 1, 1, 0, 1, 1): 4,
(0, 0, 0, 0, 0, 1, 1): 4,
(1, 1, 0, 0, 0, 1, 1): 5,
(1, 1, 0, 1, 1, 0, 1): 5,
(1, 0, 0, 0, 0, 1, 1): 5,
(1, 1, 1, 0, 0, 0, 0): 7,
(1, 1, 0, 1, 1, 1, 1): 8,
(1, 1, 1, 0, 1, 1, 1): 8
}
image = cv2.imread("10.jpg")
image = imutils.resize(image, height=100)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(blurred, 120, 255, 1)
cv2.imshow("1", edged)
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
displayCnt = None
for c in cnts:
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.02 * peri, True)
if len(approx) == 4:
displayCnt = approx
break
warped = four_point_transform(gray, displayCnt.reshape(4, 2))
output = four_point_transform(image, displayCnt.reshape(4, 2))
thresh = cv2.threshold(warped, 0, 255,
cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
cv2.imshow("2", thresh)
print(thresh.shape)
circles = cv2.HoughCircles(warped, cv2.HOUGH_GRADIENT, 7, 14, param1=0.1, param2=20, minRadius=3, maxRadius=7)
# ensure at least some circles were found
if circles is not None:
circles = numpy.round(circles[0, :]).astype("int")
for (x, y, r) in circles:
cv2.circle(output, (x, y), r, (0, 255, 0), 4)
cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
# show the output image
cv2.imshow("test", output)
cv2.waitKey(0)
``` | 2019/04/05 | [
"https://Stackoverflow.com/questions/55537213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2810806/"
] | If JSON is used to exchange data, it *must* use UTF-8 encoding (see [RFC8259](https://www.rfc-editor.org/rfc/rfc8259)). UTF-16 and UTF-32 encodings are no longer allowed. So it is not necessary to escape the degree character. And I strongly recommend against escaping unnecessarily.
*Correct and recommended*
```
{
"units": "°C"
}
```
Of course, you must apply a proper UTF-8 encoding.
If JSON is used in a closed ecosystem, you can use other text encodings (though I would recommend against it unless you have a very good reason). If you need to escape the degree character in your non-UTF-8 encoding, the correct escaping sequence is `\u00b0`.
*Possible but not recommended*
```
{
"units": "\u00b0C"
}
```
Your second approach is incorrect under all circumstances.
*Incorrect*
```
{
"units":"c2b0"
}
```
It is also incorrect to use something like "\xc2\xb0". This is the escaping used in C/C++ source code. It is also used by debuggers to display strings. In JSON, it is always invalid.
*Incorrect as well*
```
{
"units":"\xc2\xb0"
}
``` | JSON uses unicode to be encoded, but it is specified that you can use `\uxxxx` escape codes to represent characters that don't map into your computer native environment, so it's perfectly valid to include such escape sequences and use only plain ascii encoding to transfer JSON serialized data. |
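Both the recommended literal form and the `\u00b0` escape decode to the same string; a quick illustration with Python's `json` module (used here only to demonstrate the two valid encodings):

```python
import json

payload = {"units": "\u00b0C"}  # the string "°C"

# default: non-ASCII characters escaped as \uXXXX (valid, but unnecessary)
escaped = json.dumps(payload)
# ensure_ascii=False: emit the degree sign directly, for UTF-8 output
raw = json.dumps(payload, ensure_ascii=False)
```

Both serializations parse back to the identical dictionary, so the choice only affects the bytes on the wire.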
69,045,992 | So I am trying to install and import pynput in VSCode but it's showing me an error every time I try to do it. I used VSCode's in-built terminal to install it using pip and typed the following:
`pip install pynput` but this error is shown : `Fatal error in launcher: Unable to create process using '"c:\users\vicks\appdata\local\programs\python\python38-32\python.exe" "C:\Users\vicks\AppData\Local\Programs\Python\Python38-32\Scripts\pip.exe" install pynput': The system cannot find the file specified`
After receiving the following error, I tried using CMD to install it but the same error is shown. I also tried using `python pip install pynput` and it shows `Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.` even though I have python 3.9.7 and I have selected it as my interpreter in VSCode and I have IDLE(Python 64 bit) installed. How may I resolve the following error? Any help regarding the same is appreciated
Thanks in advance :) | 2021/09/03 | [
"https://Stackoverflow.com/questions/69045992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16225182/"
] | you need to set a default lang in case `preferredLanguages` is empty or an error occurs, like this:
```
static String lang ='';
List? languages = [];
languages = await Devicelocale.preferredLanguages;
if (languages?.isNotEmpty == true) {
lang = languages![0] ?? "en";
}else{
lang = "en";
}
``` | You should add the bang `!` at the end `languages[0]!` to remove the nullability. |
69,045,992 | So I am trying to install and import pynput in VSCode but it's showing me an error every time I try to do it. I used VSCode's in-built terminal to install it using pip and typed the following:
`pip install pynput` but this error is shown : `Fatal error in launcher: Unable to create process using '"c:\users\vicks\appdata\local\programs\python\python38-32\python.exe" "C:\Users\vicks\AppData\Local\Programs\Python\Python38-32\Scripts\pip.exe" install pynput': The system cannot find the file specified`
After receiving the following error, I tried using CMD to install it but the same error is shown. I also tried using `python pip install pynput` and it shows `Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.` even though I have python 3.9.7 and I have selected it as my interpreter in VSCode and I have IDLE(Python 64 bit) installed. How may I resolve the following error? Any help regarding the same is appreciated
Thanks in advance :) | 2021/09/03 | [
"https://Stackoverflow.com/questions/69045992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16225182/"
] | You should try to never use a `!`. Variables are nullable for a reason; they *can* be null, and your compiler is very good at guiding you through this. Don't try to be smarter than your compiler by using a `!`.
You can use `if`s or null aware operators like `?` or `??`, but don't assume something *cannot* be null, when in reality, it **can** and **will** be. You need to *handle* that situation, not ignore it:
```
String lang ='';
String fallbackLanguage = 'en';
List<dynamic>? languages = await Devicelocale.preferredLanguages;
lang = languages?.first ?? fallbackLanguage;
``` | You should add the bang `!` at the end `languages[0]!` to remove the nullability. |
69,045,992 | So I am trying to install and import pynput in VSCode but it's showing me an error every time I try to do it. I used VSCode's in-built terminal to install it using pip and typed the following:
`pip install pynput` but this error is shown : `Fatal error in launcher: Unable to create process using '"c:\users\vicks\appdata\local\programs\python\python38-32\python.exe" "C:\Users\vicks\AppData\Local\Programs\Python\Python38-32\Scripts\pip.exe" install pynput': The system cannot find the file specified`
After receiving the following error, I tried using CMD to install it but the same error is shown. I also tried using `python pip install pynput` and it shows `Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.` even though I have python 3.9.7 and I have selected it as my interpreter in VSCode and I have IDLE(Python 64 bit) installed. How may I resolve the following error? Any help regarding the same is appreciated
Thanks in advance :) | 2021/09/03 | [
"https://Stackoverflow.com/questions/69045992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16225182/"
] | Use `whereType<String>` to filter for the type you're looking for and create a new non-null `List<String>`.
```dart
final List<dynamic> languages = await Devicelocale.preferredLanguages;
final preferredLangs = languages.whereType<String>();
```
Sample:
```dart
void main() {
final List<dynamic> fakeSource = ["one", null, "two", "three"];
final nonNull = fakeSource.whereType<String>();
print(nonNull);
}
``` | You should add the bang `!` at the end `languages[0]!` to remove the nullability. |
51,775,370 | I'm running Airflow on a clustered environment running on two AWS EC2-Instances. One for master and one for the worker. The worker node though periodically throws this error when running "$airflow worker":
```
[2018-08-09 16:15:43,553] {jobs.py:2574} WARNING - The recorded hostname ip-1.2.3.4 does not match this instance's hostname ip-1.2.3.4.eco.tanonprod.comanyname.io
Traceback (most recent call last):
File "/usr/bin/airflow", line 27, in <module>
args.func(args)
File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 387, in run
run_job.run()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 198, in run
self._execute()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2527, in _execute
self.heartbeat()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 182, in heartbeat
self.heartbeat_callback(session=session)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 50, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2575, in heartbeat_callback
raise AirflowException("Hostname of job runner does not match")
airflow.exceptions.AirflowException: Hostname of job runner does not match
[2018-08-09 16:15:43,671] {celery_executor.py:54} ERROR - Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
[2018-08-09 16:15:43,681: ERROR/ForkPoolWorker-30] Task airflow.executors.celery_executor.execute_command[875a4da9-582e-4c10-92aa-5407f3b46d5f] raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 52, in execute_command
subprocess.check_call(command, shell=True)
File "/usr/lib64/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 641, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 55, in execute_command
raise AirflowException('Celery command failed')
airflow.exceptions.AirflowException: Celery command failed
```
When this error occurs the task is marked as failed on Airflow and thus fails my DAG when nothing actually went wrong in the task.
I'm using Redis as my queue and postgreSQL as my meta-database. Both are external as AWS services. I'm running all of this on my company environment which is why the full name of the server is `ip-1.2.3.4.eco.tanonprod.comanyname.io`. It looks like it wants this full name somewhere but I have no idea where I need to fix this value so that it's getting `ip-1.2.3.4.eco.tanonprod.comanyname.io` instead of just `ip-1.2.3.4`.
**The really weird thing about this issue is that it doesn't always happen.** It seems to just randomly happen every once in a while when I run the DAG. It's also occurring on all of my DAGs sporadically so it's not just one DAG. I find it strange though how it's sporadic because that means other task runs are handling the IP address for whatever this is just fine.
**Note:** I've changed the real IP address to 1.2.3.4 for privacy reasons.
**Answer:**
<https://github.com/apache/incubator-airflow/pull/2484>
This is exactly the problem I am having and other Airflow users on AWS EC2-Instances are experiencing it as well. | 2018/08/09 | [
"https://Stackoverflow.com/questions/51775370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3299397/"
] | The hostname is set when the task instance runs, and is set to `self.hostname = socket.getfqdn()`, where socket is the python package `import socket`.
The comparison that triggers this error is:
```
fqdn = socket.getfqdn()
if fqdn != ti.hostname:
    logging.warning("The recorded hostname {ti.hostname} "
                    "does not match this instance's hostname "
                    "{fqdn}".format(**locals()))
    raise AirflowException("Hostname of job runner does not match")
```
It seems like the hostname on the ec2 instance is changing on you while the worker is running. Perhaps try manually setting the hostname as described here <https://forums.aws.amazon.com/thread.jspa?threadID=246906> and see if that sticks. | I had a similar problem on my Mac. I fixed it by setting `hostname_callable = socket:gethostname` in `airflow.cfg`. |
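The failing check boils down to comparing the FQDN recorded when the task started against the FQDN seen at heartbeat time; a minimal sketch of that comparison (within one short-lived process the two calls normally agree, which is why the error only surfaces when the host's reported name changes in between):

```python
import socket

recorded = socket.getfqdn()   # what Airflow stores on the task instance
current = socket.getfqdn()    # what the heartbeat check sees later
hostname_stable = (recorded == current)
```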
51,775,370 | I'm running Airflow on a clustered environment running on two AWS EC2-Instances. One for master and one for the worker. The worker node though periodically throws this error when running "$airflow worker":
```
[2018-08-09 16:15:43,553] {jobs.py:2574} WARNING - The recorded hostname ip-1.2.3.4 does not match this instance's hostname ip-1.2.3.4.eco.tanonprod.comanyname.io
Traceback (most recent call last):
File "/usr/bin/airflow", line 27, in <module>
args.func(args)
File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 387, in run
run_job.run()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 198, in run
self._execute()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2527, in _execute
self.heartbeat()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 182, in heartbeat
self.heartbeat_callback(session=session)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 50, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2575, in heartbeat_callback
raise AirflowException("Hostname of job runner does not match")
airflow.exceptions.AirflowException: Hostname of job runner does not match
[2018-08-09 16:15:43,671] {celery_executor.py:54} ERROR - Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
[2018-08-09 16:15:43,681: ERROR/ForkPoolWorker-30] Task airflow.executors.celery_executor.execute_command[875a4da9-582e-4c10-92aa-5407f3b46d5f] raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 52, in execute_command
subprocess.check_call(command, shell=True)
File "/usr/lib64/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 641, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 55, in execute_command
raise AirflowException('Celery command failed')
airflow.exceptions.AirflowException: Celery command failed
```
When this error occurs the task is marked as failed on Airflow and thus fails my DAG when nothing actually went wrong in the task.
I'm using Redis as my queue and PostgreSQL as my meta-database. Both are external AWS services. I'm running all of this in my company environment, which is why the full name of the server is `ip-1.2.3.4.eco.tanonprod.comanyname.io`. It looks like it wants this full name somewhere, but I have no idea where I need to fix this value so that it's getting `ip-1.2.3.4.eco.tanonprod.comanyname.io` instead of just `ip-1.2.3.4`.
**The really weird thing about this issue is that it doesn't always happen.** It seems to just randomly happen every once in a while when I run the DAG. It's also occurring on all of my DAGs sporadically so it's not just one DAG. I find it strange though how it's sporadic because that means other task runs are handling the IP address for whatever this is just fine.
**Note:** I've changed the real IP address to 1.2.3.4 for privacy reasons.
**Answer:**
<https://github.com/apache/incubator-airflow/pull/2484>
This is exactly the problem I am having and other Airflow users on AWS EC2-Instances are experiencing it as well. | 2018/08/09 | [
"https://Stackoverflow.com/questions/51775370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3299397/"
] | The hostname is set when the task instance runs, via `self.hostname = socket.getfqdn()`, where `socket` is the Python standard-library module (`import socket`).
The comparison that triggers this error is:
```
fqdn = socket.getfqdn()
if fqdn != ti.hostname:
    logging.warning("The recorded hostname {ti.hostname} "
                    "does not match this instance's hostname "
                    "{fqdn}".format(**locals()))
    raise AirflowException("Hostname of job runner does not match")
```
It seems like the hostname on the ec2 instance is changing on you while the worker is running. Perhaps try manually setting the hostname as described here <https://forums.aws.amazon.com/thread.jspa?threadID=246906> and see if that sticks. | Personally when running on my Mac, I found that I got similar errors to this when the Mac would sleep while I was running a long job. The solution was to go into System Preferences -> Energy Saver and then check "Prevent computer from sleeping automatically when the display is off." |
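The check quoted above is easy to reproduce in isolation. Here is a hedged sketch of the same comparison — the recorded hostname is passed as a plain argument standing in for Airflow's `ti.hostname`; this is illustrative, not Airflow's actual code:

```python
import socket

def hostname_matches(recorded_hostname):
    # Mimic the jobs.py comparison: the hostname recorded for the task
    # instance must equal this machine's fully qualified domain name.
    return socket.getfqdn() == recorded_hostname

# A record holding only the short name ("ip-1-2-3-4") fails whenever
# getfqdn() returns the full name, which is why pinning the hostname
# (or Airflow's hostname_callable setting) makes the error go away.
print(hostname_matches(socket.getfqdn()))
print(hostname_matches("ip-1-2-3-4"))
```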
51,775,370 | I'm running Airflow on a clustered environment running on two AWS EC2-Instances. One for master and one for the worker. The worker node though periodically throws this error when running "$airflow worker":
```
[2018-08-09 16:15:43,553] {jobs.py:2574} WARNING - The recorded hostname ip-1.2.3.4 does not match this instance's hostname ip-1.2.3.4.eco.tanonprod.comanyname.io
Traceback (most recent call last):
File "/usr/bin/airflow", line 27, in <module>
args.func(args)
File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 387, in run
run_job.run()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 198, in run
self._execute()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2527, in _execute
self.heartbeat()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 182, in heartbeat
self.heartbeat_callback(session=session)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 50, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2575, in heartbeat_callback
raise AirflowException("Hostname of job runner does not match")
airflow.exceptions.AirflowException: Hostname of job runner does not match
[2018-08-09 16:15:43,671] {celery_executor.py:54} ERROR - Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
[2018-08-09 16:15:43,681: ERROR/ForkPoolWorker-30] Task airflow.executors.celery_executor.execute_command[875a4da9-582e-4c10-92aa-5407f3b46d5f] raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 52, in execute_command
subprocess.check_call(command, shell=True)
File "/usr/lib64/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 641, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 55, in execute_command
raise AirflowException('Celery command failed')
airflow.exceptions.AirflowException: Celery command failed
```
When this error occurs the task is marked as failed on Airflow and thus fails my DAG when nothing actually went wrong in the task.
I'm using Redis as my queue and PostgreSQL as my meta-database. Both are external AWS services. I'm running all of this in my company environment, which is why the full name of the server is `ip-1.2.3.4.eco.tanonprod.comanyname.io`. It looks like it wants this full name somewhere, but I have no idea where I need to fix this value so that it's getting `ip-1.2.3.4.eco.tanonprod.comanyname.io` instead of just `ip-1.2.3.4`.
**The really weird thing about this issue is that it doesn't always happen.** It seems to just randomly happen every once in a while when I run the DAG. It's also occurring on all of my DAGs sporadically so it's not just one DAG. I find it strange though how it's sporadic because that means other task runs are handling the IP address for whatever this is just fine.
**Note:** I've changed the real IP address to 1.2.3.4 for privacy reasons.
**Answer:**
<https://github.com/apache/incubator-airflow/pull/2484>
This is exactly the problem I am having and other Airflow users on AWS EC2-Instances are experiencing it as well. | 2018/08/09 | [
"https://Stackoverflow.com/questions/51775370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3299397/"
] | I had a similar problem on my Mac. I fixed it by setting `hostname_callable = socket:gethostname` in `airflow.cfg`. | Personally when running on my Mac, I found that I got similar errors to this when the Mac would sleep while I was running a long job. The solution was to go into System Preferences -> Energy Saver and then check "Prevent computer from sleeping automatically when the display is off." |
55,337,221 | I'm trying to connect to another computer on the local network via Python (subprocess module) with these commands from CMD.exe:
* `net use \\ip\C$ password /user:username`
* `copy D:\file.txt \\ip\C$`
Then in Python it looks like below.
But when I try the second command, I get:
>
> "FileNotFoundError: [WinError 2]"
>
>
>
Have you met the same problem?
Is there any way to fix it?
```
import subprocess as sp
code = sp.call(r'net use \\<ip>\C$ <pass> /user:<username>')
print(code)
sp.call(r'copy D:\file.txt \\<ip>\C$')
``` | 2019/03/25 | [
"https://Stackoverflow.com/questions/55337221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5299639/"
] | The issue is that `copy` is a built-in, not a real command in Windows.
Those Windows messages are awful, but `"FileNotFoundError: [WinError 2]"` doesn't mean one of source & destination files can't be accessed (if `copy` failed, you'd get a normal Windows message with explicit file names).
Here, it means that the *command* could not be accessed.
So you'd need to add `shell=True` to your subprocess call to gain access to built-ins.
But don't do that (security issues, non-portability), use `shutil.copy` instead.
Aside, use `check_call` instead of `call` for your first command, as if `net use` fails, the rest will fail too. Better have an early failure.
To sum it up, here's what I would do:
```
import shutil
import subprocess as sp
sp.check_call(['net','use',r'\\<ip>\C$','password','/user:<username>'])
shutil.copy(r'D:\file.txt', r'\\<ip>\C$')
``` | You need to make sure you have the right to add a file.
I tested successfully after I corrected the shared directory's rights. |
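For completeness, a hedged, self-contained sketch of the `shutil.copy` approach, using local temporary directories as stand-ins for `D:\file.txt` and the `\\<ip>\C$` share (once `net use` has authenticated, a UNC destination works the same way):

```python
import shutil
import tempfile
from pathlib import Path

# Local stand-ins for the source file and the remote share.
src_dir = Path(tempfile.mkdtemp())
share = Path(tempfile.mkdtemp())

src = src_dir / "file.txt"
src.write_text("hello")

# shutil.copy returns the path of the copied file.
dest = shutil.copy(src, share)
print(Path(dest).read_text())  # hello
```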
50,777,013 | I am just a beginner to TensorFlow and trying to install TensorFlow with CPU support only.
Initially, I downloaded and installed Python 3.5.2 version from <https://www.python.org/downloads/release/python-352/>
After successful installation, I ran the command `pip3 install --upgrade tensorflow` which installed tensorflow-1.8.0.
To test the installation I just ran the following commands:
```
> python
> import tensorflow as tf
```
But this gave me an **error**:
>
> ImportError: Could not find 'msvcp140.dll'. TensorFlow requires that
> this DLL be installed in a directory that is named in your %PATH%
> environment variable. You may install this DLL by downloading Visual
> C++ 2015 Redistributable Update 3 from this URL:
> <https://www.microsoft.com/en-us/download/details.aspx?id=53587>
>
>
>
I searched for this issue and found a link to an issue: <https://github.com/tensorflow/tensorflow/issues/17393>.
According to the above, I tried running the command
```
pip install tensorflow==1.5
```
But this didn't solve my problem.
I even tried downloading ***msvcp140.dll*** and manually copying it under the ***C:\Windows\SysWOW64*** folder and reinstalling Python and TensorFlow.
How do I fix this problem?
Thanks in advance. | 2018/06/09 | [
"https://Stackoverflow.com/questions/50777013",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8141116/"
] | I copied ***msvcp140.dll*** to path ***C:\Users\PCName\AppData\Local\Programs\Python\Python35***
and it worked for me.
I also switched back to tensorflow 1.8 from 1.5. | You can download the package from the url <https://www.microsoft.com/en-us/download/details.aspx?id=53587> and install it. This will solve the issue. |
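As a quick diagnostic, one can check from Python whether a DLL such as msvcp140.dll is actually visible on %PATH%. A hedged sketch — the search logic is illustrative, not TensorFlow's own check:

```python
import os

def find_on_path(filename, path_env=None):
    # Return the first directory on PATH that contains `filename`, else None.
    if path_env is None:
        path_env = os.environ.get("PATH", "")
    for directory in path_env.split(os.pathsep):
        if directory and os.path.isfile(os.path.join(directory, filename)):
            return directory
    return None

# On the affected machine one would run find_on_path("msvcp140.dll");
# None means the DLL is not visible anywhere on %PATH%.
print(find_on_path("msvcp140.dll"))
```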
50,777,013 | I am just a beginner to TensorFlow and trying to install TensorFlow with CPU support only.
Initially, I downloaded and installed Python 3.5.2 version from <https://www.python.org/downloads/release/python-352/>
After successful installation, I ran the command `pip3 install --upgrade tensorflow` which installed tensorflow-1.8.0.
To test the installation I just ran the following commands:
```
> python
> import tensorflow as tf
```
But this gave me an **error**:
>
> ImportError: Could not find 'msvcp140.dll'. TensorFlow requires that
> this DLL be installed in a directory that is named in your %PATH%
> environment variable. You may install this DLL by downloading Visual
> C++ 2015 Redistributable Update 3 from this URL:
> <https://www.microsoft.com/en-us/download/details.aspx?id=53587>
>
>
>
I searched for this issue and found a link to an issue: <https://github.com/tensorflow/tensorflow/issues/17393>.
According to the above, I tried running the command
```
pip install tensorflow==1.5
```
But this didn't solve my problem.
I even tried downloading ***msvcp140.dll*** and manually copying it under the ***C:\Windows\SysWOW64*** folder and reinstalling Python and TensorFlow.
How do I fix this problem?
Thanks in advance. | 2018/06/09 | [
"https://Stackoverflow.com/questions/50777013",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8141116/"
] | I copied ***msvcp140.dll*** to path ***C:\Users\PCName\AppData\Local\Programs\Python\Python35***
and it worked for me.
I also switched back to tensorflow 1.8 from 1.5. | Download msvcp140.dll or click <https://www.dll-files.com/msvcp140.dll.html>.
Then find your Python path; it is easy to find from your error.
The error will look like this:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\sky network\AppData\Local\Programs\Python\Python36\lib\site-
packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import *
File "C:\Users\sky network\AppData\Local\Programs\Python\Python36\lib\site-
packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\sky network\AppData\Local\Programs\Python\Python36\lib\site-
packages\tensorflow\python\pywrap_tensorflow.py", line 30, in <module>
self_check.preload_check()
File "C:\Users\sky network\AppData\Local\Programs\Python\Python36\lib\site-
packages\tensorflow\python\platform\self_check.py", line 55, in preload_check
% build_info.msvcp_dll_name)
```
From my error, the Python path is
"C:\Users\sky network\AppData\Local\Programs\Python\Python36".
If you can't find the AppData folder,
click the folder view option and enable hidden files,
then paste the file "msvcp140.dll" into
C:\Users\sky network\AppData\Local\Programs\Python\Python36 |
50,777,013 | I am just a beginner to TensorFlow and trying to install TensorFlow with CPU support only.
Initially, I downloaded and installed Python 3.5.2 version from <https://www.python.org/downloads/release/python-352/>
After successful installation, I ran the command `pip3 install --upgrade tensorflow` which installed tensorflow-1.8.0.
To test the installation I just ran the following commands:
```
> python
> import tensorflow as tf
```
But this gave me an **error**:
>
> ImportError: Could not find 'msvcp140.dll'. TensorFlow requires that
> this DLL be installed in a directory that is named in your %PATH%
> environment variable. You may install this DLL by downloading Visual
> C++ 2015 Redistributable Update 3 from this URL:
> <https://www.microsoft.com/en-us/download/details.aspx?id=53587>
>
>
>
I searched for this issue and found a link to an issue: <https://github.com/tensorflow/tensorflow/issues/17393>.
According to the above, I tried running the command
```
pip install tensorflow==1.5
```
But this didn't solve my problem.
I even tried downloading ***msvcp140.dll*** and manually copying it under the ***C:\Windows\SysWOW64*** folder and reinstalling Python and TensorFlow.
How do I fix this problem?
Thanks in advance. | 2018/06/09 | [
"https://Stackoverflow.com/questions/50777013",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8141116/"
] | Download msvcp140.dll or click <https://www.dll-files.com/msvcp140.dll.html>.
Then find your Python path; it is easy to find from your error.
The error will look like this:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\sky network\AppData\Local\Programs\Python\Python36\lib\site-
packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import *
File "C:\Users\sky network\AppData\Local\Programs\Python\Python36\lib\site-
packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\sky network\AppData\Local\Programs\Python\Python36\lib\site-
packages\tensorflow\python\pywrap_tensorflow.py", line 30, in <module>
self_check.preload_check()
File "C:\Users\sky network\AppData\Local\Programs\Python\Python36\lib\site-
packages\tensorflow\python\platform\self_check.py", line 55, in preload_check
% build_info.msvcp_dll_name)
```
From my error, the Python path is
"C:\Users\sky network\AppData\Local\Programs\Python\Python36".
If you can't find the AppData folder,
click the folder view option and enable hidden files,
then paste the file "msvcp140.dll" into
C:\Users\sky network\AppData\Local\Programs\Python\Python36 | You can download the package from the url <https://www.microsoft.com/en-us/download/details.aspx?id=53587> and install it. This will solve the issue. |
57,473,982 | In VS Code, for some reason, I cannot run any Python code because VS Code puts `python` instead of `py` in cmd.
It shows this:
>
> [Running] python -u "c:\Users..."
>
>
>
but it is supposed to show this:
>
> [Running] py -u "c:\Users\
>
>
>
I have tried searching online for how to fix the error message:
**'python' is not recognized as an internal or external command, operable program or batch file.**
but it only comes up with unhelpful answers.
```
import pygame
pygame.init()
screen = pygame.display.set_mode((360,360))
```
What is output:
>
> [Running] python -u "c:\Users..."
>
>
>
As you can see, it inputs the wrong command, and I have no idea how to fix it.
The expected output:
>
> [Running] py -u "c:\Users..."
>
>
> | 2019/08/13 | [
"https://Stackoverflow.com/questions/57473982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11921290/"
] | Well, you can change the interpreter that Code uses by pressing `Ctrl+Shift+P` and then searching for `Python: Select Interpreter`; this should help when it comes to running the code in the IDE. If that doesn't work, you could just try and use the built-in terminal in Code to run the code manually with the `py` command. | In VS Code debug mode I have a launch.json as follows, and then I can easily debug the code with breakpoints:
```
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [{
        "name": "Python: Current File",
        "type": "python",
        "request": "launch",
        "program": "${file}",
        "console": "integratedTerminal"
    }]
}
```
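Relatedly, a quick way to see which launcher names are actually resolvable from the current PATH — a hedged sketch whose output differs per machine (on a box with only the Windows `py` launcher installed, `python` will come back as not found):

```python
import shutil

# shutil.which performs the same PATH lookup the shell does.
for launcher in ("py", "python", "python3"):
    path = shutil.which(launcher)
    print(f"{launcher}: {path or 'not found'}")
```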
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10
if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the test method. It is very weird to me, and I have not faced this before.
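For reference, a minimal sketch (in Python 3 syntax, not from the original post) showing that declaring the name `global` before assigning to it avoids the UnboundLocalError:

```python
num = 0

def test():
    global num  # without this, `num += 10` makes `num` local and unbound at the print
    print(num)
    num += 10

test()
print(num)  # 10
```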
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | If you do not want a direct child selector, just add a parent reference for the nested elements.
This will make it work.
You can add the below.
```
.red .blue h1 {
color: blue;
}
```
**[WORKING DEMO](http://jsfiddle.net/N7FcB/1/)**
To enforce your div to render the color blue, you just need to add the reference of the element that you are using to the class.
**For instance,**
```
div.blue h1 {
color: blue;
}
```
**[WORKING DEMO - 2](http://jsfiddle.net/N7FcB/4/)**
In both cases, it will work. | Or maybe like this:
```
.red > h1 {
color: red;
}
.blue h1 {
color: blue;
}
```
[fiddle](http://jsfiddle.net/sxVcL/3/).
This works 100%. |
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10
if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the test method. It is very weird to me, and I have not faced this before.
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | If you do not want a direct child selector, just add a parent reference for the nested elements.
This will make it work.
You can add the below.
```
.red .blue h1 {
color: blue;
}
```
**[WORKING DEMO](http://jsfiddle.net/N7FcB/1/)**
To enforce your div to render the color blue, you just need to add the reference of the element that you are using to the class.
**For instance,**
```
div.blue h1 {
color: blue;
}
```
**[WORKING DEMO - 2](http://jsfiddle.net/N7FcB/4/)**
In both cases, it will work. | Hope it will help you:
```
.red > h1 {
color: red;
}
.blue h1 {
color: blue;
}
```
Select it as a direct child and you will not face the problem any more. |
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10
if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the test method. It is very weird to me, and I have not faced this before.
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | If you do not want a direct child selector, just add a parent reference for the nested elements.
This will make it work.
You can add the below.
```
.red .blue h1 {
color: blue;
}
```
**[WORKING DEMO](http://jsfiddle.net/N7FcB/1/)**
To enforce your div to render the color blue, you just need to add the reference of the element that you are using to the class.
**For instance,**
```
div.blue h1 {
color: blue;
}
```
**[WORKING DEMO - 2](http://jsfiddle.net/N7FcB/4/)**
In both cases, it will work. | ```
.blue > * {
color: blue;
}
.red > * {
color: red;
}
```
You can always try the ">" selector combined with a wildcard.
[myfiddle](http://jsfiddle.net/9XtaM/2/) |
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10
if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the test method. It is very weird to me, and I have not faced this before.
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | If you do not want a direct child selector, just add a parent reference for the nested elements.
This will make it work.
You can add the below.
```
.red .blue h1 {
color: blue;
}
```
**[WORKING DEMO](http://jsfiddle.net/N7FcB/1/)**
To enforce your div to render the color blue, you just need to add the reference of the element that you are using to the class.
**For instance,**
```
div.blue h1 {
color: blue;
}
```
**[WORKING DEMO - 2](http://jsfiddle.net/N7FcB/4/)**
In both cases, it will work. | How about this?
```
div.red > h1 {
color: red !important;
}
div.blue > h1 {
color: blue !important;
}
```
or, without the `div` element:
```
.red > h1 {
color: red !important;
}
.blue > h1 {
color: blue !important;
}
```
<http://jsfiddle.net/N7FcB/6/> |
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10
if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the test method. It is very weird to me, and I have not faced this before.
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | If you do not want a direct child selector, just add a parent reference for the nested elements.
This will make it work.
You can add the below.
```
.red .blue h1 {
color: blue;
}
```
**[WORKING DEMO](http://jsfiddle.net/N7FcB/1/)**
To enforce your div to render the color blue, you just need to add the reference of the element that you are using to the class.
**For instance,**
```
div.blue h1 {
color: blue;
}
```
**[WORKING DEMO - 2](http://jsfiddle.net/N7FcB/4/)**
In both cases, it will work. | Actually, how many H1s do you need inside a div? I say not many. So why don't we give the class to the H1?
```
h1.red { color: red; }
h1.green { color: green; }
h1.blue { color: blue; }
```
---
**Update**
How about having a box with a depth level? See fiddle <http://jsfiddle.net/AnL7R/>.
By having linked classes you can override the upper one, e.g.:
```
.blue,
.blue.first,
.blue.second
/*more depth class*/
{
color: blue;
}
.red,
.red.first,
.red.second
/*more depth class*/
{
color: red;
}
```
Hope it helps |
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10
if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the test method. It is very weird to me, and I have not faced this before.
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | The browser reads your CSS from top to bottom and applies it in the same way.
So first you have a rule called
```
.blue h1 {
color: blue;
}
```
So the browser will parse this information and color your `h1` blue, but then it goes ahead and hits the second selector, which is
```
.red h1 {
color: red;
}
```
Now, since your `h1` nested inside `.blue` is further nested inside `.red`, and the specificity of both selectors is the same, the browser will go ahead and apply `red` to the inner `h1`.
So what's the solution?
If you can, just swap the order of your classes... No? You cannot? Then use a specific selector:
```
div.blue h1 {
color: blue;
}
```
[**Demo**](http://jsfiddle.net/N7FcB/3/)
The above selector is more specific compared to `.red h1` as it has a `class`, and 2 elements... so here, browser will pick up first rule as it is more specific, thus overriding your `.red h1` selector.
You can make your selectors as specific as you need; you can write the above as `div.red div.blue h1` or `.red .blue h1`. But just remember: the more specific selectors you use, the more you hit the performance bar, and you will end up writing more and more specific selectors in order to override others, so choose wisely. | Or maybe like this:
```
.red > h1 {
color: red;
}
.blue h1 {
color: blue;
}
```
[fiddle](http://jsfiddle.net/sxVcL/3/).
This works 100%. |
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10
if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the test method. It is very weird to me, and I have not faced this before.
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | The browser reads your CSS from top to bottom and applies it in the same way.
So first you have a rule called
```
.blue h1 {
color: blue;
}
```
So the browser will parse this information and color your `h1` blue, but then it goes ahead and hits the second selector, which is
```
.red h1 {
color: red;
}
```
Now, since your `h1` nested inside `.blue` is further nested inside `.red`, and the specificity of both selectors is the same, the browser will go ahead and apply `red` to the inner `h1`.
So what's the solution?
If you can, just swap the order of your classes... No? You cannot? Then use a specific selector:
```
div.blue h1 {
color: blue;
}
```
[**Demo**](http://jsfiddle.net/N7FcB/3/)
The above selector is more specific compared to `.red h1` as it has a `class`, and 2 elements... so here, browser will pick up first rule as it is more specific, thus overriding your `.red h1` selector.
You can make your selectors as specific as you need; you can write the above as `div.red div.blue h1` or `.red .blue h1`. But just remember: the more specific selectors you use, the more you hit the performance bar, and you will end up writing more and more specific selectors in order to override others, so choose wisely. | Hope it will help you:
```
.red > h1 {
color: red;
}
.blue h1 {
color: blue;
}
```
Select it as a direct child and you will not face the problem any more. |
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10
if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the test method. It is very weird to me, and I have not faced this before.
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | The browser reads your CSS from top to bottom and applies it in the same way.
So first you have a rule called
```
.blue h1 {
color: blue;
}
```
So the browser will parse this information and color your `h1` blue, but then it goes ahead and hits the second selector, which is
```
.red h1 {
color: red;
}
```
Now, since your `h1` nested inside `.blue` is further nested inside `.red`, and the specificity of both selectors is the same, the browser will go ahead and apply `red` to the inner `h1`.
So what's the solution?
If you can, just swap the order of your classes... No? You cannot? Then use a specific selector:
```
div.blue h1 {
color: blue;
}
```
[**Demo**](http://jsfiddle.net/N7FcB/3/)
The above selector is more specific compared to `.red h1` as it has a `class`, and 2 elements... so here, browser will pick up first rule as it is more specific, thus overriding your `.red h1` selector.
You can make your selectors as specific as you need; you can write the above as `div.red div.blue h1` or `.red .blue h1`. But just remember: the more specific selectors you use, the more you hit the performance bar, and you will end up writing more and more specific selectors in order to override others, so choose wisely. | ```
.blue > * {
color: blue;
}
.red > * {
color: red;
}
```
You can always try the ">" selector combined with a wildcard.
[myfiddle](http://jsfiddle.net/9XtaM/2/) |
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10
if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the test method. It is very weird to me, and I have not faced this before.
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | The browser reads your CSS from top to bottom and applies it in the same way.
So first you have a rule called
```
.blue h1 {
color: blue;
}
```
So the browser will parse this information and color your `h1` blue, but then it goes ahead and hits the second selector, which is
```
.red h1 {
color: red;
}
```
Now, since your `h1` nested inside `.blue` is further nested inside `.red`, and the specificity of both selectors is the same, the browser will go ahead and apply `red` to the inner `h1`.
So what's the solution?
If you can, just swap the order of your classes... No? You cannot? Then use a specific selector:
```
div.blue h1 {
    color: blue;
}
```
[**Demo**](http://jsfiddle.net/N7FcB/3/)
The above selector is more specific than `.red h1` because it has one class and two elements, so the browser will pick the first rule, overriding your `.red h1` selector.
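The comparison described above (one class plus two elements beats one class plus one element) can be sketched as a toy specificity counter. This is illustrative only, not the full CSS algorithm: real specificity also counts ids, attribute selectors, and pseudo-classes. The selectors are the ones from this answer.

```python
def specificity(selector):
    """Rough (ids, classes, elements) count for simple selectors.
    Illustrative only: ignores attributes, pseudo-classes, etc."""
    ids = classes = elements = 0
    for part in selector.replace(">", " ").split():
        if part.startswith("#"):
            ids += 1
        elif part.startswith("."):
            classes += 1
        elif "." in part:            # compound like div.blue
            classes += 1
            elements += 1
        else:
            elements += 1
    return (ids, classes, elements)

print(specificity(".red h1"))       # (0, 1, 1)
print(specificity("div.blue h1"))   # (0, 1, 2)
# Tuples compare left to right, like specificity does:
print(specificity(".red h1") < specificity("div.blue h1"))  # True
```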
You can make your selectors as specific as you need; you could write the above as `div.red div.blue h1` or `.red .blue h1`. But just remember: the more specific your selectors, the more you hit the performance bar, and you will end up writing more and more specific selectors in order to override others, so choose wisely. | How about this?
```
div.red > h1 {
    color: red !important;
}
div.blue > h1 {
    color: blue !important;
}
```
Or drop the `div` element:
```
.red > h1 {
    color: red !important;
}
.blue > h1 {
    color: blue !important;
}
```
<http://jsfiddle.net/N7FcB/6/> |
20,047,117 | I have my code as below.
```
def test():
    print num1
    print num
    num += 10

if __name__ == '__main__':
    num = 0
    num1 = 3
    test()
```
When executing the above Python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the `test` method. It seems very strange to me, and I have not run into this before.
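What trips people up here is that Python decides whether a name is local at compile time, per function: any assignment to `num` anywhere in the body (including `num += 10`) makes `num` local for the whole function, so even an earlier read fails, while `num1`, which is never assigned, resolves to the global. A small sketch of that behavior (function names are illustrative):

```python
num = 0

def broken():
    # `num` is local throughout this function because of the
    # assignment below -- so this read raises UnboundLocalError.
    print(num)
    num = 10

def fine():
    # No assignment anywhere in this function, so `num` is the global.
    print(num)

try:
    broken()
except UnboundLocalError as exc:
    print("caught:", exc)

fine()   # prints 0
```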
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | The browser reads your CSS from top to bottom and applies it in the same order.
So first you have this rule:
```
.blue h1 {
    color: blue;
}
```
So the browser parses this and colors your `h1` blue, but then it goes ahead and hits the second selector:
```
.red h1 {
    color: red;
}
```
Now, since your `h1` is nested inside `.blue`, which is in turn nested inside `.red`, and the specificity of both selectors is the same, the browser goes ahead and applies `red` to the inner `h1`.
So what's the solution?
If you can, just swap the order of your classes... No? You cannot? Then use a more specific selector:
```
div.blue h1 {
    color: blue;
}
```
[**Demo**](http://jsfiddle.net/N7FcB/3/)
The above selector is more specific than `.red h1` because it has one class and two elements, so the browser will pick the first rule, overriding your `.red h1` selector.
You can make your selectors as specific as you need; you could write the above as `div.red div.blue h1` or `.red .blue h1`. But just remember: the more specific your selectors, the more you hit the performance bar, and you will end up writing more and more specific selectors in order to override others, so choose wisely. | Actually, how many `h1` elements do you need inside a div? I say not many. So why don't we give the class to the `h1` directly?
```
h1.red { color: red; }
h1.green { color: green; }
h1.blue { color: blue; }
```
---
**Update**
How about having a box with depth levels? See the fiddle <http://jsfiddle.net/AnL7R/>.
By chaining classes you can override the outer one, e.g.:
```
.blue,
.blue.first,
.blue.second
/* more depth classes */
{
    color: blue;
}
.red,
.red.first,
.red.second
/* more depth classes */
{
    color: red;
}
```
Hope it helps |