14,610,459
Go's time package claims to give nanosecond precision. <http://golang.org/src/pkg/time/time.go> I'm wondering how it is implemented and whether I can trust it. My doubts come from Python, which clearly documents its difficulties and limitations with time: > > *From <http://docs.python.org/2/library/time.html>* > > > The precision of the various real-time functions may be less than > suggested by the units in which their value or argument is expressed. > E.g. on most Unix systems, the clock “ticks” only 50 or 100 times a > second. > > > On the other hand, the precision of time() and sleep() is better than > their Unix equivalents: times are expressed as floating point numbers, > time() returns the most accurate time available (using Unix > gettimeofday() where available), and sleep() will accept a time with a > nonzero fraction (Unix select() is used to implement this, where > available). > > > Since the operating systems give Python such a hard time, how does Go achieve its nanosecond precision?
2013/01/30
[ "https://Stackoverflow.com/questions/14610459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/520159/" ]
Well, as for the implementation, `time.Now()` falls through to a function implemented in the runtime. You can review the [C time implementation](http://code.google.com/p/go/source/browse/src/pkg/runtime/time.goc) and the implementation of [`time·now` in assembly](http://code.google.com/p/go/source/browse/src/pkg/runtime/sys_linux_amd64.s?name=go1.1.2#103) (Linux amd64 in this case). This then uses [`clock_gettime`](http://linux.die.net/man/3/clock_gettime), which provides nanosecond resolution. On Windows, this is realized by calling `GetSystemTimeAsFileTime`, which [also generates nanoseconds](https://stackoverflow.com/a/11743614/1643939) (not as high a resolution, but still expressed in nanoseconds). So yes, the resolution depends on the operating system and you can't expect it to be accurate on every OS, but the developers are trying to make it as good as it can be. For example, in go1.0.3, `time·now` for FreeBSD [used `gettimeofday`](http://code.google.com/p/go/source/browse/src/pkg/runtime/sys_freebsd_386.s?name=go1.0.3#110) instead of `clock_gettime`, which only offers microsecond precision. You can see this by looking at the value stored in `AX`, as it is the [syscall id](http://www.acsu.buffalo.edu/~charngda/freebsd_syscalls.html). If you take a look at the referenced assembly, you can see that the microsecond value is multiplied by 1000 to get the nanoseconds. However, this is fixed [now](http://code.google.com/p/go/source/browse/src/pkg/runtime/sys_freebsd_386.s?name=go1.1#134). If you want to be sure, check the corresponding implementations in the runtime source code and consult the manuals of your operating system.
If you are interested in querying the operating system to get the precision of the values returned by `clock_gettime`, you can make a syscall to [`clock_getres`](https://linux.die.net/man/2/clock_getres) using the syscall package appropriate to your operating system. For example, on Unix platforms you can do: ```golang package main import ( "fmt" "golang.org/x/sys/unix" ) func main() { res := unix.Timespec{} unix.ClockGetres(unix.CLOCK_MONOTONIC, &res) fmt.Printf("Monotonic clock resolution is %d nanoseconds\n", res.Nsec) } ``` Values from the monotonic clock are used by the time package for comparisons and operations involving times; the precision of the wall-clock time is similarly obtained by changing `unix.CLOCK_MONOTONIC` in the above example to `unix.CLOCK_REALTIME`.
14,610,459
Go's time package claims to give nanosecond precision. <http://golang.org/src/pkg/time/time.go> I'm wondering how it is implemented and whether I can trust it. My doubts come from Python, which clearly documents its difficulties and limitations with time: > > *From <http://docs.python.org/2/library/time.html>* > > > The precision of the various real-time functions may be less than > suggested by the units in which their value or argument is expressed. > E.g. on most Unix systems, the clock “ticks” only 50 or 100 times a > second. > > > On the other hand, the precision of time() and sleep() is better than > their Unix equivalents: times are expressed as floating point numbers, > time() returns the most accurate time available (using Unix > gettimeofday() where available), and sleep() will accept a time with a > nonzero fraction (Unix select() is used to implement this, where > available). > > > Since the operating systems give Python such a hard time, how does Go achieve its nanosecond precision?
2013/01/30
[ "https://Stackoverflow.com/questions/14610459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/520159/" ]
One of the problems with Python's [time.time](http://docs.python.org/2/library/time.html#time.time) function is that it returns a [float](http://docs.python.org/2/library/functions.html#float). A float is an [IEEE 754 double-precision number](http://en.wikipedia.org/wiki/Double-precision_floating-point_format), which has 53 bits of precision. Since it is now more than 2\*\*30 seconds since 1970-01-01 (the epoch), you need 61 (31 + 30) bits of precision to store time accurate to the nanosecond since 1970-01-01. Unfortunately that is 7 or 8 bits short of what you can store in a Python float, meaning that Python floats will always be less precise than Go time. To quantify that, the demonstration below shows that Python time is at most accurate to 100 ns, purely due to the limitations of the `float` type. ``` >>> t = time() >>> t 1359587524.591781 >>> t == t + 1E-6 False >>> t == t + 1E-7 True ``` So Go, starting with an `int64` and counting in ns, doesn't have these limitations and is limited only by the precision of the underlying OS, as explained very well by nemo.
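To put numbers on the bit-count argument above, a quick check (assuming a standard IEEE 754 double, which is what CPython's `float` is on common platforms; the 1.7e9-second timestamp is a present-day approximation):

```python
import math

# A Unix timestamp today is on the order of 1.7e9 seconds (> 2**30), so
# representing it to the nanosecond needs about log2(1.7e9 * 1e9) bits:
bits_needed = math.ceil(math.log2(1_700_000_000 * 1_000_000_000))
print(bits_needed)  # 61 -- more than the 53 bits of a double's mantissa

# Consequence: adding a nanosecond to a float timestamp is a no-op,
t = 1359587524.591781
print(t + 1e-9 == t)  # True -- the nanosecond is lost
print(t + 1e-6 == t)  # False -- microseconds still (barely) survive

# while a signed 64-bit nanosecond count (Go's representation) has room to spare:
print(2**63 > 1_700_000_000 * 1_000_000_000)  # True
```

The spacing between representable doubles near 1.36e9 is about 2.4e-7 seconds, which is where the "at most 100 ns" figure in the answer comes from.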
If you are interested in querying the operating system to get the precision of the values returned by `clock_gettime`, you can make a syscall to [`clock_getres`](https://linux.die.net/man/2/clock_getres) using the syscall package appropriate to your operating system. For example, on Unix platforms you can do: ```golang package main import ( "fmt" "golang.org/x/sys/unix" ) func main() { res := unix.Timespec{} unix.ClockGetres(unix.CLOCK_MONOTONIC, &res) fmt.Printf("Monotonic clock resolution is %d nanoseconds\n", res.Nsec) } ``` Values from the monotonic clock are used by the time package for comparisons and operations involving times; the precision of the wall-clock time is similarly obtained by changing `unix.CLOCK_MONOTONIC` in the above example to `unix.CLOCK_REALTIME`.
12,172,934
(not to be confused with itertools.chain) I was reading the following: <http://en.wikipedia.org/wiki/Method_chaining> My question is: **what is the best way to implement method chaining in python?** Here is my attempt: ``` class chain(): def __init__(self, my_object): self.o = my_object def __getattr__(self, attr): x = getattr(self.o, attr) if hasattr(x, '__call__'): method = x return lambda *args: self if method(*args) is None else method(*args) else: prop = x return prop list_ = chain([1, 2, 3, 0]) print list_.extend([9, 5]).sort().reverse() """ C:\Python27\python.exe C:/Users/Robert/PycharmProjects/contests/sof.py [9, 5, 3, 2, 1, 0] """ ``` One problem is if calling `method(*args)` modifies `self.o` but doesn't return `None`. (then should I return `self` or return what `method(*args)` returns). Does anyone have better ways of implementing chaining? There are probably many ways to do it. Should I just assume a method always returns `None` so I may always return `self.o` ?
2012/08/29
[ "https://Stackoverflow.com/questions/12172934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/632088/" ]
There is a very handy [`Pipe`](https://github.com/JulienPalard/Pipe) library which may be the answer to your question. For example: ``` seq = fib() | take_while(lambda x: x < 1000000) \ | where(lambda x: x % 2) \ | select(lambda x: x * x) \ | sum() ```
There isn't going to be any general way of allowing any method of any object to be chained, since you can't know what sort of value that method returns and why without knowing how that particular method works. Methods might return `None` for any reason; it doesn't always mean the method has modified the object. Likewise, methods that do return a value still might not return a value that can be chained. There's no way to chain a method like `list.index`: `fakeList.index(1).sort()` can't have much hope of working, because the whole point of `index` is it returns a number, and that number means something, and can't be ignored just to chain on the original object. If you're just fiddling around with Python's builtin types to chain certain specific methods (like sort and remove), you're better off just wrapping those particular methods explicitly (by overriding them in your wrapper class), instead of trying to do a general mechanism with `__getattr__`.
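The last paragraph's suggestion can be sketched as a small wrapper that overrides just the in-place list methods, rather than intercepting everything with `__getattr__`. The class name `ChainList` is invented for illustration, and the sketch is written for Python 3:

```python
class ChainList(list):
    """A list whose in-place mutators return self so they can be chained.

    Only the methods wrapped here are chainable; value-returning methods
    like index() keep their normal, non-chainable behaviour.
    """

    def sort(self, *args, **kwargs):
        super().sort(*args, **kwargs)
        return self

    def reverse(self):
        super().reverse()
        return self

    def extend(self, iterable):
        super().extend(iterable)
        return self


result = ChainList([1, 2, 3, 0]).extend([9, 5]).sort().reverse()
print(result)  # [9, 5, 3, 2, 1, 0]
```

Because each wrapped method is explicit, there is no guessing about whether a `None` return means "mutated in place": you only wrap the methods you know mutate and return nothing.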
12,172,934
(not to be confused with itertools.chain) I was reading the following: <http://en.wikipedia.org/wiki/Method_chaining> My question is: **what is the best way to implement method chaining in python?** Here is my attempt: ``` class chain(): def __init__(self, my_object): self.o = my_object def __getattr__(self, attr): x = getattr(self.o, attr) if hasattr(x, '__call__'): method = x return lambda *args: self if method(*args) is None else method(*args) else: prop = x return prop list_ = chain([1, 2, 3, 0]) print list_.extend([9, 5]).sort().reverse() """ C:\Python27\python.exe C:/Users/Robert/PycharmProjects/contests/sof.py [9, 5, 3, 2, 1, 0] """ ``` One problem is if calling `method(*args)` modifies `self.o` but doesn't return `None`. (then should I return `self` or return what `method(*args)` returns). Does anyone have better ways of implementing chaining? There are probably many ways to do it. Should I just assume a method always returns `None` so I may always return `self.o` ?
2012/08/29
[ "https://Stackoverflow.com/questions/12172934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/632088/" ]
It's possible if you use only [pure functions](http://en.wikipedia.org/wiki/Pure_function) so that methods don't modify `self.data` directly, but instead return the modified version. You also have to return `Chainable` instances. Here's an example using [collection pipelining](http://martinfowler.com/articles/collection-pipeline/) with lists: ``` import itertools try: import builtins except ImportError: import __builtin__ as builtins class Chainable(object): def __init__(self, data, method=None): self.data = data self.method = method def __getattr__(self, name): try: method = getattr(self.data, name) except AttributeError: try: method = getattr(builtins, name) except AttributeError: method = getattr(itertools, name) return Chainable(self.data, method) def __call__(self, *args, **kwargs): try: return Chainable(list(self.method(self.data, *args, **kwargs))) except TypeError: return Chainable(list(self.method(args[0], self.data, **kwargs))) ``` Use it like this: ``` chainable_list = Chainable([3, 1, 2, 0]) (chainable_list .chain([11,8,6,7,9,4,5]) .sorted() .reversed() .ifilter(lambda x: x%2) .islice(3) .data) >> [11, 9, 7] ``` Note that `.chain` refers to `itertools.chain` and not the OP's `chain`.
There isn't going to be any general way of allowing any method of any object to be chained, since you can't know what sort of value that method returns and why without knowing how that particular method works. Methods might return `None` for any reason; it doesn't always mean the method has modified the object. Likewise, methods that do return a value still might not return a value that can be chained. There's no way to chain a method like `list.index`: `fakeList.index(1).sort()` can't have much hope of working, because the whole point of `index` is it returns a number, and that number means something, and can't be ignored just to chain on the original object. If you're just fiddling around with Python's builtin types to chain certain specific methods (like sort and remove), you're better off just wrapping those particular methods explicitly (by overriding them in your wrapper class), instead of trying to do a general mechanism with `__getattr__`.
12,172,934
(not to be confused with itertools.chain) I was reading the following: <http://en.wikipedia.org/wiki/Method_chaining> My question is: **what is the best way to implement method chaining in python?** Here is my attempt: ``` class chain(): def __init__(self, my_object): self.o = my_object def __getattr__(self, attr): x = getattr(self.o, attr) if hasattr(x, '__call__'): method = x return lambda *args: self if method(*args) is None else method(*args) else: prop = x return prop list_ = chain([1, 2, 3, 0]) print list_.extend([9, 5]).sort().reverse() """ C:\Python27\python.exe C:/Users/Robert/PycharmProjects/contests/sof.py [9, 5, 3, 2, 1, 0] """ ``` One problem is if calling `method(*args)` modifies `self.o` but doesn't return `None`. (then should I return `self` or return what `method(*args)` returns). Does anyone have better ways of implementing chaining? There are probably many ways to do it. Should I just assume a method always returns `None` so I may always return `self.o` ?
2012/08/29
[ "https://Stackoverflow.com/questions/12172934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/632088/" ]
**Caveat**: This only works on class methods that do not intend to return any data. I was looking for something similar for chaining class methods and found no good answer, so here is what I did and thought was a very simple way of chaining: simply return the `self` object. So here is my setup: ``` class Car: def __init__(self, name=None): self.name = name self.mode = 'init' def set_name(self, name): self.name = name return self def drive(self): self.mode = 'drive' return self ``` And now I can name the car and put it in drive state by calling: ``` my_car = Car() my_car.set_name('Porsche').drive() ``` Hope this helps!
There isn't going to be any general way of allowing any method of any object to be chained, since you can't know what sort of value that method returns and why without knowing how that particular method works. Methods might return `None` for any reason; it doesn't always mean the method has modified the object. Likewise, methods that do return a value still might not return a value that can be chained. There's no way to chain a method like `list.index`: `fakeList.index(1).sort()` can't have much hope of working, because the whole point of `index` is it returns a number, and that number means something, and can't be ignored just to chain on the original object. If you're just fiddling around with Python's builtin types to chain certain specific methods (like sort and remove), you're better off just wrapping those particular methods explicitly (by overriding them in your wrapper class), instead of trying to do a general mechanism with `__getattr__`.
12,172,934
(not to be confused with itertools.chain) I was reading the following: <http://en.wikipedia.org/wiki/Method_chaining> My question is: **what is the best way to implement method chaining in python?** Here is my attempt: ``` class chain(): def __init__(self, my_object): self.o = my_object def __getattr__(self, attr): x = getattr(self.o, attr) if hasattr(x, '__call__'): method = x return lambda *args: self if method(*args) is None else method(*args) else: prop = x return prop list_ = chain([1, 2, 3, 0]) print list_.extend([9, 5]).sort().reverse() """ C:\Python27\python.exe C:/Users/Robert/PycharmProjects/contests/sof.py [9, 5, 3, 2, 1, 0] """ ``` One problem is if calling `method(*args)` modifies `self.o` but doesn't return `None`. (then should I return `self` or return what `method(*args)` returns). Does anyone have better ways of implementing chaining? There are probably many ways to do it. Should I just assume a method always returns `None` so I may always return `self.o` ?
2012/08/29
[ "https://Stackoverflow.com/questions/12172934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/632088/" ]
There is a very handy [`Pipe`](https://github.com/JulienPalard/Pipe) library which may be the answer to your question. For example: ``` seq = fib() | take_while(lambda x: x < 1000000) \ | where(lambda x: x % 2) \ | select(lambda x: x * x) \ | sum() ```
It's possible if you use only [pure functions](http://en.wikipedia.org/wiki/Pure_function) so that methods don't modify `self.data` directly, but instead return the modified version. You also have to return `Chainable` instances. Here's an example using [collection pipelining](http://martinfowler.com/articles/collection-pipeline/) with lists: ``` import itertools try: import builtins except ImportError: import __builtin__ as builtins class Chainable(object): def __init__(self, data, method=None): self.data = data self.method = method def __getattr__(self, name): try: method = getattr(self.data, name) except AttributeError: try: method = getattr(builtins, name) except AttributeError: method = getattr(itertools, name) return Chainable(self.data, method) def __call__(self, *args, **kwargs): try: return Chainable(list(self.method(self.data, *args, **kwargs))) except TypeError: return Chainable(list(self.method(args[0], self.data, **kwargs))) ``` Use it like this: ``` chainable_list = Chainable([3, 1, 2, 0]) (chainable_list .chain([11,8,6,7,9,4,5]) .sorted() .reversed() .ifilter(lambda x: x%2) .islice(3) .data) >> [11, 9, 7] ``` Note that `.chain` refers to `itertools.chain` and not the OP's `chain`.
12,172,934
(not to be confused with itertools.chain) I was reading the following: <http://en.wikipedia.org/wiki/Method_chaining> My question is: **what is the best way to implement method chaining in python?** Here is my attempt: ``` class chain(): def __init__(self, my_object): self.o = my_object def __getattr__(self, attr): x = getattr(self.o, attr) if hasattr(x, '__call__'): method = x return lambda *args: self if method(*args) is None else method(*args) else: prop = x return prop list_ = chain([1, 2, 3, 0]) print list_.extend([9, 5]).sort().reverse() """ C:\Python27\python.exe C:/Users/Robert/PycharmProjects/contests/sof.py [9, 5, 3, 2, 1, 0] """ ``` One problem is if calling `method(*args)` modifies `self.o` but doesn't return `None`. (then should I return `self` or return what `method(*args)` returns). Does anyone have better ways of implementing chaining? There are probably many ways to do it. Should I just assume a method always returns `None` so I may always return `self.o` ?
2012/08/29
[ "https://Stackoverflow.com/questions/12172934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/632088/" ]
There is a very handy [`Pipe`](https://github.com/JulienPalard/Pipe) library which may be the answer to your question. For example: ``` seq = fib() | take_while(lambda x: x < 1000000) \ | where(lambda x: x % 2) \ | select(lambda x: x * x) \ | sum() ```
What about ``` def apply(data, *fns): return data.__class__(map(fns[-1], apply(data, *fns[:-1]))) if fns else data >>> print( ... apply( ... [1,2,3], ... str, ... lambda x: {'1': 'one', '2': 'two', '3': 'three'}[x], ... str.upper)) ['ONE', 'TWO', 'THREE'] >>> ``` ? .. even keeps the type :)
12,172,934
(not to be confused with itertools.chain) I was reading the following: <http://en.wikipedia.org/wiki/Method_chaining> My question is: **what is the best way to implement method chaining in python?** Here is my attempt: ``` class chain(): def __init__(self, my_object): self.o = my_object def __getattr__(self, attr): x = getattr(self.o, attr) if hasattr(x, '__call__'): method = x return lambda *args: self if method(*args) is None else method(*args) else: prop = x return prop list_ = chain([1, 2, 3, 0]) print list_.extend([9, 5]).sort().reverse() """ C:\Python27\python.exe C:/Users/Robert/PycharmProjects/contests/sof.py [9, 5, 3, 2, 1, 0] """ ``` One problem is if calling `method(*args)` modifies `self.o` but doesn't return `None`. (then should I return `self` or return what `method(*args)` returns). Does anyone have better ways of implementing chaining? There are probably many ways to do it. Should I just assume a method always returns `None` so I may always return `self.o` ?
2012/08/29
[ "https://Stackoverflow.com/questions/12172934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/632088/" ]
**Caveat**: This only works on class methods that do not intend to return any data. I was looking for something similar for chaining class methods and found no good answer, so here is what I did and thought was a very simple way of chaining: simply return the `self` object. So here is my setup: ``` class Car: def __init__(self, name=None): self.name = name self.mode = 'init' def set_name(self, name): self.name = name return self def drive(self): self.mode = 'drive' return self ``` And now I can name the car and put it in drive state by calling: ``` my_car = Car() my_car.set_name('Porsche').drive() ``` Hope this helps!
It's possible if you use only [pure functions](http://en.wikipedia.org/wiki/Pure_function) so that methods don't modify `self.data` directly, but instead return the modified version. You also have to return `Chainable` instances. Here's an example using [collection pipelining](http://martinfowler.com/articles/collection-pipeline/) with lists: ``` import itertools try: import builtins except ImportError: import __builtin__ as builtins class Chainable(object): def __init__(self, data, method=None): self.data = data self.method = method def __getattr__(self, name): try: method = getattr(self.data, name) except AttributeError: try: method = getattr(builtins, name) except AttributeError: method = getattr(itertools, name) return Chainable(self.data, method) def __call__(self, *args, **kwargs): try: return Chainable(list(self.method(self.data, *args, **kwargs))) except TypeError: return Chainable(list(self.method(args[0], self.data, **kwargs))) ``` Use it like this: ``` chainable_list = Chainable([3, 1, 2, 0]) (chainable_list .chain([11,8,6,7,9,4,5]) .sorted() .reversed() .ifilter(lambda x: x%2) .islice(3) .data) >> [11, 9, 7] ``` Note that `.chain` refers to `itertools.chain` and not the OP's `chain`.
12,172,934
(not to be confused with itertools.chain) I was reading the following: <http://en.wikipedia.org/wiki/Method_chaining> My question is: **what is the best way to implement method chaining in python?** Here is my attempt: ``` class chain(): def __init__(self, my_object): self.o = my_object def __getattr__(self, attr): x = getattr(self.o, attr) if hasattr(x, '__call__'): method = x return lambda *args: self if method(*args) is None else method(*args) else: prop = x return prop list_ = chain([1, 2, 3, 0]) print list_.extend([9, 5]).sort().reverse() """ C:\Python27\python.exe C:/Users/Robert/PycharmProjects/contests/sof.py [9, 5, 3, 2, 1, 0] """ ``` One problem is if calling `method(*args)` modifies `self.o` but doesn't return `None`. (then should I return `self` or return what `method(*args)` returns). Does anyone have better ways of implementing chaining? There are probably many ways to do it. Should I just assume a method always returns `None` so I may always return `self.o` ?
2012/08/29
[ "https://Stackoverflow.com/questions/12172934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/632088/" ]
It's possible if you use only [pure functions](http://en.wikipedia.org/wiki/Pure_function) so that methods don't modify `self.data` directly, but instead return the modified version. You also have to return `Chainable` instances. Here's an example using [collection pipelining](http://martinfowler.com/articles/collection-pipeline/) with lists: ``` import itertools try: import builtins except ImportError: import __builtin__ as builtins class Chainable(object): def __init__(self, data, method=None): self.data = data self.method = method def __getattr__(self, name): try: method = getattr(self.data, name) except AttributeError: try: method = getattr(builtins, name) except AttributeError: method = getattr(itertools, name) return Chainable(self.data, method) def __call__(self, *args, **kwargs): try: return Chainable(list(self.method(self.data, *args, **kwargs))) except TypeError: return Chainable(list(self.method(args[0], self.data, **kwargs))) ``` Use it like this: ``` chainable_list = Chainable([3, 1, 2, 0]) (chainable_list .chain([11,8,6,7,9,4,5]) .sorted() .reversed() .ifilter(lambda x: x%2) .islice(3) .data) >> [11, 9, 7] ``` Note that `.chain` refers to `itertools.chain` and not the OP's `chain`.
What about ``` def apply(data, *fns): return data.__class__(map(fns[-1], apply(data, *fns[:-1]))) if fns else data >>> print( ... apply( ... [1,2,3], ... str, ... lambda x: {'1': 'one', '2': 'two', '3': 'three'}[x], ... str.upper)) ['ONE', 'TWO', 'THREE'] >>> ``` ? .. even keeps the type :)
12,172,934
(not to be confused with itertools.chain) I was reading the following: <http://en.wikipedia.org/wiki/Method_chaining> My question is: **what is the best way to implement method chaining in python?** Here is my attempt: ``` class chain(): def __init__(self, my_object): self.o = my_object def __getattr__(self, attr): x = getattr(self.o, attr) if hasattr(x, '__call__'): method = x return lambda *args: self if method(*args) is None else method(*args) else: prop = x return prop list_ = chain([1, 2, 3, 0]) print list_.extend([9, 5]).sort().reverse() """ C:\Python27\python.exe C:/Users/Robert/PycharmProjects/contests/sof.py [9, 5, 3, 2, 1, 0] """ ``` One problem is if calling `method(*args)` modifies `self.o` but doesn't return `None`. (then should I return `self` or return what `method(*args)` returns). Does anyone have better ways of implementing chaining? There are probably many ways to do it. Should I just assume a method always returns `None` so I may always return `self.o` ?
2012/08/29
[ "https://Stackoverflow.com/questions/12172934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/632088/" ]
**Caveat**: This only works on class methods that do not intend to return any data. I was looking for something similar for chaining class methods and found no good answer, so here is what I did and thought was a very simple way of chaining: simply return the `self` object. So here is my setup: ``` class Car: def __init__(self, name=None): self.name = name self.mode = 'init' def set_name(self, name): self.name = name return self def drive(self): self.mode = 'drive' return self ``` And now I can name the car and put it in drive state by calling: ``` my_car = Car() my_car.set_name('Porsche').drive() ``` Hope this helps!
What about ``` def apply(data, *fns): return data.__class__(map(fns[-1], apply(data, *fns[:-1]))) if fns else data >>> print( ... apply( ... [1,2,3], ... str, ... lambda x: {'1': 'one', '2': 'two', '3': 'three'}[x], ... str.upper)) ['ONE', 'TWO', 'THREE'] >>> ``` ? .. even keeps the type :)
61,748,604
I have two pandas series with DateTimeIndex. I'd like to join these two series such that the resulting DataFrame uses the index of the first series and "matches" the values from the second series accordingly (using a linear interpolation in the second series). First Series: ``` 2020-03-01 1 2020-03-03 2 2020-03-05 3 2020-03-07 4 ``` Second Series: ``` 2020-03-01 20 2020-03-02 22 2020-03-05 25 2020-03-06 35 2020-03-07 36 2020-03-08 45 ``` Desired Output: ``` 2020-03-01 1 20 2020-03-03 2 23 2020-03-05 3 25 2020-03-07 4 36 ``` --- Code for generating the input data: ```python import pandas as pd import datetime as dt s1 = pd.Series([1, 2, 3, 4]) s1.index = pd.to_datetime([dt.date(2020, 3, 1), dt.date(2020, 3, 3), dt.date(2020, 3, 5), dt.date(2020, 3, 7)]) s2 = pd.Series([20, 22, 25, 35, 36, 45]) s2.index = pd.to_datetime([dt.date(2020, 3, 1), dt.date(2020, 3, 2), dt.date(2020, 3, 5), dt.date(2020, 3, 6), dt.date(2020, 3, 7), dt.date(2020, 3, 8)]) ```
2020/05/12
[ "https://Stackoverflow.com/questions/61748604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5554921/" ]
Use [`concat`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html) with an inner join: ``` df = pd.concat([s1, s2], axis=1, keys=('s1','s2'), join='inner') print (df) s1 s2 2020-03-01 1 20 2020-03-05 3 25 2020-03-07 4 36 ``` A solution that interpolates the `s2` Series and then drops the rows with missing values: ``` df = (pd.concat([s1, s2], axis=1, keys=('s1','s2')) .assign(s2 = lambda x: x.s2.interpolate('index')) .dropna()) print (df) s1 s2 2020-03-01 1.0 20.0 2020-03-03 2.0 23.0 2020-03-05 3.0 25.0 2020-03-07 4.0 36.0 ```
### Construct combined dataframe ``` # there are many ways to construct a dataframe from series, this uses the constructor: df = pd.DataFrame({'s1': s1, 's2': s2}) s1 s2 2020-03-01 1.0 20.0 2020-03-02 NaN 22.0 2020-03-03 2.0 NaN 2020-03-05 3.0 25.0 2020-03-06 NaN 35.0 2020-03-07 4.0 36.0 2020-03-08 NaN 45.0 ``` ### Interpolate ``` df = df.interpolate() s1 s2 2020-03-01 1.0 20.0 2020-03-02 1.5 22.0 2020-03-03 2.0 23.5 2020-03-05 3.0 25.0 2020-03-06 3.5 35.0 2020-03-07 4.0 36.0 2020-03-08 4.0 45.0 ``` ### Restrict rows ``` # Only keep the rows that were in s1's index. # Several ways to do this, but this example uses .filter df = df.filter(s1.index, axis=0) s1 s2 2020-03-01 1.0 20.0 2020-03-03 2.0 23.5 2020-03-05 3.0 25.0 2020-03-07 4.0 36.0 ``` ### Convert numbers back to int64 ``` df = df.astype('int64') s1 s2 2020-03-01 1 20 2020-03-03 2 23 2020-03-05 3 25 2020-03-07 4 36 ``` One-liner: ``` df = pd.DataFrame({'s1': s1, 's2': s2}).interpolate().filter(s1.index, axis=0).astype('int64') ``` Documentation links: * [interpolate](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html) * [filter](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.filter.html)
73,386,405
``` infile = open('results1', 'r') lines = infile.readlines() import re for line in lines: if re.match("track: 1,", line): print(line) ``` question solved by using python regex below
2022/08/17
[ "https://Stackoverflow.com/questions/73386405", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19783767/" ]
I suggest you use the regular expressions library (`re`), which gives you everything you need to extract the data from text files. Here is a simple example that solves your current problem: ``` import re # Customize path as the file's address on your system text_file = open('path/sample.txt','r') # Read the file line by line using .readlines(), so that each line will be a continuous long string in the "file_lines" list file_lines = text_file.readlines() ``` Depending on how your target is located in each line, the detailed process from here on could differ a little, but the overall approach is the same in every scenario. I have assumed your only condition is that the line starts with "Id of the track" and that we are looking to extract all the values between parentheses in one place. ``` # A list to append extracted data list_extracted_data = [] for line in file_lines: # Flag is True if the line starts (special character for start: \A) with 'Id of the track' flag = re.search(r'\AId of the track',line) if flag: searched_phrase = re.search(r'\B\(.*',line) start_index, end_index = searched_phrase.start(), searched_phrase.end() # Select the indices from each line as it contains our extracted data list_extracted_data.append(line[start_index:end_index]) print(list_extracted_data) ``` > > ['(0.8835006455995176, -0.07697617837544447)', '(0.8835006455995176, -0.07697617837544447)', '(0.8835006455995176, -0.07697617837544447)', '(0.8835006455995176, -0.07697617837544447)', '(0.8755597308669424, -0.23473345870373538)', '(0.8835006455995176, -0.07697617837544447)', '(0.8755597308669424, -0.23473345870373538)', '(6.4057079727806485, -0.6819141582566414)', '(1.1815888836384334, > -0.35535274681454954)'] > > > You can do all sorts of things after you've selected the data from each line, including converting it to a numerical type or separating the two numbers inside the parentheses.
I assume your intention was to put each of the numbers inside the parentheses into a different column of a DataFrame: ``` import pandas as pd final_df = pd.DataFrame(columns=['id','X','Y']) for K, pair in enumerate(list_extracted_data): # split by comma, select the left part, exclude the '(' at the start this_X = float(pair.split(',')[0][1:]) # split by comma, select the right part, exclude the ')' at the end this_Y = float(pair.split(',')[1][:-1]) final_df = final_df.append({'id':K,'X':this_X,'Y':this_Y},ignore_index=True) ``` [![enter image description here](https://i.stack.imgur.com/AiBt8.png)](https://i.stack.imgur.com/AiBt8.png)
Given that all your target lines follow the exact same pattern, a much simpler way to extract the value between parentheses would be: ``` import re from ast import literal_eval as make_tuple infile = open('results1', 'r') lines = infile.readlines() for line in lines: if re.match("Id of the track: 1,", line): values_slice = line.split(": ")[-1] values = make_tuple(values_slice) # stored as tuple => (0.8835006455995176, -0.07697617837544447) ``` Now you can use/manipulate/store the values whichever way you want.
5,738,339
I have a specific use case. I am preparing for the GRE. Every time a new word comes up, I look it up at www.mnemonicdictionary.com for its meanings and mnemonics. I want to write a script, preferably in Python (or, if someone could give me a pointer to an already existing tool, that works too, since I don't know Python well but I am learning now), which takes a list of words from a text file, looks each one up at this site, fetches just the relevant portion (meaning and mnemonics), and stores it in another text file for offline use. Is it possible to do so? I tried to look at the source of these pages too, but along with the HTML tags they also have some ajax functions. Could someone show me a complete way to go about this? Example: for the word impecunious, the related HTML source is like this ``` <ul class='wordnet'><li><p>(adj.)&nbsp;not having enough money to pay for necessities</p><u>synonyms</u> : <a href='http://www.mnemonicdictionary.com/word/hard up' onclick="ajaxSearch('hard up','click'); return false;">hard up</a> , <a href='http://www.mnemonicdictionary.com/word/in straitened circumstances' onclick="ajaxSearch('in straitened circumstances','click'); return false;">in straitened circumstances</a> , <a href='http://www.mnemonicdictionary.com/word/penniless' onclick="ajaxSearch('penniless','click'); return false;">penniless</a> , <a href='http://www.mnemonicdictionary.com/word/penurious' onclick="ajaxSearch('penurious','click'); return false;">penurious</a> , <a href='http://www.mnemonicdictionary.com/word/pinched' onclick="ajaxSearch('pinched','click'); return false;">pinched</a><p></p></li></ul> ``` but the web page renders like this: **•(adj.) not having enough money to pay for necessities synonyms : hard up , in straitened circumstances , penniless , penurious , pinched**
2011/04/21
[ "https://Stackoverflow.com/questions/5738339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169210/" ]
If you have Bash (version 4+) and `wget`, here is an example ``` #!/bin/bash template="http://www.mnemonicdictionary.com/include/ajaxSearch.php?word=%s&event=search" while read -r word do url=$(printf "$template" "$word") data=$(wget -O- -q "$url") data=${data#*&nbsp;} echo "$word: ${data%%<*}" done < file ``` Sample output ``` $> more file synergy tranquil jester $> bash dict.sh synergy: the working together of two things (muscles or drugs for example) to produce an effect greater than the sum of their individual effects tranquil: (of a body of water) free from disturbance by heavy waves jester: a professional clown employed to entertain a king or nobleman in the Middle Ages ``` Update: Include mnemonic ``` template="http://www.mnemonicdictionary.com/include/ajaxSearch.php?word=%s&event=search" while read -r word do url=$(printf "$template" "$word") data=$(wget -O- -q "$url") data=${data#*&nbsp;} m=${data#*class=\'mnemonic\'} m=${m%%</p>*} m="${m##*&nbsp;}" echo "$word: ${data%%<*}, mnemonic: $m" done < file ```
Use [curl](http://curl.haxx.se/) and sed from a Bash shell (either Linux, Mac, or Windows with Cygwin). If I get a second I will write a quick script ... gotta give the baby a bath now though.
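Following up on that idea, here is a minimal sketch of what the promised `curl` + `sed` version could look like. The `ajaxSearch.php` endpoint is the one used in the other answer; the parsing step is demonstrated on a saved sample response so the sketch runs without network access, and the live fetch is shown as a comment.

```shell
#!/bin/bash
# Hypothetical sketch: fetch a definition with curl, strip markup with sed.
word="tranquil"
# Live fetch (same endpoint as the wget answer) would be:
#   html=$(curl -s "http://www.mnemonicdictionary.com/include/ajaxSearch.php?word=$word&event=search")
# Saved sample response, so the parsing can be demonstrated offline:
html="<ul class='wordnet'><li><p>(adj.)&nbsp;free from disturbance</p></li></ul>"
# Drop everything up to and including '&nbsp;', then cut at the next tag.
meaning=$(printf '%s' "$html" | sed -e 's/.*&nbsp;//' -e 's/<.*//')
echo "$word: $meaning"
```

Wrapping the last three lines in a `while read -r word` loop over a word-list file gives the same batch behavior as the `wget` script.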
49,766,071
I'm new to python, and I know there must be a better way to do this, especially with numpy, and without appending to arrays. Is there a more concise way to do something like this in python? ```py def create_uniform_grid(low, high, bins=(10, 10)): """Define a uniformly-spaced grid that can be used to discretize a space. Parameters ---------- low : array_like Lower bounds for each dimension of the continuous space. high : array_like Upper bounds for each dimension of the continuous space. bins : tuple Number of bins along each corresponding dimension. Returns ------- grid : list of array_like A list of arrays containing split points for each dimension. """ range1 = high[0] - low[0] range2 = high[1] - low[1] steps1 = range1 / bins[0] steps2 = range2 / bins[1] arr1 = [] arr2 = [] for i in range(0, bins[0] - 1): if(i == 0): arr1.append(low[0] + steps1) arr2.append(low[1] + steps2) else: arr1.append(round((arr1[i - 1] + steps1), 1)) arr2.append(arr2[i - 1] + steps2) return [arr1, arr2] low = [-1.0, -5.0] high = [1.0, 5.0] create_uniform_grid(low, high) # [[-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8], # [-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0]] ```
2018/04/11
[ "https://Stackoverflow.com/questions/49766071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1097028/" ]
`np.ogrid` is similar to your function. Differences: 1) It will keep the endpoints; 2) It will create a column and a row, so its output is 'broadcast ready': ``` >>> np.ogrid[-1:1:11j, -5:5:11j] [array([[-1. ], [-0.8], [-0.6], [-0.4], [-0.2], [ 0. ], [ 0.2], [ 0.4], [ 0.6], [ 0.8], [ 1. ]]), array([[-5., -4., -3., -2., -1., 0., 1., 2., 3., 4., 5.]])] ```
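If the goal is to reproduce the question's exact output (interior split points only, endpoints dropped), a minimal sketch built on `np.linspace` could look like this:

```python
import numpy as np

def create_uniform_grid(low, high, bins=(10, 10)):
    # linspace includes both endpoints; slicing them off leaves exactly
    # the interior split points the question's loops build by hand
    return [np.linspace(l, h, n + 1)[1:-1] for l, h, n in zip(low, high, bins)]

grid = create_uniform_grid([-1.0, -5.0], [1.0, 5.0])
# grid[0] -> [-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]
# grid[1] -> [-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
```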
Maybe the `numpy.meshgrid` is what you want. Here is an example to create the grid and do math on it: ``` #!/usr/bin/python3 # 2018.04.11 11:40:17 CST import numpy as np import matplotlib.pyplot as plt x = np.arange(-5, 5, 0.1) y = np.arange(-5, 5, 0.1) xx, yy = np.meshgrid(x, y, sparse=True) z = np.sin(xx**2 + yy**2) / (xx**2 + yy**2) #h = plt.contourf(x,y,z) plt.imshow(z) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/iwSMU.png)](https://i.stack.imgur.com/iwSMU.png) --- Refer: 1. <https://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html>
25,067,927
So I have a line here that is meant to dump frames from a movie via python and ffmpeg. ``` subprocess.check_output([ffmpeg, "-i", self.moviefile, "-ss 00:01:00.000 -t 00:00:05 -vf scale=" + str(resolution) + ":-1 -r", str(framerate), "-qscale:v 6", self.processpath + "/" + self.filetitles + "-output%03d.jpg"]) ``` And currently it's giving me the error: ``` 'CalledProcessError: Command ... returned non-zero exit status 1' ``` The command python SAYS it's running is: ``` '['/var/lib/openshift/id/app-root/data/programs/ffmpeg/ffmpeg', '-i', u'/var/lib/openshift/id/app-root/data/moviefiles/moviename/moviename.mp4', '-ss 00:01:00.000 -t 00:00:05 -vf scale=320:-1 -r', '10', '-qscale:v 6', '/var/lib/openshift/id/app-root/data/process/moviename/moviename-output%03d.jpg']' ``` But when I run the following command via ssh... ``` '/var/lib/openshift/id/app-root/data/programs/ffmpeg/ffmpeg' -i '/var/lib/openshift/id/app-root/data/moviefiles/moviename/moviename.mp4' -ss 00:01:00.000 -t 00:00:05 -vf scale=320:-1 -r 10 -qscale:v 6 '/var/lib/openshift/id/app-root/data/process/moviename/moviename-output%03d.jpg' ``` It works just fine. What am I doing wrong? I think I'm misunderstanding the way subprocess field parsing works...
2014/07/31
[ "https://Stackoverflow.com/questions/25067927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The subprocess module never splits list elements on whitespace: unless you run in shell mode, each element of the list is passed to the program as one literal argument, so a string like `"-ss 00:01:00.000 -t 00:00:05 ..."` reaches ffmpeg as a single (invalid) parameter. Put every option and every value in its own list element instead. Try this: ``` subprocess.check_output(["ffmpeg", "-i", self.moviefile, "-ss", "00:01:00.000", "-t", "00:00:05", "-vf", "scale=" + str(resolution) + ":-1", "-r", str(framerate), "-qscale:v", "6", self.processpath + "/" + self.filetitles + "-output%03d.jpg"]) ``` Here is a quote from [the python docs.](https://docs.python.org/2/library/subprocess.html#popen-constructor) *"Note in particular that options (such as -input) and arguments (such as eggs.txt) that are separated by whitespace in the shell go in separate list elements, while arguments that need quoting or backslash escaping when used in the shell (such as filenames containing spaces or the echo command shown above) are single list elements."*
The argument array you pass to `check_call` is not correctly formatted. Every argument to `ffmpeg` needs to be a single element in the argument list, for example ``` ... "-ss 00:01:00.000 -t 00:00:05 -vf ... ``` should be ``` ... "-ss", "00:01:00.000", "-t", "00:00:05", "-vf", ... ``` The complete resulting args array should be: ``` ['ffmpeg', '-i', '/var/lib/openshift/id/app-root/data/moviefiles/moviename/moviename.mp4', '-ss', '00:01:00.000', '-t', '00:00:05', '-vf', 'scale=320:-1', '-r', '10', '-qscale:v', '6', '/var/lib/openshift/id/app-root/data/process/moviename/moviename-output%03d.jpg'] ```
4,002,660
In my MySQL database I have dates going back to the mid 1700s which I need to convert somehow to ints in a format similar to Unix time. The value of the int isn't important, so long as I can take a date from either my database or from user input and generate the same int. I need to use MySQL to generate the int on the database side, and python to transform the date from the user. Normally, the [UNIX\_TIMESTAMP function](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_unix-timestamp), would accomplish this in MySQL, but for dates before 1970, it always returns zero. The [TO\_DAYS MySQL function](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_to-days), also could work, but I can't take a date from user input and use Python to create the same values as this function creates in MySQL. So basically, I need a function like UNIX\_TIMESTAMP that works in MySQL and Python for dates between 1700-01-01 and 2100-01-01. Put another way, this MySQL pseudo-code: ``` select 1700_UNIX_TIME(date) from table; ``` Must equal this Python code: ``` 1700_UNIX_TIME(date) ```
2010/10/23
[ "https://Stackoverflow.com/questions/4002660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/64911/" ]
This is my idea: create a filter in your web application, and when you receive a request like `/area.jsp?id=1`, forward it in the `doFilter` method to `http://example.com/newyork`. In `web.xml`: ``` <filter> <filter-name>RedirectFilter</filter-name> <filter-class> com.filters.RedirectFilter </filter-class> </filter> <filter-mapping> <filter-name>RedirectFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> ``` Write the following class and place it in `WEB-INF/classes`: ``` class RedirectFilter implements Filter { public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { HttpServletRequest req = (HttpServletRequest) request; String scheme = req.getScheme(); // http String serverName = req.getServerName(); // example.com int serverPort = req.getServerPort(); // 80 String contextPath = req.getContextPath(); // /mywebapp String servletPath = req.getServletPath(); // /area.jsp String queryString = req.getQueryString(); // id=1 if (servletPath.indexOf("area.jsp") != -1) { // forward the request internally to the friendly URL request.getRequestDispatcher("/newyork").forward(request, response); } else { chain.doFilter(request, response); return; } } } ```
In your database where you store these area IDs, add a column called "slug" and populate it with the names you want to use. The "slug" for id 1 would be "newyork". Now when a request comes in for one of these URLs, look up the row by "slug" instead of by id.
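A minimal sketch of that slug lookup (an in-memory sqlite3 table stands in for the real database here; the table and column names are assumptions):

```python
import sqlite3

# Hypothetical stand-in for the table that stores the area IDs,
# extended with the suggested "slug" column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE areas (id INTEGER PRIMARY KEY, slug TEXT UNIQUE)")
con.execute("INSERT INTO areas (id, slug) VALUES (1, 'newyork')")

def area_id_for(slug):
    # A request for /newyork resolves the row by slug instead of by id.
    row = con.execute("SELECT id FROM areas WHERE slug = ?", (slug,)).fetchone()
    return row[0] if row else None

print(area_id_for("newyork"))  # 1
```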
4,002,660
In my MySQL database I have dates going back to the mid 1700s which I need to convert somehow to ints in a format similar to Unix time. The value of the int isn't important, so long as I can take a date from either my database or from user input and generate the same int. I need to use MySQL to generate the int on the database side, and python to transform the date from the user. Normally, the [UNIX\_TIMESTAMP function](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_unix-timestamp), would accomplish this in MySQL, but for dates before 1970, it always returns zero. The [TO\_DAYS MySQL function](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_to-days), also could work, but I can't take a date from user input and use Python to create the same values as this function creates in MySQL. So basically, I need a function like UNIX\_TIMESTAMP that works in MySQL and Python for dates between 1700-01-01 and 2100-01-01. Put another way, this MySQL pseudo-code: ``` select 1700_UNIX_TIME(date) from table; ``` Must equal this Python code: ``` 1700_UNIX_TIME(date) ```
2010/10/23
[ "https://Stackoverflow.com/questions/4002660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/64911/" ]
Use nginx's rewrite module to map that one URL to the area.jsp?id=1 URL <http://wiki.nginx.org/NginxHttpRewriteModule>
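A sketch of such a rule (the location and target are assumptions based on the question's example, not a tested configuration):

```nginx
# Hypothetical nginx rewrite: serve /newyork from the existing JSP URL
location = /newyork {
    rewrite ^ /area.jsp?id=1 last;
}
```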
In your database where you store these area IDs, add a column called "slug" and populate it with the names you want to use. The "slug" for id 1 would be "newyork". Now when a request comes in for one of these URLs, look up the row by "slug" instead of by id.
4,002,660
In my MySQL database I have dates going back to the mid 1700s which I need to convert somehow to ints in a format similar to Unix time. The value of the int isn't important, so long as I can take a date from either my database or from user input and generate the same int. I need to use MySQL to generate the int on the database side, and python to transform the date from the user. Normally, the [UNIX\_TIMESTAMP function](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_unix-timestamp), would accomplish this in MySQL, but for dates before 1970, it always returns zero. The [TO\_DAYS MySQL function](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_to-days), also could work, but I can't take a date from user input and use Python to create the same values as this function creates in MySQL. So basically, I need a function like UNIX\_TIMESTAMP that works in MySQL and Python for dates between 1700-01-01 and 2100-01-01. Put another way, this MySQL pseudo-code: ``` select 1700_UNIX_TIME(date) from table; ``` Must equal this Python code: ``` 1700_UNIX_TIME(date) ```
2010/10/23
[ "https://Stackoverflow.com/questions/4002660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/64911/" ]
Use nginx's rewrite module to map that one URL to the area.jsp?id=1 URL <http://wiki.nginx.org/NginxHttpRewriteModule>
This is my idea: create a filter in your web application, and when you receive a request like `/area.jsp?id=1`, forward it in the `doFilter` method to `http://example.com/newyork`. In `web.xml`: ``` <filter> <filter-name>RedirectFilter</filter-name> <filter-class> com.filters.RedirectFilter </filter-class> </filter> <filter-mapping> <filter-name>RedirectFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> ``` Write the following class and place it in `WEB-INF/classes`: ``` class RedirectFilter implements Filter { public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { HttpServletRequest req = (HttpServletRequest) request; String scheme = req.getScheme(); // http String serverName = req.getServerName(); // example.com int serverPort = req.getServerPort(); // 80 String contextPath = req.getContextPath(); // /mywebapp String servletPath = req.getServletPath(); // /area.jsp String queryString = req.getQueryString(); // id=1 if (servletPath.indexOf("area.jsp") != -1) { // forward the request internally to the friendly URL request.getRequestDispatcher("/newyork").forward(request, response); } else { chain.doFilter(request, response); return; } } } ```
64,415,588
Given 2 data frames like the link example, I need to add to df1 the "index income" from df2. I need to search by the df1 combined key in df2 and if there is a match return the value into a new column in df1. There is not an equal number of instances in df1 and df2 and there are about 700 rows in df1 1000 rows in df2. I was able to do this in excel with a vlookup but I am trying to apply it to python code now. ![Data Frame images](https://i.stack.imgur.com/YUh1H.png)
2020/10/18
[ "https://Stackoverflow.com/questions/64415588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14473305/" ]
This should solve your issue: ``` df1.merge(df2, how='left', on='combind_key') ``` This (`left` join) will give you all the records of `df1` and matching records from `df2`.
<https://www.geeksforgeeks.org/how-to-do-a-vlookup-in-python-using-pandas/> Here is an answer using joins. I modified my df2 to only include useful columns then used pandas left join. ``` Left_join = pd.merge(df, zip_df, on ='State County', how ='left') ```
66,996,373
I'm trying to install and use Pillow with Python 3.9.2 (managed with pyenv). I'm using Poetry to manage my virtual environments and dependencies, so I ran `poetry add pillow`, which successfully added `Pillow = "^8.2.0"` to my pyproject.toml. Per the Pillow docs, I added `from PIL import Image` in my script, but when I try to run it, I get: ``` File "<long/path/to/file.py>", line 3, in <module> from PIL import Image ModuleNotFoundError: No module named 'PIL' ``` When I look in the venv Poetry is creating for me, I can see a PIL directory (`/long/path/lib/python3.9/site-packages/PIL/`) and an Image.py file inside it. What am I missing here? I've tried: * Downcasing to `from pil import Image` per [this](https://github.com/python-pillow/Pillow/issues/3851#issuecomment-568223993); did not work * Downgrading to lower versions of Python and PIL; works, but defeats the purpose * ETA: Exporting a requirements.txt file from Poetry, creating a virtualenv with venv, and installing the packages manually; works, but cuts me off from using Poetry/pyproject.toml Any help would be tremendously appreciated.
2021/04/08
[ "https://Stackoverflow.com/questions/66996373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4430379/" ]
I couldn't find a way to solve this either (using poetry 1.1.13). Ultimately, I resorted to a workaround of `poetry add pillow && pip install pillow` so I could move on with my life. :P `poetry add pillow` gets the dependency in to the TOML, so consumers of the package *should* be OK.
capitalizing "Pillow" solved it for me: `poetry add Pillow`
30,558,917
Using the pandas library for python I am reading a csv, then grouping the results with a sum. ``` grouped = df[['Organization Name','Views']].groupby('Organization Name').sum().sort(columns='Views',ascending=False).head(10) #Bar Chart Section print grouped.to_string() ``` Unfortunately I get the following result for the table: ``` Views Organization Name Test1 112 Test2 114 Test3 115 ``` it seems that the column headers are going on two separate rows.
2015/05/31
[ "https://Stackoverflow.com/questions/30558917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1760634/" ]
Because you grouped on 'Organization Name', it is being used as the name for your index; you can set this to `None` using: ``` grouped.index.name = None ``` This will then remove the line. It is just a display issue; your data is not in some funny shape. Alternatively, if you don't want 'Organization Name' to become the index, then pass `as_index=False` to `groupby`: ``` grouped = df[['Organization Name','Views']].groupby('Organization Name', as_index=False).sum().sort(columns='Views',ascending=False).head(10) ```
`grouped.reset_index()` should fix this. This happened because you have grouped the data and aggregated on a column.
67,511,611
I am new to Python socket server programming; I am following this [example](https://docs.python.org/3/library/socketserver.html#examples) to set up a server using the socketserver framework. Based on the comment, pressing Ctrl-C will stop the server, but when I try to run it again, I get `OSError: [Errno 98] Address already in use`, which makes me have to kill the process manually using the terminal. Based on my understanding, KeyboardInterrupt is considered one type of exception in Python, and when an exception happens in a `with` block, Python will also call the `__exit__()` function to clean up. I have tried to create a `__exit__()` function in the TCP handler class but that does not seem to fix the problem. Does anyone know a way to unbind the socket when an exception is raised? server.py ``` import socketserver from threading import Thread class MyTCPHandler(socketserver.BaseRequestHandler): """ The request handler class for our server. It is instantiated once per connection to the server, and must override the handle() method to implement communication to the client. """ def handle(self): # self.request is the TCP socket connected to the client self.data = self.request.recv(1024).strip() print("{} wrote:".format(self.client_address[0])) print(self.data) # just send back the same data, but upper-cased self.request.sendall(self.data.upper()) # Self-written function to try to make Python close the server properly def __exit__(self): shutdown_thread = Thread(target=server.shutdown) shutdown_thread.start() if __name__ == "__main__": HOST, PORT = "localhost", 9999 # Create the server, binding to localhost on port 9999 with socketserver.TCPServer((HOST, PORT), MyTCPHandler) as server: # Activate the server; this will keep running until you # interrupt the program with Ctrl-C server.serve_forever() ```
2021/05/12
[ "https://Stackoverflow.com/questions/67511611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10733376/" ]
Just split the string, then map over the `stringArray` and add `<b>` just before the `beginOffset` and `</b>` after the `endOffset`. ```js var indices = [{ beginOffset: 2, endOffset: 8, }, { beginOffset: 42, endOffset: 48, }, { beginOffset: 58, endOffset: 63, }, ]; var teststring = "a lovely day at the office to meet such a lovely woman. I loved her so much"; let stringArray = teststring.split(""); indices.forEach(({ beginOffset: begin, endOffset: end }) => { stringArray = stringArray.map((l, index) => { if (index === begin - 1) { return [l, `<b>`]; } else if (index === end - 1) { return [l, `</b>`]; } else return l; }); }); console.log(stringArray.flat().join("")); ```
Sort the indices from highest to lowest. Then when you insert `<b>` and `</b>` it won't affect the indexes in subsequent iterations. ```js var indices = [{ beginOffset: 2, endOffset: 8 }, { beginOffset: 42, endOffset: 48 }, { beginOffset: 58, endOffset: 63 } ]; var teststring = "a lovely day at the office to meet such a lovely woman. I loved her so much"; indices.sort((a, b) => b.beginOffset - a.beginOffset).forEach(({ beginOffset, endOffset }) => teststring = teststring.substring(0, beginOffset) + '<b>' + teststring.substring(beginOffset, endOffset) + '</b>' + teststring.substr(endOffset)); console.log(teststring); ```
67,511,611
I am new to Python socket server programming; I am following this [example](https://docs.python.org/3/library/socketserver.html#examples) to set up a server using the socketserver framework. Based on the comment, pressing Ctrl-C will stop the server, but when I try to run it again, I get `OSError: [Errno 98] Address already in use`, which makes me have to kill the process manually using the terminal. Based on my understanding, KeyboardInterrupt is considered one type of exception in Python, and when an exception happens in a `with` block, Python will also call the `__exit__()` function to clean up. I have tried to create a `__exit__()` function in the TCP handler class but that does not seem to fix the problem. Does anyone know a way to unbind the socket when an exception is raised? server.py ``` import socketserver from threading import Thread class MyTCPHandler(socketserver.BaseRequestHandler): """ The request handler class for our server. It is instantiated once per connection to the server, and must override the handle() method to implement communication to the client. """ def handle(self): # self.request is the TCP socket connected to the client self.data = self.request.recv(1024).strip() print("{} wrote:".format(self.client_address[0])) print(self.data) # just send back the same data, but upper-cased self.request.sendall(self.data.upper()) # Self-written function to try to make Python close the server properly def __exit__(self): shutdown_thread = Thread(target=server.shutdown) shutdown_thread.start() if __name__ == "__main__": HOST, PORT = "localhost", 9999 # Create the server, binding to localhost on port 9999 with socketserver.TCPServer((HOST, PORT), MyTCPHandler) as server: # Activate the server; this will keep running until you # interrupt the program with Ctrl-C server.serve_forever() ```
2021/05/12
[ "https://Stackoverflow.com/questions/67511611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10733376/" ]
Just split the string, then map over the `stringArray` and add `<b>` just before the `beginOffset` and `</b>` after the `endOffset`. ```js var indices = [{ beginOffset: 2, endOffset: 8, }, { beginOffset: 42, endOffset: 48, }, { beginOffset: 58, endOffset: 63, }, ]; var teststring = "a lovely day at the office to meet such a lovely woman. I loved her so much"; let stringArray = teststring.split(""); indices.forEach(({ beginOffset: begin, endOffset: end }) => { stringArray = stringArray.map((l, index) => { if (index === begin - 1) { return [l, `<b>`]; } else if (index === end - 1) { return [l, `</b>`]; } else return l; }); }); console.log(stringArray.flat().join("")); ```
I don't know if this is a good solution or not, but here is what I quickly came up with: ``` let str = "a lovely day at the office to meet such a lovely woman. I loved her so much" let o = [ { beginOffset : 2, endOffset : 8 }, { beginOffset : 42, endOffset : 48 }, { beginOffset : 58, endOffset : 63 } ] const map = new Map() o.forEach(function(v){ map.set(v.beginOffset,'boff') map.set(v.endOffset,'eoff') }) let b = "" str.split("").forEach(function(s,k){ let f = map.get(k) if(f){ if(f === "boff"){ b += "<b>" }else{ b += "</b>" } } b +=s }) console.log(b) ```
45,477,478
I have a group of images and some separate heatmap data which (imperfectly) explains where the subject of the image is. The heatmap data is in a numpy array with shape (224,224,3). I would like to generate bounding box data from this heatmap data. The heatmaps are not always perfect, so I guess I'm wondering if anyone can think of an intelligent way to do this. Here are some examples of what happens when I apply the heatmap data to the image: [![Image of a cat with a heatmap illuminating the subject of the image](https://github.com/jacobgil/keras-grad-cam/raw/master/examples/persian_cat.jpg?raw=true)](https://github.com/jacobgil/keras-grad-cam/raw/master/examples/persian_cat.jpg?raw=true) [![enter image description here](https://i.stack.imgur.com/JZqJn.jpg)](https://i.stack.imgur.com/JZqJn.jpg) I found a solution to this in matlab, but I have no idea how to read this code! I am a python programmer, unfortunately. <https://github.com/metalbubble/CAM/tree/master/bboxgenerator> Anyone have any ideas about how to approach something like this?
2017/08/03
[ "https://Stackoverflow.com/questions/45477478", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3539683/" ]
This is not a good piece of code. I would not know where to start on the bad practices... It defines an inner function that is not reachable from any other scope and not reusable, just to return its immediate call with the data argument. The outer return could be as simple as ``` return self.change('groupTo', groupExp, data); ```
If you call the `getData()` function without passing any parameter, then the value of the `data` variable inside the function is `undefined`. That is why a ternary operator is used on this line: ``` data = (data === undefined) ? this.defaultData() : data; ``` It checks the condition `data === undefined`, which is true in that case, so it assigns the value of `this.defaultData()` to `data`. In short, when the value of `data` is `undefined`, the following is the case: ``` data = this.defaultData() ``` Otherwise, if `data` has a value, meaning the function was called with a parameter like `getData("Hi")`, then it is evaluated as: ``` data = data // data = Hi ``` Now, `var self = this;` is used here to preserve the context of `this` inside the nested function mentioned below: ``` return (function parse(group) { return self.change('groupTo', groupExp, group); }(data)); ``` Without `self = this`, if I try to use `this` in the nested function, it will point to the global object, i.e. the `window` object in JS. In the following code, `arg` is available inside the function because we are passing it in the call of the IIFE, so it is available to pass in the call of the `doSomething` function. ``` (function (local_arg) { doSomething(local_arg); })(arg); ```
45,477,478
I have a group of images and some separate heatmap data which (imperfectly) explains where the subject of the image is. The heatmap data is in a numpy array with shape (224,224,3). I would like to generate bounding box data from this heatmap data. The heatmaps are not always perfect, so I guess I'm wondering if anyone can think of an intelligent way to do this. Here are some examples of what happens when I apply the heatmap data to the image: [![Image of a cat with a heatmap illuminating the subject of the image](https://github.com/jacobgil/keras-grad-cam/raw/master/examples/persian_cat.jpg?raw=true)](https://github.com/jacobgil/keras-grad-cam/raw/master/examples/persian_cat.jpg?raw=true) [![enter image description here](https://i.stack.imgur.com/JZqJn.jpg)](https://i.stack.imgur.com/JZqJn.jpg) I found a solution to this in matlab, but I have no idea how to read this code! I am a python programmer, unfortunately. <https://github.com/metalbubble/CAM/tree/master/bboxgenerator> Anyone have any ideas about how to approach something like this?
2017/08/03
[ "https://Stackoverflow.com/questions/45477478", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3539683/" ]
In this pattern: ``` (function (local_arg) { doSomething(local_arg); })(arg); ``` ...the function is immediately executed, and the parameter `local_arg` will take the value of the argument that was passed, i.e. `arg`. So the above is doing the same as just: ``` doSomething(arg); ``` In some cases where `arg` is a more complicated expression, and you need to use it multiple times, or you have the need for variables that only need to be known locally, the IIFE pattern can be useful.
If you call the `getData()` function without passing any parameter, then the value of the `data` variable inside the function is `undefined`. That is why a ternary operator is used on this line: ``` data = (data === undefined) ? this.defaultData() : data; ``` It checks the condition `data === undefined`, which is true in that case, so it assigns the value of `this.defaultData()` to `data`. In short, when the value of `data` is `undefined`, the following is the case: ``` data = this.defaultData() ``` Otherwise, if `data` has a value, meaning the function was called with a parameter like `getData("Hi")`, then it is evaluated as: ``` data = data // data = Hi ``` Now, `var self = this;` is used here to preserve the context of `this` inside the nested function mentioned below: ``` return (function parse(group) { return self.change('groupTo', groupExp, group); }(data)); ``` Without `self = this`, if I try to use `this` in the nested function, it will point to the global object, i.e. the `window` object in JS. In the following code, `arg` is available inside the function because we are passing it in the call of the IIFE, so it is available to pass in the call of the `doSomething` function. ``` (function (local_arg) { doSomething(local_arg); })(arg); ```
34,490,117
C code: ``` #include "Python.h" #include <windows.h> __declspec(dllexport) PyObject* getTheString() { auto p = Py_BuildValue("s","hello"); char * s = PyString_AsString(p); MessageBoxA(NULL,s,"s",0); return p; } ``` Python code: ``` import ctypes import sys sys.path.append('./') dll = ctypes.CDLL('pythonCall_test.dll') print type(dll.getTheString()) ``` Result: ``` <type 'int'> ``` How can I get the Python `str` `'hello'` from the C code? Or is there any Pythonic way to translate this `int` into a `str`? It looks like no matter what I change, the returned `PyObject*` always comes back as an `int` and nothing else.
2015/12/28
[ "https://Stackoverflow.com/questions/34490117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5680359/" ]
> > By default functions are assumed to return the C `int` type. Other > return types can be specified by setting the `restype` attribute of the > function object. > [(ref)](https://docs.python.org/2/library/ctypes.html#return-types) > > > Define the type returned by your function like that: ``` >>> from ctypes import c_char_p >>> dll.getTheString.restype = c_char_p # c_char_p is a pointer to a string >>> print type(dll.getTheString()) ```
`int` is the default return type, to specify another type you need to set the function object's `restype` attribute. See [Return types](https://docs.python.org/2/library/ctypes.html#return-types) in the `ctype` docs for details.
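Since the question's custom DLL isn't available here, a hedged sketch of the same `restype` idea using the C library on a POSIX system (the choice of `strerror` is mine, purely for illustration):

```python
import ctypes
import ctypes.util

# Load libc; on POSIX, CDLL(None) exposes the symbols of the running process.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# strerror() returns a C string; without setting restype, ctypes would
# hand the pointer back as a plain int, just as in the question.
libc.strerror.restype = ctypes.c_char_p
libc.strerror.argtypes = [ctypes.c_int]

message = libc.strerror(0)
print(type(message))  # bytes, once restype is c_char_p
```

The same two lines (`restype`, and optionally `argtypes`) applied to `dll.getTheString` are what the answers above describe.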
61,959,745
I want to merge all files with the extension `.asc` in my current working directory to be merged into a file called `outfile.asc`. My problem is, I don't know how to exclude a specific file (`"BigTree.asc"`) and how to overwrite an existing `"outfile.asc"` if there is one in the directory. ``` if len(sys.argv) < 2: print("Please supply the directory of the ascii files and an output-file as argument:") print("python merge_file.py directory outfile") exit() directory = sys.argv[1] os.chdir(directory) currwd = os.getcwd() filename = sys.argv[2] fileobj_out = open(filename, "w") starttime = time.time() read_files = glob.glob(currwd+"\*.asc") with open("output.asc", "wb") as outfile: for f in read_files: with open(f, "rb") as infile: if f == "BigTree.asc": continue else: outfile.write(infile.read()) endtime = time.time() runtime = int(endtime-starttime) sys.stdout.write("The script took %i sec." %runtime) ```
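One likely culprit in the snippet above: `glob.glob` returns full paths, so the comparison `f == "BigTree.asc"` never matches; comparing basenames fixes that. Opening the output with `"w"`/`"wb"` already truncates an existing file, so overwriting needs no extra handling. A minimal sketch (the function name is mine; the file names come from the question):

```python
import glob
import os

def merge_asc(directory, outfile_name, exclude=("BigTree.asc",)):
    """Concatenate every *.asc file in `directory` into `outfile_name`,
    skipping excluded names and the output file itself."""
    out_path = os.path.join(directory, outfile_name)
    # "wb" truncates an existing outfile, so it is overwritten automatically.
    with open(out_path, "wb") as outfile:
        for path in sorted(glob.glob(os.path.join(directory, "*.asc"))):
            name = os.path.basename(path)
            if name in exclude or name == outfile_name:
                continue
            with open(path, "rb") as infile:
                outfile.write(infile.read())
```

`sorted()` makes the concatenation order deterministic, which the bare `glob` result is not guaranteed to be.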
2020/05/22
[ "https://Stackoverflow.com/questions/61959745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13461656/" ]
As suggested in a comment, here's my simplified (simplistic?) solution to make it such that specific flask end points in google app engine are only accessibly by application code or app engine service accounts. The answer is based on the documentation regarding [validating cron requests](https://cloud.google.com/appengine/docs/standard/python3/scheduling-jobs-with-cron-yaml#validating_cron_requests) and [validating task requests](https://cloud.google.com/tasks/docs/creating-appengine-handlers#reading_app_engine_task_request_headers). Basically, we write a decorator that will validate whether or not `X-Appengine-Cron: true` is in the headers (implying that the end point is being called by your code, not a remote user). If the header is not found, then we do not run the protected function. ``` # python # main.py from flask import Flask, request, redirect, render_template app = Flask(__name__) # Define the decorator to protect your end points def validate_cron_header(protected_function): def cron_header_validator_wrapper(*args, **kwargs): # https://cloud.google.com/appengine/docs/standard/python3/scheduling-jobs-with-cron-yaml#validating_cron_requests header = request.headers.get('X-Appengine-Cron') # If you are validating a TASK request from a TASK QUEUE instead of a CRON request, then use 'X-Appengine-TaskName' instead of 'X-Appengine-Cron' # example: # header = request.headers.get('X-Appengine-TaskName') # Other possible headers to check can be found here: https://cloud.google.com/tasks/docs/creating-appengine-handlers#reading_app_engine_task_request_headers # If the header does not exist, then don't run the protected function if not header: # here you can raise an error, redirect to a page, etc. 
return redirect("/") # Run and return the protected function return protected_function(*args, **kwargs) # The line below is necessary to allow the use of the wrapper on multiple endpoints # https://stackoverflow.com/a/42254713 cron_header_validator_wrapper.__name__ = protected_function.__name__ return cron_header_validator_wrapper @app.route("/example/protected/handler") @validate_cron_header def a_protected_handler(): # Run your code here your_response_or_error_etc = "text" return your_response_or_error_etc @app.route("/yet/another/example/protected/handler/<myvar>") @validate_cron_header def another_protected_handler(some_var=None): # Run your code here return render_template("my_sample_template", some_var=some_var) ```
It still works in Python 3.x, I use the original approach in my own Flask AppEngine app running Python 3.8 Here is a simplified version of my `app.yaml` with everything you need: ``` runtime: python38 app_engine_apis: true handlers: - url: /admin/.* secure: always script: auto login: admin - url: /.* secure: always script: auto ``` Both scripts are set to auto and point to main.py by default. In main.py, I define my routes and all routes starting with /admin will force the user to login with a Google Account which has owner/admin rights for the application. Just make sure you include `app_engine_apis: true` in your `app.yaml` file as it is required for login to work.
49,625,350
I have a zip file structure like - B.zip/org/note.txt I want to directly list the files inside org folder without going to other folders in B.zip I have written the following code but it is listing all the files and directories available inside the B.zip file ``` f = zipfile.ZipFile('D:\python\B.jar') for name in f.namelist(): print '%s: %r' % (name, f.read(name)) ```
2018/04/03
[ "https://Stackoverflow.com/questions/49625350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can filter the names with the `startswith` function (using Python 3): ``` import zipfile with zipfile.ZipFile('D:\python\B.jar') as z: for filename in z.namelist(): if filename.startswith("org"): print(filename) ```
How to list all files that are inside ZIP files of a certain folder
-------------------------------------------------------------------

> Every time I came to this post I had a similar question... but a different one at the same time. Because of this, I think other users may have the same doubt. If you got to this post trying this....

```py
import os
import zipfile

# Use your folder path
path = r'set_your_path'
os.chdir(path)

for file in os.listdir(path):
    if file.upper().endswith('.ZIP'):
        for item in zipfile.ZipFile(file).namelist():
            print(item)
```

If someone feels that this post has to be deleted, please let me know. Thanks
53,605,066
I know there are lots of Q&As about extracting a datetime from a string, such as [dateutil.parser](https://stackoverflow.com/questions/3276180/extracting-date-from-a-string-in-python), e.g.: ``` import dateutil.parser as dparser dparser.parse('something sep 28 2017 something',fuzzy=True).date() output: datetime.date(2017, 9, 28) ``` but my question is how to know which part of the string produced this extraction, e.g. I want a function that also returns 'sep 28 2017': ``` datetime, datetime_str = get_date_str('something sep 28 2017 something') outputs: datetime.date(2017, 9, 28), 'sep 28 2017' ``` Any clue or direction I can search in?
2018/12/04
[ "https://Stackoverflow.com/questions/53605066", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1165964/" ]
Extending the discussion with @Paul and following the solution from @alecxe, I have proposed the following solution, which works on a number of test cases. I've made the problem slightly more challenging: **Step 1: get excluded tokens** ``` import dateutil.parser as dparser ostr = 'something sep 28 2017 something abcd' _, excl_str = dparser.parse(ostr,fuzzy_with_tokens=True) ``` gives outputs of: ``` excl_str: ('something ', ' ', 'something abcd') ``` **Step 2 : rank tokens by length** ``` excl_str = list(excl_str) excl_str.sort(reverse=True,key = len) ``` gives a sorted token list: ``` excl_str: ['something abcd', 'something ', ' '] ``` **Step 3: delete tokens and ignore space element** ``` for i in excl_str: if i != ' ': ostr = ostr.replace(i,'') return ostr ``` gives a final output ``` ostr: 'sep 28 2017 ' ``` ***Note:*** step 2 is required, because it would cause problems if any shorter token were a subset of a longer one. e.g., in this case, if deletion follows the order `('something ', ' ', 'something abcd')`, the replacement process will remove `something` from `something abcd`, and `abcd` will never get deleted, ending up with `'sep 28 2017 abcd'`
Interesting problem! There is no direct way to get the parsed out date string out of the bigger string with `dateutil`. The problem is that `dateutil` parser does not even have this string available as an intermediate result as it really builds parts of the future `datetime` object on the fly and character by character ([source](https://github.com/dateutil/dateutil/blob/master/dateutil/parser/_parser.py#L732-L856)). It, though, also collects a list of skipped tokens which is probably your best bet. As this list is ordered, you can loop over the tokens and replace the first occurrence of the token: ``` from dateutil import parser s = 'something sep 28 2017 something' parsed_datetime, tokens = parser.parse(s, fuzzy_with_tokens=True) for token in tokens: s = s.replace(token.lstrip(), "", 1) print(s) # prints "sep 28 2017" ``` I am though not 100% sure if this would work in all the possible cases, especially, with the different whitespace characters (notice how I had to workaround things with `.lstrip()`).
16,874,010
I am trying to write out a line to a new file based on input from a csv file, with elements taken from different rows and different columns. For example, test.csv: ``` name1, value1, integer1, integer1a name2, value2, integer2, integer2a name3, value3, integer3, integer3a ``` desired output: ``` command integer1:integer1a moretext integer2:integer2a command integer2:integer2a moretext integer3:integer3a ``` I realize this will probably require some type of loop; I am just getting lost in the references for loop iteration and Python maps.
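The desired output pairs each row's last two fields with those of the following row, so one hedged sketch with the `csv` module and `zip` (the literal words `command` and `moretext` come straight from the example output):

```python
import csv
import io

# Inline stand-in for test.csv from the question.
sample = """\
name1, value1, integer1, integer1a
name2, value2, integer2, integer2a
name3, value3, integer3, integer3a
"""

rows = [[field.strip() for field in row]
        for row in csv.reader(io.StringIO(sample))]

lines = []
# Pair each row with the one after it: columns 2 and 3 of both rows.
for current, following in zip(rows, rows[1:]):
    lines.append("command %s:%s moretext %s:%s"
                 % (current[2], current[3], following[2], following[3]))

print("\n".join(lines))
# command integer1:integer1a moretext integer2:integer2a
# command integer2:integer2a moretext integer3:integer3a
```

For a real file, replace the `io.StringIO(sample)` with an opened file object and write `lines` out instead of printing.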
2013/06/01
[ "https://Stackoverflow.com/questions/16874010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2443424/" ]
For an array you can use the `std::vector` class. ``` std::vector<account *> MyAccounts; MyAccounts.push_back(new account()); ``` Then you can use it like an array, accessing it normally. ``` MyAccounts[i]->accountFunction(); ``` **update** I don't know enough about your code, so I give just some general examples here. In your bank class you have a member as shown above, `MyAccounts`. Now whenever you add a new account to your bank, you can do it with the `push_back` function. For example, to add a new account and set an initial amount of 100 money items: ``` MyAccounts.push_back(new account()); MyAccounts.back()->setAmount(100); ```
You can do something like below ``` class Bank { public: int AddAccount(Account act){ m_vecAccts.push_back(act);} .... private: ... std:vector<account> m_vecAccts; } ``` Update: This is just a Bank class with vector of accounts as private member variable. AddAccount is public function which can add account to vector
16,874,010
I am trying to write out a line to a new file based on input from a csv file, with elements taken from different rows and different columns. For example, test.csv: ``` name1, value1, integer1, integer1a name2, value2, integer2, integer2a name3, value3, integer3, integer3a ``` desired output: ``` command integer1:integer1a moretext integer2:integer2a command integer2:integer2a moretext integer3:integer3a ``` I realize this will probably require some type of loop; I am just getting lost in the references for loop iteration and Python maps.
2013/06/01
[ "https://Stackoverflow.com/questions/16874010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2443424/" ]
First of all,

```
//error handling
else{
    cout << "Error! Invalid operator." << endl;
    accounting();
}
```

This looks ugly: you are recursively calling the accounting function after every bad input. Imagine a situation where the user types 1 000 000 bad inputs... you will then try to free the memory 1 000 000 times - after one successful input!

```
//frees memory
delete c;
```

The whole accounting function is designed wrongly. I suppose you don't want to destroy the account after some kind of transaction, right? I think a person who withdraws 10 dollars from their 10 million dollar account, which is then destroyed, will change banks immediately :) So a while cycle with continue could be a solution:

```
//function for handling I/O
int accounting(){
    string command;
    while(true)
    {
        cout << "account> ";
        cin >> command;

        //exits prompt
        if (command == "quit"){
            break;
        }
        //overwrites account balance
        else if (command == "init"){
            cin >> c->value;
            c->init();
            continue;
        }
        //prints balance
        else if (command == "balance"){
            cout << "" << c->account_balance() << endl;
            continue;
        }
        //deposits value
        else if (command == "deposit"){
            cin >> c->value;
            c->deposit();
            continue;
        }
        //withdraws value
        else if (command == "withdraw"){
            cin >> c->value;
            c->withdraw();
            continue;
        }
        //error handling
        else{
            cout << "Error! Invalid operator." << endl;
            continue;
        }
    }
}
```

Then,

```
int value;
```

is not a class member; it should be an argument of the methods withdraw and deposit, like this:

```
//deposit function
void account::deposit(int value){   //int changed to void, you are not returning anything!
    balance += value;
}

//withdraw function
bool account::withdraw(int value){
    //error handling
    if(value>balance){
        cout << "Error! insufficient funds." << endl;
        return false;
    }
    if(value<0) {
        cout << "Haha, nice try!" << endl;
        return false;
    }
    balance -= value;
    return true;
}
```
You can do something like below ``` class Bank { public: int AddAccount(Account act){ m_vecAccts.push_back(act);} .... private: ... std:vector<account> m_vecAccts; } ``` Update: This is just a Bank class with vector of accounts as private member variable. AddAccount is public function which can add account to vector
45,823,884
So I'm working a quiz on Python as a project for an Intro to Programming course. My quiz works as intended except in the case that the quiz variable is not being affected by the new values of the blank array. On the run\_quiz function I want to make the quiz variable update itself by changing the blanks to the correct answer after the user has provided it. Here's my code: ``` #Declaration of variables blank = ["___1___", "___2___", "___3___", "___4___"] answers = [] tries = 5 difficulty = "" quiz = "" #Level 1: Easy quiz1 = "Python is intended to be a highly " + blank[0] + " language. It is designed to have an uncluttered " + blank[1] + " layout, often using English " + blank[2] + " where other languages use " + blank[3] + ".\n" #Level 2: Medium quiz2 = "Python interpreters are available for many " + blank[0] + " allowing Python code to run on a wide variety of systems. " + blank[1] + " the reference implementation of Python, is " + blank[2] + " software and has a community-based development model, as do nearly all of its variant implementations. " + blank[1] + " is managed by the non-profit " + blank[3] + ".\n" #Level 3: Hard quiz3 = "Python features a " + blank[0] + " system and automatic " + blank[1] + " and supports multiple " + blank[2] + " including object-oriented, imperative, functional programming, and " + blank[3] + " styles. 
It has a large and comprehensive standard library.\n"

#Answer and quiz assignment
def assign():
    global difficulty
    global quiz
    x = 0
    while x == 0:
        user_input = raw_input("Select a difficulty, Press 1 for Easy, 2 for Medium or 3 for Hard.\n")
        if user_input == "1":
            answers.extend(["readable", "visual", "keywords", "punctuation"])
            difficulty = "Easy"
            quiz = quiz1
            x = 1
        elif user_input == "2":
            answers.extend(["operating systems", "cpython", "open source", "python software foundation"])
            difficulty = "Medium"
            quiz = quiz2
            x = 1
        elif user_input == "3":
            answers.extend(["dynamic type", "memory management", "programming paradigms", "procedural"])
            difficulty = "Hard"
            quiz = quiz3
            x = 1
        else:
            print "Error: You must select 1, 2 or 3.\n"
            x = 0

def run_quiz():
    n = 0
    global tries
    global blank
    print "Welcome to the Python Quiz! This quiz follows a fill in the blank structure. You will have 5 tries to replace the 4 blanks on the difficulty you select. Let's begin!\n"
    assign()
    print "You have selected " + difficulty + ".\n"
    print "Read the paragraph carefully and prepare to provide your answers.\n"
    while n < 4 and tries > 0:
        print quiz
        user_input = raw_input("What is your answer for " + blank[n] + "? Remember, you have " + str(tries) + " tries left.\n")
        if user_input.lower() == answers[n]:
            print "That is correct!\n"
            blank[n] = answers[n]
            n += 1
        else:
            print "That is the wrong answer. Try again!\n"
            tries -= 1
    if n == 4 or tries == 0:
        if n == 4:
            print "Congratulations! You are an expert on Python!"
        else:
            print "You have no more tries left! You can always come back and play again!"

run_quiz()
```

I know my code has many areas of improvement but this is my first Python project so I guess that's expected.
2017/08/22
[ "https://Stackoverflow.com/questions/45823884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8501849/" ]
The problem is that your variable, `quiz`, is just a fixed string, and although it looks like it has something to do with `blanks`, it actually doesn't. What you want is 'string interpolation'. Python allows this with the `.format` method of `str` objects. This is really the crux of your question, and using string interpolation it's easy to do. I'd advise you to take some time to learn `.format`, it's an incredibly helpful function in almost any script. I've also updated your code a bit not to use global variables, as this is generally bad practice and can lead to confusing, difficult to track bugs. It may also impair the uncluttered visual layout :). Here is your modified code, which should be working now: ``` quizzes = [ ("""\ Python is intended to be a highly {} language.\ It is designed to have an uncluttered {} layout,\ often using English {} where other languages use {} """, ["readable", "visual", "keywords", "punctuation"], "Easy"), ("""\ Python interpreters are available for many {0}\ allowing Python code to run on a wide variety of systems.\ {1} the reference implementation of Python, is {2}\ software and has a community-based development model, as\ do nearly all of its variant implementations. {1} is managed by the non-profit {3} """, ["operating systems", "cpython", "open source", "python software foundation"], "Medium"), ("""\ Python features a {} system and automatic {} and\ supports multiple {} including object-oriented,\ imperative, functional programming, and\ {} styles. It has a large and comprehensive standard library. 
""",
     ["dynamic type", "memory management", "programming paradigms", "procedural"],
     "Hard")
]

#Answer and quiz assignment
def assign():
    while True:
        user_input = raw_input("Select a difficulty, Press 1 for Easy, 2 for Medium or 3 for Hard.\n")
        if user_input == "1":
            return quizzes[0]
        elif user_input == "2":
            return quizzes[1]
        elif user_input == "3":
            return quizzes[2]
        else:
            print "Error: You must select 1, 2 or 3.\n"

def run_quiz():
    n = 0
    #Declaration of variables
    blank = ["___1___", "___2___", "___3___", "___4___"]
    tries = 5
    print "Welcome to the Python Quiz! This quiz follows a fill in the blank structure. You will have 5 tries to replace the 4 blanks on the difficulty you select. Let's begin!\n"
    quiz, answers, difficulty = assign()
    print "You have selected {}.\n".format(difficulty)
    print "Read the paragraph carefully and prepare to provide your answers.\n"
    while n < 4 and tries > 0:
        print quiz.format(*blank)
        user_input = raw_input("What is your answer for {}? Remember, you have {} tries left.\n".format(blank[n], tries))
        if user_input.lower() == answers[n]:
            print "That is correct!\n"
            blank[n] = answers[n]
            n += 1
        else:
            print "That is the wrong answer. Try again!\n"
            tries -= 1
    if n == 4 or tries == 0:
        if n == 4:
            print "Congratulations! You are an expert on Python!"
        else:
            print "You have no more tries left! You can always come back and play again!"

run_quiz()
```

A little more on string interpolation: You're doing a lot of `"start of string " + str(var) + " end of string"`. This can be achieved quite simply with `"start of string {} end of string".format(var)` - it even automatically does the `str` conversion. I've changed your `quiz` variables to have `"{}"` where either `"__1__"` etc should be displayed or the user's answer. You can then do `quiz.format(*blank)` to print the 'most recent' version of the quiz. `*` here 'unpacks' the elements of blank into separate arguments for `format`. 
If you find it easier to learn with example usage, here are two usages of `format` in a simpler context: ``` >>> "the value of 2 + 3 is {}".format(2 + 3) 'the value of 2 + 3 is 5' >>> a = 10 >>> "a is {}".format(a) 'a is 10' ``` I've also stored the information about each quiz in a `list` of `tuple`s, and `assign` now has a `return` value, rather than causing side effects. Apart from that, your code is still pretty much intact. Your original logic hasn't changed at all. Regarding your comment about objects: Technically, yes, `quizzes` is an object. However, as Python is a 'pure object oriented language', *everything* in Python is an object. `2` is an object. `"abc"` is an object. `[1, 2, 3]` is an object. Even functions are objects. You may be thinking in terms of JavaScript - with all of the brackets and parentheses, it kind of resembles a JS Object. However, `quizzes` is nothing more than a list (of tuples). You might also be thinking of instances of custom classes, but it's not one of those either. Instances require you to define a class first, using `class ...`. A bit more on what `quizzes` actually is - it's a list of tuples of strings, lists of strings and strings. This is a kind of complicated type signature, but it's just a lot of nested container types really. It firstly means that each element of `quizzes` is a 'tuple'. A tuple is pretty similar to a list, except that it can't be changed in place. Really, you could almost always use a list instead of a tuple, but my rule of thumb is that a heterogeneous collection (meaning stuff of different types) should generally be a tuple. Each tuple has the quiz text, the answers, and the difficulty. I've put it in an object like this as it means it can be accessed by indexing (using `quiz[n]`), rather than by a bunch of if statements which then refer to `quiz1`, `quiz2`, etc. 
Generally, if you find yourself naming more than about two variables which are semantically similar like this, it is a good idea to put them in a list, so you can index, iterate, etc.
Only now have I read your question properly. You first build your strings quiz1, quiz2 and quiz3, and you only do that once. After that you change your blanks array, but you don't reconstruct your strings, so they still have the old values. Note that a copy of the elements of the blanks array is made into e.g. quiz1. That copy doesn't change automagically after the fact. If you want to program it like this, you'll have to rebuild your quiz1, quiz2 and quiz3 strings explicitly each time you change your blanks array. General advice: don't use so many globals; use function parameters instead. But for a first attempt I guess it's OK. [edit] A simple modification would be: replace your quiz, quiz1, quiz2 and quiz3 by functions get_quiz (), get_quiz1 () etc. that build the most recent version, including the altered elements of blanks. This modification doesn't make this an elegant program, but you'll come to that with a bit more experience. A long shot in case you wonder (but don't try to bridge that gap in one step): in the end Quiz will probably be a class with methods and attributes, of which you have instances. To be sure: I think that experimenting like this will make you a good programmer, more than copying some ready-to-go code!
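The suggested modification - functions that rebuild the string from the current `blank` list on every call - can be sketched like this (a shortened two-blank example, not the asker's full quiz):

```python
blank = ["___1___", "___2___"]

def get_quiz():
    # Rebuilt on every call, so it always reflects the current blanks.
    return "Python is a " + blank[0] + " language with a " + blank[1] + " layout."

print(get_quiz())      # still shows the ___1___ placeholder
blank[0] = "readable"
print(get_quiz())      # now shows "readable" instead of ___1___
```

Each call concatenates afresh, which is exactly the "rebuild explicitly each time" behaviour the fixed string lacked.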
72,011,497
I am reading remote .dat files for EDI data processing. The original data is some base64 string bytes: ``` b'MDA1MDtWMjAxOS44LjAuMDtWMjAxOS44LjAuMDsyMDIwMD.........' ``` I decoded it as below... ``` byte_data = base64.b64decode(byte_data) ``` That gave me the byte data below. Is there a better way to process this byte data into a Python list? ``` b"0050;V2019.8.0.0;V2019.8.0.0;20200407;184821\r\n0070;;7;0;7\r\n0080;11;50;bot.pdf;Driss;C:\\Dat\\Abl\\\r\n0090;1;Z;Zub\xf6r;0;0;0;Zub\xf6r;;;Zub\xf6r\r\n ``` I tried decoding with utf-8, but it didn't work. ``` byte_data.decode('utf-8') ``` I tried converting to a string and reading it as CSV, but that did not help; I landed back on the original data. I need to keep most of the string as it is and convert \xf6r \r \n ``` data = io.StringIO(above_data) data.seek(0) csv_reader = csv.reader(data, delimiter=";") ```
2022/04/26
[ "https://Stackoverflow.com/questions/72011497", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6837224/" ]
It didn't work with 'utf-8' because it's not 'utf-8', it's probably 'ISO-8859-1' (latin-1) ```py text = byte_data.decode('ISO-8859-1') ``` because `\xf6` is `ö` in 'ISO-8859-1'
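Continuing from there, the decoded text can be fed to the `csv` module much as the question attempted (the sample bytes below are abbreviated from the question):

```python
import csv
import io

byte_data = (b"0050;V2019.8.0.0;V2019.8.0.0;20200407;184821\r\n"
             b"0090;1;Z;Zub\xf6r;0;0;0\r\n")

text = byte_data.decode("ISO-8859-1")
rows = list(csv.reader(io.StringIO(text), delimiter=";"))
print(rows[1][3])  # Zubör
```

The csv reader treats the `\r\n` pairs as record separators, so they disappear from the parsed fields, and the latin-1 decode turns `\xf6` into `ö` as noted above.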
Is it definitely utf-8 encoded? This might help guide you to which decoder to use: ``` import chardet print(chardet.detect(byte_data)) ```
45,209,068
I'm new to python, and now I need to use it to work with some data in a txt file. Here is a sample of the data, where each `'&'` starts a new field: ``` uid=aaa&sid=bbb&bid=ccc&cid=ddd&pid=eee&ver=fff... uid=aaa2&sid=bbb2&bid=ccc2&cid=ddd2&pid=eee2&ver=fff2... ... ``` The end result is to have a DataFrame (with pandas) with `columns=['uid', 'sid', 'bid', 'cid', 'pid', 'ver'...]` and the content of `uid` as the index. My idea is to strip out `aaa`, `bbb`, `ccc`, etc. from the string and insert them into the dataframe. I've tried: ``` st1 = gif?uid=aaa&sid=bbb&bid=ccc&cid=ddd&pid=eee&ver=fff......HTTPasfawfaw (st1 is the original string) st2 = st1.split("gif?")[1].split("HTTP")[0] st3 = st2.split('&') ``` My questions are: 1. how can I take only the string after each `=` and put it in the DataFrame? 2. I need to deal with huge data files; is there a better way to do this that takes less time and memory? Thank you in advance for your help!
2017/07/20
[ "https://Stackoverflow.com/questions/45209068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8336506/" ]
This is a URL querystring. You should use the `urllib` module in the standard library to parse it. ``` from urllib.parse import parse_qs # python3 from urlparse import parse_qs # python2 parse_qs('uid=aaa2&sid=bbb2&bid=ccc2&cid=ddd2&pid=eee2&ver=fff2') ``` Output: ``` {'bid': ['ccc2'], 'cid': ['ddd2'], 'pid': ['eee2'], 'sid': ['bbb2'], 'uid': ['aaa2'], 'ver': ['fff2']} ```
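To get from many such lines to the DataFrame the question asks about, one hedged follow-on sketch is to unwrap `parse_qs`'s single-element lists into plain dicts; pandas can consume a list of dicts directly (the sample lines are shortened from the question):

```python
from urllib.parse import parse_qs  # python3

lines = [
    "uid=aaa&sid=bbb&bid=ccc",
    "uid=aaa2&sid=bbb2&bid=ccc2",
]

# parse_qs wraps every value in a list; unwrap the single values.
records = [{key: values[0] for key, values in parse_qs(line).items()}
           for line in lines]
print(records[0])  # {'uid': 'aaa', 'sid': 'bbb', 'bid': 'ccc'}

# From here, pd.DataFrame(records).set_index("uid") would give the frame.
```

Building plain dicts first keeps memory per line small and streams naturally if the lines come from a file iterator instead of a list.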
You can use `regex` to create a `list` of all the columns and values and then use it to create your `dataframe`, for example: ``` import re st = 'uid=aaa&sid=bbb&bid=ccc&cid=ddd&pid=eee&ver=fffuid=aaa2&sid=bbb2&bid=ccc2&cid=ddd2&pid=eee2&ver=fff2' myData = re.findall(r'(\wid)=(\w+)', st) print myData ``` output: ``` [('uid', 'aaa'), ('sid', 'bbb'), ('bid', 'ccc'), ('cid', 'ddd'), ('pid', 'eee'), ('uid', 'aaa2'), ('sid', 'bbb2'), ('bid', 'ccc2'), ('cid', 'ddd2'), ('pid', 'eee2')] ```
45,209,068
I'm new to python, and now I need to use it to work with some data in a txt file. Here is a sample of the data, where each `'&'` starts a new field: ``` uid=aaa&sid=bbb&bid=ccc&cid=ddd&pid=eee&ver=fff... uid=aaa2&sid=bbb2&bid=ccc2&cid=ddd2&pid=eee2&ver=fff2... ... ``` The end result is to have a DataFrame (with pandas) with `columns=['uid', 'sid', 'bid', 'cid', 'pid', 'ver'...]` and the content of `uid` as the index. My idea is to strip out `aaa`, `bbb`, `ccc`, etc. from the string and insert them into the dataframe. I've tried: ``` st1 = gif?uid=aaa&sid=bbb&bid=ccc&cid=ddd&pid=eee&ver=fff......HTTPasfawfaw (st1 is the original string) st2 = st1.split("gif?")[1].split("HTTP")[0] st3 = st2.split('&') ``` My questions are: 1. how can I take only the string after each `=` and put it in the DataFrame? 2. I need to deal with huge data files; is there a better way to do this that takes less time and memory? Thank you in advance for your help!
2017/07/20
[ "https://Stackoverflow.com/questions/45209068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8336506/" ]
This is a URL querystring. You should use the `urllib` module in the standard library to parse it. ``` from urllib.parse import parse_qs # python3 from urlparse import parse_qs # python2 parse_qs('uid=aaa2&sid=bbb2&bid=ccc2&cid=ddd2&pid=eee2&ver=fff2') ``` Output: ``` {'bid': ['ccc2'], 'cid': ['ddd2'], 'pid': ['eee2'], 'sid': ['bbb2'], 'uid': ['aaa2'], 'ver': ['fff2']} ```
``` txt = open('test.txt').read() pd.DataFrame( [dict([kv.split('=') for kv in l.split('&')]) for l in txt.split('\n')] ) bid cid pid sid uid ver 0 ccc ddd eee bbb aaa fff 1 ccc2 ddd2 eee2 bbb2 aaa2 fff2 ```
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely, if start python the standard way (in the terminal) then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
I installed pip with conda (`conda install pip`) instead of `apt-get install python-pip python-dev`. Then I installed TensorFlow using the [pip installation](https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html#test-the-tensorflow-installation) instructions: ``` # Ubuntu/Linux 64-bit, CPU only, Python 2.7 $ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.9.0-cp27-none-linux_x86_64.whl # Ubuntu/Linux 64-bit, GPU enabled, Python 2.7 # Requires CUDA toolkit 7.5 and CuDNN v4. For other versions, see "Install from sources" below. $ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.9.0-cp27-none-linux_x86_64.whl ``` ... `pip install --upgrade $TF_BINARY_URL` Then it will work in the Jupyter notebook.
``` pip install tensorflow ``` This worked for me in my conda virtual environment. I was trying to use `conda install tensorflow` in a conda virtual environment where jupyter notebooks was already installed, resulting in many conflicts and failure. But pip install worked fine.
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely, if start python the standard way (in the terminal) then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
I used the following inside a virtualenv: ``` pip3 install --ignore-installed ipython pip3 install --ignore-installed jupyter ``` This re-installs both ipython and jupyter notebook in my tensorflow virtual environment. You can verify it after installation with `which ipython` and `which jupyter`; the `bin` path will be under the virtual env. > > **NOTE** I am using python 3.\* > > >
I have another solution so that you don't need to `source activate tensorflow` before using `jupyter notebook` every time. **Part 1** First, ensure you have jupyter installed in your virtualenv. If you have, you can skip this section (use `which jupyter` to check). If not, run `source activate tensorflow`, and then install jupyter in your virtualenv with `conda install jupyter`. (You can use `pip` too.) **Part 2** 1. From within your virtualenv, run ``` username$ source activate tensorflow (tensorflow)username$ ipython kernelspec install-self --user ``` This will create a kernelspec for your virtualenv and tell you where it is: ``` (tensorflow)username$ [InstallNativeKernelSpec] Installed kernelspec pythonX in /home/username/.local/share/jupyter/kernels/pythonX ``` where pythonX will match the version of Python in your virtualenv. 2. Copy the new kernelspec somewhere useful. Choose a `kernel_name` for your new kernel that is not `python2` or `python3` or one you've used before, and then: ``` (tensorflow)username$ mkdir -p ~/.ipython/kernels (tensorflow)username$ mv ~/.local/share/jupyter/kernels/pythonX ~/.ipython/kernels/<kernel_name> ``` 3. If you want to change the name of the kernel that IPython shows you, edit `~/.ipython/kernels/<kernel_name>/kernel.json` and change the JSON key called `display_name` to a name that you like. 4. You should now be able to see your kernel in the IPython notebook menu: `Kernel -> Change kernel` and be able to switch to it (you may need to refresh the page before it appears in the list). IPython will remember which kernel to use for that notebook from then on. [Reference](https://help.pythonanywhere.com/pages/IPythonNotebookVirtualenvs/).
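The `display_name` edit in step 3 touches a small JSON file. As a sketch, for a hypothetical tensorflow virtualenv, `kernel.json` is shaped roughly like this (the interpreter path and display name below are example values, not something the answer specifies):

```python
import json

# Sketch of ~/.ipython/kernels/<kernel_name>/kernel.json after step 3.
# The argv path is hypothetical; use the python binary of your own virtualenv.
kernel_spec = {
    "argv": [
        "/home/username/anaconda/envs/tensorflow/bin/python",
        "-m", "ipykernel",
        "-f", "{connection_file}",
    ],
    "display_name": "Python (tensorflow)",  # name shown in Kernel -> Change kernel
    "language": "python",
}

print(json.dumps(kernel_spec, indent=2))
```

The `argv` entry is what actually points the kernel at the virtualenv's interpreter; `display_name` only affects what the notebook menu shows.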
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely: if I start python the standard way (in the terminal), then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system-wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
Here is what I did to enable tensorflow in Anaconda -> Jupyter. 1. Install Tensorflow using the instructions provided at 2. Go to /Users/username/anaconda/env and ensure Tensorflow is installed 3. Open the Anaconda navigator and go to "Environments" (located in the left navigation) 4. Select "All" in the first drop-down and search for Tensorflow 5. If it's not enabled, enable it in the checkbox and confirm the process that follows. 6. Now open a new Jupyter notebook and tensorflow should work
The accepted answer (by Zhongyu Kuang) has just helped me out. Here I've created an `environment.yml` file that makes this conda / tensorflow installation process repeatable. Step 1 - create a Conda environment.yml File ============================================ `environment.yml` looks like this: ``` name: hello-tensorflow dependencies: - python=3.6 - jupyter - ipython - pip: - https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.1.0-cp36-cp36m-linux_x86_64.whl ``` Note: * Simply replace the name with whatever you want. (mine is `hello-tensorflow`) * Simply replace the python version with whatever you want. (mine is `3.6`) * Simply replace the tensorflow pip install URL with whatever you want (mine is the Tensorflow URL for Python 3.6 with GPU support) Step 2 - create the Conda environment ===================================== With the `environment.yml` in your current directory, this command creates the environment `hello-tensorflow` (or whatever you have renamed it to): ``` conda env create -f environment.yml ``` Step 3: source activate ======================= Activate the newly created environment: ``` source activate hello-tensorflow ``` Step 4 - which python / jupyter / ipython ========================================= which python... ``` (hello-tensorflow) $ which python /home/johnny/anaconda3/envs/hello-tensorflow/bin/python ``` which jupyter... ``` (hello-tensorflow) $ which jupyter /home/johnny/anaconda3/envs/hello-tensorflow/bin/jupyter ``` which ipython... ``` (hello-tensorflow) $ which ipython /home/johnny/anaconda3/envs/hello-tensorflow/bin/ipython ``` Step 5 ====== You should now be able to import tensorflow from python, jupyter (console / qtconsole / notebook, etc.) and ipython.
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely: if I start python the standard way (in the terminal), then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system-wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
**Update** The [TensorFlow website](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#virtualenv-installation) supports five installation methods. To my understanding, using the [Pip installation](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#pip-installation) directly would be fine for importing TensorFlow in Jupyter Notebook (as long as Jupyter Notebook was installed and there were no other issues) because it doesn't create any virtual environments. **Using the [virtualenv install](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#virtualenv-installation) and the [conda install](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#anaconda-installation) would require installing jupyter into the newly created TensorFlow environment to allow TensorFlow to work in Jupyter Notebook** (see the following original post section for more details). I believe the [docker install](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html) may require some port setup in VirtualBox to make TensorFlow work in Jupyter Notebook ([see this post](https://stackoverflow.com/questions/33636925/how-do-i-start-tensorflow-docker-jupyter-notebook?rq=1])). For [installing from sources](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#installing-from-sources), it also depends on which environment the source code is built and installed into. If it's installed into a freshly created virtual environment, or a virtual environment which didn't have Jupyter Notebook installed, it would also need Jupyter Notebook installed into that virtual environment to use Tensorflow in Jupyter Notebook. **Original Post** To use tensorflow in IPython and/or Jupyter (IPython) Notebook, you'll need to install IPython and Jupyter (after installing tensorflow) under the tensorflow activated environment.
Before installing IPython and Jupyter under the tensorflow environment, if you run the following commands in the terminal: ``` username$ source activate tensorflow (tensorflow)username$ which ipython (tensorflow)username$ /Users/username/anaconda/bin/ipython (tensorflow)username$ which jupyter (tensorflow)username$ /Users/username/anaconda/bin/jupyter (tensorflow)username$ which python (tensorflow)username$ /User/username//anaconda/envs/tensorflow/bin/python ``` This is telling you that when you open python from the terminal, it is using the one installed in the "environment" where tensorflow is installed, so you can actually import tensorflow successfully. However, if you try to run ipython and/or jupyter notebook, these are not installed under the environment equipped with tensorflow, so they fall back to the regular environment, which has no tensorflow module, hence the import error. You can verify this by listing the items under the envs/tensorflow/bin directory: ``` (tensorflow) username$ ls /User/username/anaconda/envs/tensorflow/bin/ ``` You will see that no "ipython" and/or "jupyter" is listed. To use tensorflow with IPython and/or Jupyter notebook, simply install them into the tensorflow environment: ``` (tensorflow) username$ conda install ipython (tensorflow) username$ pip install jupyter #(use pip3 for python3) ``` After installing them, a "jupyter" and an "ipython" should show up in the envs/tensorflow/bin/ directory. Notes: Before trying to import the tensorflow module in a jupyter notebook, try closing the notebook. Run "source deactivate tensorflow" first, and then reactivate it ("source activate tensorflow") to make sure things are "on the same page". Then reopen the notebook and try importing tensorflow. It should import successfully (it worked on mine at least).
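A quick way to see this mismatch from inside a running notebook (rather than with `which` in the terminal) is to ask the kernel which interpreter it is using. This is just a diagnostic sketch, not part of the fix:

```python
import sys

# The interpreter the active kernel runs on. If this path is not under
# envs/tensorflow/bin, the kernel cannot see the tensorflow installed there.
print(sys.executable)

try:
    import tensorflow  # noqa: F401 - succeeds only for this interpreter's site-packages
    print("tensorflow is importable from this kernel")
except ImportError:
    print("tensorflow is NOT importable from this kernel")
```

If the printed path points at the base Anaconda interpreter rather than the tensorflow environment, that is exactly the situation the answer above describes.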
Jupyter Lab: ModuleNotFound tensorflow ====================================== For a future version of me or a colleague that runs into this issue: ``` conda install -c conda-forge jupyter jupyterlab keras tensorflow ``` Turns out `jupyterlab` is a plugin for `jupyter`. So even if you are in an environment that has `jupyter` but **not** `jupyterlab` as well, if you try to run: ``` jupyter lab ``` then `jupyter` will look in the `(base)` environment for the `jupyterlab` plugin. Then your imports in `jupyter lab` will be relative to that plugin and **not** your conda environment.
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely: if I start python the standard way (in the terminal), then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system-wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
I found the solution from someone else's post. It is simple and works well! <http://help.pythonanywhere.com/pages/IPythonNotebookVirtualenvs> Just install the following in the Command Prompt and change the kernel to Python 3 in Jupyter Notebook. It will then import tensorflow successfully. > > pip install tornado==4.5.3 > > > pip install ipykernel==4.8.2 > > > (Original post: <https://github.com/tensorflow/tensorflow/issues/11851>)
Open an Anaconda Prompt screen: `(base) C:\Users\YOU>conda create -n tf tensorflow` After the environment is created, type: `conda activate tf` The prompt moves to the (tf) environment, that is: `(tf) C:\Users\YOU>` Then install Jupyter in this (tf) environment: `conda install -c conda-forge jupyterlab` (or `conda install notebook` for the classic notebook interface). Still in the (tf) environment, type `(tf) C:\Users\YOU>jupyter notebook` The notebook screen starts! A new notebook can then `import tensorflow` FROM THEN ON: To open a session, click Anaconda Prompt, type `conda activate tf`; the prompt moves to the tf environment `(tf) C:\Users\YOU>`; then type `(tf) C:\Users\YOU>jupyter notebook`
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely: if I start python the standard way (in the terminal), then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system-wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
**Update** The [TensorFlow website](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#virtualenv-installation) supports five installation methods. To my understanding, using the [Pip installation](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#pip-installation) directly would be fine for importing TensorFlow in Jupyter Notebook (as long as Jupyter Notebook was installed and there were no other issues) because it doesn't create any virtual environments. **Using the [virtualenv install](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#virtualenv-installation) and the [conda install](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#anaconda-installation) would require installing jupyter into the newly created TensorFlow environment to allow TensorFlow to work in Jupyter Notebook** (see the following original post section for more details). I believe the [docker install](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html) may require some port setup in VirtualBox to make TensorFlow work in Jupyter Notebook ([see this post](https://stackoverflow.com/questions/33636925/how-do-i-start-tensorflow-docker-jupyter-notebook?rq=1])). For [installing from sources](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#installing-from-sources), it also depends on which environment the source code is built and installed into. If it's installed into a freshly created virtual environment, or a virtual environment which didn't have Jupyter Notebook installed, it would also need Jupyter Notebook installed into that virtual environment to use Tensorflow in Jupyter Notebook. **Original Post** To use tensorflow in IPython and/or Jupyter (IPython) Notebook, you'll need to install IPython and Jupyter (after installing tensorflow) under the tensorflow activated environment.
Before installing IPython and Jupyter under the tensorflow environment, if you run the following commands in the terminal: ``` username$ source activate tensorflow (tensorflow)username$ which ipython (tensorflow)username$ /Users/username/anaconda/bin/ipython (tensorflow)username$ which jupyter (tensorflow)username$ /Users/username/anaconda/bin/jupyter (tensorflow)username$ which python (tensorflow)username$ /User/username//anaconda/envs/tensorflow/bin/python ``` This is telling you that when you open python from the terminal, it is using the one installed in the "environment" where tensorflow is installed, so you can actually import tensorflow successfully. However, if you try to run ipython and/or jupyter notebook, these are not installed under the environment equipped with tensorflow, so they fall back to the regular environment, which has no tensorflow module, hence the import error. You can verify this by listing the items under the envs/tensorflow/bin directory: ``` (tensorflow) username$ ls /User/username/anaconda/envs/tensorflow/bin/ ``` You will see that no "ipython" and/or "jupyter" is listed. To use tensorflow with IPython and/or Jupyter notebook, simply install them into the tensorflow environment: ``` (tensorflow) username$ conda install ipython (tensorflow) username$ pip install jupyter #(use pip3 for python3) ``` After installing them, a "jupyter" and an "ipython" should show up in the envs/tensorflow/bin/ directory. Notes: Before trying to import the tensorflow module in a jupyter notebook, try closing the notebook. Run "source deactivate tensorflow" first, and then reactivate it ("source activate tensorflow") to make sure things are "on the same page". Then reopen the notebook and try importing tensorflow. It should import successfully (it worked on mine at least).
I used the following inside a virtualenv: ``` pip3 install --ignore-installed ipython pip3 install --ignore-installed jupyter ``` This re-installs both ipython and jupyter notebook in my tensorflow virtual environment. You can verify it after installation with `which ipython` and `which jupyter`; the `bin` path will be under the virtual env. > > **NOTE** I am using python 3.\* > > >
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely: if I start python the standard way (in the terminal), then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system-wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
Here is what I did to enable tensorflow in Anaconda -> Jupyter. 1. Install Tensorflow using the instructions provided at 2. Go to /Users/username/anaconda/env and ensure Tensorflow is installed 3. Open the Anaconda navigator and go to "Environments" (located in the left navigation) 4. Select "All" in the first drop-down and search for Tensorflow 5. If it's not enabled, enable it in the checkbox and confirm the process that follows. 6. Now open a new Jupyter notebook and tensorflow should work
I think your question is very similar to the question posted here: [Windows 7 jupyter notebook executing tensorflow](https://stackoverflow.com/questions/36046448/windows-7-jupyter-notebook-executing-tensorflow/37280604#37280604). As Yaroslav mentioned, you can try `conda install -c http://conda.anaconda.org/jjhelmus tensorflow`.
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely: if I start python the standard way (in the terminal), then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system-wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
I have another solution so that you don't need to `source activate tensorflow` before using `jupyter notebook` every time. **Part 1** First, ensure you have jupyter installed in your virtualenv. If you have, you can skip this section (use `which jupyter` to check). If not, run `source activate tensorflow`, and then install jupyter in your virtualenv with `conda install jupyter`. (You can use `pip` too.) **Part 2** 1. From within your virtualenv, run ``` username$ source activate tensorflow (tensorflow)username$ ipython kernelspec install-self --user ``` This will create a kernelspec for your virtualenv and tell you where it is: ``` (tensorflow)username$ [InstallNativeKernelSpec] Installed kernelspec pythonX in /home/username/.local/share/jupyter/kernels/pythonX ``` where pythonX will match the version of Python in your virtualenv. 2. Copy the new kernelspec somewhere useful. Choose a `kernel_name` for your new kernel that is not `python2` or `python3` or one you've used before, and then: ``` (tensorflow)username$ mkdir -p ~/.ipython/kernels (tensorflow)username$ mv ~/.local/share/jupyter/kernels/pythonX ~/.ipython/kernels/<kernel_name> ``` 3. If you want to change the name of the kernel that IPython shows you, edit `~/.ipython/kernels/<kernel_name>/kernel.json` and change the JSON key called `display_name` to a name that you like. 4. You should now be able to see your kernel in the IPython notebook menu: `Kernel -> Change kernel` and be able to switch to it (you may need to refresh the page before it appears in the list). IPython will remember which kernel to use for that notebook from then on. [Reference](https://help.pythonanywhere.com/pages/IPythonNotebookVirtualenvs/).
I think your question is very similar to the question posted here: [Windows 7 jupyter notebook executing tensorflow](https://stackoverflow.com/questions/36046448/windows-7-jupyter-notebook-executing-tensorflow/37280604#37280604). As Yaroslav mentioned, you can try `conda install -c http://conda.anaconda.org/jjhelmus tensorflow`.
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely: if I start python the standard way (in the terminal), then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system-wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
I have another solution so that you don't need to `source activate tensorflow` before using `jupyter notebook` every time. **Part 1** First, ensure you have jupyter installed in your virtualenv. If you have, you can skip this section (use `which jupyter` to check). If not, run `source activate tensorflow`, and then install jupyter in your virtualenv with `conda install jupyter`. (You can use `pip` too.) **Part 2** 1. From within your virtualenv, run ``` username$ source activate tensorflow (tensorflow)username$ ipython kernelspec install-self --user ``` This will create a kernelspec for your virtualenv and tell you where it is: ``` (tensorflow)username$ [InstallNativeKernelSpec] Installed kernelspec pythonX in /home/username/.local/share/jupyter/kernels/pythonX ``` where pythonX will match the version of Python in your virtualenv. 2. Copy the new kernelspec somewhere useful. Choose a `kernel_name` for your new kernel that is not `python2` or `python3` or one you've used before, and then: ``` (tensorflow)username$ mkdir -p ~/.ipython/kernels (tensorflow)username$ mv ~/.local/share/jupyter/kernels/pythonX ~/.ipython/kernels/<kernel_name> ``` 3. If you want to change the name of the kernel that IPython shows you, edit `~/.ipython/kernels/<kernel_name>/kernel.json` and change the JSON key called `display_name` to a name that you like. 4. You should now be able to see your kernel in the IPython notebook menu: `Kernel -> Change kernel` and be able to switch to it (you may need to refresh the page before it appears in the list). IPython will remember which kernel to use for that notebook from then on. [Reference](https://help.pythonanywhere.com/pages/IPythonNotebookVirtualenvs/).
I installed PIP with Conda `conda install pip` instead of `apt-get install python-pip python-dev`, then installed tensorflow using [Pip Installation](https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html#test-the-tensorflow-installation): ``` # Ubuntu/Linux 64-bit, CPU only, Python 2.7 $ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.9.0-cp27-none-linux_x86_64.whl # Ubuntu/Linux 64-bit, GPU enabled, Python 2.7 # Requires CUDA toolkit 7.5 and CuDNN v4. For other versions, see "Install from sources" below. $ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.9.0-cp27-none-linux_x86_64.whl ``` ... `pip install --upgrade $TF_BINARY_URL` Then it will work in the Jupyter notebook.
37,061,089
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely: if I start python the standard way (in the terminal), then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system-wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
2016/05/05
[ "https://Stackoverflow.com/questions/37061089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556722/" ]
I have another solution so that you don't need to `source activate tensorflow` before using `jupyter notebook` every time. **Part 1** First, ensure you have jupyter installed in your virtualenv. If you have, you can skip this section (use `which jupyter` to check). If not, run `source activate tensorflow`, and then install jupyter in your virtualenv with `conda install jupyter`. (You can use `pip` too.) **Part 2** 1. From within your virtualenv, run ``` username$ source activate tensorflow (tensorflow)username$ ipython kernelspec install-self --user ``` This will create a kernelspec for your virtualenv and tell you where it is: ``` (tensorflow)username$ [InstallNativeKernelSpec] Installed kernelspec pythonX in /home/username/.local/share/jupyter/kernels/pythonX ``` where pythonX will match the version of Python in your virtualenv. 2. Copy the new kernelspec somewhere useful. Choose a `kernel_name` for your new kernel that is not `python2` or `python3` or one you've used before, and then: ``` (tensorflow)username$ mkdir -p ~/.ipython/kernels (tensorflow)username$ mv ~/.local/share/jupyter/kernels/pythonX ~/.ipython/kernels/<kernel_name> ``` 3. If you want to change the name of the kernel that IPython shows you, edit `~/.ipython/kernels/<kernel_name>/kernel.json` and change the JSON key called `display_name` to a name that you like. 4. You should now be able to see your kernel in the IPython notebook menu: `Kernel -> Change kernel` and be able to switch to it (you may need to refresh the page before it appears in the list). IPython will remember which kernel to use for that notebook from then on. [Reference](https://help.pythonanywhere.com/pages/IPythonNotebookVirtualenvs/).
I wonder if it is not enough to simply launch ipython from the tensorflow environment. That is: 1) first activate the tensorflow virtualenv with: ``` source ~/tensorflow/bin/activate ``` 2) launch ipython under the tensorflow environment: ``` (tensorflow)$ ipython notebook --ip=xxx.xxx.xxx.xxx ```
10,559,144
I am trying to use `suptitle` to print a title, and I want to occasionally replace this title. Currently I am using: ``` self.ui.canvas1.figure.suptitle(title) ``` where figure is a matplotlib figure (canvas1 is an mplCanvas, but that is not relevant) and title is a python string. Currently, this works, except for the fact that when I run this code again later, it just prints the new text on top of the old, resulting in a garbled, unreadable title. How do you replace the old `suptitle` of a figure, instead of just printing over? Thanks, Tyler
2012/05/11
[ "https://Stackoverflow.com/questions/10559144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
`figure.suptitle` returns a `matplotlib.text.Text` instance. You can save it and set the new title: ``` txt = fig.suptitle('A test title') txt.set_text('A better title') plt.draw() ```
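Putting the idea together as a runnable sketch (the Agg backend is used here only so the example also runs headless; in a GUI app like the asker's mplCanvas the same `set_text` call applies):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])

# Keep the Text instance that suptitle() returns...
txt = fig.suptitle("A test title")

# ...and later update it in place instead of calling suptitle() again,
# which is what produced the stacked, unreadable titles.
txt.set_text("A better title")
fig.canvas.draw()

print(txt.get_text())
```

Updating the saved `Text` object redraws a single title, so no remnants of the old string are left behind.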
Resurrecting this old thread because I recently ran into this. There is a reference to the Text object returned by the original call to suptitle in figure.texts. You can use this to change the original until this is fixed in matplotlib.
10,559,144
I am trying to use `suptitle` to print a title, and I want to occasionally replace this title. Currently I am using: ``` self.ui.canvas1.figure.suptitle(title) ``` where figure is a matplotlib figure (canvas1 is an mplCanvas, but that is not relevant) and title is a python string. Currently, this works, except for the fact that when I run this code again later, it just prints the new text on top of the old, resulting in a garbled, unreadable title. How do you replace the old `suptitle` of a figure, instead of just printing over? Thanks, Tyler
2012/05/11
[ "https://Stackoverflow.com/questions/10559144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
`figure.suptitle` returns a `matplotlib.text.Text` instance. You can save it and set the new title: ``` txt = fig.suptitle('A test title') txt.set_text('A better title') plt.draw() ```
I had a similar problem. The suptitle method of the figure object draws the new title over the old one (previously created). This is definitely a bug in matplotlib, especially as you can find this code in figure.py (part of the matplotlib package): ``` (...) sup = self.text(x, y, t, **kwargs) if self._suptitle is not None: self._suptitle.set_text(t) self._suptitle.set_position((x, y)) self._suptitle.update_from(sup) else: self._suptitle = sup return self._suptitle ``` This bug is present in matplotlib version 1.2.1 but was later fixed (in 2.2.4 it is no longer present). Updating matplotlib will fix it for you.
10,559,144
I am trying to use `suptitle` to print a title, and I want to occasionally replace this title. Currently I am using: ``` self.ui.canvas1.figure.suptitle(title) ``` where figure is a matplotlib figure (canvas1 is an mplCanvas, but that is not relevant) and title is a python string. Currently, this works, except for the fact that when I run this code again later, it just prints the new text on top of the old, resulting in a garbled, unreadable title. How do you replace the old `suptitle` of a figure, instead of just printing over? Thanks, Tyler
2012/05/11
[ "https://Stackoverflow.com/questions/10559144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
Resurrecting this old thread because I recently ran into this. There is a reference to the Text object returned by the original call to suptitle in figure.texts. You can use this to change the original until this is fixed in matplotlib.
I had a similar problem. The suptitle method of the figure object draws the new title over the old one (previously created). This is definitely a bug in matplotlib, especially as you can find this code in figure.py (part of the matplotlib package): ``` (...) sup = self.text(x, y, t, **kwargs) if self._suptitle is not None: self._suptitle.set_text(t) self._suptitle.set_position((x, y)) self._suptitle.update_from(sup) else: self._suptitle = sup return self._suptitle ``` This bug is present in matplotlib version 1.2.1 but was later fixed (in 2.2.4 it is no longer present). Updating matplotlib will fix it for you.
12,451,124
So I've already graduated and received all credits for my compsci degree. But my professor from my last quarter just sent me an email saying he found something interesting in one of my homework assignments. I forget the context, but I don't think it matters. I'll post the email exchange. --- From: PROF To: ME Subject: RE: LDA Hi STUDENT, I noticed something odd in one of your homework answers: ``` def swap(i,j): tmp = i i = j j = tmp return i,j ``` This works, but what is wrong with it? --- From: ME To: PROF Subject: RE: LDA oh yea, I was thinking in C, wouldn't it work in python to do: ``` def swap(i,j): i,j = j,i return i,j ``` Python will then optimize that swap on the stack eliminating the need for an tmp variable at all. --- From: PROF To: ME Subject: RE: LDA that's true, but there is a more subtle conceptual confusion So my question is: what am I missing here?
2012/09/16
[ "https://Stackoverflow.com/questions/12451124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1676273/" ]
Think about how you would call `swap` in Python, versus how you would call a swap function in C. For example, in C, ``` swap(&a, &b); ``` is valid and swaps the memory in `a` with the memory in `b` (assuming the implementation of `swap` is right). But, in Python, ``` swap(a, b) ``` ...does nothing! You'd have to assign the result: ``` a,b = swap(a,b) ``` but then why don't you just do ``` a,b = b,a ``` and ditch the swap() function completely? If you really understand the difference between Python and C, you will be able to explain why the Python swap function cannot swap two variables without assigning the result.
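A short runnable sketch of that point (with hypothetical variables `a` and `b`): calling `swap` without assigning its result changes nothing for the caller, because the function only rebinds its own local names.

```python
def swap(i, j):
    i, j = j, i      # rebinds the *local* names i and j only
    return i, j

a, b = 1, 2
swap(a, b)           # result discarded: a and b are untouched
print(a, b)          # -> 1 2

a, b = swap(a, b)    # only assigning the result swaps them...
print(a, b)          # -> 2 1

b, a = a, b          # ...which is just tuple assignment anyway
print(a, b)          # -> 1 2
```

This is exactly why the function adds nothing over plain `a, b = b, a`.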
I guess his point is that inside a function there's no need to do the swap at all - because the return values of the function aren't tied to the values passed in, so this would do as well: ``` def swap(i, j): return j, i ``` So in fact there's no point in having the function, it doesn't add anything at all. You'd have to call `i, j = swap(i, j)` - which is exactly the same as `j, i = i, j`.
12,451,124
So I've already graduated and received all credits for my compsci degree. But my professor from my last quarter just sent me an email saying he found something interesting in one of my homework assignments. I forget the context, but I don't think it matters. I'll post the email exchange. --- From: PROF To: ME Subject: RE: LDA Hi STUDENT, I noticed something odd in one of your homework answers: ``` def swap(i,j): tmp = i i = j j = tmp return i,j ``` This works, but what is wrong with it? --- From: ME To: PROF Subject: RE: LDA oh yea, I was thinking in C, wouldn't it work in python to do: ``` def swap(i,j): i,j = j,i return i,j ``` Python will then optimize that swap on the stack eliminating the need for an tmp variable at all. --- From: PROF To: ME Subject: RE: LDA that's true, but there is a more subtle conceptual confusion So my question is: what am I missing here?
2012/09/16
[ "https://Stackoverflow.com/questions/12451124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1676273/" ]
Think about how you would call `swap` in Python, versus how you would call a swap function in C. For example, in C, ``` swap(&a, &b); ``` is valid and swaps the memory in `a` with the memory in `b` (assuming the implementation of `swap` is right). But, in Python, ``` swap(a, b) ``` ...does nothing! You'd have to assign the result: ``` a,b = swap(a,b) ``` but then why don't you just do ``` a,b = b,a ``` and ditch the swap() function completely? If you really understand the difference between Python and C, you will be able to explain why the Python swap function cannot swap two variables without assigning the result.
Your function seems overcomplicated; surely you could just do this: ``` def swap(i,j): return j,i ``` This would achieve the same thing with only one line of code?
12,451,124
So I've already graduated and received all credits for my compsci degree. But my professor from my last quarter just sent me an email saying he found something interesting in one of my homework assignments. I forget the context, but I don't think it matters. I'll post the email exchange. --- From: PROF To: ME Subject: RE: LDA Hi STUDENT, I noticed something odd in one of your homework answers: ``` def swap(i,j): tmp = i i = j j = tmp return i,j ``` This works, but what is wrong with it? --- From: ME To: PROF Subject: RE: LDA oh yea, I was thinking in C, wouldn't it work in python to do: ``` def swap(i,j): i,j = j,i return i,j ``` Python will then optimize that swap on the stack eliminating the need for an tmp variable at all. --- From: PROF To: ME Subject: RE: LDA that's true, but there is a more subtle conceptual confusion So my question is: what am I missing here?
2012/09/16
[ "https://Stackoverflow.com/questions/12451124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1676273/" ]
Think about how you would call `swap` in Python, versus how you would call a swap function in C. For example, in C, ``` swap(&a, &b); ``` is valid and swaps the memory in `a` with the memory in `b` (assuming the implementation of `swap` is right). But, in Python, ``` swap(a, b) ``` ...does nothing! You'd have to assign the result: ``` a,b = swap(a,b) ``` but then why don't you just do ``` a,b = b,a ``` and ditch the swap() function completely? If you really understand the difference between Python and C, you will be able to explain why the Python swap function cannot swap two variables without assigning the result.
All he was expecting was the pythonic way to swap: ``` i, j = j, i ```
12,451,124
So I've already graduated and received all credits for my compsci degree. But my professor from my last quarter just sent me an email saying he found something interesting in one of my homework assignments. I forget the context, but I don't think it matters. I'll post the email exchange. --- From: PROF To: ME Subject: RE: LDA Hi STUDENT, I noticed something odd in one of your homework answers: ``` def swap(i,j): tmp = i i = j j = tmp return i,j ``` This works, but what is wrong with it? --- From: ME To: PROF Subject: RE: LDA oh yea, I was thinking in C, wouldn't it work in python to do: ``` def swap(i,j): i,j = j,i return i,j ``` Python will then optimize that swap on the stack eliminating the need for an tmp variable at all. --- From: PROF To: ME Subject: RE: LDA that's true, but there is a more subtle conceptual confusion So my question is: what am I missing here?
2012/09/16
[ "https://Stackoverflow.com/questions/12451124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1676273/" ]
Your function seems overcomplicated; surely you could just do this: ``` def swap(i,j): return j,i ``` This would achieve the same thing with only one line of code?
I guess his point is that inside a function there's no need to do the swap at all - because the return values of the function aren't tied to the values passed in, so this would do as well: ``` def swap(i, j): return j, i ``` So in fact there's no point in having the function, it doesn't add anything at all. You'd have to call `i, j = swap(i, j)` - which is exactly the same as `j, i = i, j`.
12,451,124
So I've already graduated and received all credits for my compsci degree. But my professor from my last quarter just sent me an email saying he found something interesting in one of my homework assignments. I forget the context, but I don't think it matters. I'll post the email exchange. --- From: PROF To: ME Subject: RE: LDA Hi STUDENT, I noticed something odd in one of your homework answers: ``` def swap(i,j): tmp = i i = j j = tmp return i,j ``` This works, but what is wrong with it? --- From: ME To: PROF Subject: RE: LDA oh yea, I was thinking in C, wouldn't it work in python to do: ``` def swap(i,j): i,j = j,i return i,j ``` Python will then optimize that swap on the stack eliminating the need for an tmp variable at all. --- From: PROF To: ME Subject: RE: LDA that's true, but there is a more subtle conceptual confusion So my question is: what am I missing here?
2012/09/16
[ "https://Stackoverflow.com/questions/12451124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1676273/" ]
All he was expecting was the pythonic way to swap: ``` i, j = j, i ```
I guess his point is that inside a function there's no need to do the swap at all - because the return values of the function aren't tied to the values passed in, so this would do as well: ``` def swap(i, j): return j, i ``` So in fact there's no point in having the function, it doesn't add anything at all. You'd have to call `i, j = swap(i, j)` - which is exactly the same as `j, i = i, j`.
32,328,778
Suppose I want to match a string like this: > > 123(432)123(342)2348(34) > > > I can match digits like `123` with `[\d]*` and `(432)` with `\([\d]+\)`. How can I match the whole string by repeating either of the 2 patterns? *I tried `[[\d]* | \([\d]+\)]+`, but this is incorrect.* *I am using the python re module.*
2015/09/01
[ "https://Stackoverflow.com/questions/32328778", "https://Stackoverflow.com", "https://Stackoverflow.com/users/954376/" ]
I think you need this regex: ``` "^(\d+|\(\d+\))+$" ``` and to avoid catastrophic backtracking you need to change it to a regex like this: ``` "^(\d|\(\d+\))+$" ```
You can use a character class to match the whole of the string : ``` [\d()]+ ``` But if you want to match the separate parts in separate groups you can use `re.findall` with a special regex based on your need, for example : ``` >>> import re >>> s="123(432)123(342)2348(34)" >>> re.findall(r'\d+\(\d+\)',s) ['123(432)', '123(342)', '2348(34)'] >>> ``` Or : ``` >>> re.findall(r'(\d+)\((\d+)\)',s) [('123', '432'), ('123', '342'), ('2348', '34')] ``` Or you can just use `\d+` to get all the numbers : ``` >>> re.findall(r'\d+',s) ['123', '432', '123', '342', '2348', '34'] ``` If you want to match the pattern `\d+\(\d+\)` repeatedly you can use the following regex : ``` (?:\d+\(\d+\))+ ```
32,328,778
Suppose I want to match a string like this: > > 123(432)123(342)2348(34) > > > I can match digits like `123` with `[\d]*` and `(432)` with `\([\d]+\)`. How can I match the whole string by repeating either of the 2 patterns? *I tried `[[\d]* | \([\d]+\)]+`, but this is incorrect.* *I am using the python re module.*
2015/09/01
[ "https://Stackoverflow.com/questions/32328778", "https://Stackoverflow.com", "https://Stackoverflow.com/users/954376/" ]
You can use a character class to match the whole of string : ``` [\d()]+ ``` But if you want to match the separate parts in separate groups you can use `re.findall` with a spacial regex based on your need, for example : ``` >>> import re >>> s="123(432)123(342)2348(34)" >>> re.findall(r'\d+\(\d+\)',s) ['123(432)', '123(342)', '2348(34)'] >>> ``` Or : ``` >>> re.findall(r'(\d+)\((\d+)\)',s) [('123', '432'), ('123', '342'), ('2348', '34')] ``` Or you can just use `\d+` to get all the numbers : ``` >>> re.findall(r'\d+',s) ['123', '432', '123', '342', '2348', '34'] ``` If you want to match the patter `\d+\(\d+\)` repeatedly you can use following regex : ``` (?:\d+\(\d+\))+ ```
You can achieve it with this pattern: ``` ^(?=.)\d*(?:\(\d+\)\d*)*$ ``` [demo](https://regex101.com/r/wI9jE5/1) `(?=.)` ensures there is at least one character (if you want to allow empty strings, remove it). `\d*(?:\(\d+\)\d*)*` is an unrolled sub-pattern. Explanation: With a backtracking regex engine, when you have a sub-pattern like `(A|B)*` where A and B are mutually exclusive (or at least when the end of A or B doesn't match respectively the beginning of B or A), you can rewrite the sub-pattern like this: `A*(BA*)*` or `B*(AB*)*`. For your example, it replaces `(?:\d+|\(\d+\))*`. This new form is more efficient: it reduces the steps needed to obtain a match and avoids a great part of the eventual backtracking. Note that you can improve it further if you emulate an [atomic group](http://regular-expressions.mobi/atomic.html) `(?>....)` with [this trick](http://blog.stevenlevithan.com/archives/mimic-atomic-groups) `(?=(....))\1` that uses the fact that a lookahead is naturally atomic: ``` ^(?=.)(?=(\d*(?:\(\d+\)\d*)*))\1$ ``` [demo](https://regex101.com/r/wI9jE5/2) *(compare the number of steps needed with the previous version and check the debugger to see what happens)* Note: if you don't want two consecutive numbers enclosed in parentheses, you only need to replace the quantifier `*` with `+` inside the non-capturing group and to add `(?:\(\d+\))?` at the end of the pattern, before the anchor `$`: ``` ^(?=.)\d*(?:\(\d+\)\d+)*(?:\(\d+\))?$ ``` or ``` ^(?=.)(?=(\d*(?:\(\d+\)\d+)*(?:\(\d+\))?))\1$ ```
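A quick sanity check of both the unrolled pattern and the lookahead-plus-backreference variant against the sample string from the question, using only the standard `re` module:

```python
import re

s = "123(432)123(342)2348(34)"

unrolled = r'^(?=.)\d*(?:\(\d+\)\d*)*$'
atomic_like = r'^(?=.)(?=(\d*(?:\(\d+\)\d*)*))\1$'

print(bool(re.match(unrolled, s)))         # -> True
print(bool(re.match(atomic_like, s)))      # -> True
print(bool(re.match(unrolled, "12a(3)")))  # -> False: stray letter rejected
```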
32,328,778
Suppose I want to match a string like this: > > 123(432)123(342)2348(34) > > > I can match digits like `123` with `[\d]*` and `(432)` with `\([\d]+\)`. How can I match the whole string by repeating either of the 2 patterns? *I tried `[[\d]* | \([\d]+\)]+`, but this is incorrect.* *I am using the python re module.*
2015/09/01
[ "https://Stackoverflow.com/questions/32328778", "https://Stackoverflow.com", "https://Stackoverflow.com/users/954376/" ]
I think you need this regex: ``` "^(\d+|\(\d+\))+$" ``` and to avoid catastrophic backtracking you need to change it to a regex like this: ``` "^(\d|\(\d+\))+$" ```
You can achieve it with this pattern: ``` ^(?=.)\d*(?:\(\d+\)\d*)*$ ``` [demo](https://regex101.com/r/wI9jE5/1) `(?=.)` ensures there is at least one character (if you want to allow empty strings, remove it). `\d*(?:\(\d+\)\d*)*` is an unrolled sub-pattern. Explanation: With a bactracking regex engine, when you have a sub-pattern like `(A|B)*` where A and B are mutually exclusive (or at least when the end of A or B doesn't match respectively the beginning of B or A), you can rewrite the sub-pattern like this: `A*(BA*)*` or `B*(AB*)*`. For your example, it replaces `(?:\d+|\(\d+\))*` This new form is more efficient: it reduces the steps needed to obtain a match, it avoids a great part of the eventual bactracking. Note that you can improve it more, if you emulate an [atomic group](http://regular-expressions.mobi/atomic.html) `(?>....)` with [this trick](http://blog.stevenlevithan.com/archives/mimic-atomic-groups) `(?=(....))\1` that uses the fact that a lookahead is naturally atomic: ``` ^(?=.)(?=(\d*(?:\(\d+\)\d*)*))\1$ ``` [demo](https://regex101.com/r/wI9jE5/2) *(compare the number of steps needed with the previous version and check the debugger to see what happens)* Note: if you don't want two consecutive numbers enclosed in parenthesis, you only need to change the quantifier `*` with `+` inside the non-capturing group and to add `(?:\(\d+\))?` at the end of the pattern, before the anchor `$`: ``` ^(?=.)\d*(?:\(\d+\)\d+)*(?:\(\d+\))?$ ``` or ``` ^(?=.)(?=(\d*(?:\(\d+\)\d+)*(?:\(\d+\))?))\1$ ```
32,870,262
I am trying to create a program in python in which the user enters a sentence and the reversed sentence is printed. The code I have so far is: ``` sentence = raw_input('Enter the sentence') length = len(sentence) for i in sentence[length:0:-1]: a = i print a, ``` When the program is run it misses out the last letter so if the word was 'hello' it would print 'olle'. Can anyone see my mistake?
2015/09/30
[ "https://Stackoverflow.com/questions/32870262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5342974/" ]
You need to remove the `0` from your indices range, but instead you can use: ``` sentence[length::-1] ``` Also note that you then don't need to loop over your string, use extra assignments, or even the `length`; you can simply print the reversed string. So the following code will do the job for you: ``` print sentence[::-1] ``` Demo : ``` >>> s="hello" >>> print s[::-1] 'olleh' ```
The second argument of the slice notation means "up to, but not including", so `sentence[length:0:-1]` will loop up to 0, but not at 0. The fix is to explicitly change the 0 to -1, or leave it out (preferred). ``` for i in sentence[::-1]: ```
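A tiny check of that boundary rule with the 'hello' example from the question:

```python
sentence = "hello"
length = len(sentence)

# the stop index 0 is excluded, so the character at index 0 never appears
print(sentence[length:0:-1])  # -> olle

# leaving the stop index out includes index 0
print(sentence[::-1])         # -> olleh
```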
32,870,262
I am trying to create a program in python in which the user enters a sentence and the reversed sentence is printed. The code I have so far is: ``` sentence = raw_input('Enter the sentence') length = len(sentence) for i in sentence[length:0:-1]: a = i print a, ``` When the program is run it misses out the last letter so if the word was 'hello' it would print 'olle'. Can anyone see my mistake?
2015/09/30
[ "https://Stackoverflow.com/questions/32870262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5342974/" ]
Try this: NO LOOPS, using the map function ``` mySentence = "Mary had a little lamb" def reverseSentence(text): # split the text listOfWords = text.split() # reverse word order inside the sentence listOfWords.reverse() # reverse each word inside the list using the map function (better than explicit loops...) listOfWords = list(map(lambda x: x[::-1], listOfWords)) # return the list of reversed words return listOfWords print(reverseSentence(mySentence)) ```
The second argument of the slice notation means "up to, but not including", so `sentence[length:0:-1]` will loop up to 0, but not at 0. The fix is to explicitly change the 0 to -1, or leave it out (preferred). ``` for i in sentence[::-1]: ```
32,870,262
I am trying to create a program in python in which the user enters a sentence and the reversed sentence is printed. The code I have so far is: ``` sentence = raw_input('Enter the sentence') length = len(sentence) for i in sentence[length:0:-1]: a = i print a, ``` When the program is run it misses out the last letter so if the word was 'hello' it would print 'olle'. Can anyone see my mistake?
2015/09/30
[ "https://Stackoverflow.com/questions/32870262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5342974/" ]
You need to remove the `0` from your indices range, but instead you can use: ``` sentence[length::-1] ``` Also note that you then don't need to loop over your string, use extra assignments, or even the `length`; you can simply print the reversed string. So the following code will do the job for you: ``` print sentence[::-1] ``` Demo : ``` >>> s="hello" >>> print s[::-1] 'olleh' ```
``` print ''.join(reversed(raw_input('Enter the sentence'))) ```
32,870,262
I am trying to create a program in python in which the user enters a sentence and the reversed sentence is printed. The code I have so far is: ``` sentence = raw_input('Enter the sentence') length = len(sentence) for i in sentence[length:0:-1]: a = i print a, ``` When the program is run it misses out the last letter so if the word was 'hello' it would print 'olle'. Can anyone see my mistake?
2015/09/30
[ "https://Stackoverflow.com/questions/32870262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5342974/" ]
You need to remove the `0` from your indices range, but instead you can use: ``` sentence[length::-1] ``` Also note that you then don't need to loop over your string, use extra assignments, or even the `length`; you can simply print the reversed string. So the following code will do the job for you: ``` print sentence[::-1] ``` Demo : ``` >>> s="hello" >>> print s[::-1] 'olleh' ```
Here you go: ``` sentence = raw_input('Enter the sentence') length = len(sentence) sentence = sentence[::-1] print(sentence) ``` Enjoy! Some explanation: the important line `sentence = sentence[::-1]` is a use of Python's slice notation, explained in detail [here](https://stackoverflow.com/questions/509211/explain-pythons-slice-notation). This use of the syntax reverses the indexes of the items in the iterable string. The result is the reversed sentence you are looking for.
32,870,262
I am trying to create a program in python in which the user enters a sentence and the reversed sentence is printed. The code I have so far is: ``` sentence = raw_input('Enter the sentence') length = len(sentence) for i in sentence[length:0:-1]: a = i print a, ``` When the program is run it misses out the last letter so if the word was 'hello' it would print 'olle'. Can anyone see my mistake?
2015/09/30
[ "https://Stackoverflow.com/questions/32870262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5342974/" ]
You need to remove the `0` from your indices range, but instead you can use: ``` sentence[length::-1] ``` Also note that you then don't need to loop over your string, use extra assignments, or even the `length`; you can simply print the reversed string. So the following code will do the job for you: ``` print sentence[::-1] ``` Demo : ``` >>> s="hello" >>> print s[::-1] 'olleh' ```
Try this: NO LOOPS, using the map function ``` mySentence = "Mary had a little lamb" def reverseSentence(text): # split the text listOfWords = text.split() # reverse word order inside the sentence listOfWords.reverse() # reverse each word inside the list using the map function (better than explicit loops...) listOfWords = list(map(lambda x: x[::-1], listOfWords)) # return the list of reversed words return listOfWords print(reverseSentence(mySentence)) ```
32,870,262
I am trying to create a program in python in which the user enters a sentence and the reversed sentence is printed. The code I have so far is: ``` sentence = raw_input('Enter the sentence') length = len(sentence) for i in sentence[length:0:-1]: a = i print a, ``` When the program is run it misses out the last letter so if the word was 'hello' it would print 'olle'. Can anyone see my mistake?
2015/09/30
[ "https://Stackoverflow.com/questions/32870262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5342974/" ]
Try this: NO LOOPS, using the map function ``` mySentence = "Mary had a little lamb" def reverseSentence(text): # split the text listOfWords = text.split() # reverse word order inside the sentence listOfWords.reverse() # reverse each word inside the list using the map function (better than explicit loops...) listOfWords = list(map(lambda x: x[::-1], listOfWords)) # return the list of reversed words return listOfWords print(reverseSentence(mySentence)) ```
``` print ''.join(reversed(raw_input('Enter the sentence'))) ```
32,870,262
I am trying to create a program in python in which the user enters a sentence and the reversed sentence is printed. The code I have so far is: ``` sentence = raw_input('Enter the sentence') length = len(sentence) for i in sentence[length:0:-1]: a = i print a, ``` When the program is run it misses out the last letter so if the word was 'hello' it would print 'olle'. Can anyone see my mistake?
2015/09/30
[ "https://Stackoverflow.com/questions/32870262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5342974/" ]
Try this: NO LOOPS, using the map function ``` mySentence = "Mary had a little lamb" def reverseSentence(text): # split the text listOfWords = text.split() # reverse word order inside the sentence listOfWords.reverse() # reverse each word inside the list using the map function (better than explicit loops...) listOfWords = list(map(lambda x: x[::-1], listOfWords)) # return the list of reversed words return listOfWords print(reverseSentence(mySentence)) ```
Here you go: ``` sentence = raw_input('Enter the sentence') length = len(sentence) sentence = sentence[::-1] print(sentence) ``` Enjoy! Some explanation: the important line `sentence = sentence[::-1]` is a use of Python's slice notation, explained in detail [here](https://stackoverflow.com/questions/509211/explain-pythons-slice-notation). This use of the syntax reverses the indexes of the items in the iterable string. The result is the reversed sentence you are looking for.
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
Yes, there is: ``` return iter([]) ```
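A minimal check that this one-liner passes the loop test from the question:

```python
def my_iterable():
    return iter([])

count = 0
for x in my_iterable():
    count += 1  # never runs: the iterator is empty

print(count)                 # -> 0
print(list(my_iterable()))   # -> []
```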
``` def do_yield(): return yield None ``` if using `yield` is important to you; otherwise use one of the other answers.
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
Yes, there is: ``` return iter([]) ```
You can use `lambda` and the `iter` function to create an empty iterator in Python. ``` my_iterable = lambda: iter(()) ```
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
You can use `lambda` and the `iter` function to create an empty iterator in Python. ``` my_iterable = lambda: iter(()) ```
Another answer, as I provide a completely new solution with a different approach. In one of my libraries, I have an `EmptyIterator` such as ``` class EmptyIter(object): __name__ = 'EmptyIter' """Iterable which is False and empty""" def __len__(self): return 0 def next(self): raise StopIteration # even that is redundant def __getitem__(self, index): raise IndexError ``` It is an alternative approach which uses the following properties: * alternative iteration via the sequence protocol (see [here](http://docs.python.org/library/functions.html#iter)) * alternative "falseness" protocol as described [here](http://docs.python.org/reference/datamodel.html#object.__nonzero__).
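A short usage sketch of this sequence-protocol approach (Python 3; the Python 2 `next` method is omitted here): with only `__getitem__` defined, the old sequence protocol drives iteration, and `__len__` returning 0 makes instances falsy.

```python
class EmptyIter(object):
    """Iterable which is False and empty."""
    def __len__(self):
        return 0
    def __getitem__(self, index):
        raise IndexError

e = EmptyIter()
print(list(e))   # -> []     (iteration stops at the first IndexError)
print(bool(e))   # -> False  (len() == 0 makes the instance falsy)
```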
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
How about ``` my_iterable = str ``` this passes your test. To speak seriously, `Iterable` in the [collections module](http://hg.python.org/cpython/file/2.7/Lib/_abcoll.py) provides: ``` def __iter__(self): while False: yield None ``` This can be considered "most pythonic" because this is what python itself uses. Note that technically all answers so far provide *iterators* (`__iter__` + `next`), not *iterables* (just `__iter__`).
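The same idiom can be dropped into a standalone class as a sketch; the `while False` body turns `__iter__` into a generator function that never yields (the class name `Empty` is just an example):

```python
class Empty:
    def __iter__(self):
        while False:      # loop body is dead code, so nothing is ever yielded
            yield None

e = Empty()
print(list(e))  # -> []
for _ in e:
    raise AssertionError("never reached")
print("iterated zero times")
```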
Another answer, as I provide a completely new solution with a different approach. In one of my libraries, I have an `EmptyIterator` such as ``` class EmptyIter(object): __name__ = 'EmptyIter' """Iterable which is False and empty""" def __len__(self): return 0 def next(self): raise StopIteration # even that is redundant def __getitem__(self, index): raise IndexError ``` It is an alternative approach which uses the following properties: * alternative iteration via the sequence protocol (see [here](http://docs.python.org/library/functions.html#iter)) * alternative "falseness" protocol as described [here](http://docs.python.org/reference/datamodel.html#object.__nonzero__).
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
Yes, there is: ``` return iter([]) ```
How about ``` my_iterable = str ``` this passes your test. To speak seriously, `Iterable` in the [collections module](http://hg.python.org/cpython/file/2.7/Lib/_abcoll.py) provides: ``` def __iter__(self): while False: yield None ``` This can be considered "most pythonic" because this is what python itself uses. Note that technically all answers so far provide *iterators* (`__iter__` + `next`), not *iterables* (just `__iter__`).
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
How about ``` my_iterable = str ``` this passes your test. To speak seriously, `Iterable` in the [collections module](http://hg.python.org/cpython/file/2.7/Lib/_abcoll.py) provides: ``` def __iter__(self): while False: yield None ``` This can be considered "most pythonic" because this is what python itself uses. Note that technically all answers so far provide *iterators* (`__iter__` + `next`), not *iterables* (just `__iter__`).
``` def do_yield(): return yield None ``` if usage of `yield` is important for you; otherwise use one of the other answers.
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
Another solution, in Python 3, is to use the new `yield from` syntax: ``` def empty_gen(): yield from () ``` Which is readable, and keep `empty_gen` as a generator.
Another answer, as I provide a completely new solution with a different approach. In one of my libraries, I have an `EmptyIterator` such as ``` class EmptyIter(object): __name__ = 'EmptyIter' """Iterable which is False and empty""" def __len__(self): return 0 def next(self): raise StopIteration # even that is redundant def __getitem__(self, index): raise IndexError ``` It is an alternative approach which uses the following properties: * alternative iteration via the sequence protocol (see [here](http://docs.python.org/library/functions.html#iter)) * alternative "falseness" protocol as described [here](http://docs.python.org/reference/datamodel.html#object.__nonzero__).
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
Yes, there is: ``` return iter([]) ```
Another answer, as I provide a completely new solution with a different approach. In one of my libraries, I have an `EmptyIterator` such as ``` class EmptyIter(object): __name__ = 'EmptyIter' """Iterable which is False and empty""" def __len__(self): return 0 def next(self): raise StopIteration # even that is redundant def __getitem__(self, index): raise IndexError ``` It is an alternative approach which uses the following properties: * alternative iteration via the sequence protocol (see [here](http://docs.python.org/library/functions.html#iter)) * alternative "falseness" protocol as described [here](http://docs.python.org/reference/datamodel.html#object.__nonzero__).
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
Another solution, in Python 3, is to use the new `yield from` syntax: ``` def empty_gen(): yield from () ``` Which is readable, and keep `empty_gen` as a generator.
How about ``` my_iterable = str ``` this passes your test. To speak seriously, `Iterable` in the [collections module](http://hg.python.org/cpython/file/2.7/Lib/_abcoll.py) provides: ``` def __iter__(self): while False: yield None ``` This can be considered "most pythonic" because this is what python itself uses. Note that technically all answers so far provide *iterators* (`__iter__` + `next`), not *iterables* (just `__iter__`).
10,621,615
I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass: ```py def test(): for x in my_iterable(): pass ``` The shortest version I could think of was: ```py def my_iterable(): for i in []: yield i ``` Is it possible to write a simpler, shorter or more beautiful (pythonic) version?
2012/05/16
[ "https://Stackoverflow.com/questions/10621615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2457/" ]
``` def do_yield(): return yield None ``` if usage of `yield` is important for you; otherwise use one of the other answers.
Another answer, as I provide a completely new solution with a different approach. In one of my libraries, I have an `EmptyIterator` such as ``` class EmptyIter(object): __name__ = 'EmptyIter' """Iterable which is False and empty""" def __len__(self): return 0 def next(self): raise StopIteration # even that is redundant def __getitem__(self, index): raise IndexError ``` It is an alternative approach which uses the following properties: * alternative iteration via the sequence protocol (see [here](http://docs.python.org/library/functions.html#iter)) * alternative "falseness" protocol as described [here](http://docs.python.org/reference/datamodel.html#object.__nonzero__).
64,523,282
I installed anaconda from the [official website](https://www.anaconda.com/) and I want to integrate it with sublime text 3. I tried to build a sublime-build json file like this: ``` { "cmd": ["C:/Users/Minh Duy/anaconda3/python.exe", "-u", "$file"], "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)", "selector": "source.python" } ``` But I got errors: ``` C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\__init__.py:138: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service from . import _distributor_init Traceback (most recent call last): File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\core\__init__.py", line 22, in <module> from . import multiarray File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\core\multiarray.py", line 12, in <module> from . import overrides File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed while importing _multiarray_umath: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Minh Duy\Documents\Self-study\Python\Exercise\test_code.py", line 1, in <module> import numpy as np File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\__init__.py", line 140, in <module> from . import core File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\core\__init__.py", line 48, in <module> raise ImportError(msg) ImportError: ``` I didn't add anaconda to PATH, but everything works fine on spyder and anaconda prompt. I don't really know if there is anything wrong with the way I set up anaconda or something else. Can someone help me with this issue?
2020/10/25
[ "https://Stackoverflow.com/questions/64523282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12074366/" ]
The DLLs of the mkl-service that it tried to load are by default located in the following directory: **C:/Users/<username>/anaconda3/Library/bin**. Since that path isn't in the PATH Environment Variable, it can't find them and raises the ImportError. To fix this, you can: 1. Add the mentioned path to the PATH Environment Variable: Open the start menu search, type *env*, click *edit environment variables for your account*, select path from the list at the top, click Edit then New, enter the mentioned path, and click OK. This isn't the best method, as it makes this directory available globally, while you need it only when you are building with Anaconda. 2. Configure your custom Sublime Text build system to add the directory to PATH every time you use that build system (temporarily for the duration of that run). This can be done simply by adding one line to the build system file, and it should look like this: ``` { "cmd": ["C:/Users/<username>/anaconda3/python.exe", "-u", "$file"], "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)", "selector": "source.python", "env": { "PYTHONIOENCODING": "utf-8", "PATH": "$PATH;C:/Users/<username>/anaconda3/Library/bin"}, } ``` This should work; however, to make it more error-resistant you should consider adding some other paths too: * C:/Users/<username>/anaconda3 * C:/Users/<username>/anaconda3/Library/mingw-w64/bin * C:/Users/<username>/anaconda3/Library/usr/bin * C:/Users/<username>/anaconda3/Scripts * C:/Users/<username>/anaconda3/bin * C:/Users/<username>/anaconda3/condabi 3. If you have more than one Anaconda environment and want more control from inside Sublime Text, then consider installing the [Conda](https://docs.anaconda.com/anaconda/user-guide/tasks/integration/sublime/) [package](https://packagecontrol.io/packages/Conda) for Sublime Text. Press Shift+Control+P to open command palette inside Sublime Text, search for Conda and click to install; once installed, change the build system to Conda from Menu -> Tools -> Build System. Then you can open the command palette and use the commands that start with Conda to manage your Anaconda Environments. Note that you need to activate an environment before using Ctrl+B to build.
First configure it with Python: type `python` in your cmd to get the Python path. Then configure it with Anaconda. ``` { "cmd": ["C:/Users/usr_name/AppData/Local/Programs/Python/Python37-32/python.exe", "-u", "$file"], "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)", "selector": "source.python" } ```
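The directory list recommended in the mkl-service answer above can be assembled programmatically. A minimal sketch of the PATH value the build system's "env" entry needs — the `conda_root` below is a placeholder you must point at your actual Anaconda install:

```python
# Sketch: build the PATH value for the Sublime build system's "env" entry.
# conda_root is an assumption -- replace <username> with your own account.
conda_root = "C:/Users/<username>/anaconda3"

extra_dirs = [
    conda_root,
    conda_root + "/Library/mingw-w64/bin",
    conda_root + "/Library/usr/bin",
    conda_root + "/Library/bin",
    conda_root + "/Scripts",
]

# Windows separates PATH entries with ';'; "$PATH" keeps the existing value.
path_value = "$PATH;" + ";".join(extra_dirs)
print(path_value)
```

Pasting the printed value into the build system's `"PATH"` key gives the same effect as listing the directories by hand.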
64,708,800
I have been able to successfully detect an object(face and eye) using haar cascade classifier in python using opencv. When the object is detected, a rectangle is shown around the object. I want to get coordinates of mid point of the two eyes. and want to store them in a array. Can any one help me? how can i do this. any guide
2020/11/06
[ "https://Stackoverflow.com/questions/64708800", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11828549/" ]
Haskell doesn't allow this because it would be ambiguous. The value constructor `Const` is effectively a function, which may be clearer if you ask GHCi about its type: ``` > :t Const Const :: Bool -> Prop ``` If you attempt to add one more `Const` constructor in the same module, you'd have two 'functions' called `Const` in the same module. You can't have that.
This is somewhat horrible, but will basically let you do what you want: ```hs {-# LANGUAGE PatternSynonyms, TypeFamilies, ViewPatterns #-} data Prop = PropConst Bool | PropVar Char | PropNot Prop | PropOr Prop Prop | PropAnd Prop Prop | PropImply Prop Prop data Formula = FormulaConst Bool | FormulaVar Prop | FormulaNot Formula | FormulaAnd Formula Formula | FormulaOr Formula Formula | FormulaImply Formula Formula class PropOrFormula t where type Var t constructConst :: Bool -> t deconstructConst :: t -> Maybe Bool constructVar :: Var t -> t deconstructVar :: t -> Maybe (Var t) constructNot :: t -> t deconstructNot :: t -> Maybe t constructOr :: t -> t -> t deconstructOr :: t -> Maybe (t, t) constructAnd :: t -> t -> t deconstructAnd :: t -> Maybe (t, t) constructImply :: t -> t -> t deconstructImply :: t -> Maybe (t, t) instance PropOrFormula Prop where type Var Prop = Char constructConst = PropConst deconstructConst (PropConst x) = Just x deconstructConst _ = Nothing constructVar = PropVar deconstructVar (PropVar x) = Just x deconstructVar _ = Nothing constructNot = PropNot deconstructNot (PropNot x) = Just x deconstructNot _ = Nothing constructOr = PropOr deconstructOr (PropOr x y) = Just (x, y) deconstructOr _ = Nothing constructAnd = PropAnd deconstructAnd (PropAnd x y) = Just (x, y) deconstructAnd _ = Nothing constructImply = PropImply deconstructImply (PropImply x y) = Just (x, y) deconstructImply _ = Nothing instance PropOrFormula Formula where type Var Formula = Prop constructConst = FormulaConst deconstructConst (FormulaConst x) = Just x deconstructConst _ = Nothing constructVar = FormulaVar deconstructVar (FormulaVar x) = Just x deconstructVar _ = Nothing constructNot = FormulaNot deconstructNot (FormulaNot x) = Just x deconstructNot _ = Nothing constructOr = FormulaOr deconstructOr (FormulaOr x y) = Just (x, y) deconstructOr _ = Nothing constructAnd = FormulaAnd deconstructAnd (FormulaAnd x y) = Just (x, y) deconstructAnd _ = Nothing constructImply = FormulaImply deconstructImply (FormulaImply x y) = Just (x, y) deconstructImply _ = Nothing pattern Const x <- (deconstructConst -> Just x) where Const x = constructConst x pattern Var x <- (deconstructVar -> Just x) where Var x = constructVar x pattern Not x <- (deconstructNot -> Just x) where Not x = constructNot x pattern Or x y <- (deconstructOr -> Just (x, y)) where Or x y = constructOr x y pattern And x y <- (deconstructAnd -> Just (x, y)) where And x y = constructAnd x y pattern Imply x y <- (deconstructImply -> Just (x, y)) where Imply x y = constructImply x y {-# COMPLETE Const, Var, Not, Or, And, Imply :: Prop #-} {-# COMPLETE Const, Var, Not, Or, And, Imply :: Formula #-} ``` If <https://gitlab.haskell.org/ghc/ghc/-/issues/8583> were ever done, then this could be substantially cleaned up.
53,014,961
It seems like a trivial task however, I can't find a solution for doing this using python. Given the following string: ``` "Lorem/ipsum/dolor/sit amet consetetur" ``` I would like to output ``` "Lorem/ipsum/dolor/sit ametconsetetur" ``` Hence, removing the single whitespace between `amet` and `consetetur`. Using `.replace(" ","")` replaces all whitespaces, giving me: ``` "Lorem/ipsum/dolor/sitametconsetetur" ``` which is not what I want. How can I solve this?
2018/10/26
[ "https://Stackoverflow.com/questions/53014961", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6341510/" ]
use regex and word boundary: ``` >>> s="Lorem/ipsum/dolor/sit amet consetetur" >>> import re >>> re.sub(r"\b \b","",s) 'Lorem/ipsum/dolor/sit ametconsetetur' >>> ``` This technique also handles the more general case: ``` >>> s="Lorem/ipsum/dolor/sit amet consetetur adipisci velit" >>> re.sub(r"\b \b","",s) 'Lorem/ipsum/dolor/sit ametconsetetur adipiscivelit' ``` for start & end spaces, you'll have to work slightly harder, but it's still doable: ``` >>> s=" Lorem/ipsum/dolor/sit amet consetetur adipisci velit " >>> re.sub(r"(^|\b) (\b|$)","",s) 'Lorem/ipsum/dolor/sit ametconsetetur adipiscivelit' ``` Just for fun, a last variant: use `re.split` with a multiple space separation, preserve the split char using a group, then join the strings again, removing the spaces only if the string has some non-space in it: ``` "".join([x if x.isspace() else x.replace(" ","") for x in re.split("( {2,})",s)]) ``` (I suppose that this is slower because of list creation & join though)
``` s[::-1].replace(' ', '', 1)[::-1] ``` * Reverse the string * Delete the first space * Reverse the string back
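The reverse-and-replace trick is easy to verify against the question's example; a minimal check:

```python
s = "Lorem/ipsum/dolor/sit amet consetetur"

# Reverse, drop the first space (i.e. the last space of the original),
# then reverse back.
result = s[::-1].replace(' ', '', 1)[::-1]

assert result == "Lorem/ipsum/dolor/sit ametconsetetur"
```

Because `str.replace` takes a count, only one occurrence is removed, and the double reversal makes that occurrence the last space rather than the first.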
53,014,961
It seems like a trivial task however, I can't find a solution for doing this using python. Given the following string: ``` "Lorem/ipsum/dolor/sit amet consetetur" ``` I would like to output ``` "Lorem/ipsum/dolor/sit ametconsetetur" ``` Hence, removing the single whitespace between `amet` and `consetetur`. Using `.replace(" ","")` replaces all whitespaces, giving me: ``` "Lorem/ipsum/dolor/sitametconsetetur" ``` which is not what I want. How can I solve this?
2018/10/26
[ "https://Stackoverflow.com/questions/53014961", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6341510/" ]
use regex and word boundary: ``` >>> s="Lorem/ipsum/dolor/sit amet consetetur" >>> import re >>> re.sub(r"\b \b","",s) 'Lorem/ipsum/dolor/sit ametconsetetur' >>> ``` This technique also handles the more general case: ``` >>> s="Lorem/ipsum/dolor/sit amet consetetur adipisci velit" >>> re.sub(r"\b \b","",s) 'Lorem/ipsum/dolor/sit ametconsetetur adipiscivelit' ``` for start & end spaces, you'll have to work slightly harder, but it's still doable: ``` >>> s=" Lorem/ipsum/dolor/sit amet consetetur adipisci velit " >>> re.sub(r"(^|\b) (\b|$)","",s) 'Lorem/ipsum/dolor/sit ametconsetetur adipiscivelit' ``` Just for fun, a last variant: use `re.split` with a multiple space separation, preserve the split char using a group, then join the strings again, removing the spaces only if the string has some non-space in it: ``` "".join([x if x.isspace() else x.replace(" ","") for x in re.split("( {2,})",s)]) ``` (I suppose that this is slower because of list creation & join though)
``` import re a="Lorem/ipsum/dolor/sit amet consetetur" print(re.sub('(\w)\s{1}(\w)',r'\1\2',a)) ```
53,014,961
It seems like a trivial task however, I can't find a solution for doing this using python. Given the following string: ``` "Lorem/ipsum/dolor/sit amet consetetur" ``` I would like to output ``` "Lorem/ipsum/dolor/sit ametconsetetur" ``` Hence, removing the single whitespace between `amet` and `consetetur`. Using `.replace(" ","")` replaces all whitespaces, giving me: ``` "Lorem/ipsum/dolor/sitametconsetetur" ``` which is not what I want. How can I solve this?
2018/10/26
[ "https://Stackoverflow.com/questions/53014961", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6341510/" ]
``` s[::-1].replace(' ', '', 1)[::-1] ``` * Reverse the string * Delete the first space * Reverse the string back
``` import re a="Lorem/ipsum/dolor/sit amet consetetur" print(re.sub('(\w)\s{1}(\w)',r'\1\2',a)) ```
68,588,398
I would like to define python function which takes a list of dictionaries in which some keys could be lists and then returns a list of list of dictionaries in which each key is a single value, which corresponds to all the combinations of options (an option is picking a single value from each list). Consider the following input: ``` input = [ { "name": "A", "option1": [1, 2], "option2": ["a1", "a2"] } { "name": "B", "option1": [3, 4], "option2": "b1" } ] ``` Given this input, the desired output would be: ``` output = [[{"name": "A", "option1": 1, "option2": "a1"}{"name": "B", "option1": 3, "option2": "b1"}] [{"name": "A", "option1": 1, "option2": "a1"}{"name": "B", "option1": 4, "option2": "b1"}] [{"name": "A", "option1": 1, "option2": "a2"}{"name": "B", "option1": 3, "option2": "b1"}] [{"name": "A", "option1": 1, "option2": "a2"}{"name": "B", "option1": 4, "option2": "b1"}] [{"name": "A", "option1": 2, "option2": "a1"}{"name": "B", "option1": 3, "option2": "b1"}] [{"name": "A", "option1": 2, "option2": "a1"}{"name": "B", "option1": 4, "option2": "b1"}] [{"name": "A", "option1": 2, "option2": "a2"}{"name": "B", "option1": 3, "option2": "b1"}] [{"name": "A", "option1": 2, "option2": "a2"}{"name": "B", "option1": 4, "option2": "b1"}]] ```
2021/07/30
[ "https://Stackoverflow.com/questions/68588398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13613091/" ]
If you have all vec lists in a single list of lists, you can unpack this list when passing it to the product function: ``` list_vecs = [vec, vec2, vec3, vec4] list(product(*list_vecs, repeat=1)) ``` Concerning the \* (star-notation) see the Python docs [here](https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists): > > For instance, the built-in range() function expects separate start and stop arguments. If they are not available separately, write the function call with the \*-operator to unpack the arguments out of a list or tuple: > > > ``` >>> list(range(3, 6)) # normal call with separate arguments [3, 4, 5] >>> args = [3, 6] >>> list(range(*args)) # call with arguments unpacked from a list [3, 4, 5] ``` In case `vec4` is only defined later, just append it to the `list_vecs`: `list_vecs.append(vec4)`
This solution is almost the same as @mcsoini's, but with a little more explanation: Here, ``` vec=[['A1','A2','A3'], ['B1','B2'], ['C1','C2','C3'],vec4] ``` `vec` is a list of lists. The first 3 lists are `vec1,2,3`. `vec4` can be added later on. Also, you can add more lists to `vec` using `vec.append(<list>)` Now, instead of doing `vec[0],vec[1]...`, we will simply use the `*` for unpacking the list. This will pass all the lists to `itertools.product()`. ``` list(product(*vec,repeat=1)) ``` Also, this takes care of the number of lists, because doing `vec[0]...` is not only tedious but also leads to errors if the index is out of range, or will only consider those lists which are indexed. ``` vec=[['A1','A2','A3'], ['B1','B2'], ['C1','C2','C3'],vec4] result = list(product(*vec,repeat=1)) ```
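The same star-unpacking idea can be applied directly to the question's list-of-dicts input. A sketch — the `expand` name and the scalar-wrapping are my own, assuming Python 3.7+ dict ordering:

```python
from itertools import product

def expand(entries):
    # For each dict, expand its list-valued options into all single-valued
    # variants, then take the cross product across the dicts.
    per_entry = []
    for d in entries:
        keys = list(d)
        # Wrap scalars in a one-element list so product() treats them
        # as a single option.
        values = [v if isinstance(v, list) else [v] for v in d.values()]
        per_entry.append([dict(zip(keys, combo)) for combo in product(*values)])
    return [list(combo) for combo in product(*per_entry)]

data = [
    {"name": "A", "option1": [1, 2], "option2": ["a1", "a2"]},
    {"name": "B", "option1": [3, 4], "option2": "b1"},
]
out = expand(data)
assert len(out) == 8  # 4 variants of A x 2 variants of B
assert out[0] == [
    {"name": "A", "option1": 1, "option2": "a1"},
    {"name": "B", "option1": 3, "option2": "b1"},
]
```

The first result matches the first row of the question's desired output; the remaining seven follow in `product`'s lexicographic order.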
44,948,661
I am new to python and word2vec and keep getting a "you must first build vocabulary before training the model" error. What is wrong with my code? Here is my code: ``` file_object=open("SupremeCourt.txt","w") from gensim.models import word2vec data = word2vec.Text8Corpus('SupremeCourt.txt') model = word2vec.Word2Vec(data, size=200) out=model.most_similar() print(out[1]) print(out[2]) ```
2017/07/06
[ "https://Stackoverflow.com/questions/44948661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8264914/" ]
Have a look at this: <https://tweepy.readthedocs.io/en/v3.5.0/cursor_tutorial.html> And try this: ``` import tweepy auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET) api = tweepy.API(auth) for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items(): # Do something pass ``` In your case you have a max number of tweets to get, so as per the linked tutorial you could do: ``` import tweepy MAX_TWEETS = 5000000000000000000000 auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET) api = tweepy.API(auth) for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items(MAX_TWEETS): # Do something pass ``` If you want tweets after a given ID, you can also pass that argument.
Check the Twitter API documentation; it probably allows parsing just 300 tweets. I would recommend forgetting the API and doing it with requests with streaming. The API is an implementation of requests with limitations.
44,948,661
I am new to python and word2vec and keep getting a "you must first build vocabulary before training the model" error. What is wrong with my code? Here is my code: ``` file_object=open("SupremeCourt.txt","w") from gensim.models import word2vec data = word2vec.Text8Corpus('SupremeCourt.txt') model = word2vec.Word2Vec(data, size=200) out=model.most_similar() print(out[1]) print(out[2]) ```
2017/07/06
[ "https://Stackoverflow.com/questions/44948661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8264914/" ]
Sorry, I can't answer in comment, too long. :) Sure :) Check this example: Advanced searched for #data keyword 2015 may - 2016 july Got this url: <https://twitter.com/search?l=&q=%23data%20since%3A2015-05-01%20until%3A2016-07-31&src=typd> ``` session = requests.session() keyword = 'data' date1 = '2015-05-01' date2 = '2016-07-31' session.get('https://twitter.com/search?l=&q=%23+keyword+%20since%3A+date1+%20until%3A+date2&src=typd', streaming = True) ``` Now we have all the requested tweets, Probably you could have problems with 'pagination' Pagination url -> <https://twitter.com/i/search/timeline?vertical=news&q=%23data%20since%3A2015-05-01%20until%3A2016-07-31&src=typd&include_available_features=1&include_entities=1&max_position=TWEET-759522481271078912-759538448860581892-BD1UO2FFu9QAAAAAAAAETAAAAAcAAAASAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&reset_error_state=false> Probably you could put a random tweet id, or you can parse first, or requests some data from twitter. It can be done. Use Chrome's networking tab to find all the requested information :)
Check the Twitter API documentation; it probably allows parsing just 300 tweets. I would recommend forgetting the API and doing it with requests with streaming. The API is an implementation of requests with limitations.
44,948,661
I am new to python and word2vec and keep getting a "you must first build vocabulary before training the model" error. What is wrong with my code? Here is my code: ``` file_object=open("SupremeCourt.txt","w") from gensim.models import word2vec data = word2vec.Text8Corpus('SupremeCourt.txt') model = word2vec.Word2Vec(data, size=200) out=model.most_similar() print(out[1]) print(out[2]) ```
2017/07/06
[ "https://Stackoverflow.com/questions/44948661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8264914/" ]
This code worked for me. ``` import tweepy import pandas as pd import os #Twitter Access auth = tweepy.OAuthHandler( 'xxx','xxx') auth.set_access_token('xxx-xxx','xxx') api = tweepy.API(auth,wait_on_rate_limit = True) df = pd.DataFrame(columns=['text', 'source', 'url']) msgs = [] msg =[] for tweet in tweepy.Cursor(api.search, q='#bmw', rpp=100).items(10): msg = [tweet.text, tweet.source, tweet.source_url] msg = tuple(msg) msgs.append(msg) df = pd.DataFrame(msgs) ```
Check the Twitter API documentation; it probably allows parsing just 300 tweets. I would recommend forgetting the API and doing it with requests with streaming. The API is an implementation of requests with limitations.
44,948,661
I am new to python and word2vec and keep getting a "you must first build vocabulary before training the model" error. What is wrong with my code? Here is my code: ``` file_object=open("SupremeCourt.txt","w") from gensim.models import word2vec data = word2vec.Text8Corpus('SupremeCourt.txt') model = word2vec.Word2Vec(data, size=200) out=model.most_similar() print(out[1]) print(out[2]) ```
2017/07/06
[ "https://Stackoverflow.com/questions/44948661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8264914/" ]
Have a look at this: <https://tweepy.readthedocs.io/en/v3.5.0/cursor_tutorial.html> And try this: ``` import tweepy auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET) api = tweepy.API(auth) for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items(): # Do something pass ``` In your case you have a max number of tweets to get, so as per the linked tutorial you could do: ``` import tweepy MAX_TWEETS = 5000000000000000000000 auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET) api = tweepy.API(auth) for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items(MAX_TWEETS): # Do something pass ``` If you want tweets after a given ID, you can also pass that argument.
Sorry, I can't answer in comment, too long. :) Sure :) Check this example: Advanced searched for #data keyword 2015 may - 2016 july Got this url: <https://twitter.com/search?l=&q=%23data%20since%3A2015-05-01%20until%3A2016-07-31&src=typd> ``` session = requests.session() keyword = 'data' date1 = '2015-05-01' date2 = '2016-07-31' session.get('https://twitter.com/search?l=&q=%23+keyword+%20since%3A+date1+%20until%3A+date2&src=typd', streaming = True) ``` Now we have all the requested tweets, Probably you could have problems with 'pagination' Pagination url -> <https://twitter.com/i/search/timeline?vertical=news&q=%23data%20since%3A2015-05-01%20until%3A2016-07-31&src=typd&include_available_features=1&include_entities=1&max_position=TWEET-759522481271078912-759538448860581892-BD1UO2FFu9QAAAAAAAAETAAAAAcAAAASAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&reset_error_state=false> Probably you could put a random tweet id, or you can parse first, or requests some data from twitter. It can be done. Use Chrome's networking tab to find all the requested information :)
44,948,661
I am new to python and word2vec and keep getting a "you must first build vocabulary before training the model" error. What is wrong with my code? Here is my code: ``` file_object=open("SupremeCourt.txt","w") from gensim.models import word2vec data = word2vec.Text8Corpus('SupremeCourt.txt') model = word2vec.Word2Vec(data, size=200) out=model.most_similar() print(out[1]) print(out[2]) ```
2017/07/06
[ "https://Stackoverflow.com/questions/44948661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8264914/" ]
Have a look at this: <https://tweepy.readthedocs.io/en/v3.5.0/cursor_tutorial.html> And try this: ``` import tweepy auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET) api = tweepy.API(auth) for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items(): # Do something pass ``` In your case you have a max number of tweets to get, so as per the linked tutorial you could do: ``` import tweepy MAX_TWEETS = 5000000000000000000000 auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET) api = tweepy.API(auth) for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items(MAX_TWEETS): # Do something pass ``` If you want tweets after a given ID, you can also pass that argument.
This code worked for me. ``` import tweepy import pandas as pd import os #Twitter Access auth = tweepy.OAuthHandler( 'xxx','xxx') auth.set_access_token('xxx-xxx','xxx') api = tweepy.API(auth,wait_on_rate_limit = True) df = pd.DataFrame(columns=['text', 'source', 'url']) msgs = [] msg =[] for tweet in tweepy.Cursor(api.search, q='#bmw', rpp=100).items(10): msg = [tweet.text, tweet.source, tweet.source_url] msg = tuple(msg) msgs.append(msg) df = pd.DataFrame(msgs) ```
44,948,661
I am new to python and word2vec and keep getting a "you must first build vocabulary before training the model" error. What is wrong with my code? Here is my code: ``` file_object=open("SupremeCourt.txt","w") from gensim.models import word2vec data = word2vec.Text8Corpus('SupremeCourt.txt') model = word2vec.Word2Vec(data, size=200) out=model.most_similar() print(out[1]) print(out[2]) ```
2017/07/06
[ "https://Stackoverflow.com/questions/44948661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8264914/" ]
Sorry, I can't answer in comment, too long. :) Sure :) Check this example: Advanced searched for #data keyword 2015 may - 2016 july Got this url: <https://twitter.com/search?l=&q=%23data%20since%3A2015-05-01%20until%3A2016-07-31&src=typd> ``` session = requests.session() keyword = 'data' date1 = '2015-05-01' date2 = '2016-07-31' session.get('https://twitter.com/search?l=&q=%23+keyword+%20since%3A+date1+%20until%3A+date2&src=typd', streaming = True) ``` Now we have all the requested tweets, Probably you could have problems with 'pagination' Pagination url -> <https://twitter.com/i/search/timeline?vertical=news&q=%23data%20since%3A2015-05-01%20until%3A2016-07-31&src=typd&include_available_features=1&include_entities=1&max_position=TWEET-759522481271078912-759538448860581892-BD1UO2FFu9QAAAAAAAAETAAAAAcAAAASAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&reset_error_state=false> Probably you could put a random tweet id, or you can parse first, or requests some data from twitter. It can be done. Use Chrome's networking tab to find all the requested information :)
This code worked for me. ``` import tweepy import pandas as pd import os #Twitter Access auth = tweepy.OAuthHandler( 'xxx','xxx') auth.set_access_token('xxx-xxx','xxx') api = tweepy.API(auth,wait_on_rate_limit = True) df = pd.DataFrame(columns=['text', 'source', 'url']) msgs = [] msg =[] for tweet in tweepy.Cursor(api.search, q='#bmw', rpp=100).items(10): msg = [tweet.text, tweet.source, tweet.source_url] msg = tuple(msg) msgs.append(msg) df = pd.DataFrame(msgs) ```
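A side note on the scraping answer above: the search URL can be assembled with `urllib.parse.quote` instead of hand-encoding `%23` and `%20`. A small sketch; the `search_url` helper name is mine, not from the answer:

```python
from urllib.parse import quote

def search_url(keyword, since, until):
    # Build the advanced-search query, then percent-encode it:
    # '#' -> %23, ' ' -> %20, ':' -> %3A
    query = '#{} since:{} until:{}'.format(keyword, since, until)
    return 'https://twitter.com/search?l=&q={}&src=typd'.format(quote(query, safe=''))

url = search_url('data', '2015-05-01', '2016-07-31')
print(url)
```

With these inputs the result matches the URL shown in the answer exactly.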
55,013,809
OK I was afraid to use the terminal, so I installed the python-3.7.2-macosx10.9 package downloaded from python.org Ran the certificate and shell profile scripts, everything seems fine. Now the "which python3" has changed the path from 3.6 to the new 3.7.2 So everything seems fine, correct? My question (of 2) is what's going on with the old python3.6 folder still in the applications folder. Can you just delete it safely? Why when you install a new version does it not at least ask you if you want to update or install and keep both versions? Second question, how would you do this from the terminal? I see the first step is to sudo to the root. I've forgotten the rest. But from the terminal, would this simply add the new version and leave the older one like the package installer? It's pretty simple to use the package installer and then delete a folder. So, thanks in advance. I'm new to python and have not much confidence using the terminal and all the powerful shell commands. And yeah I see all the Brew enthusiasts. I DON'T want to use Brew for the moment. The python snakes nest of pathways is a little confusing, for the moment. I don't want to get lost with a zillion pathways from Brew because it's confusing for the moment. I love Brew, leave me alone.
2019/03/06
[ "https://Stackoverflow.com/questions/55013809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9291766/" ]
Yes, you can install Python 3.7 or Python 3.8 using the installer that you can download from [python.org](https://www.python.org/downloads/). It doesn't automatically delete the older version, so you can keep using it. For example, if you have `python3.7` and `python3.8`, you can run either one from your terminal. On the other hand, it is quite easy to install using Homebrew; you can follow the instructions in this [article on how to install Python3 on MacOS](https://jun711.github.io/devops/how-to-install-python3-on-mac-os/#homebrew)
Each version of the Python installation is independent of the others, so it's safe to delete the version you don't want. But be cautious, because deleting one can lead to broken dependencies :-). You can run any version by invoking it explicitly, i.e. $python3.6 or $python3.7. The best approach is to use virtual environments for your projects to enhance consistency; see pipenv.
32,736,350
I did found quite a lot about this error, but somehow none of the suggested solutions resolved the problem. I am trying to use JNA bindings for libgphoto2 under Ubuntu in Eclipse (moderate experience with Java on Eclipse, none whatsoever on Ubuntu, I'm afraid). The bindings in question I want to use are here: <http://angryelectron.com/projects/libgphoto2-jna/> I followed the steps described on that page, and made a simple test client that failed with the above error. So I reduced the test client until the only thing I tried to do was to instantiate a GPhoto2 object, which still produced the error. The test client looks like this: ``` import com.angryelectron.gphoto2.*; public class test_class { public static void main(String[] args) { GPhoto2 cam = new GPhoto2(); } } ``` The errors I get take up considerably more space: ``` Exception in thread "main" java.lang.NoClassDefFoundError: com/sun/jna/Structure at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:760) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at test_class.main(test_class.java:12) Caused by: java.lang.ClassNotFoundException: com.sun.jna.Structure at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 
13 more ``` libgphoto2 itself is installed, it runs from the command line, I even have the development headers and am able to call GPhoto2 functions from python, so the problem can't be located there. When looking at the .class files in Eclipse, however, they didn't have any definitions. So I figured that might be the problem, especially since there was an error when building the whole thing with ant (although the .jar was succesfully exported, from what I could make out the error concerned only the generation of documentation). So I loaded the source into eclipse and built the .jar myself. At this occasion Eclipse stated there were warnings during the build (though no errors), but didn't show me the actual warnings. If anyone could tell me where the hell the build log went, that might already help something. I searched for it everywhere without success, and if I click on "details" in eclipse it merely tells me where the warnings occured, not what they were. Be that as it may, a warning isn't necessarily devastating, so I imported the resulting Jar into the above client. I checked the .class files, this time they contained all the code. But I still get the exact same list of errors (yes, I have made very sure that the old library was removed from the classpath and the new ones added. I repeated the process several times, just in case). Since I don't have experience with building jars, I made a small helloworld jar, just to see if I could call that from another program or if I'd be getting similar errors. It worked without a hitch. I even tried to reproduce the problem deliberately by exporting it with various options, but it still worked. I tried re-exporting the library I actully need with the settings that had worked during my experiment, but they still wouldn't run. I'm pretty much stuck by now. Any hints that help me resolve the problem would be greatly appreciated.
2015/09/23
[ "https://Stackoverflow.com/questions/32736350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4428658/" ]
In addition to what @Paul Whelan has said, you might have better luck by just getting the missing jar directly. Get the missing library [here](https://github.com/java-native-access/jna), set the classpath, and then re-run the application to see whether it runs fine or not.
What version of Java are you using? com/sun/jna/Structure may only work with certain JVMs. In general, packages such as sun.*, which are outside of the Java platform, can be different across OS platforms (Solaris, Windows, Linux, Macintosh, etc.) and can change at any time without notice between SDK versions (1.2, 1.2.1, 1.2.3, etc). Programs that contain direct calls to the sun.* packages are not 100% Pure Java. More details [here](http://www.oracle.com/technetwork/java/faq-sun-packages-142232.html)
32,736,350
I did found quite a lot about this error, but somehow none of the suggested solutions resolved the problem. I am trying to use JNA bindings for libgphoto2 under Ubuntu in Eclipse (moderate experience with Java on Eclipse, none whatsoever on Ubuntu, I'm afraid). The bindings in question I want to use are here: <http://angryelectron.com/projects/libgphoto2-jna/> I followed the steps described on that page, and made a simple test client that failed with the above error. So I reduced the test client until the only thing I tried to do was to instantiate a GPhoto2 object, which still produced the error. The test client looks like this: ``` import com.angryelectron.gphoto2.*; public class test_class { public static void main(String[] args) { GPhoto2 cam = new GPhoto2(); } } ``` The errors I get take up considerably more space: ``` Exception in thread "main" java.lang.NoClassDefFoundError: com/sun/jna/Structure at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:760) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at test_class.main(test_class.java:12) Caused by: java.lang.ClassNotFoundException: com.sun.jna.Structure at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 
13 more ``` libgphoto2 itself is installed, it runs from the command line, I even have the development headers and am able to call GPhoto2 functions from python, so the problem can't be located there. When looking at the .class files in Eclipse, however, they didn't have any definitions. So I figured that might be the problem, especially since there was an error when building the whole thing with ant (although the .jar was succesfully exported, from what I could make out the error concerned only the generation of documentation). So I loaded the source into eclipse and built the .jar myself. At this occasion Eclipse stated there were warnings during the build (though no errors), but didn't show me the actual warnings. If anyone could tell me where the hell the build log went, that might already help something. I searched for it everywhere without success, and if I click on "details" in eclipse it merely tells me where the warnings occured, not what they were. Be that as it may, a warning isn't necessarily devastating, so I imported the resulting Jar into the above client. I checked the .class files, this time they contained all the code. But I still get the exact same list of errors (yes, I have made very sure that the old library was removed from the classpath and the new ones added. I repeated the process several times, just in case). Since I don't have experience with building jars, I made a small helloworld jar, just to see if I could call that from another program or if I'd be getting similar errors. It worked without a hitch. I even tried to reproduce the problem deliberately by exporting it with various options, but it still worked. I tried re-exporting the library I actully need with the settings that had worked during my experiment, but they still wouldn't run. I'm pretty much stuck by now. Any hints that help me resolve the problem would be greatly appreciated.
2015/09/23
[ "https://Stackoverflow.com/questions/32736350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4428658/" ]
In addition to what @Paul Whelan has said, you might have better luck by just getting the missing jar directly. Get the missing library [here](https://github.com/java-native-access/jna), set the classpath, and then re-run the application to see whether it runs fine or not.
Your jar needs a MANIFEST.MF which tells your application where the library is found. Create the file in you project root-directory in eclipse and add the following lines: ``` Manifest-Version: 1.0 Class-Path: <PATH_TO_LIB__CAN_BE_RELATIVE>.jar // e.g Class-Path: ../test.jar <empty line> ``` Right-click your project in eclipse, go to **Export->next->next->next->Use existing manifest from workspace**, select it and click on finish. This should work. Another solution is to compile the classes into the jar itself with Maven.
45,384,065
I am looking for a way to run a method every second, regardless of how long it takes to run. In looking for help with that, I ran across [Run certain code every n seconds](https://stackoverflow.com/questions/3393612/run-certain-code-every-n-seconds) and in trying it, found that it doesn't work correctly. It appears to have the very problem I'm trying to avoid: drift. I tried adding a "sleep(0.5)" after the print, and it does in fact slow down the loop, and the interval stays at the 1.003 (roughly) seconds. Is there a way to fix this, to do what I want? ``` (venv) 20170728-153445 mpeck@bilbo:~/dev/whiskerlabs/aphid/loadtest$ cat a.py import threading import time def woof(): threading.Timer(1.0, woof).start() print "Hello at %s" % time.time() woof() (venv) 20170728-153449 mpeck@bilbo:~/dev/whiskerlabs/aphid/loadtest$ python a.py Hello at 1501281291.84 Hello at 1501281292.85 Hello at 1501281293.85 Hello at 1501281294.85 Hello at 1501281295.86 Hello at 1501281296.86 Hello at 1501281297.86 Hello at 1501281298.87 Hello at 1501281299.87 Hello at 1501281300.88 Hello at 1501281301.88 Hello at 1501281302.89 Hello at 1501281303.89 Hello at 1501281304.89 Hello at 1501281305.89 Hello at 1501281306.9 Hello at 1501281307.9 Hello at 1501281308.9 Hello at 1501281309.91 Hello at 1501281310.91 Hello at 1501281311.91 Hello at 1501281312.91 Hello at 1501281313.92 Hello at 1501281314.92 Hello at 1501281315.92 Hello at 1501281316.93 ```
2017/07/29
[ "https://Stackoverflow.com/questions/45384065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8217211/" ]
1. Don't use a `threading.Timer` if you don't actually need a new thread each time; to run a function periodically, `sleep` in a loop will do (possibly in a single separate thread). 2. Whatever method you use to schedule the next execution, don't wait for the exact amount of time you use as the interval - execution of the other statements takes time, so the result is drift, as you can see. Instead, store the initial time in a variable, and at each iteration calculate the next time at which you want to schedule execution, then sleep for the difference between now and then. ``` import time interval = 1. next_t = time.time() while True: next_t += interval time.sleep(next_t - time.time()) # do whatever you want to do ``` (of course you may refine it for better overall accuracy, but this at least should avoid drift)
I'm pretty sure the problem with that code is that it takes Python some time (apparently around .3s) to execute the call to your function `woof`, instantiate a new `threading.Timer` object, and print the current time. So basically, after your first call to the function, and the creation of a `threading.Timer`, Python waits exactly 1s, then calls the function `woof` (a decisecond or so), creates a new `Timer` object (yet another decisecond at least), and finally prints the current time with some delay. The solution to actually run a program every second seems to be the Twisted library, as said on [this other post](https://stackoverflow.com/a/474570/8232125), but I didn't really try it myself... **Edit:** I would mark the question as possible duplicate but I apparently don't have enough reputation to do that yet... If someone can be kind enough to do so with at least the link I provided, it would be cool :)
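To make the absolute-deadline idea from the first answer concrete, it can be wrapped in a helper built on `time.monotonic` (better suited than `time.time` for measuring intervals, since it never jumps backwards). The helper name and parameters are mine; this is a sketch, not the answer's exact code:

```python
import time

def run_periodic(task, interval, iterations):
    """Call task() every `interval` seconds against an absolute deadline,
    so per-iteration overhead does not accumulate as drift."""
    start = time.monotonic()
    next_t = start
    elapsed = []
    for _ in range(iterations):
        next_t += interval
        delay = next_t - time.monotonic()
        if delay > 0:  # only sleep if we are not already late
            time.sleep(delay)
        task()
        elapsed.append(time.monotonic() - start)
    return elapsed

ticks = run_periodic(lambda: None, 0.02, 10)
```

After 10 iterations at a 0.02 s interval, the total elapsed time stays close to 0.2 s regardless of how long each `task()` call takes, because the sleep is computed from the deadline rather than being a fixed interval.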
6,686,576
What i'm trying to achieve is playing a guitar chord from my python application. I know (or can calculate) the frequencies in the chord if needed. I'm thinking that even if I do the low level leg work of producing multiple sine waves at the right frequencies it wont sound right due to the envelope needing to be correct also, else it wont sound like a guitar but more of a hum. Tantilisingly, the linux sox command play can produce a pretty convincing individual note with: `play -n synth 0 pluck E3` So really what i'm asking is, a) is it possible to shoehorn the play command to do a whole chord (ideally with slightly differing start times to simulate the plectrum string stroke) -- i've not been able to do this but maybe theres some bash fairydust that'll fork a process or such so it sounds right. If this is possible i'd settle for just calling out to a bash command from my code (I dont like reinventing the wheel). b) (even better) is there a way in python of achieving this (a guitar chord sound) ? I've seen a few accessable python midi librarys but frankly midi isn't a good fit for the sound I want, as far as i can tell.
2011/07/13
[ "https://Stackoverflow.com/questions/6686576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/384388/" ]
a) The hackish way is to spawn a background subprocess to run each `play` command. Since a background subprocess doesn't make the shell wait for it to finish, you can have multiple `play`s running at once. Something like this would work: ``` for p in "C3" "E3" "G3"; do ( play -n synth 3 pluck $p & ); done ``` I see that ninjagecko posted basically the same thing as I'm writing this. b) The key point to realize about MIDI data is that it's more like a high-level recipe for producing a sound, not the sound itself. In other words, each MIDI note is expressed as a pitch, a dynamic level, start and stop times, and assorted other metadata. The actual sound is produced by a synthesizer, and different synthesizers do the job with different levels of quality. If you don't like the sound you're getting from your MIDI files, it's not a problem with MIDI, it's a problem with your synthesizer, so you just need to find a better one. (In practice, that usually takes $$$; most free or cheap synthesizers are pretty bad.) An alternative would be to actually dig under the hood, so to speak, and implement an algorithm to create your own guitar sound. For that you'd want to look into [digital signal processing](http://en.wikipedia.org/wiki/Digital_signal_processing), in particular something like the [Karplus-Strong algorithm](http://en.wikipedia.org/wiki/Karplus-Strong_string_synthesis) (one of many ways to create a synthetic plucked string sound). It's a fascinating subject, but if your only exposure to sound synthesis is at the level of `play` and creating MIDI files, you'd have a bit of learning to do. Additionally, Python probably isn't the best choice of language, since execution speed is pretty critical. If you're curious about DSP, you might want to download and play with [ChucK](http://chuck.cs.princeton.edu/).
*a) is it possible to shoehorn the play command to do a whole chord... ?* If your sound architecture supports it, you can run multiple commands that output audio at the same time. If you're using ALSA, you need dmix or other variants in your `~/.asoundrc`. Use `subprocess.Popen` to spawn many child processes. If this were hypothetically a bash script, you could do: ``` command1 & command2 & ... ``` *b) (even better) is there a way in python of achieving this (a guitar chord sound)?* Compile to MIDI and output via a software synthesizer like FluidSynth.
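Since the answer above points at the Karplus-Strong algorithm, here is roughly what it looks like in pure Python. The 0.996 decay factor and the parameter names are illustrative choices of mine, not canonical values:

```python
import random

def karplus_strong(freq, duration, sample_rate=44100, decay=0.996):
    """Synthesize a plucked-string tone as a list of floats in [-1, 1]."""
    n = int(sample_rate / freq)  # delay-line length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]  # noise burst = the "pluck"
    out = []
    for _ in range(int(duration * sample_rate)):
        out.append(buf[0])
        # Average two neighbors (a crude low-pass filter) and damp slightly,
        # then feed the result back into the delay line.
        buf.append(decay * 0.5 * (buf[0] + buf[1]))
        buf.pop(0)
    return out

samples = karplus_strong(110.0, 0.5)  # A2, half a second
```

Each pass through the delay line smooths and attenuates the initial noise, which is what turns the burst into a decaying string-like tone.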
6,686,576
What i'm trying to achieve is playing a guitar chord from my python application. I know (or can calculate) the frequencies in the chord if needed. I'm thinking that even if I do the low level leg work of producing multiple sine waves at the right frequencies it wont sound right due to the envelope needing to be correct also, else it wont sound like a guitar but more of a hum. Tantilisingly, the linux sox command play can produce a pretty convincing individual note with: `play -n synth 0 pluck E3` So really what i'm asking is, a) is it possible to shoehorn the play command to do a whole chord (ideally with slightly differing start times to simulate the plectrum string stroke) -- i've not been able to do this but maybe theres some bash fairydust that'll fork a process or such so it sounds right. If this is possible i'd settle for just calling out to a bash command from my code (I dont like reinventing the wheel). b) (even better) is there a way in python of achieving this (a guitar chord sound) ? I've seen a few accessable python midi librarys but frankly midi isn't a good fit for the sound I want, as far as i can tell.
2011/07/13
[ "https://Stackoverflow.com/questions/6686576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/384388/" ]
The manual gives this example: ``` play -n synth pl G2 pl B2 pl D3 pl G3 pl D4 pl G4 \ delay 0 .05 .1 .15 .2 .25 remix - fade 0 4 .1 norm -1 ``` This creates 6 simultaneous instances of synth (as separate audio channels), delays 5 of the channels by slightly increasing times, then mixes them down to a single channel. The result is a pretty convincing guitar chord; you can of course change the notes or the delays very easily. You can also play around with the sustain and tone of the 'guitar', or add an overdrive effect—see the manual for details.
*a) is it possible to shoehorn the play command to do a whole chord... ?* If your sound architecture supports it, you can run multiple commands that output audio at the same time. If you're using ALSA, you need dmix or other variants in your `~/.asoundrc`. Use `subprocess.Popen` to spawn many child processes. If this were hypothetically a bash script, you could do: ``` command1 & command2 & ... ``` *b) (even better) is there a way in python of achieving this (a guitar chord sound)?* Compile to MIDI and output via a software synthesizer like FluidSynth.
6,686,576
What i'm trying to achieve is playing a guitar chord from my python application. I know (or can calculate) the frequencies in the chord if needed. I'm thinking that even if I do the low level leg work of producing multiple sine waves at the right frequencies it wont sound right due to the envelope needing to be correct also, else it wont sound like a guitar but more of a hum. Tantilisingly, the linux sox command play can produce a pretty convincing individual note with: `play -n synth 0 pluck E3` So really what i'm asking is, a) is it possible to shoehorn the play command to do a whole chord (ideally with slightly differing start times to simulate the plectrum string stroke) -- i've not been able to do this but maybe theres some bash fairydust that'll fork a process or such so it sounds right. If this is possible i'd settle for just calling out to a bash command from my code (I dont like reinventing the wheel). b) (even better) is there a way in python of achieving this (a guitar chord sound) ? I've seen a few accessable python midi librarys but frankly midi isn't a good fit for the sound I want, as far as i can tell.
2011/07/13
[ "https://Stackoverflow.com/questions/6686576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/384388/" ]
The manual gives this example: ``` play -n synth pl G2 pl B2 pl D3 pl G3 pl D4 pl G4 \ delay 0 .05 .1 .15 .2 .25 remix - fade 0 4 .1 norm -1 ``` This creates 6 simultaneous instances of synth (as separate audio channels), delays 5 of the channels by slightly increasing times, then mixes them down to a single channel. The result is a pretty convincing guitar chord; you can of course change the notes or the delays very easily. You can also play around with the sustain and tone of the 'guitar', or add an overdrive effect—see the manual for details.
a) The hackish way is to spawn a background subprocess to run each `play` command. Since a background subprocess doesn't make the shell wait for it to finish, you can have multiple `play`s running at once. Something like this would work: ``` for p in "C3" "E3" "G3"; do ( play -n synth 3 pluck $p & ); done ``` I see that ninjagecko posted basically the same thing as I'm writing this. b) The key point to realize about MIDI data is that it's more like a high-level recipe for producing a sound, not the sound itself. In other words, each MIDI note is expressed as a pitch, a dynamic level, start and stop times, and assorted other metadata. The actual sound is produced by a synthesizer, and different synthesizers do the job with different levels of quality. If you don't like the sound you're getting from your MIDI files, it's not a problem with MIDI, it's a problem with your synthesizer, so you just need to find a better one. (In practice, that usually takes $$$; most free or cheap synthesizers are pretty bad.) An alternative would be to actually dig under the hood, so to speak, and implement an algorithm to create your own guitar sound. For that you'd want to look into [digital signal processing](http://en.wikipedia.org/wiki/Digital_signal_processing), in particular something like the [Karplus-Strong algorithm](http://en.wikipedia.org/wiki/Karplus-Strong_string_synthesis) (one of many ways to create a synthetic plucked string sound). It's a fascinating subject, but if your only exposure to sound synthesis is at the level of `play` and creating MIDI files, you'd have a bit of learning to do. Additionally, Python probably isn't the best choice of language, since execution speed is pretty critical. If you're curious about DSP, you might want to download and play with [ChucK](http://chuck.cs.princeton.edu/).
27,643,383
I am trying to install the elastic beanstalk CLI on an EC2 instance (running AMI) using these instructions: <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-getting-started.html> I have python 2.7.9 installed, pip and eb. However, when I try to run eb I get the error below. It looks like it is still using python 2.6. How do you fix that? Thanks! ``` Traceback (most recent call last): File "/usr/bin/eb", line 9, in <module> load_entry_point('awsebcli==3.0.10', 'console_scripts', 'eb')() File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 473, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2568, in load_entry_point return ep.load() File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2259, in load ['__name__']) File "/usr/lib/python2.6/site-packages/ebcli/core/ebcore.py", line 23, in <module> from ..controllers.initialize import InitController File "/usr/lib/python2.6/site-packages/ebcli/controllers/initialize.py", line 16, in <module> from ..core.abstractcontroller import AbstractBaseController File "/usr/lib/python2.6/site-packages/ebcli/core/abstractcontroller.py", line 21, in <module> from ..core import io, fileoperations, operations File "/usr/lib/python2.6/site-packages/ebcli/core/operations.py", line 762 vars = {n['OptionName']: n['Value'] for n in settings ^ SyntaxError: invalid syntax ```
2014/12/25
[ "https://Stackoverflow.com/questions/27643383", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1536188/" ]
Pip is probably set up with Python 2.6 instead of Python 2.7. ``` pip --version ``` You can reinstall pip with Python 2.7, then reinstall the eb CLI: ``` pip uninstall awsebcli wget https://bootstrap.pypa.io/get-pip.py python get-pip.py pip install awsebcli ```
The "smartest" solution for me was to install the python-dev tools: ``` sudo apt install python-dev ``` Found here: <http://ericbenson.azurewebsites.net/deployment-on-aws-elastic-beanstalk-for-ubuntu/>
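For context on the traceback in the question: the `SyntaxError` points at a dict comprehension (`{n['OptionName']: n['Value'] for n in settings}`), syntax that was only added in Python 2.7, which is why awsebcli fails when loaded under Python 2.6. A quick illustration with made-up settings data:

```python
# Hypothetical settings in the shape the traceback suggests.
settings = [{'OptionName': 'InstanceType', 'Value': 't2.micro'},
            {'OptionName': 'MinSize', 'Value': '1'}]

# Python 2.7+ dict comprehension (the construct that blows up on 2.6):
vars_27 = {n['OptionName']: n['Value'] for n in settings}

# Python 2.6-compatible spelling of the same thing:
vars_26 = dict((n['OptionName'], n['Value']) for n in settings)

print(vars_27 == vars_26)  # prints True
```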
27,643,383
I am trying to install the elastic beanstalk CLI on an EC2 instance (running AMI) using these instructions: <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-getting-started.html> I have python 2.7.9 installed, pip and eb. However, when I try to run eb I get the error below. It looks like it is still using python 2.6. How do you fix that? Thanks! ``` Traceback (most recent call last): File "/usr/bin/eb", line 9, in <module> load_entry_point('awsebcli==3.0.10', 'console_scripts', 'eb')() File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 473, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2568, in load_entry_point return ep.load() File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2259, in load ['__name__']) File "/usr/lib/python2.6/site-packages/ebcli/core/ebcore.py", line 23, in <module> from ..controllers.initialize import InitController File "/usr/lib/python2.6/site-packages/ebcli/controllers/initialize.py", line 16, in <module> from ..core.abstractcontroller import AbstractBaseController File "/usr/lib/python2.6/site-packages/ebcli/core/abstractcontroller.py", line 21, in <module> from ..core import io, fileoperations, operations File "/usr/lib/python2.6/site-packages/ebcli/core/operations.py", line 762 vars = {n['OptionName']: n['Value'] for n in settings ^ SyntaxError: invalid syntax ```
2014/12/25
[ "https://Stackoverflow.com/questions/27643383", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1536188/" ]
Pip is probably set up with Python 2.6 instead of Python 2.7. ``` pip --version ``` You can reinstall pip with Python 2.7, then reinstall the eb CLI: ``` pip uninstall awsebcli wget https://bootstrap.pypa.io/get-pip.py python get-pip.py pip install awsebcli ```
I had the same problem, the fix for me was actually to upgrade to latest Beanstalk stack ( `eb upgrade` ). **Note there are downtime** etc. So investigate if you can run the latest stack before upgrading.
27,643,383
I am trying to install the elastic beanstalk CLI on an EC2 instance (running AMI) using these instructions: <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-getting-started.html> I have python 2.7.9 installed, pip and eb. However, when I try to run eb I get the error below. It looks like it is still using python 2.6. How do you fix that? Thanks! ``` Traceback (most recent call last): File "/usr/bin/eb", line 9, in <module> load_entry_point('awsebcli==3.0.10', 'console_scripts', 'eb')() File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 473, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2568, in load_entry_point return ep.load() File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2259, in load ['__name__']) File "/usr/lib/python2.6/site-packages/ebcli/core/ebcore.py", line 23, in <module> from ..controllers.initialize import InitController File "/usr/lib/python2.6/site-packages/ebcli/controllers/initialize.py", line 16, in <module> from ..core.abstractcontroller import AbstractBaseController File "/usr/lib/python2.6/site-packages/ebcli/core/abstractcontroller.py", line 21, in <module> from ..core import io, fileoperations, operations File "/usr/lib/python2.6/site-packages/ebcli/core/operations.py", line 762 vars = {n['OptionName']: n['Value'] for n in settings ^ SyntaxError: invalid syntax ```
2014/12/25
[ "https://Stackoverflow.com/questions/27643383", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1536188/" ]
I had the same problem, the fix for me was actually to upgrade to latest Beanstalk stack ( `eb upgrade` ). **Note there are downtime** etc. So investigate if you can run the latest stack before upgrading.
The "smartest" solution for me was to install the python-dev tools: ``` sudo apt install python-dev ``` Found here: <http://ericbenson.azurewebsites.net/deployment-on-aws-elastic-beanstalk-for-ubuntu/>
69,437,836
I am trying to make a program that can classify runways and taxiways using Mask R-CNN. After importing my custom dataset in JSON format I am getting a `KeyError`: ``` class CustomDataset(utils.Dataset):

    def load_custom(self, dataset_dir, subset):
        """Load a subset of the Horse-Man dataset.
        dataset_dir: Root directory of the dataset.
        subset: Subset to load: train or val
        """
        # Add classes. We have only one class to add.
        self.add_class("object", 1, "runway")
        self.add_class("object", 2, "taxiway")
        # self.add_class("object", 3, "xyz") #likewise

        # Train or validation dataset?
        assert subset in ["trainn", "vall"]
        dataset_dir = os.path.join(dataset_dir, subset)

        # Load annotations
        # VGG Image Annotator saves each image in the form:
        # { 'filename': '28503151_5b5b7ec140_b.jpg',
        #   'regions': {
        #       '0': {
        #           'region_attributes': {},
        #           'shape_attributes': {
        #               'all_points_x': [...],
        #               'all_points_y': [...],
        #               'name': 'polygon'}},
        #       ... more regions ...
        #   },
        #   'size': 100202
        # }
        # We mostly care about the x and y coordinates of each region
        annotations1 = json.load(open(os.path.join(dataset_dir, "f11_json.json")))
        # print(annotations1)
        annotations = list(annotations1.values())  # don't need the dict keys

        # The VIA tool saves images in the JSON even if they don't have any
        # annotations. Skip unannotated images.
        annotations = [a for a in annotations if a['regions']]

        # Add images
        for a in annotations:
            # print(a)
            # Get the x, y coordinaets of points of the polygons that make up
            # the outline of each object instance. There are stores in the
            # shape_attributes (see json format above)
            polygons = [r['shape_attributes'] for r in a['regions']]
            objects = [s['region_attributes']['names'] for s in a['regions']]
            print("objects:",objects)
            name_dict = {"runway": 1,"taxiway": 2} #,"xyz": 3}
            # key = tuple(name_dict)
            num_ids = [name_dict[a] for a in objects]

            # num_ids = [int(n['Event']) for n in objects]
            # load_mask() needs the image size to convert polygons to masks.
            # Unfortunately, VIA doesn't include it in JSON, so we must read
            # the image. This is only managable since the dataset is tiny.
            print("numids",num_ids)
            image_path = os.path.join(dataset_dir, a['filename'])
            image = skimage.io.imread(image_path)
            height, width = image.shape[:2]

            self.add_image(
                "object",  ## for a single class just add the name here
                image_id=a['filename'],  # use file name as a unique image id
                path=image_path,
                width=width, height=height,
                polygons=polygons,
                num_ids=num_ids)

    def load_mask(self, image_id):
        """Generate instance masks for an image.
        Returns:
        masks: A bool array of shape [height, width, instance count] with
            one mask per instance.
        class_ids: a 1D array of class IDs of the instance masks.
        """
        # If not a Horse/Man dataset image, delegate to parent class.
        image_info = self.image_info[image_id]
        if image_info["source"] != "object":
            return super(self.__class__, self).load_mask(image_id)

        # Convert polygons to a bitmap mask of shape
        # [height, width, instance_count]
        info = self.image_info[image_id]
        if info["source"] != "object":
            return super(self.__class__, self).load_mask(image_id)
        num_ids = info['num_ids']
        mask = np.zeros([info["height"], info["width"], len(info["polygons"])],
                        dtype=np.uint8)
        for i, p in enumerate(info["polygons"]):
            # Get indexes of pixels inside the polygon and set them to 1
            rr, cc = skimage.draw.polygon(p['all_points_y'], p['all_points_x'])
            mask[rr, cc, i] = 1

        # Return mask, and array of class IDs of each instance. Since we have
        # one class ID only, we return an array of 1s
        # Map class names to class IDs.
        num_ids = np.array(num_ids, dtype=np.int32)
        return mask, num_ids  # np.ones([mask.shape[-1]], dtype=np.int32)

    def image_reference(self, image_id):
        """Return the path of the image."""
        info = self.image_info[image_id]
        if info["source"] == "object":
            return info["path"]
        else:
            super(self.__class__, self).image_reference(image_id)
``` error ``` objects: ['runway', 'runway', 'taxiway', 'taxiway', 'taxiway', 'taxiway', 'taxiway']
numids [1, 1, 2, 2, 2, 2, 2]
objects: ['runway', 'runway', 'taxiway', 'taxiway']
numids [1, 1, 2, 2]

<ipython-input-8-fac8e3d87b86> in <listcomp>(.0)
     45             # shape_attributes (see json format above)
     46             polygons = [r['shape_attributes'] for r in a['regions']]
---> 47             objects = [s['region_attributes']['names'] for s in a['regions']]
     48             print("objects:",objects)
     49             name_dict = {"runway": 1,"taxiway": 2} #,"xyz": 3}

KeyError: 'names'
``` I have tried all the changes I could think of but I am still getting the same error. Basically I am doing image classification on a custom dataset, for which I imported the JSON annotation file.
2021/10/04
[ "https://Stackoverflow.com/questions/69437836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16702137/" ]
I think it should be `name`, not `names`, based on the file format in the comment: ``` { 'filename': '28503151_5b5b7ec140_b.jpg',
  'regions': {
      '0': {
          'region_attributes': {},
          'shape_attributes': {
              'all_points_x': [...],
              'all_points_y': [...],
              'name': 'polygon'}},
      ... more regions ...
  },
  'size': 100202
} ``` > > `'name': 'polygon'}},` > > >
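For a quick sanity check of the key-name theory, here is a minimal sketch with a made-up annotation list (the dict layout mimics the VIA format from the question; the `'MISSING'` sentinel is just an illustration, not part of the original code). Using `dict.get` instead of `[...]` turns the `KeyError` into something you can inspect:

```python
# Hypothetical VIA-style regions; the second one has no label at all.
region_list = [
    {'region_attributes': {'name': 'runway'}, 'shape_attributes': {}},
    {'region_attributes': {}, 'shape_attributes': {}},  # unlabeled region
]

# .get() with a sentinel avoids the KeyError and flags bad regions instead.
objects = [r['region_attributes'].get('name', 'MISSING') for r in region_list]
print(objects)  # ['runway', 'MISSING']
```

If `'MISSING'` shows up in the output, the attribute key in your export does not match the one your code looks up.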
I resolved this error by rechecking my annotations in the VGG tool and found that I had double-labeled (wrongly labeled) two files. So my suggestion is to recheck all files in the VGG Annotation Tool and check for missing or multiply-labeled files. Thanks
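Rechecking every file by hand is tedious, so here is a small helper sketch that scans a loaded VIA export for regions missing the expected attribute key. The structure and key names (`regions`, `region_attributes`, `name`) are assumed from the question's JSON comment; adjust them to your export:

```python
def find_bad_files(annotations, key='name'):
    """Return filenames whose regions lack the expected region attribute.

    `annotations` is assumed to be the dict loaded from the VIA JSON export,
    keyed by filename+size; change `key` to whatever attribute you used.
    """
    bad = []
    for entry in annotations.values():
        for region in entry.get('regions', []):
            if key not in region.get('region_attributes', {}):
                bad.append(entry['filename'])
                break  # one bad region is enough to flag the file
    return bad

# Tiny made-up export: the second file has an unlabeled region.
via = {
    'a.jpg1234': {'filename': 'a.jpg',
                  'regions': [{'region_attributes': {'name': 'runway'}}]},
    'b.jpg5678': {'filename': 'b.jpg',
                  'regions': [{'region_attributes': {}}]},
}
print(find_bad_files(via))  # ['b.jpg']
```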
13,352,296
The following works and returns a list of all users: ``` ldapsearch -x -b "ou=lunchbox,dc=office,dc=lbox,dc=com" -D "OFFICE\Administrator" -h ad.office.lbox.com -p 389 -W "(&(objectcategory=person)(objectclass=user))" ``` I'm trying to do the same in Python and I'm getting `Invalid credentials`: ``` #!/usr/bin/env python
import ldap

dn = "cn=Administrator,dc=office,dc=lbox,dc=com"
pw = "**password**"

con = ldap.initialize('ldap://ad.office.lbox.com')
con.simple_bind_s( dn, pw )

base_dn = 'ou=lunchbox,dc=office,dc=lbox,dc=com'
filter = '(objectclass=person)'
attrs = ['sn']

con.search_s( base_dn, ldap.SCOPE_SUBTREE, filter, attrs )
``` Any suggestions to make this work would be great. I'm trying to learn `python-ldap`. Thanks EDIT This is the full error I get: ``` ldap.INVALID_CREDENTIALS: {'info': '80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 52e, v1db1', 'desc': 'Invalid credentials'} ``` The `LDAP` server is an Active Directory on Windows Server 2008 R2
2012/11/12
[ "https://Stackoverflow.com/questions/13352296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1039166/" ]
You're using different credentials for the bind from the command line and the python script. The command line is using the bind dn of `OFFICE\Administrator` while the script is using the bind dn of `cn=Administrator,dc=office,dc=lbox,dc=com` On Active Directory, the built-in account `Administrator` doesn't reside at the top-level of the `AD` forest, it typically resides under at least the `Users` `OU`, so the dn you *probably* should be using is: `CN=Administrator,CN=Users,dc=office,dc=lbox,dc=com`. The easiest way to find the proper entry for the user is to actually use account name in a search from the command line e.g. ``` ldapsearch -x -b "ou=lunchbox,dc=office,dc=lbox,dc=com" -D "OFFICE\Administrator" -h ad.office.lbox.com -p 389 -W '(samaccountname=Administrator)' dn ``` and use the `dn` returned from the command line query in your python code as the `dn` for the bind.
The python-ldap library does not parse the user name, and neither does ldapsearch. In your code, simply use the same username `OFFICE\Administrator` and let Active Directory handle it. Also, it is not uncommon for Active Directory to refuse simple bind over plain LDAP. You must use LDAPS. Add this line to bypass certificate checking: ``` ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER) ``` So the whole code might look like this: ``` #!/usr/bin/env python
import ldap

dn = "OFFICE\Administrator"
pw = "**password**"

ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
con = ldap.initialize('ldaps://ad.office.lbox.com')
con.simple_bind_s( dn, pw )

base_dn = 'ou=lunchbox,dc=office,dc=lbox,dc=com'
filter = '(objectclass=person)'
attrs = ['sn']

con.search_s( base_dn, ldap.SCOPE_SUBTREE, filter, attrs )
```
53,157,921
Please excuse my silly question as I am really new to python. I have 20 different .txt files (e.g. `"myfile_%s"` with `s` an integer in `range(1, 21)`). So I load them as follows: ``` runs = range(1, 21)
for i in runs:
    Myfile = np.loadtxt("myfile_%s.txt" % i, delimiter=',', unpack=True)
``` Hence, they're being loaded into a variable of "float64" type. I would like to load them into 20 different lists (so as to find the maximum value of each etc.). Thank you in advance! PS: I would be happy to hear any textbook recommendations for python beginners.
2018/11/05
[ "https://Stackoverflow.com/questions/53157921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10042405/" ]
You can split using your delimiter and load into a native python list: ``` my_files = []
for i in range(1, 21):
    with open("my_file_{0}.txt".format(i), 'r') as f:
        my_files.append(f.read().split(','))
``` Now you have a list of lists. You can get the max overall, or get the max of each list, like so: ``` # max of each
max_values = [max(map(float, my_list)) for my_list in my_files]

# max overall
max_overall = max(max_values)
```
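To see the max logic in action without any files on disk, here is the same computation on two made-up "files" already split into strings, exactly the shape that `f.read().split(',')` produces:

```python
# Two made-up "files" as lists of number strings.
my_files = [['1.5', '3.25', '2'], ['10', '-4', '7.5']]

# Convert each string to float before comparing, then take the max per list.
max_values = [max(map(float, my_list)) for my_list in my_files]
max_overall = max(max_values)
print(max_values, max_overall)  # [3.25, 10.0] 10.0
```

Note that converting to `float` matters: comparing the raw strings would sort lexicographically, so `'10'` would come before `'9'`.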
Are your lists of equal length? If yes, you can do everything in one numpy array: ``` a = np.zeros((20, 100))  # 20 files, each (say) 100 values long
for i in range(1, 21):
    a[i-1, :] = np.loadtxt("myfile_%s.txt" % i, delimiter=',', unpack=True)
``` Now you can apply all `numpy` functions to the resulting array, such as ``` b = np.sum(a, axis=0) ```
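The same idea can be checked with in-memory data instead of reading files (this requires `numpy`; the sizes and the generated values are just stand-ins for the 20 real files):

```python
import numpy as np

n_files, n_values = 3, 5            # stand-ins for 20 files of equal length
a = np.zeros((n_files, n_values))
for i in range(1, n_files + 1):
    # Pretend file i holds the values i, 2i, 3i, ...
    a[i - 1, :] = i * np.arange(1, n_values + 1)

# Row-wise reductions answer the "max of each file" question directly.
per_file_max = a.max(axis=1)
```

`axis=1` reduces across a row (one file); `axis=0` would reduce across files, column by column.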
56,066,816
I have several data frames (with equal # columns but different names). I'm trying to create one data frame with rows stacked below each other. I don't care now about the column names (I can always rename them later). I saw different SO links but they don't address this problem completely. Note I've 21 data frames and scalability is important. I was looking at [this](https://stackoverflow.com/questions/45590866/python-pandas-concat-dataframes-with-different-columns-ignoring-column-names) [![enter image description here](https://i.stack.imgur.com/U5W0x.jpg)](https://i.stack.imgur.com/U5W0x.jpg) How I get df: ``` df = []
for f in files:
    data = pd.read_csv(f, usecols = [0,1,2,3,4])
    df.append(data)
```
2019/05/09
[ "https://Stackoverflow.com/questions/56066816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9473446/" ]
Assuming your DataFrames are stored in some list `df_l`: Rename the columns and concat: ``` df_l = [df1, df2, df3]

for df in df_l:
    df.columns = df_l[0].columns  # Just chose any DataFrame

pd.concat(df_l)
# Columns named with above DataFrame
# Index is preserved
``` Or construct a new DataFrame: ``` pd.DataFrame(np.vstack([df.to_numpy() for df in df_l]))
# Columns are RangeIndex
# Index is RangeIndex
```
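For a quick self-check, here is a minimal, self-contained version of the rename-and-concat approach with two tiny made-up frames (assuming `pandas` is installed; `df1` and `df2` are stand-ins for your 21 real frames):

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
df2 = pd.DataFrame({'x': [5], 'y': [6]})   # same column count, different names

df_l = [df1, df2]
for df in df_l:
    df.columns = df_l[0].columns  # overwrite every header with the first one

stacked = pd.concat(df_l, ignore_index=True)
print(stacked)
#    a  b
# 0  1  3
# 1  2  4
# 2  5  6
```

`ignore_index=True` gives the result a fresh 0..n-1 index instead of repeating each frame's original row labels.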
Once you put all the data frames into a list, try this code. Note that `rename` expects a mapping, so the old column names are zipped onto `df1`'s: ``` import pandas as pd

df_list = [df1, df2, df3]
result = pd.DataFrame(columns=df1.columns)
for df in df_list:
    result = pd.concat(
        [result, df.rename(columns=dict(zip(df.columns, df1.columns)))],
        ignore_index=True)
```
56,066,816
I have several data frames (with equal # columns but different names). I'm trying to create one data frame with rows stacked below each other. I don't care now about the column names (I can always rename them later). I saw different SO links but they don't address this problem completely. Note I've 21 data frames and scalability is important. I was looking at [this](https://stackoverflow.com/questions/45590866/python-pandas-concat-dataframes-with-different-columns-ignoring-column-names) [![enter image description here](https://i.stack.imgur.com/U5W0x.jpg)](https://i.stack.imgur.com/U5W0x.jpg) How I get df: ``` df = []
for f in files:
    data = pd.read_csv(f, usecols = [0,1,2,3,4])
    df.append(data)
```
2019/05/09
[ "https://Stackoverflow.com/questions/56066816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9473446/" ]
Assuming your DataFrames are stored in some list `df_l`: Rename the columns and concat: ``` df_l = [df1, df2, df3]

for df in df_l:
    df.columns = df_l[0].columns  # Just chose any DataFrame

pd.concat(df_l)
# Columns named with above DataFrame
# Index is preserved
``` Or construct a new DataFrame: ``` pd.DataFrame(np.vstack([df.to_numpy() for df in df_l]))
# Columns are RangeIndex
# Index is RangeIndex
```
I would do it at the beginning, adding `skiprows=1` to skip each file's header row: ``` names = [0, 1, 2, 3, 4]  # whatever you want to call them
pd.concat([pd.read_csv(f, usecols=[0,1,2,3,4], skiprows=1, names=names)
           for f in files])
```
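Here is a runnable sketch of this skiprows-plus-names trick, writing two tiny CSVs with different headers to a temp directory first (assuming `pandas` is installed; the file names and values are made up):

```python
import os
import tempfile
import pandas as pd

# Write two tiny CSVs with different headers but the same five columns.
tmp = tempfile.mkdtemp()
files = []
for i, header in enumerate(['v,w,x,y,z', 'p,q,r,s,t']):
    path = os.path.join(tmp, 'part%d.csv' % i)
    with open(path, 'w') as f:
        f.write(header + '\n')
        f.write(','.join(str(10 * i + j) for j in range(5)) + '\n')
    files.append(path)

names = [0, 1, 2, 3, 4]  # replacement column names
# skiprows=1 throws away each file's header; names= relabels the columns,
# so the mismatched headers never matter.
combined = pd.concat(
    [pd.read_csv(f, usecols=[0, 1, 2, 3, 4], skiprows=1, names=names)
     for f in files],
    ignore_index=True)
print(combined.shape)  # (2, 5)
```

Adding `ignore_index=True` to the `concat` keeps the combined index unique, which the original one-liner does not do.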