qid (int64) | question (string) | date (string) | metadata (sequence) | response_j (string) | response_k (string) |
---|---|---|---|---|---|
3,589,214 | So here is the deal: I want to (for example) generate 4 pseudo-random numbers that, when added together, would equal 40. How could this be done in python? I could generate a random number 1-40, then generate another number between 1 and the remainder, etc., but then the first number would have a greater chance of "grabbing" more. | 2010/08/28 | [
"https://Stackoverflow.com/questions/3589214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/433493/"
] | Generate 4 random numbers, compute their sum, divide each one by the sum and multiply by 40.
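A minimal sketch of that normalize-and-scale idea (the function name here is just for illustration):
```
import random

def random_floats_with_sum(total=40, n=4):
    # Draw n uniform floats, then rescale them so they add up to `total`.
    r = [random.random() for _ in range(n)]
    s = sum(r)
    return [total * x / s for x in r]

print(random_floats_with_sum())  # e.g. [11.2, 3.7, 16.9, 8.2] (floats summing to 40)
```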
If you want integers, then this will require a little non-randomness. | If you want true randomness then use:
```
import numpy as np
def randofsum_unbalanced(s, n):
# Where s = sum (e.g. 40 in your case) and n is the output array length (e.g. 4 in your case)
r = np.random.rand(n)
a = np.array(np.round((r/np.sum(r))*s,0),dtype=int)
while np.sum(a) > s:
a[np.random.choice(n)] -= 1
while np.sum(a) < s:
a[np.random.choice(n)] += 1
return a
```
If you want a greater level of uniformity then take advantage of the multinomial distribution:
```
def randofsum_balanced(s, n):
return np.random.multinomial(s,np.ones(n)/n,size=1)[0]
``` |
74,271,418 | I'm pretty new at Power BI (so forgive my rough terminology), and I'm trying to create a bar chart from some existing financial data. Specifically, I'd like to know how to transform my data. I've looked at DAX and python, and can't quite figure out the right commands.
My existing table looks like the following. The set of categories is arbitrary (not known up front, so it can't be hardcoded); same with the set of years.
```
Category 2002 2003 2004 2005
A $10 $75 $75 $75
B $75 $59 $75 $79
C $15 $32 $13 $5
B $23 $12 $75 $7
C $17 $88 $75 $15
```
And I want my output table to have as many rows as there are *unique* categories, totaling up the dollar amounts for each year.
```
Category 2002 2003 2004 2005
A $10 $75 $75 $75
B $98 $71 $150 $86
C $32 $120 $88 $20
```
What's the best way to roll up the data this way? I intend to use the resulting table to make a composite bar chart, one bar per year.
Thank you! | 2022/11/01 | [
"https://Stackoverflow.com/questions/74271418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4019700/"
] | 1. Avoid Excel-style cross-tables in Power BI. In the Power Query Editor transform your table by selecting the Category column and then **Unpivot other columns**.
[](https://i.stack.imgur.com/UAHij.png)
2. Back in the designer view you can directly use this data to create a bar chart:
[](https://i.stack.imgur.com/Pc5a4.png)
3. If you like you can also create an aggregated table from your data with the **calculated table** expression
```
Aggregated =
SUMMARIZE(
'Table',
'Table'[Category],
'Table'[Year],
"Sum", SUM('Table'[Value])
)
```
but that's not needed for your purpose.
[](https://i.stack.imgur.com/FaCHF.png) | Here is the full M-Code to achieve your goal: Just change the source step with your source file:
```
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WclTSUTI0ABLmpkhErE60khOMb2oJl7EEyziD9ID4xkYgljFIDVyLEYhraATXgtBhDiQsLGASQANiYwE=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Category = _t, #"2002" = _t, #"2003" = _t, #"2004" = _t, #"2005" = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Category", type text}, {"2002", Int64.Type}, {"2003", Int64.Type}, {"2004", Int64.Type}, {"2005", Int64.Type}}),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Changed Type", {"Category"}, "Attribute", "Value"),
#"Renamed Columns" = Table.RenameColumns(#"Unpivoted Other Columns",{{"Attribute", "Year"}}),
#"Grouped Rows" = Table.Group(#"Renamed Columns", {"Category", "Year"}, {{"Total", each List.Sum([Value]), type number}}),
#"Pivoted Column" = Table.Pivot(#"Grouped Rows", List.Distinct(#"Grouped Rows"[Year]), "Year", "Total", List.Sum)
in
#"Pivoted Column"
```
If we test it:
[](https://i.stack.imgur.com/BGlcY.png) |
13,391,549 | I'm trying to use a Bixolon receipt printer with OE on Windows 7. I managed to print directly from a small python module using win32print (which comes with pywin32) with the code below:
win32print is not natively available in OE, so I pasted win32print.pyd into the OE server directory and put the code in a wizard of my OE module.
I can see my wizard and launch it without error, but then nothing happens: no print, no error message.
Any ideas?
Thank you
```
import win32print
printer=OpenPrinter(win32print.GetDefaultPrinter())
hJob = win32print.StartDocPrinter (printer, 1, ("RVGI Print", None, "RAW"))
g=open('test3.txt','r')
raw_data = bytes ( open( 'test3.txt' , 'r').read ())
try:
win32print.StartPagePrinter (printer)
win32print.WritePrinter (printer, raw_data)
win32print.EndPagePrinter (printer)
finally:
win32print.EndDocPrinter (printer)
win32print.ClosePrinter (printer)
``` | 2012/11/15 | [
"https://Stackoverflow.com/questions/13391549",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1682857/"
] | Remember that the python code runs on the server. Is your printer connected to the server?
Also, you don't have an `except` section in your `try`. That makes errors go by silently. Try removing the try block so that errors are raised. Looking at them you might figure out the issue. | Well, I don't know if you typed it here incorrectly, but the way you imported the `win32print` module forces you to prefix its function calls with the module name, and you haven't done this in your first line:
```
printer = OpenPrinter(win32print.GetDefaultPrinter())
```
should be
```
printer = win32print.OpenPrinter(win32print.GetDefaultPrinter())
```
And there is another point that I don't really understand, but [Tim Golden](http://timgolden.me.uk/python/win32_how_do_i/print.html) put it in his tutorial and it could be your problem:
```
raw_data = bytes('your_text')
```
could be just:
```
raw_data = 'your_text'
```
depending on your system version. You also haven't specified the encoding, but since you got no errors that may not be the problem.
For the version thing Tim Golden posted:
```
import sys
if sys.version_info >= (3,):
raw_data = bytes ("This is a test", "utf-8")
else:
raw_data = "This is a test"
```
Hope it helps! |
11,878,300 | I would like to serialize on machine A and deserialize on machine B a python lambda. There are a couple of obvious problems with that:
* the pickle module does not serialize or deserialize code. It only serializes the names of classes/methods/functions
* some of the answers I found with google suggest the use of the low-level marshal module to serialize the func\_code attribute of the lambda but they fail to describe how one could reconstruct a function object from the deserialized code object
* marshal(l.func\_code) will not serialize the closure associated with the lambda, which leads to the problem of detecting when a given lambda really needs a closure and warning the user that they are trying to serialize a lambda that uses a closure
Hence, my question(s):
* how would one reconstruct a function from the deserialized (demarshaled) code object ?
* how would one detect that a given lambda will not work properly without the associated closure ? | 2012/08/09 | [
"https://Stackoverflow.com/questions/11878300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/782529/"
] | Surprisingly, checking whether a lambda will work without its associated closure is actually fairly easy. According to the [data model documentation](http://docs.python.org/release/2.6.2/reference/datamodel.html), you can just check the `func_closure` attribute:
```
>>> def get_lambdas():
... bar = 42
... return (lambda: 1, lambda: bar)
...
>>> no_vars, vars = get_lambdas()
>>> print no_vars.func_closure
None
>>> print vars.func_closure
(<cell at 0x1020d3d70: int object at 0x7fc150413708>,)
>>> print vars.func_closure[0].cell_contents
42
>>>
```
Then serializing + loading the lambda is fairly straightforward:
```
>>> import marshal, types
>>> old = lambda: 42
>>> old_code_serialized = marshal.dumps(old.func_code)
>>> new_code = marshal.loads(old_code_serialized)
>>> new = types.FunctionType(new_code, globals())
>>> new()
42
```
It's worth taking a look at the documentation for the `FunctionType`:
```
function(code, globals[, name[, argdefs[, closure]]])
Create a function object from a code object and a dictionary.
The optional name string overrides the name from the code object.
The optional argdefs tuple specifies the default argument values.
The optional closure tuple supplies the bindings for free variables.
```
Notice that you can also supply a closure… Which means you might even be able to serialize the old function's closure then load it at the other end :) | I'm not sure exactly what you want to do, but you could try [dill](https://github.com/uqfoundation/dill). Dill can serialize and deserialize lambdas and I believe also works for lambdas inside closures. The pickle API is a subset of it's API. To use it, just "import dill as pickle" and go about your business pickling stuff.
```
>>> import dill
>>> testme = lambda x: lambda y:x
>>> _testme = dill.loads(dill.dumps(testme))
>>> testme
<function <lambda> at 0x1d92530>
>>> _testme
<function <lambda> at 0x1d924f0>
>>>
>>> def complicated(a,b):
... def nested(x):
... return testme(x)(a) * b
... return nested
...
>>> _complicated = dill.loads(dill.dumps(complicated))
>>> complicated
<function complicated at 0x1d925b0>
>>> _complicated
<function complicated at 0x1d92570>
```
Dill registers its types into the `pickle` registry, so if you have some black box code that uses `pickle` and you can't really edit it, then just importing dill can magically make it work without monkeypatching the 3rd party code. Or, if you want the whole interpreter session sent over the wire as a "python image", dill can do that too.
```
>>> # continuing from above
>>> dill.dump_session('foobar.pkl')
>>>
>>> ^D
dude@sakurai>$ python
Python 2.7.5 (default, Sep 30 2013, 20:15:49)
[GCC 4.2.1 (Apple Inc. build 5566)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> dill.load_session('foobar.pkl')
>>> testme(4)
<function <lambda> at 0x1d924b0>
>>> testme(4)(5)
4
>>> dill.source.getsource(testme)
'testme = lambda x: lambda y:x\n'
```
You can easily send the image across ssh to another computer and start where you left off there, as long as the pickle versions are compatible, with the usual caveats about python versions changing and what is installed. As shown, you can also extract the source of the lambda that was defined in the previous session.
Dill also has [some good tools](https://github.com/uqfoundation/dill/blob/master/dill/detect.py) for helping you understand what is causing your pickling to fail when your code fails. |
39,278,419 | I am trying to POST a request to the server side from the android client side, using AsyncHttpClient:
For now I just want to check whether the response is coming back or not, so I have not implemented anything to parse request parameters on the server side and have just returned some JSON as the response.
```
RequestParams params = new RequestParams();
params.put("key", "value");
params.put("more", "data");
PAAPI.post("http://sairav.pythonanywhere.com",params, new JsonHttpResponseHandler() {
@Override
public void onSuccess(int statusCode, Header[] headers, JSONObject response) {
// If the response is JSONObject instead of expected JSONArray
Log.d("Response::",response.toString());
Toast.makeText(act,response.toString(),Toast.LENGTH_LONG).show();
}
@Override
public void onStart() {
// called before request is started
Toast.makeText(act,"Going to make API CALL",Toast.LENGTH_LONG).show();
}
@Override
public void onFailure(int statusCode, Header[] headers, String responseString, Throwable throwable) {
super.onFailure(statusCode, headers, responseString, throwable);
Log.d("Failed: ", ""+statusCode);
Log.d("Error : ", "" + throwable);
}
@Override
public void onSuccess(int statusCode, Header[] headers, JSONArray timeline) {
// Pull out the first event on the public timeline
JSONObject firstEvent = null;
try {
firstEvent = timeline.getJSONObject(0);
} catch (JSONException e) {
e.printStackTrace();
}
String tweetText = null;
try {
tweetText = firstEvent.getString("text");
} catch (JSONException e) {
e.printStackTrace();
}
// Do something with the response
Toast.makeText(act,tweetText,Toast.LENGTH_LONG).show();
}
});
```
PAAPI class ::
```
class PAAPI {
protected static final String BASE_URL = "http://sairav.pythonanywhere.com";
private static AsyncHttpClient client = new AsyncHttpClient();
public static void get(String url, RequestParams params, AsyncHttpResponseHandler responseHandler) {
client.get(getAbsoluteUrl(url), params, responseHandler);
}
public static void post(String url, RequestParams params, AsyncHttpResponseHandler responseHandler) {
client.post(getAbsoluteUrl(url), params, responseHandler);
}
private static String getAbsoluteUrl(String relativeUrl) {
return BASE_URL + relativeUrl;
}
}
```
I get the toast from onStart when the app runs, but then in logcat I get this exception:
java.net.UnknownHostException: Unable to resolve host "**sairav.pythonanywhere.comhttp**": No address associated with hostname
**Note: I have already reset the WiFi connection and tried again (as suggested in other similar questions), but the problem persists.**
Logcat :
```
24154-24228/com.example.sairav.moneytor W/System.err: java.net.UnknownHostException: Unable to resolve host "sairav.pythonanywhere.comhttp": No address associated with hostname
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at java.net.InetAddress.lookupHostByName(InetAddress.java:440)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at java.net.InetAddress.getAllByNameImpl(InetAddress.java:252)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at java.net.InetAddress.getAllByName(InetAddress.java:215)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:44)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.DefaultClientConnectionOperator.resolveHostname(DefaultClientConnectionOperator.java:259)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:159)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:145)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:131)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:860)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at com.loopj.android.http.AsyncHttpRequest.makeRequest(AsyncHttpRequest.java:146)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at com.loopj.android.http.AsyncHttpRequest.makeRequestWithRetries(AsyncHttpRequest.java:177)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at com.loopj.android.http.AsyncHttpRequest.run(AsyncHttpRequest.java:106)
``` | 2016/09/01 | [
"https://Stackoverflow.com/questions/39278419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3820753/"
] | The way I found to do it is by using the token provider from the namespace manager. So:
```
var namespaceMngr = NamespaceManager.CreateFromConnectionString(namespaceConnString);
MessagingFactorySettings mfs = new MessagingFactorySettings();
mfs.TokenProvider = namespaceMngr.Settings.TokenProvider;
mfs.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.FromSeconds(timeToFlush);
MessagingFactory mf = MessagingFactory.Create(namespaceMngr.Address, mfs);
```
If you are trying to set the `AmqpTransportSettings.BatchFlushInterval` instead of the `NetMessagingTransportSettings.BatchFlushInterval` then I can't help you, I actually stumbled on this post looking for an answer. Trying to change the `AmqpTransportSettings.BatchFlushInterval` doesn't seem to stick to the `MessageFactory` even if the `MessageFactorySettings` reflect the change. | JordanSchillers answer fixes the token provider issue but my address was now using port 9355 instead of 9354.
I ended up using a mixture of the ServiceBusConnectionStringBuilder and the NamespaceManager:
```
var serviceBusConnectionString = new ServiceBusConnectionStringBuilder(connection.ConnectionString);
MessagingFactorySettings factorySettings = new MessagingFactorySettings();
factorySettings.TransportType = serviceBusConnectionString.TransportType;
//Use the namespacemanager to create the token provider.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connection.ConnectionString);
factorySettings.TokenProvider = namespaceManager.Settings.TokenProvider;
factorySettings.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.FromMilliseconds(batchTimeInMs);
MessagingFactory factory = MessagingFactory.Create(serviceBusConnectionString.Endpoints, factorySettings);
return factory.CreateTopicClient(topicName);
``` |
17,004,946 | I have some logging in my application (it happens to be log4cxx but I am flexible on that), and I have some unit tests using the boost unit test framework. When my unit tests run, I get lots of log output, from both the passing and failing tests (not just boost assertions logged, but my own application code's debug logging too). I would like to get the unit test framework to throw away logs during tests that pass, and output logs from tests that fail (I grew to appreciate this behaviour while using python/nose).
Is there some standard way of doing this with the boost unit test framework? If not, are there some start of test/end of test hooks that I could use to buffer my logs and conditionally output them to implement this behaviour myself? | 2013/06/08 | [
"https://Stackoverflow.com/questions/17004946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/99876/"
] | There are start of test and end of test hooks that you can use for this purpose. To set up these hooks you need to define a subclass of [boost::unit\_test::test\_observer](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/test_observer.html), create an instance of the class that will persist throughout the entire test (either a static global object or a [BOOST\_TEST\_GLOBAL\_FIXTURE](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost_test/utf_reference/test_org_reference/test_org_boost_test_global_fixture.html)), and then pass the class to [boost::unit\_test::framework::register\_observer](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/framework/register_observer.html).
The method to override with a start of test hook is `test_unit_start`, and the method to override with an end of test hook is `test_unit_finish`. However, these hooks fire both for test suites as well as individual test cases, which may be an issue depending on how the hooks are set up. The `test_unit_finish` hook also doesn't explicitly tell you whether a given test actually passed, and there doesn't seem to be one clear and obvious way to get that information. There is a [boost::unit\_test::results\_collector](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/results_collector_t.html) singleton, which has a [results()](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/results_collector_t.html#idm45779057427216-bb) method, and if you pass it the `test_unit_id` of the test unit provided to `test_unit_finish`, you get a [test\_results](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/test_results.html) object that has a [passed()](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/test_results.html#idm45779057388688-bb) method. I can't really see a way to get the `test_unit_id` that is clearly part of the public API -- you can just directly access the p\_id member, but that could always change in a future boost version. You could also manually track whether each test is passing or failing using the `assertion_result`, `exception_caught`, `test_unit_aborted`, and `test_unit_timed_out` hooks from the test\_observer subclass (`assertion_result` indicates a failure of the current test whenever its argument is false and every other hook indicates a failure if it is called at all). | According to the [Boost.Test documentation](http://www.boost.org/doc/libs/1_53_0/libs/test/doc/html/utf/user-guide/runtime-config/reference.html), run your test executable with `--log_level=error`. This will catch only failing test cases.
I checked that it works using a `BOOST_CHECK(false)` on an otherwise correctly running project with a few thousand unit tests.
Running with `--log_level=all` gives the result of all assertions. I checked, by piping the output to `wc -l`, that the number of lines in the log is exactly the same as the number of assertions in the tests (which is also reported by `--report_level=detailed`). You could of course also `grep` the log for the strings `error` or `failed`. |
16,092,153 | I wish to create a 'find' procedure **myself**, which is capable of finding a sub-string in a string and should also be able to read a string backwards and give the position of the match - just like the original find function in python.
I am unable to figure out what logic I should use - also, I don't know how the original find functions.
I just started to use python and am fairly new to programming as well.
Any guidance shall be highly appreciated! | 2013/04/18 | [
"https://Stackoverflow.com/questions/16092153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1050305/"
] | >
> also I don't know how the original find functions
>
A good way to learn about functions without googling is to use [IPython](http://ipython.org/) and especially the [notebook variant](http://ipython.org/notebook.html/). These allow you to write python code interactively, and have some special features. Typing the name of a function in IPython (either the notebook or the interpreter) with a question mark returns some information about the function, e.g.
```
find?
Type: function
String Form:<function find at 0x2893cf8>
File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py
Definition: find(condition)
Docstring: Return the indices where ravel(condition) is true
```
Typing two question marks reveals the source code
```
find??
Type: function
String Form:<function find at 0x2893cf8>
File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py
Definition: find(condition)
Source:
def find(condition):
"Return the indices where ravel(condition) is true"
res, = np.nonzero(np.ravel(condition))
return res
```
You would then need to go down the rabbit hole further to find exactly how find worked. | There is a simple solution to this problem, however there are also much faster solutions which you may want to look at after you've implemented the simple version. What you want to be doing is checking each position in the string you're searching over and seeing if the string you're searching for starts there. This is inefficient but works well enough for most purposes; if you're feeling comfortable with that, then you may want to look at Boyer-Moore string searching, which is a much more complex solution but more efficient. It exploits the fact that, if a string doesn't start at a certain point, you may not need to check some of the other positions. |
16,092,153 | I wish to create a 'find' procedure **myself**, which is capable of finding a sub-string in a string and should also be able to read a string backwards and give the position of the match - just like the original find function in python.
I am unable to figure out what logic I should use - also, I don't know how the original find functions.
I just started to use python and am fairly new to programming as well.
Any guidance shall be highly appreciated! | 2013/04/18 | [
"https://Stackoverflow.com/questions/16092153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1050305/"
] | Here is a solution that returns all the hits in a list, and `rfind` is defined using the original `find` with the `backward` keyword. You can use it for integers or floats as well. You can easily modify it in order to return only the first hit.
```
def find( x, string, backward = False, ignore_case = False ):
x = str(x)
string = str(string)
if ignore_case:
x = x.lower()
string = string.lower()
str_list = [ i for i in string ]
x_list = [ i for i in x ]
if backward:
x_list.reverse()
str_list.reverse()
x = ''.join(x_list)
string = ''.join(str_list)
lenx = len(x)
ans = []
for i in range( len(str_list) - lenx ):
if x == string[i:i+lenx]:
ans.append( i )
return ans
def rfind( x, string, ignore_case = False):
return find( x, string, backward = True, ignore_case = ignore_case )
print find('f','abcdefgacdfh')
# [5, 10]
print rfind('f','abcdefgacdfh')
# [1, 6]
print find(12,'aaa3331222aa12a')
# [6, 12]
print rfind(12,'aaa3331222aa12a')
# [1, 7]
``` | There is a simple solution to this problem, however there are also much faster solutions which you may want to look at after you've implemented the simple version. What you want to be doing is checking each position in the string you're searching over and seeing if the string you're searching for starts there. This is inefficient but works well enough for most purposes; if you're feeling comfortable with that, then you may want to look at Boyer-Moore string searching, which is a much more complex solution but more efficient. It exploits the fact that, if a string doesn't start at a certain point, you may not need to check some of the other positions. |
16,092,153 | I wish to create a 'find' procedure **myself**, which is capable of finding a sub-string in a string and should also be able to read a string backwards and give the position of the match - just like the original find function in python.
I am unable to figure out what logic I should use - also, I don't know how the original find functions.
I just started to use python and am fairly new to programming as well.
Any guidance shall be highly appreciated! | 2013/04/18 | [
"https://Stackoverflow.com/questions/16092153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1050305/"
] | >
> also I don't know how the original find functions
>
A good way to learn about functions without googling is to use [IPython](http://ipython.org/) and especially the [notebook variant](http://ipython.org/notebook.html/). These allow you to write python code interactively, and have some special features. Typing the name of a function in IPython (either the notebook or the interpreter) with a question mark returns some information about the function, e.g.
```
find?
Type: function
String Form:<function find at 0x2893cf8>
File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py
Definition: find(condition)
Docstring: Return the indices where ravel(condition) is true
```
Typing two question marks reveals the source code
```
find??
Type: function
String Form:<function find at 0x2893cf8>
File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py
Definition: find(condition)
Source:
def find(condition):
"Return the indices where ravel(condition) is true"
res, = np.nonzero(np.ravel(condition))
return res
```
You would then need to go down the rabbit hole further to find exactly how find worked. | I think [Steve](https://stackoverflow.com/a/16092297/1258041) means something like this:
```
def find(s, sub):
for i, _ in enumerate(s):
if s.startswith(sub, i):
return i
return -1
def rfind(s, sub):
for i in range(len(s)-1, -1, -1):
if s.startswith(sub, i):
return i
return -1
```
This, however, is simpler than the regular `str.find` and `str.rfind` because you can't provide `start` and `end` arguments. |
16,092,153 | I wish to create a 'find' procedure **myself**, which is capable of finding a sub-string in a string and should also be able to read a string backwards and give the position of the match - just like the original find function in python.
I am unable to figure out what logic I should use - also, I don't know how the original find functions.
I just started to use python and am fairly new to programming as well.
Any guidance shall be highly appreciated! | 2013/04/18 | [
"https://Stackoverflow.com/questions/16092153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1050305/"
] | >
> also I don't know how the original find functions
>
A good way to learn about functions without googling is to use [IPython](http://ipython.org/) and especially the [notebook variant](http://ipython.org/notebook.html/). These allow you to write python code interactively, and have some special features. Typing the name of a function in IPython (either the notebook or the interpreter) with a question mark returns some information about the function, e.g.
```
find?
Type: function
String Form:<function find at 0x2893cf8>
File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py
Definition: find(condition)
Docstring: Return the indices where ravel(condition) is true
```
Typing two question marks reveals the source code
```
find??
Type: function
String Form:<function find at 0x2893cf8>
File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py
Definition: find(condition)
Source:
def find(condition):
"Return the indices where ravel(condition) is true"
res, = np.nonzero(np.ravel(condition))
return res
```
You would then need to go down the rabbit hole further to find exactly how find worked. | ```
'mystring'.rindex('my_substring')
```
this returns the first position of the substring, beginning from the right side
```
'mystring'.index('my_substring')
```
does the same thing, but begins searching the string from the left-hand side. |
16,092,153 | I wish to create a 'find' procedure **myself**, which is capable of finding a sub-string in a string and should also be able to read a string backwards and give the position of the match - just like the original find function in python.
I am unable to figure out what logic I should use - also, I don't know how the original find functions.
I just started to use python and am fairly new to programming as well.
Any guidance shall be highly appreciated! | 2013/04/18 | [
"https://Stackoverflow.com/questions/16092153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1050305/"
] | Here is a solution that returns all the hits in a list, and `rfind` is defined using the original `find` with the `backward` keyword. You can use it for integers or floats as well. You can easily modify it in order to return only the first hit.
```
def find( x, string, backward = False, ignore_case = False ):
x = str(x)
string = str(string)
if ignore_case:
x = x.lower()
string = string.lower()
str_list = [ i for i in string ]
x_list = [ i for i in x ]
if backward:
x_list.reverse()
str_list.reverse()
x = ''.join(x_list)
string = ''.join(str_list)
lenx = len(x)
ans = []
for i in range( len(str_list) - lenx ):
if x == string[i:i+lenx]:
ans.append( i )
return ans
def rfind( x, string, ignore_case = False):
return find( x, string, backward = True, ignore_case = ignore_case )
print find('f','abcdefgacdfh')
# [5, 10]
print rfind('f','abcdefgacdfh')
# [1, 6]
print find(12,'aaa3331222aa12a')
# [6, 12]
print rfind(12,'aaa3331222aa12a')
# [1, 7]
``` | I think [Steve](https://stackoverflow.com/a/16092297/1258041) means something like this:
```
def find(s, sub):
for i, _ in enumerate(s):
if s.startswith(sub, i):
return i
return -1
def rfind(s, sub):
for i in range(len(s)-1, -1, -1):
if s.startswith(sub, i):
return i
return -1
```
This, however, is simpler than the regular `str.find` and `str.rfind` because you can't provide `start` and `end` arguments. |
16,092,153 | I wish to create a 'find' procedure **myself**, which is capable of finding a sub-string in a string and should also be able to read a string backwards and give the position of the match - just like the original find function in python.
I am unable to figure out what logic I should use - also, I don't know how the original find functions.
I just started to use python and am fairly new to programming as well.
Any guidance shall be highly appreciated! | 2013/04/18 | [
"https://Stackoverflow.com/questions/16092153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1050305/"
] | Here is a solution that returns all the hits in a list, and `rfind` is defined using the original `find` with the `backward` keyword. You can use it for integers or floats as well. You can easily modify it in order to return only the first hit.
```
def find( x, string, backward = False, ignore_case = False ):
x = str(x)
string = str(string)
if ignore_case:
x = x.lower()
string = string.lower()
str_list = [ i for i in string ]
x_list = [ i for i in x ]
if backward:
x_list.reverse()
str_list.reverse()
x = ''.join(x_list)
string = ''.join(str_list)
lenx = len(x)
ans = []
for i in range( len(str_list) - lenx ):
if x == string[i:i+lenx]:
ans.append( i )
return ans
def rfind( x, string, ignore_case = False):
return find( x, string, backward = True, ignore_case = ignore_case )
print find('f','abcdefgacdfh')
# [5, 10]
print rfind('f','abcdefgacdfh')
# [1, 6]
print find(12,'aaa3331222aa12a')
# [6, 12]
print rfind(12,'aaa3331222aa12a')
# [1, 7]
``` | ```
'mystring'.rindex('my_substring')
```
this returns the first position of the substring, beginning from the right side
```
'mystring'.index('my_substring')
```
does the same thing, but begins searching the string from the left-hand side. |
27,967,988 | So I was disappointed to find out that JavaScript's `for ( var in array/object)` was not equivalent to Python's `for var in list:`.
In JavaScript you are iterating over the indices themselves e.g.
```
0,
1,
2,
...
```
whereas with Python, you are iterating over the values pointed to by the indices, e.g.
```
"string var at index 0",
46,
"string var at index 2",
["array","of","values"],
...
```
Is there a standard JavaScript equivalent to Python's looping mechanism?
Disclaimer:
===========
>
> I am aware that the for (var in object) construct is meant to be used to iterate over keys in a dictionary and not generally over indices of an array. I am asking a specific question that pertains to use cases in which I do not care about order (or very much about speed) and just don't feel like using a while loop.
>
>
> | 2015/01/15 | [
"https://Stackoverflow.com/questions/27967988",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3581485/"
] | For an array, the most similar is the forEach loop (of course the index is optional):
```
[1,2,3,4,].forEach(function(value,index){
console.log(value);
console.log(index);
});
```
So you will get the following output:
```
1
0
2
1
3
2
4
3
``` | In the next version of ECMAScript (ECMAScript6, aka Harmony) there will be a [for-of construct](http://tc39wiki.calculist.org/es6/for-of/):
```
for (let word of ["one", "two", "three"]) {
alert(word);
}
```
`for-of` could be used to iterate over various objects, Arrays, Maps, Sets and custom iterable objects. In that sense it's very close to Python's `for-in`. |
27,967,988 | So I was disappointed to find out that JavaScript's `for ( var in array/object)` was not equivalent to Python's `for var in list:`.
In JavaScript you are iterating over the indices themselves e.g.
```
0,
1,
2,
...
```
whereas with Python, you are iterating over the values pointed to by the indices, e.g.
```
"string var at index 0",
46,
"string var at index 2",
["array","of","values"],
...
```
Is there a standard JavaScript equivalent to Python's looping mechanism?
Disclaimer:
===========
>
> I am aware that the for (var in object) construct is meant to be used to iterate over keys in a dictionary and not generally over indices of an array. I am asking a specific question that pertains to use cases in which I do not care about order (or very much about speed) and just don't feel like using a while loop.
>
>
> | 2015/01/15 | [
"https://Stackoverflow.com/questions/27967988",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3581485/"
] | For an array, the most similar is the forEach loop (of course the index is optional):
```
[1,2,3,4,].forEach(function(value,index){
console.log(value);
console.log(index);
});
```
So you will get the following output:
```
1
0
2
1
3
2
4
3
``` | I'm not sure I see MUCH difference. It's easy to access the value at a given index/key
```
var list = [1,2,3,4,5];
// or...
var list = {a: 'foo', b: 'bar', c: 'baz'};
for (var item in list) console.log(list[item]);
```
and as mentioned, you could use forEach for arrays or objects... here's an object:
```
var list = {a: 'foo', b: 'bar', c: 'baz'};
Object.keys(list).forEach(function(key, i) {
console.log('VALUE: \n' + JSON.stringify(list[key], null, 4));
});
``` |
27,967,988 | So I was disappointed to find out that JavaScript's `for ( var in array/object)` was not equivalent to Python's `for var in list:`.
In JavaScript you are iterating over the indices themselves e.g.
```
0,
1,
2,
...
```
whereas with Python, you are iterating over the values pointed to by the indices, e.g.
```
"string var at index 0",
46,
"string var at index 2",
["array","of","values"],
...
```
Is there a standard JavaScript equivalent to Python's looping mechanism?
Disclaimer:
===========
>
> I am aware that the for (var in object) construct is meant to be used to iterate over keys in a dictionary and not generally over indices of an array. I am asking a specific question that pertains to use cases in which I do not care about order (or very much about speed) and just don't feel like using a while loop.
>
>
> | 2015/01/15 | [
"https://Stackoverflow.com/questions/27967988",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3581485/"
] | In the next version of ECMAScript (ECMAScript6, aka Harmony) there will be a [for-of construct](http://tc39wiki.calculist.org/es6/for-of/):
```
for (let word of ["one", "two", "three"]) {
alert(word);
}
```
`for-of` could be used to iterate over various objects, Arrays, Maps, Sets and custom iterable objects. In that sense it's very close to Python's `for-in`. | I'm not sure I see MUCH difference. It's easy to access the value at a given index/key
```
var list = [1,2,3,4,5];
// or...
var list = {a: 'foo', b: 'bar', c: 'baz'};
for (var item in list) console.log(list[item]);
```
and as mentioned, you could use forEach for arrays or objects... here's an object:
```
var list = {a: 'foo', b: 'bar', c: 'baz'};
Object.keys(list).forEach(function(key, i) {
console.log('VALUE: \n' + JSON.stringify(list[key], null, 4));
});
``` |
66,650,626 | Is there any way to restore files from the recycle bin in python?
Here's the code:
```
from send2trash import send2trash
file_name = "test.txt"
operation = input("Enter the operation to perform[delete/restore]: ")
if operation == "delete":
send2trash(file_name)
print(f"Successfully deleted {file_name}")
else:
# Code to restore the file from recycle bin.
pass
```
Here when I type `"restore"` in the `input()` function, I want to restore my deleted file from the recycle bin.
Is there any way to achieve this in python?
It would be great if anyone could help me out.
EDIT:
Thanks for the answer @Kenivia, but I am facing one small issue:
```
import winshell
r = list(winshell.recycle_bin()) # this lists the original path of all the all items in the recycling bin
file_name = "C:\\test\\Untitled_1.txt" # This file is located in the recycle bin
index = r.index(file_name) # to determine the index of your file
winshell.undelete(r[index].original_filename())
```
When I run this code, I get an error: `ValueError: 'C:\\test\\Untitled_1.txt' is not in list`. Can you please help me out? | 2021/03/16 | [
"https://Stackoverflow.com/questions/66650626",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14909172/"
] | It would depend on your operating system.
**Linux**
it's as simple as moving it from the trash folder to the original path. The location of the trash folder differs from distro to distro, but this is where it typically is.
There is a [command line tool](https://github.com/andreafrancia/trash-cli) that you can use, or dig through the code to get some ideas.
```
import subprocess as sp # here subprocess is just used to run the command, you can also use os.system but that is discouraged
sp.run(['mv', '/home/USERNAME/.local/share/Trash/files/test.txt', '/ORIGINAL/PATH/'])
```
**macOS**
On macOS, you do the same thing as you do in Linux, except the trash path is `~/.Trash`
```
import subprocess as sp
sp.run(['mv', '~/.Trash/test.txt', '/ORIGINAL/PATH/'])
```
Note that macOS stores information about the files at `~/.Trash/.DS_Store`, where Linux stores them at `/home/USERNAME/.local/share/Trash/info/`. This can be useful if you don't know the original path of the files.
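On the Linux side, a minimal sketch (assuming the freedesktop.org trash layout mentioned above, where each deleted file gets a matching `.trashinfo` entry under `info/`) of how you might recover a file's original path before moving it back:
```
import configparser, os
from urllib.parse import unquote

info_dir = os.path.expanduser('~/.local/share/Trash/info')

def original_path(name):
    # Each <name>.trashinfo file has a [Trash Info] section with Path= and DeletionDate= keys;
    # the Path value is percent-encoded, so decode it before using it.
    # Interpolation is disabled so the '%' escapes don't confuse configparser.
    parser = configparser.ConfigParser(interpolation=None)
    parser.read(os.path.join(info_dir, name + '.trashinfo'))
    return unquote(parser['Trash Info']['Path'])

print(original_path('test.txt'))  # e.g. /ORIGINAL/PATH/test.txt
```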
**Windows**
you have to use `winshell`. See [this article](https://medium.com/swlh/how-to-access-recycle-bin-in-python-192a685e31fb) for more details
```
import winshell
r = list(winshell.recycle_bin()) # this lists the original path of all the all items in the recycling bin
index = r.index("C:\ORIGINAL\PATH\test.txt") # to determine the index of your file
winshell.undelete(r[index].original_filename())
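# Note: winshell.recycle_bin() yields recycle-bin item objects rather than plain
# path strings, so calling r.index() with a path string raises the ValueError
# shown in the question's edit. A hedged workaround (relying on the same
# original_filename() accessor used above) is to search by each item's path:
#   paths = [item.original_filename() for item in r]
#   index = paths.index(r"C:\ORIGINAL\PATH\test.txt")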
``` | **Google Colab** (you are the `root` user)
Import the shell utility for Python:
```py
import shutil
```
Move the file from trash to a selected destination:
```py
shutil.move('/root/.local/share/Trash/files/<deleted-file>', '<destination-path>')
``` |
54,207,540 | I'm trying to find any python library or package which implements [newgrnn (Generalized Regression Neural Network)](https://www.mathworks.com/help/deeplearning/ref/newgrnn.html) using python.
Is there any package or library available where I can use a neural network for regression? I'm trying to find a python equivalent of the [newgrnn (Generalized Regression Neural Network)](https://www.mathworks.com/help/deeplearning/ref/newgrnn.html) which is described here. | 2019/01/15 | [
"https://Stackoverflow.com/questions/54207540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5347207/"
] | I found the library neupy which solved my problem:
```
from neupy import algorithms
from neupy.algorithms.rbfn.utils import pdf_between_data
grnn = algorithms.GRNN(std=0.003)
grnn.train(X, y)
# In this part of the code you can do any modifications you want
ratios = pdf_between_data(grnn.input_train, X, grnn.std)
predicted = (np.dot(grnn.target_train.T, ratios) / ratios.sum(axis=0)).T
```
This is the link for the library: <http://neupy.com/apidocs/neupy.algorithms.rbfn.grnn.html> | A more advanced option is [pyGRNN](https://github.com/federhub/pyGRNN), which, in addition to the normal GRNN, offers the Anisotropic GRNN, which optimizes the hyperparameters automatically:
```
from sklearn import datasets
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error as MSE
from pyGRNN import GRNN
# get the data set
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
X_train, X_test, y_train, y_test = train_test_split(preprocessing.minmax_scale(X),
preprocessing.minmax_scale(y.reshape((-1, 1))),
test_size=0.25)
# use Anisotropic GRNN with Limited-Memory BFGS algorithm
# to select the optimal bandwidths
AGRNN = GRNN(calibration = 'gradient_search')
AGRNN.fit(X_train, y_train.ravel())
sigma = AGRNN.sigma
y_pred = AGRNN.predict(X_test)
mse_AGRNN = MSE(y_test, y_pred)
mse_AGRNN ## 0.030437040
``` |
33,713,149 | I have a text file containing CPU stats as below (from sar/sysstat)
```
17:30:38 CPU %user %nice %system %iowait %steal %idle
17:32:49 all 14.56 2.71 3.79 0.00 0.00 78.94
17:42:49 all 12.68 2.69 3.44 0.00 0.00 81.19
17:52:49 all 12.14 2.67 3.22 0.01 0.00 81.96
18:02:49 all 12.28 2.67 3.20 0.03 0.00 81.82
```
My goal is to build lists for each column (except CPU, %nice and %steal) so I can plot them using bokeh, so I tried to split each line into a list, but then I don't know how to ignore certain values, i.e.
```
#!/usr/bin/python
cpu_time = []
cpu_user = []
cpu_system = []
cpu_iowait = []
cpu_idle = []
with open('stats.txt') as F:
for line in F:
time, ignore, user, ignore, system, iowait, ignore, idle = line.split()
cpu_time.append(time)
cpu_user.append(user)
cpu_system.append(system)
cpu_iowait.append(iowait)
cpu_idle.append(idle)
```
Is there a better/shorter way to do this? More specifically, the logic I used to ignore some of the items doesn't look good to me. | 2015/11/14 | [
"https://Stackoverflow.com/questions/33713149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247154/"
] | Here is a more dynamic version that would scale to more columns. But there isn't really anything bad about your implementation.
```
# build a dict of column name -> list of column values
stats = {}
with open('stats.txt') as F:
header = None
for idx, line in enumerate(F):
# This is the header
if idx == 0:
# save the header for later use
header = line.split()
for word in header:
stats[word] = []
else:
# combine the header with the line to get a dict
line_dict = dict(zip(header, line.split()))
for key, val in line_dict.iteritems():
stats[key].append(val)
# remove keys we don't want
stats.pop('%nice')
stats.pop('%steal')
``` | First you could use `_` or `__` to represent ignored values (this is a common convention).
Next you could store all values into a single list and then unpack the list into multiple lists using `zip`.
```
cpu_stats = []
with open('stats.txt') as stats_file:
for line in stats_file:
time, _, user, _, system, iowait, _, idle = line.split()
cpu_stats.append([time, user, system, iowait, idle])
cpu_time, cpu_user, cpu_system, cpu_iowait, cpu_idle = zip(*cpu_stats)
```
You could write this using a couple list comprehensions, but I don't think it's any more readable:
```
with open('stats.txt') as stats_file:
lines = (line.split() for line in stats_file)
cpu_stats = [
(time, user, system, iowait, idle)
for time, _, user, _, system, iowait, _, idle
in lines
]
cpu_time, cpu_user, cpu_system, cpu_iowait, cpu_idle = zip(*cpu_stats)
``` |
33,713,149 | I have a text file containing CPU stats as below (from sar/sysstat)
```
17:30:38 CPU %user %nice %system %iowait %steal %idle
17:32:49 all 14.56 2.71 3.79 0.00 0.00 78.94
17:42:49 all 12.68 2.69 3.44 0.00 0.00 81.19
17:52:49 all 12.14 2.67 3.22 0.01 0.00 81.96
18:02:49 all 12.28 2.67 3.20 0.03 0.00 81.82
```
My goal is to build lists for each column (except CPU, %nice and %steal) so I can plot them using bokeh, so I tried to split each line into a list, but then I don't know how to ignore certain values, i.e.
```
#!/usr/bin/python
cpu_time = []
cpu_user = []
cpu_system = []
cpu_iowait = []
cpu_idle = []
with open('stats.txt') as F:
for line in F:
time, ignore, user, ignore, system, iowait, ignore, idle = line.split()
cpu_time.append(time)
cpu_user.append(user)
cpu_system.append(system)
cpu_iowait.append(iowait)
cpu_idle.append(idle)
```
Is there a better/shorter way to do this? More specifically, the logic I used to ignore some of the items doesn't look good to me. | 2015/11/14 | [
"https://Stackoverflow.com/questions/33713149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247154/"
] | Here is a more dynamic version that would scale to more columns. But there isn't really anything bad about your implementation.
```
# build a dict of column name -> list of column values
stats = {}
with open('stats.txt') as F:
header = None
for idx, line in enumerate(F):
# This is the header
if idx == 0:
# save the header for later use
header = line.split()
for word in header:
stats[word] = []
else:
# combine the header with the line to get a dict
line_dict = dict(zip(header, line.split()))
for key, val in line_dict.iteritems():
stats[key].append(val)
# remove keys we don't want
stats.pop('%nice')
stats.pop('%steal')
``` | This is a bit more generic. You can define a list of desired column names. It uses [csv-Dictreader](https://docs.python.org/3/library/csv.html?highlight=csv#csv.DictReader) to read the file. The names are given without the `%` suffix. In addition, it converts the time into a `datetime.time` object from the module [datetime](https://docs.python.org/3/library/datetime.html?highlight=datetime#module-datetime) and all other columns into floats. You can specify your own data conversion functions for all columns, using the dictionary `converters`.
```
import csv
import datetime
def make_col_keys(fobj, col_names):
time_key = fobj.readline().split()[0]
cols = {'time': time_key}
cols.update({key: '%' + key for key in col_names})
fobj.seek(0)
return cols
def convert_time(time_string):
return datetime.datetime.strptime(time_string, '%H:%M:%S').time()
converters = {'time': convert_time}
def read_stats(file_name, col_names, converters=converters):
with open(file_name) as fobj:
cols = make_col_keys(fobj, col_names)
reader = csv.DictReader(fobj, delimiter=' ', skipinitialspace=True)
data = {}
for line in reader:
for new_key, old_key in cols.items():
value = converters.get(new_key, float)(line[old_key])
data.setdefault(new_key, []).append(value)
return data
def main(file_name, col_names=None):
if col_names is None:
col_names = ['user', 'system', 'iowait', 'idle']
return read_stats(file_name, col_names)
main('stats.txt')
```
Result:
```
{'idle': [78.94, 81.19, 81.96, 81.82],
'iowait': [0.0, 0.0, 0.01, 0.03],
'system': [3.79, 3.44, 3.22, 3.2],
'time': [datetime.time(17, 32, 49),
datetime.time(17, 42, 49),
datetime.time(17, 52, 49),
datetime.time(18, 2, 49)],
'user': [14.56, 12.68, 12.14, 12.28]}
``` |
33,713,149 | I have a text file containing CPU stats as below (from sar/sysstat)
```
17:30:38 CPU %user %nice %system %iowait %steal %idle
17:32:49 all 14.56 2.71 3.79 0.00 0.00 78.94
17:42:49 all 12.68 2.69 3.44 0.00 0.00 81.19
17:52:49 all 12.14 2.67 3.22 0.01 0.00 81.96
18:02:49 all 12.28 2.67 3.20 0.03 0.00 81.82
```
My goal is to build lists for each column (except CPU, %nice and %steal) so I can plot them using bokeh, so I tried to split each line into a list, but then I don't know how to ignore certain values, i.e.
```
#!/usr/bin/python
cpu_time = []
cpu_user = []
cpu_system = []
cpu_iowait = []
cpu_idle = []
with open('stats.txt') as F:
for line in F:
time, ignore, user, ignore, system, iowait, ignore, idle = line.split()
cpu_time.append(time)
cpu_user.append(user)
cpu_system.append(system)
cpu_iowait.append(iowait)
cpu_idle.append(idle)
```
Is there a better/shorter way to do this? More specifically, the logic I used to ignore some of the items doesn't look good to me. | 2015/11/14 | [
"https://Stackoverflow.com/questions/33713149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247154/"
] | This is a bit more generic. You can define a list of desired column names. It uses [csv-Dictreader](https://docs.python.org/3/library/csv.html?highlight=csv#csv.DictReader) to read the file. The names are given without the `%` suffix. In addition, it converts the time into a `datetime.time` object from the module [datetime](https://docs.python.org/3/library/datetime.html?highlight=datetime#module-datetime) and all other columns into floats. You can specify your own data conversion functions for all columns, using the dictionary `converters`.
```
import csv
import datetime
def make_col_keys(fobj, col_names):
time_key = fobj.readline().split()[0]
cols = {'time': time_key}
cols.update({key: '%' + key for key in col_names})
fobj.seek(0)
return cols
def convert_time(time_string):
return datetime.datetime.strptime(time_string, '%H:%M:%S').time()
converters = {'time': convert_time}
def read_stats(file_name, col_names, converters=converters):
with open(file_name) as fobj:
cols = make_col_keys(fobj, col_names)
reader = csv.DictReader(fobj, delimiter=' ', skipinitialspace=True)
data = {}
for line in reader:
for new_key, old_key in cols.items():
value = converters.get(new_key, float)(line[old_key])
data.setdefault(new_key, []).append(value)
return data
def main(file_name, col_names=None):
if col_names is None:
col_names = ['user', 'system', 'iowait', 'idle']
return read_stats(file_name, col_names)
main('stats.txt')
```
Result:
```
{'idle': [78.94, 81.19, 81.96, 81.82],
'iowait': [0.0, 0.0, 0.01, 0.03],
'system': [3.79, 3.44, 3.22, 3.2],
'time': [datetime.time(17, 32, 49),
datetime.time(17, 42, 49),
datetime.time(17, 52, 49),
datetime.time(18, 2, 49)],
'user': [14.56, 12.68, 12.14, 12.28]}
``` | First you could use `_` or `__` to represent ignored values (this is a common convention).
Next you could store all values into a single list and then unpack the list into multiple lists using `zip`.
```
cpu_stats = []
with open('stats.txt') as stats_file:
for line in stats_file:
time, _, user, _, system, iowait, _, idle = line.split()
cpu_stats.append([time, user, system, iowait, idle])
cpu_time, cpu_user, cpu_system, cpu_iowait, cpu_idle = zip(*cpu_stats)
```
You could write this using a couple list comprehensions, but I don't think it's any more readable:
```
with open('stats.txt') as stats_file:
lines = (line.split() for line in stats_file)
cpu_stats = [
(time, user, system, iowait, idle)
for time, _, user, _, system, iowait, _, idle
in lines
]
cpu_time, cpu_user, cpu_system, cpu_iowait, cpu_idle = zip(*cpu_stats)
``` |
21,881,748 | This may be a stupid question but I'm not sure how to phrase it in a google-friendly way...
In a terminal if you type something like:
```
nano some_file
```
then nano opens up an edit window inside the terminal. A text based application. Ctrl+X closes it again and you see the terminal as it was.
Here's another example:
```
man ls
```
How can I make a text based terminal application in python?
I hope this question makes sense, let me know if you need more clarification... | 2014/02/19 | [
"https://Stackoverflow.com/questions/21881748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/742082/"
] | You probably need to use the alternate screen buffer. To enable it, just print '\033[?1049h', and to disable it, print '\033[?1049l' (Terminal Control Escape Sequences).
<http://invisible-island.net/xterm/ctlseqs/ctlseqs.html#The%20Alternate%20Screen%20Buffer>
Example:
```
print('\033[?1049h', end='')
print('Alternative screen buffer')
s = input()
print('\033[?1049l', end='')
print('Normal mode')
print(s)
``` | This does the trick:
<http://docs.python.org/2/howto/curses.html>
Example:
```
import curses
oScreen = curses.initscr()
curses.noecho()
curses.curs_set(0)
oScreen.keypad(1)
oScreen.addstr("Woooooooooooooo\n\n",curses.A_BOLD)
while True:
oEvent = oScreen.getch()
if oEvent == ord("q"):
break
curses.endwin()
``` |
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery it looks like it supports a bunch of different backends, some with more features then the others. The two most popular look like redis and RabbitMQ so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If that node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to setup a second server and replicate the file system using DRBD, and then running something like pacemaker to switch the clients to the backup server when it needs too. This seems pretty complicated, not sure if there is a better way. Anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing?
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | I suspect that Celery bound to existing backends is the wrong solution for the reliability guarantees you need.
Given that you want a distributed queueing system with strong durability and reliability guarantees, I'd start by looking for such a system (they do exist) and then figuring out the best way to bind to it in Python. That may be via Celery & a new backend, or not. | I've used Amazon SQS for this purpose and got good results. You will receive a message until you delete it from the queue, and it allows you to grow your app as much as you need.
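To make that receive-until-you-delete behaviour concrete, here is a rough sketch using today's boto3 client (boto3 did not exist when this was written, and the queue URL and region below are placeholders):
```
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # region is an assumption
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

sqs.send_message(QueueUrl=queue_url, MessageBody="crawl blog 42")

# The message stays in the queue (and reappears after the visibility timeout)
# until it is explicitly deleted, which is what gives you the processing guarantee.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])  # do the real work here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```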
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery, it looks like it supports a bunch of different backends, some with more features than the others. The two most popular look like Redis and RabbitMQ, so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If the node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to set up a second server and replicate the file system using DRBD, and then run something like Pacemaker to switch the clients to the backup server when it needs to. This seems pretty complicated, and I'm not sure if there is a better way. Does anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing?
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | You might want to check out [IronMQ](http://iron.io/celery), it covers your requirements (durable, highly available, etc) and is a cloud native solution so zero maintenance. And there's a Celery broker for it: <https://github.com/iron-io/iron_celery> so you can start using it just by changing your Celery config. | I've used Amazon SQS for this purpose and got good results. You will receive a message until you delete it from the queue, and it allows you to grow your app as much as you need.
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery, it looks like it supports a bunch of different backends, some with more features than the others. The two most popular look like Redis and RabbitMQ, so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If the node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to set up a second server and replicate the file system using DRBD, and then run something like Pacemaker to switch the clients to the backup server when it needs to. This seems pretty complicated, and I'm not sure if there is a better way. Does anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing?
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | A lot has changed since the OP! There is now an option for high-availability aka "mirrored" queues. This goes pretty far toward solving the problem you described. See <http://www.rabbitmq.com/ha.html>. | I've used Amazon SQS for this purpose and got good results. You will receive a message until you delete it from the queue, and it allows you to grow your app as much as you need.
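On the "guarantee message processing" side of the question, the relevant Celery knobs in a modern release look roughly like this (these lowercase setting names are from current Celery versions, long after this question was asked, and the broker URL is a placeholder, so treat it as a sketch):
```
from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//")  # RabbitMQ URL is a placeholder

# Acknowledge only after the task finishes, so a crashed worker puts the
# message back on the (durable) queue instead of silently dropping it.
app.conf.task_acks_late = True
app.conf.task_reject_on_worker_lost = True
app.conf.worker_prefetch_multiplier = 1

@app.task
def process_message(payload):
    ...  # the work that must not be lost
```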
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery, it looks like it supports a bunch of different backends, some with more features than the others. The two most popular look like Redis and RabbitMQ, so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If the node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to set up a second server and replicate the file system using DRBD, and then run something like Pacemaker to switch the clients to the backup server when it needs to. This seems pretty complicated, and I'm not sure if there is a better way. Does anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing?
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | I suspect that Celery bound to existing backends is the wrong solution for the reliability guarantees you need.
Given that you want a distributed queueing system with strong durability and reliability guarantees, I'd start by looking for such a system (they do exist) and then figuring out the best way to bind to it in Python. That may be via Celery & a new backend, or not. | Is using a distributed rendering system an option? Normally reserved for HPC, but a lot of the concepts are the same. Check out Qube or Deadline Render. There are other, open source solutions as well. All have failover in mind given the high degree of complexity and risk of failure in some renders that can take hours per image sequence frame.
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery, it looks like it supports a bunch of different backends, some with more features than the others. The two most popular look like Redis and RabbitMQ, so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If the node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to set up a second server and replicate the file system using DRBD, and then run something like Pacemaker to switch the clients to the backup server when it needs to. This seems pretty complicated, and I'm not sure if there is a better way. Does anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing?
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | You might want to check out [IronMQ](http://iron.io/celery), it covers your requirements (durable, highly available, etc) and is a cloud native solution so zero maintenance. And there's a Celery broker for it: <https://github.com/iron-io/iron_celery> so you can start using it just by changing your Celery config. | I suspect that Celery bound to existing backends is the wrong solution for the reliability guarantees you need.
Given that you want a distributed queueing system with strong durability and reliability guarantees, I'd start by looking for such a system (they do exist) and then figuring out the best way to bind to it in Python. That may be via Celery & a new backend, or not. |
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery, it looks like it supports a bunch of different backends, some with more features than the others. The two most popular look like Redis and RabbitMQ, so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If the node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to set up a second server and replicate the file system using DRBD, and then run something like Pacemaker to switch the clients to the backup server when it needs to. This seems pretty complicated, and I'm not sure if there is a better way. Does anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing?
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | A lot has changed since the OP! There is now an option for high-availability aka "mirrored" queues. This goes pretty far toward solving the problem you described. See <http://www.rabbitmq.com/ha.html>. | I suspect that Celery bound to existing backends is the wrong solution for the reliability guarantees you need.
Given that you want a distributed queueing system with strong durability and reliability guarantees, I'd start by looking for such a system (they do exist) and then figuring out the best way to bind to it in Python. That may be via Celery & a new backend, or not. |
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery, it looks like it supports a bunch of different backends, some with more features than the others. The two most popular look like Redis and RabbitMQ, so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If the node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to set up a second server and replicate the file system using DRBD, and then run something like Pacemaker to switch the clients to the backup server when it needs to. This seems pretty complicated, and I'm not sure if there is a better way. Does anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing?
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | You might want to check out [IronMQ](http://iron.io/celery), it covers your requirements (durable, highly available, etc) and is a cloud native solution so zero maintenance. And there's a Celery broker for it: <https://github.com/iron-io/iron_celery> so you can start using it just by changing your Celery config. | Is using a distributed rendering system an option? Normally reserved for HPC, but a lot of the concepts are the same. Check out Qube or Deadline Render. There are other, open source solutions as well. All have failover in mind given the high degree of complexity and risk of failure in some renders that can take hours per image sequence frame.
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery, it looks like it supports a bunch of different backends, some with more features than the others. The two most popular look like Redis and RabbitMQ, so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If the node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to set up a second server and replicate the file system using DRBD, and then run something like Pacemaker to switch the clients to the backup server when it needs to. This seems pretty complicated, and I'm not sure if there is a better way. Does anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing?
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | A lot has changed since the OP! There is now an option for high-availability aka "mirrored" queues. This goes pretty far toward solving the problem you described. See <http://www.rabbitmq.com/ha.html>. | Is using a distributed rendering system an option? Normally reserved for HPC, but a lot of the concepts are the same. Check out Qube or Deadline Render. There are other, open source solutions as well. All have failover in mind given the high degree of complexity and risk of failure in some renders that can take hours per image sequence frame.
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery, it looks like it supports a bunch of different backends, some with more features than the others. The two most popular look like Redis and RabbitMQ, so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If the node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to set up a second server and replicate the file system using DRBD, and then run something like Pacemaker to switch the clients to the backup server when it needs to. This seems pretty complicated, and I'm not sure if there is a better way. Does anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing?
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | A lot has changed since the OP! There is now an option for high-availability aka "mirrored" queues. This goes pretty far toward solving the problem you described. See <http://www.rabbitmq.com/ha.html>. | You might want to check out [IronMQ](http://iron.io/celery), it covers your requirements (durable, highly available, etc) and is a cloud native solution so zero maintenance. And there's a Celery broker for it: <https://github.com/iron-io/iron_celery> so you can start using it just by changing your Celery config. |
21,669,632 | I am trying to open a Windows Media Video file on a macintosh using OpenCV. To view this video in MacOS I had to install a player called Flip4Mac. I am assuming that this came with the codecs for decoding WMV. Is there something I can now do to get OpenCV to open the videos using the codec?
In python/opencv2 opening a video should be super easy:
```
cap = cv2.VideoCapture('0009.wmv')
```
But I get this:
```
WARNING: Couldn't read movie file 0009.wmv
``` | 2014/02/10 | [
"https://Stackoverflow.com/questions/21669632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/391339/"
] | Use the `split` function.
```
var str = "Architecture, Royal Melbourne Institute of Technology";
console.log(str.split(",")[0]);// logs Architecture
```
The output array after splitting your string by `,` would have the expected result at the zeroth index. | It's again plain JavaScript; all the methods can be used in Node.js.
var name = "any string";
For example:
```
var str = "Hi, world",
arrayOfStrings = str.split(','),
output = arrayOfStrings[0]; // output contains "Hi"
```
You can update the required field by directly replacing the string ie.
```
arrayOfStrings[0] = "other string";
str = arrayOfStrings.join(' '); // "other string world"
```
Point to be noted:
If we update `output`, we are only updating a copy of the string, NOT the reference, so joining still gives the same text, i.e. "Hi world".
So we need to change the referenced value, i.e. arrayOfStrings[0]; then .join(' ') will combine the required string.
7,020,630 | I wish to run a long-running script in the background upon receiving a request. I read about `subprocess` but I require that the call is nonblocking so that the request can complete in time.
```
def controlCrawlers(request):
if request.method == 'POST' and 'type' in request.POST and 'cc' in request.POST:
if request.POST['type'] == '3':
if request.POST['cc'] == '1':
try: #temp solution checking socket is occupied by trying to connect
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost',DISCOVERY_SOCKET))
s.close()
return HttpResponse(simplejson.dumps({'success':0,'message': 'Socket is occupied. Possible crawler is already running'}), \
mimetype='application/json')
except:
pid = os.fork()
if pid == 0:
#f = open('/home/foo/django','a')
#f.write('abc')
# f.close()
path = os.path.join(os.path.dirname(__file__), 'blogcontentReader/blogpost_crawler.py')
os.system("python %s" %path)
os._exit(0)
return HttpResponse(simplejson.dumps({'success':1,'message': 'Running...'}), \
mimetype='application/json')
```
I used os.fork as suggested from another [post](https://stackoverflow.com/questions/6441807/spawn-a-new-non-blocking-process-using-python-on-mac-os-x) but apparently control does not flow into my `if pid == 0` portion. Is this the correct method to do this? | 2011/08/11 | [
"https://Stackoverflow.com/questions/7020630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/357236/"
] | Yeah, don't do this, use [celery](http://docs.celeryproject.org/en/master/getting-started/introduction.html) instead. It makes running asynchronous tasks a lot easier and more reliable. | If you don't want to use asynchronous task queues with something like Celery, you can always just run a Python script via cron. There are several options to do this. An example:
* create a model which saves the values that are needed by your process
* write a standalone Python/Django script which gets the values from the model, executes the task and removes the database entries (a minimal sketch is shown after this list)
* set up a cronjob to run your script
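As a rough sketch of that standalone-script idea, a Django management command works well; the app, model and field names below are made up for illustration, and it could be scheduled with a crontab line such as `*/5 * * * * python manage.py process_jobs`:
```
# yourapp/management/commands/process_jobs.py  (hypothetical path and names)
from django.core.management.base import BaseCommand
from yourapp.models import CrawlJob  # hypothetical model storing the queued values

class Command(BaseCommand):
    help = "Run the queued crawl jobs in the background"

    def handle(self, *args, **options):
        for job in CrawlJob.objects.all():
            self.stdout.write(f"Crawling {job.url}")
            # ... long-running crawl work goes here ...
            job.delete()  # remove the database entry once the job is done
```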
I'm trying to use Django with virtualenv. I actually got the Django hello world webpage to display with 127.0.0.1:8001. Later I had to do some minor tweaks and now it's giving me this error when I try to launch it again (I ctrl-Z'd the previous working gunicorn session, so I don't think it is because of that).
```
user myenv # /opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py myProject.wsgi
2013-11-02 08:26:37 [27880] [INFO] Starting gunicorn 18.0
2013-11-02 08:26:37 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:37 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:38 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:38 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:39 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:39 [27880] [ERROR] Retrying in 1 second.
^C2013-11-02 08:26:40 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:40 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:41 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:41 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:42 [27880] [ERROR] Can't connect to ('127.0.0.1', 8001)
user myenv #
```
Other commands I recently used include:
```
python manage.py syncdb
python manage.py startapp polls
```
I did 'killall python' to make sure they were not the cause.
gunicorn\_config.py:
```
command = '/opt/myenv/bin/gunicorn'
pythonpath = '/opt/myenv/myProject
workers = 1
user = 'tim'
```
myProject.wsgi:
```
import os
# os.environ["DJANGO_SETTINGS_MODULE"] = "myProject.settings"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myProject.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
``` | 2013/11/02 | [
"https://Stackoverflow.com/questions/19742451",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1661745/"
] | `ctrl+z` halts the process, but does not close it. As a consequence it does not release its ports. You can bring the process back with `fg` and then close it properly using `ctrl+c`. | The port 8000 was probably bound and thus unavailable for the connection.
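If you want to confirm from Python whether the port is actually free before restarting gunicorn, a quick bind test is enough (purely a diagnostic sketch):
```
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("127.0.0.1", 8001))   # the address gunicorn is trying to use
    print("Port 8001 is free")
except OSError as exc:            # "Address already in use" if something still holds it
    print("Port 8001 is still taken:", exc)
finally:
    s.close()
```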
I'm trying to use Django with virtualenv. I actually got the Django hello world webpage to display with 127.0.0.1:8001. Later I had to do some minor tweaks and now it's giving me this error when I try to launch it again (I ctrl-Z'd the previous working gunicorn session, so I don't think it is because of that).
```
user myenv # /opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py myProject.wsgi
2013-11-02 08:26:37 [27880] [INFO] Starting gunicorn 18.0
2013-11-02 08:26:37 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:37 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:38 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:38 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:39 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:39 [27880] [ERROR] Retrying in 1 second.
^C2013-11-02 08:26:40 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:40 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:41 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:41 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:42 [27880] [ERROR] Can't connect to ('127.0.0.1', 8001)
user myenv #
```
Other commands I recently used include:
```
python manage.py syncdb
python manage.py startapp polls
```
I did 'killall python' to make sure they were not the cause.
gunicorn\_config.py:
```
command = '/opt/myenv/bin/gunicorn'
pythonpath = '/opt/myenv/myProject
workers = 1
user = 'tim'
```
myProject.wsgi:
```
import os
# os.environ["DJANGO_SETTINGS_MODULE"] = "myProject.settings"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myProject.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
``` | 2013/11/02 | [
"https://Stackoverflow.com/questions/19742451",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1661745/"
] | `ctrl+z` halts the process, but does not close it. As a consequence it does not release its ports. You can bring the process back with `fg` and then close it properly using `ctrl+c`. | The error `Connection in use: ...` basically means that the port is still in use even though you exited the server. You need to find out which process is currently using the port and stop it. This command can help you find it:
```py
$ sudo netstat -nlp | grep :80
```
Then you can sudo kill that process:
```py
sudo fuser -k 8000/tcp
```
You should be able to restart `gunicorn`. |
I'm trying to use Django with virtualenv. I actually got the Django hello world webpage to display with 127.0.0.1:8001. Later I had to do some minor tweaks and now it's giving me this error when I try to launch it again (I ctrl-Z'd the previous working gunicorn session, so I don't think it is because of that).
```
user myenv # /opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py myProject.wsgi
2013-11-02 08:26:37 [27880] [INFO] Starting gunicorn 18.0
2013-11-02 08:26:37 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:37 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:38 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:38 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:39 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:39 [27880] [ERROR] Retrying in 1 second.
^C2013-11-02 08:26:40 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:40 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:41 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:41 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:42 [27880] [ERROR] Can't connect to ('127.0.0.1', 8001)
user myenv #
```
Other commands I recently used include:
```
python manage.py syncdb
python manage.py startapp polls
```
I did 'killall python' to make sure they were not the cause.
gunicorn\_config.py:
```
command = '/opt/myenv/bin/gunicorn'
pythonpath = '/opt/myenv/myProject
workers = 1
user = 'tim'
```
myProject.wsgi:
```
import os
# os.environ["DJANGO_SETTINGS_MODULE"] = "myProject.settings"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myProject.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
``` | 2013/11/02 | [
"https://Stackoverflow.com/questions/19742451",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1661745/"
] | The error `Connection in use: ...` basically means that the port is still in use even though you exited the server. You need to find out which process is currently using the port and stop it. This command can help you find it:
```py
$ sudo netstat -nlp | grep :80
```
Then you can sudo kill that process:
```py
sudo fuser -k 8000/tcp
```
You should be able to restart `gunicorn`. | The port 8000 was probably bound and thus unavailable for the connection. |
62,295,863 | I have this (python) list
my\_list = [['dog','cat','mat','fun'],['bob','cat','pan','fun'],['dog','ben','mat','rat'],
['cat','mat','fun','dog'],['mat','fun','dog','cat'],['fun','dog','cat','mat'],
['rat','dog','ben','mat'],['dog','mat','cat','fun'], ...
]
my\_list has 200704 elements
Note here
my\_list[0] = ['dog','cat','mat','fun']
dog->cat->mat->fun->dog
my\_list[3] = ['cat','mat','fun','dog']
cat->mat->fun->dog->cat
my\_list[4] = ['mat','fun','dog','cat']
mat->fun->dog->cat->mat
my\_list[5] = ['fun','dog','cat','mat']
fun->dog->cat->mat->fun
Going circular, they are all the same. So they should be marked duplicates.
Note:
my\_list[0] = ['dog','cat','mat','fun']
my\_list[7] = ['dog','mat','cat','fun']
These should NOT be marked duplicates since going circular, they are different.
Similarly,
my\_list[2] = ['dog','ben','mat','rat']
my\_list[6] = ['rat','dog','ben','mat']
They should be marked duplicates.
```
def remove_circular_duplicates(my_list):
# the quicker and more elegent logic here
# the function should identify that my_list[0], my_list[3], my_list[4] and my_list[5] are circular duplicates
# keep only my_list[0] and delete the rest 3
# same for my_list[2] and my_list[6] and so on
return (my_list_with_no_circular_duplicates)
```
----------------------------------------------------------------
My try:
----------------------------------------------------------------
This works, but it takes more than 3 hrs to finish 200704 elements.
And it's not an elegant way either.. (pardon my level)
```
t=my_list
tLen=len(t)
while i<tLen:
c=c+1
if c>2000:
# this is just to keep you informed of the progress
print(f'{i} of {tLen} finished ..')
c=0
if (finalT[i][4]=='unmarked'):
# make 0-1-2-3 -> 1-2-3-0 and check any duplicates
x0,x1,x2,x3 = t[i][1],t[i][2],t[i][3],t[i][0]
# make 0-1-2-3 -> 2-3-0-1 and check any duplicates
y0,y1,y2,y3 = t[i][2],t[i][3],t[i][0],t[i][1]
# make 0-1-2-3 -> 3-0-1-2 and check any duplicates
z0,z1,z2,z3 = t[i][3],t[i][0],t[i][1],t[i][2]
while j<tLen:
if (finalT[j][4]=='unmarked' and j!=i):
#j!=i skips checking the same (self) element
tString=t[j][0]+t[j][1]+t[j][2]+t[j][3]
if (x0+x1+x2+x3 == tString) or (y0+y1+y2+y3 == tString) or (z0+z1+z2+z3 == tString):
# duplicate found, mark it as 'duplicate'
finalT[j][4]='duplicate'
tString=''
j=j+1
finalT[i][4] = 'original'
j=0
i=i+1
# make list of only those marked as 'original'
i=0
ultimateT = []
while i<tLen:
if finalT[i][4] == 'original':
ultimateT.append(finalT[i])
i=i+1
# strip the 'original' mark and keep only the quad
i=0
ultimateTLen=len(ultimateT)
while i<ultimateTLen:
ultimateT[i].remove('original')
i=i+1
my_list_with_no_circular_duplicates = ultimateT
print (f'\n\nDONE!! \nStarted at: {start_time}\nEnded at {datetime.datetime.now()}')
return my_list_with_no_circular_duplicates
```
What I want is a quicker way of doing the same.
Thanks in advance. | 2020/06/10 | [
"https://Stackoverflow.com/questions/62295863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13717822/"
] | Your implementation is an n-squared algorithm, which means that the implementation time will grow dramatically for a large data set. 200,000 squared is a very large number. You need to convert this to an order n or n-log(n) algorithm. To do that you need to preprocess the data so that you can check whether a circularly equivalent item is also in the list without having to search through the list. To do that put each of the entries into a form that they can be compared without needing to iterate through the list. I would recommend that you rotate each entry so that it has the alphabetically first item first. For example change ['dog','cat','mat','fun'] to ['cat','mat','fun','dog']. That is an order n operation to process each element of the list once.
Then with them all in a common format you have several choices to determine if each entry is unique. I would use a set. For each item, check if the item is in the set; if not, it is unique and should be added to the set. If the item is already in the set, then an equivalent item has already been found and this item can be removed. Checking if an item is in a set is a constant-time operation in Python: it uses a hash table to look the item up instead of needing to search. The result is that this is also an order n operation to go through each entry doing the check. Overall the algorithm is order n and will be dramatically faster than what you were doing. | @BradBudlong
Brad Budlong's answer is right.
Following is the implementation result of the same.
My method (given in the question):
Time taken: ~274 min
Result: len(my\_list\_without\_circular\_duplicates) >> 50176
Brad Budlong's method:
Time taken: ~12 sec (great !)
Result: len(my\_list\_without\_circular\_duplicates) >> 50176
Following is just the implementation of Brad Budlong's method:
```
# extract all individual words like 'cat', 'rat', 'fun' and put in a list without duplicates
all_non_duplicate_words_from_my_list = {.. the appropriate code here}
# and sort them alphabetically
alphabetically_sorted_words = sorted(all_non_duplicate_words_from_my_list)
# mark all as 'unsorted'
all_q_marked=[]
for i in my_list:
all_q_marked.append([i,'unsorted'])
# format my_list- in Brad's words,
# rotate each entry so that it has the alphabetically first item first.
# For example change ['dog','cat','mat','fun'] to ['cat','mat','fun','dog']
for w in alphabetically_sorted_words:
print(f'{w} in progress ..')
for q in all_q_marked:
if q[1]=='unsorted':
# check if the word exist in the quad
if w in q[0]:
# word exist, then rotate this quad to put that word in first place
# rotation_count=q[0].index(w) -- alternate method lines
quad=q[0]
for j in range(4):
quad=quad[-1:] + quad[:-1]
if quad[0]==w:
q[0]=quad
break
# mark as sorted
q[1]='sorted'
# strip the 'sorted' mark and keep only the quad
i=0
formatted_my_list=[]
while i<len(all_q_marked):
formatted_my_list.append(all_q_marked[i][0])
i=i+1
# finally remove duplicate lists in the list
my_list_without_circular_duplicates = [list(t) for t in set(tuple(element) for element in formatted_my_list)]
print (my_list_without_circular_duplicates)
```
Note here that although it iterates over alphabetically\_sorted\_words (201 entries) against the entire all\_q\_marked (200704 entries), the time taken to process drops sharply as elements in all\_q\_marked get marked as 'sorted'.
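For comparison, the same canonical-rotation idea can be written much more compactly; this is only an illustrative sketch, not the code the timings above were measured with:
```
def remove_circular_duplicates(quads):
    seen = set()
    result = []
    for quad in quads:
        # The smallest rotation is a canonical key that is identical for all
        # circularly-equivalent orderings of the same quad.
        key = min(tuple(quad[i:] + quad[:i]) for i in range(len(quad)))
        if key not in seen:
            seen.add(key)
            result.append(quad)
    return result

print(remove_circular_duplicates([['dog','cat','mat','fun'],
                                  ['cat','mat','fun','dog'],
                                  ['dog','mat','cat','fun']]))
# keeps ['dog','cat','mat','fun'] and ['dog','mat','cat','fun']
```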
58,909,624 | While reading this [article](https://pbpython.com/pandas_transform.html), I came across this statement.
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
```
Other than `reset_index()` method call, everything else is clear to me.
My question is: what will happen if I don't call `reset_index()`, considering the sequence given below?
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
df_1 = df.merge(order_total)
df_1["Percent_of_Order"] = df_1["ext price"] / df_1["Order_Total"]
```
I tried to understand this method from <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html>, but couldn't understand what it means to *reset the index* of a dataframe. | 2019/11/18 | [
"https://Stackoverflow.com/questions/58909624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1465553/"
] | I think it is better here to use [`GroupBy.transform`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html) to build a new `Series` with the same size as the original DataFrame, filled with the aggregated values, so `merge` is not necessary:
```
df_1 = pd.DataFrame({
'A':list('abcdef'),
'ext price':[5,3,6,9,2,4],
'order':list('aaabbb')
})
order_total1 = df_1.groupby('order')["ext price"].transform('sum')
df_1["Percent_of_Order"] = df_1["ext price"] / order_total1
print (df_1)
A ext price order Percent_of_Order
0 a 5 a 0.357143
1 b 3 a 0.214286
2 c 6 a 0.428571
3 d 9 b 0.600000
4 e 2 b 0.133333
5 f 4 b 0.266667
```
>
> My question is what will happen if I don't call reset\_index() considering the sequence?
>
>
>
Here is the `Series` before `reset_index()`; `reset_index` then converts the `Series` to a 2-column DataFrame, where the first column is named after the index and the second column after the `Series`.
```
order_total = df_1.groupby('order')["ext price"].sum().rename("Order_Total")
print (order_total)
order
a 14
b 15
Name: Order_Total, dtype: int64
print (type(order_total))
<class 'pandas.core.series.Series'>
print (order_total.name)
Order_Total
print (order_total.index.name)
order
print (order_total.reset_index())
order Order_Total
0 a 14
1 b 15
```
The reason the 2-column DataFrame is necessary in your code is that no parameter is passed to `merge`. That means it uses the `on` parameter with the intersection of common column names between both DataFrames, here the `order` column. | A simplified explanation is this:
`reset_index()` takes the current index, and places it in column 'index'. Then it recreates a new 'linear' index for the data-set.
```
df=pd.DataFrame([20,30,40,50],index=[2,3,4,5])
0
2 20
3 30
4 40
5 50
df.reset_index()
index 0
0 2 20
1 3 30
2 4 40
3 5 50
``` |
58,909,624 | While reading this [article](https://pbpython.com/pandas_transform.html), I came across this statement.
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
```
Other than `reset_index()` method call, everything else is clear to me.
My question is: what will happen if I don't call `reset_index()`, considering the sequence given below?
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
df_1 = df.merge(order_total)
df_1["Percent_of_Order"] = df_1["ext price"] / df_1["Order_Total"]
```
I tried to understand this method from <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html>, but couldn't understand what it means to *reset the index* of a dataframe. | 2019/11/18 | [
"https://Stackoverflow.com/questions/58909624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1465553/"
] | A simplified explanation is this:
`reset_index()` takes the current index, and places it in column 'index'. Then it recreates a new 'linear' index for the data-set.
```
df=pd.DataFrame([20,30,40,50],index=[2,3,4,5])
0
2 20
3 30
4 40
5 50
df.reset_index()
index 0
0 2 20
1 3 30
2 4 40
3 5 50
``` | To answer your question:
>
> My question is what will happen if I don't call reset\_index() considering the sequence?
>
>
>
You will have a multi-index formed by the keys you have applied the group-by statement on,
e.g. 'order' in your case.
Specific to the article, a difference in the indices of the two dataframes may cause wrong merges (done after the group-by statement).
Hence, a reset-index is needed to perform the correct merge. |
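A tiny sketch of what that looks like in practice (the values are made up): after the groupby-sum, 'order' lives in the index, and `reset_index()` is what turns it back into an ordinary column that `merge` can join on:
```
import pandas as pd

df = pd.DataFrame({"order": ["a", "a", "b"], "ext price": [5.0, 3.0, 9.0]})
totals = df.groupby("order")["ext price"].sum().rename("Order_Total")

print(totals.index)          # Index(['a', 'b'], dtype='object', name='order')
# Here 'order' is the index, not a column, so df.merge(...) has nothing to join on.

print(totals.reset_index())  # 'order' becomes a regular column again,
                             # with columns ['order', 'Order_Total']
```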
58,909,624 | While reading this [article](https://pbpython.com/pandas_transform.html), I came across this statement.
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
```
Other than `reset_index()` method call, everything else is clear to me.
My question is: what will happen if I don't call `reset_index()`, considering the sequence given below?
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
df_1 = df.merge(order_total)
df_1["Percent_of_Order"] = df_1["ext price"] / df_1["Order_Total"]
```
I tried to understand this method from <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html>, but couldn't understand what it means to *reset the index* of a dataframe. | 2019/11/18 | [
"https://Stackoverflow.com/questions/58909624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1465553/"
] | I think it is better here to use [`GroupBy.transform`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html) to build a new `Series` with the same size as the original DataFrame, filled with the aggregated values, so `merge` is not necessary:
```
df_1 = pd.DataFrame({
'A':list('abcdef'),
'ext price':[5,3,6,9,2,4],
'order':list('aaabbb')
})
order_total1 = df_1.groupby('order')["ext price"].transform('sum')
df_1["Percent_of_Order"] = df_1["ext price"] / order_total1
print (df_1)
A ext price order Percent_of_Order
0 a 5 a 0.357143
1 b 3 a 0.214286
2 c 6 a 0.428571
3 d 9 b 0.600000
4 e 2 b 0.133333
5 f 4 b 0.266667
```
>
> My question is what will happen if I don't call reset\_index() considering the sequence?
>
>
>
Here is the `Series` before `reset_index()`; `reset_index` then converts the `Series` to a 2-column DataFrame, where the first column is named after the index and the second column after the `Series`.
```
order_total = df_1.groupby('order')["ext price"].sum().rename("Order_Total")
print (order_total)
order
a 14
b 15
Name: Order_Total, dtype: int64
print (type(order_total))
<class 'pandas.core.series.Series'>
print (order_total.name)
Order_Total
print (order_total.index.name)
order
print (order_total.reset_index())
order Order_Total
0 a 14
1 b 15
```
The reason the 2-column DataFrame is necessary in your code is that no parameter is passed to `merge`. That means it uses the `on` parameter with the intersection of common column names between both DataFrames, here the `order` column. | Reset Index will create an index starting from 0 and remove any column currently set as the index.
```
import pandas as pd
df = pd.DataFrame(
{
"ID": [1, 2, 3, 4, 5],
"name": [
"Hello Kitty",
"Hello Puppy",
"It is an Helloexample",
"for stackoverflow",
"Hello World",
],
}
)
newdf = df.set_index('ID')
print(newdf.reset_index())
```
Output Before reset\_index():
```
name
ID
1 Hello Kitty
2 Hello Puppy
3 It is an Helloexample
4 for stackoverflow
5 Hello World
```
Output after reset\_index():
```
ID name
0 1 Hello Kitty
1 2 Hello Puppy
2 3 It is an Helloexample
3 4 for stackoverflow
4 5 Hello World
``` |
58,909,624 | While reading this [article](https://pbpython.com/pandas_transform.html), I came across this statement.
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
```
Other than `reset_index()` method call, everything else is clear to me.
My question is: what will happen if I don't call `reset_index()`, considering the sequence given below?
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
df_1 = df.merge(order_total)
df_1["Percent_of_Order"] = df_1["ext price"] / df_1["Order_Total"]
```
I tried to understand this method from <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html>, but couldn't understand what it means to *reset the index* of a dataframe. | 2019/11/18 | [
"https://Stackoverflow.com/questions/58909624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1465553/"
] | I think it is better here to use [`GroupBy.transform`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html) to build a new `Series` with the same size as the original DataFrame, filled with the aggregated values, so `merge` is not necessary:
```
df_1 = pd.DataFrame({
'A':list('abcdef'),
'ext price':[5,3,6,9,2,4],
'order':list('aaabbb')
})
order_total1 = df_1.groupby('order')["ext price"].transform('sum')
df_1["Percent_of_Order"] = df_1["ext price"] / order_total1
print (df_1)
A ext price order Percent_of_Order
0 a 5 a 0.357143
1 b 3 a 0.214286
2 c 6 a 0.428571
3 d 9 b 0.600000
4 e 2 b 0.133333
5 f 4 b 0.266667
```
>
> My question is what will happen if I don't call reset\_index() considering the sequence?
>
>
>
Here is the `Series` before `reset_index()`; `reset_index` then converts the `Series` to a 2-column DataFrame, where the first column is named after the index and the second column after the `Series`.
```
order_total = df_1.groupby('order')["ext price"].sum().rename("Order_Total")
print (order_total)
order
a 14
b 15
Name: Order_Total, dtype: int64
print (type(order_total))
<class 'pandas.core.series.Series'>
print (order_total.name)
Order_Total
print (order_total.index.name)
order
print (order_total.reset_index())
order Order_Total
0 a 14
1 b 15
```
The reason the 2-column DataFrame is necessary in your code is that no parameter is passed to `merge`. That means it uses the `on` parameter with the intersection of common column names between both DataFrames, here the `order` column. | To answer your question:
>
> My question is what will happen if I don't call reset\_index() considering the sequence?
>
>
>
You will have a multi-index formed by the keys you have applied the group-by statement on,
e.g. 'order' in your case.
Specific to the article, a difference in the indices of the two dataframes may cause wrong merges (done after the group-by statement).
Hence, a reset-index is needed to perform the correct merge. |
58,909,624 | While reading this [article](https://pbpython.com/pandas_transform.html), I came across this statement.
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
```
Other than `reset_index()` method call, everything else is clear to me.
My question is: what will happen if I don't call `reset_index()`, considering the sequence given below?
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
df_1 = df.merge(order_total)
df_1["Percent_of_Order"] = df_1["ext price"] / df_1["Order_Total"]
```
I tried to understand this method from <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html>, but couldn't understand what it means to *reset the index* of a dataframe. | 2019/11/18 | [
"https://Stackoverflow.com/questions/58909624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1465553/"
] | Reset Index will create an index starting from 0 and remove any column currently set as the index.
```
import pandas as pd
df = pd.DataFrame(
{
"ID": [1, 2, 3, 4, 5],
"name": [
"Hello Kitty",
"Hello Puppy",
"It is an Helloexample",
"for stackoverflow",
"Hello World",
],
}
)
newdf = df.set_index('ID')
print(newdf.reset_index())
```
Output Before reset\_index():
```
name
ID
1 Hello Kitty
2 Hello Puppy
3 It is an Helloexample
4 for stackoverflow
5 Hello World
```
Output after reset\_index():
```
ID name
0 1 Hello Kitty
1 2 Hello Puppy
2 3 It is an Helloexample
3 4 for stackoverflow
4 5 Hello World
``` | To answer your question:
>
> My question is what will happen if I don't call reset\_index() considering the sequence?
>
>
>
You will have a multi-index formed by the keys you have applied the group-by statement on,
e.g. 'order' in your case.
Specific to the article, a difference in the indices of the two dataframes may cause wrong merges (done after the group-by statement).
Hence, a reset-index is needed to perform the correct merge. |
I have been using Selenium and Python to web scrape for a couple of weeks now. It has been working fairly well, running on macOS and Windows 7. However, all of a sudden the headless web driver has stopped working. I have been using chromedriver with the following settings:
```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.add_argument("--headless")
options.add_argument('--no-sandbox')
options.add_argument('--disable-gpu')
options.add_argument("--window-size=1920x1080")
driver = webdriver.Chrome(chrome_options=options)
driver.get('url')
```
Initially I had to add the window, gpu and sandbox arguments to get it to work, and it did work up until now. However, when running the script now it gets stuck at driver.get('url'). It doesn't produce an error or anything; it just seems to run indefinitely. When I run without headless and simply run:
```
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('url')
```
it works exactly as intended. This problem is also isolated to my windows machine. Where do I start? | 2019/03/21 | [
"https://Stackoverflow.com/questions/55276170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9428990/"
] | You could try giving your svg an id (or class) and then styling it like so:
```
#test{
opacity:0;
}
#test:hover{
opacity:1;
}
```
---
the id should be inside your svg:
```
<svg id="test" .............. >
</svg>
```
I'm not sure if this is exactly what you mean, but it's an easy way to do it | I would suggest taking a look at [ngx-svg](https://www.npmjs.com/package/ngx-svg) which allows you to create containers and add multiple elements within those containers - in your case circles. It has other elements as well, and there is documentation that makes it easy to understand what you have to do.
Recently, I've been attempting to defeat one of my main weaknesses in programming in general, random generation. I thought it would be an easy thing to do, but the lack of simple information is killing me on it. I don't want to sound dumb, but it feels to me like most of the information from places like [this](http://freespace.virgin.net/hugo.elias/models/m_perlin.htm) is written for mathematicians who went to college to graduate in theoretical mathematics. I just don't understand what I'm meant to do with that information in order to apply it to programming in a language such as Python.
I've been working for a few days staring at equations and making attempt after attempt, but still, after all those days of ripping my code apart again and again, all that's been working properly this entire time is this noise generator that produces basic noise:
```
import random
import math
random.seed(0)
def generateWhiteNoise(width,height):
noise = [[r for r in range(width)] for i in range(height)]
for i in range(0,height):
for j in range(0,width):
noise[i][j] = random.randint(0,1)
return noise
noise = generateWhiteNoise(50,12)
for i in noise:
print()
for o in i:
if(o == 0):
print('-',end='')
else:
print('#',end='')
```
This code produces this result:
```
##-######--#--#-#--##-###-###---#-##-#-----#--##-#
#-#-##-##-#----##------##--#####-#-##---#--#-##---
-------#-#------#---#-#---###--#--#-###-----##-#--
######--#-#-#--####-###---#---###-##--#-#-##--####
-#----###--------##--##--##-#-#--#----###-####--##
---####-#--#--###-#-#--#--#####--####-#-##-##--#--
----#--####-#-#-#-#-#---#--###------###--#-######-
--###--#-###-------#-##--###---#-####----###-#####
#----##--##-#--##-###--#----#-#-##--##-#-##---###-
##---##----##--##--#--#--###-###-#--#-##---#------
-##----#-###---######---#-#---#---###---#---###-##
#--##-##-###-###---#--##-##--##-##-#-#-##--#-#-##-
```
I'm wanting it to eventually produce something like this:
```
--------------------------------------------------
------------------####----------------------------
-----------------#####----------------------------
----------------#####-----------------------------
---------------#####--------------###-------------
---------------#####--------------###-------------
---------------------------------####-------------
---######------------------------####-------------
---######------------###--------------------------
----########---------###--------------------------
-----#######---------###--------------------------
------###-----------------------------------------
```
How can I manage to smooth out the white-noise I generate, and turn it into islands? Can anyone explain it in a very simplistic way for me?
I may be thinking about all of this very wrong. | 2013/07/22 | [
"https://Stackoverflow.com/questions/17779480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2577669/"
] | The direct answer to your question is "No, you cannot do what you are asking", and the second answer is "Yes, you are thinking about this all wrong".
The reason is that you are generating completely random noise. What you are asking for is coherent noise. They are two completely different animals and you cannot get coherent noise from random noise. Hence my answer.
To explain why, you must understand this simple statement which I am repeating from the [excellent libnoise documentation](http://libnoise.sourceforge.net/glossary/index.html#coherentnoise):
---
**Coherent noise**
==================
*A type of smooth pseudorandom noise.*
Coherent noise is generated by a coherent-noise function, which has three important properties:
* Passing in the same input value will always return the same output value.
* A small change in the input value will produce a small change in the output value.
* A large change in the input value will produce a random change in the output value.
---
Random noise does not have these properties, and therefore is completely unsuitable for what you are trying to achieve.
I would suggest studying [Ken Perlin's latest (improved) reference implementation](http://mrl.nyu.edu/~perlin/noise/) and his [SIGGRAPH 2002](http://mrl.nyu.edu/~perlin/paper445.pdf) notes.
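To make those three properties concrete, here is a toy sketch in Python. It is not Perlin noise proper, just plain value noise (a fixed random lattice plus bilinear interpolation), but it already produces blob/island shapes instead of static:
```
import random

random.seed(0)
GRID = 8        # lattice spacing; larger values give smoother, larger features
lattice = {}

def lattice_value(ix, iy):
    # the same lattice point always returns the same value (property 1)
    if (ix, iy) not in lattice:
        lattice[(ix, iy)] = random.random()
    return lattice[(ix, iy)]

def smooth_noise(x, y):
    # bilinear interpolation between the four surrounding lattice values,
    # so nearby inputs give nearby outputs (property 2)
    ix, iy = x // GRID, y // GRID
    fx, fy = (x % GRID) / GRID, (y % GRID) / GRID
    top = lattice_value(ix, iy) * (1 - fx) + lattice_value(ix + 1, iy) * fx
    bottom = lattice_value(ix, iy + 1) * (1 - fx) + lattice_value(ix + 1, iy + 1) * fx
    return top * (1 - fy) + bottom * fy

for y in range(12):
    print(''.join('#' if smooth_noise(x, y) > 0.5 else '-' for x in range(50)))
```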
If you cannot understand or implement Perlin noise yourself, then just use a library such as [libnoise](http://libnoise.sourceforge.net/), an excellent and well-used LGPL library, originally in C++, which has also been ported to many other languages. | Rather, use cellular automata. The algorithm that you find [here](http://www.roguebasin.com/index.php?title=Cellular_Automata_Method_for_Generating_Random_Cave-Like_Levels) creates patterns similar to the ones you would like to see:
```
. . . . . . . . . . . . . . .
. . . . . # # . . . . . # . .
. . . . # # # # . . . # # # .
. . . . . # # # # . . # # # .
. . . . . . # # # # # # # . .
. . . . . . # # # # # # # . .
. . . . # # # # # # # # # . .
. . . # # # # # # # # # # . .
. . # # # # # # . # . # # . .
. . # # # # # . . # . . . . .
. . . # # # # . . . # # # . .
. . . # # # # . . . # # # # .
. . # # # # . . . . . # # # .
. . # # # # . . . . . # # . .
. . . . . . . . . . . . . . .
``` |
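A minimal sketch of that idea in Python, starting from the same kind of white noise as in the question and applying the usual 4-5 rule a few times (the fill rate, rule thresholds and number of steps are all knobs to tune):
```
import random

random.seed(0)
WIDTH, HEIGHT, STEPS = 50, 12, 4

# start with white noise: True means 'land' ('#')
grid = [[random.random() < 0.45 for _ in range(WIDTH)] for _ in range(HEIGHT)]

def filled_neighbours(g, x, y):
    # count land cells in the 8-neighbourhood; out-of-bounds counts as land
    count = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if nx < 0 or ny < 0 or nx >= WIDTH or ny >= HEIGHT or g[ny][nx]:
                count += 1
    return count

for _ in range(STEPS):
    # 4-5 rule: a cell is land if >=5 neighbours are land,
    # or if it is already land and exactly 4 neighbours are land
    grid = [[filled_neighbours(grid, x, y) >= 5
             or (grid[y][x] and filled_neighbours(grid, x, y) == 4)
             for x in range(WIDTH)]
            for y in range(HEIGHT)]

for row in grid:
    print(''.join('#' if cell else '-' for cell in row))
```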
17,779,480 | Recently, I've been attempting to defeat one of my main weaknesses in programming in general, random generation. I thought it would be an easy thing to do, but the lack of simple information is killing me on it. I don't want to sound dumb, but it feels to me like most of the information from places like [this](http://freespace.virgin.net/hugo.elias/models/m_perlin.htm) are written for mathematicians who went to college to graduate in theoretical mathematics. I just don't understand what I'm meant to do with that information in order to apply it to programming in a language such as python.
I've been working on this for a few days, staring at equations and making attempt after attempt, but after all those days of ripping my code apart again and again, the only thing that has worked properly this entire time is this generator for basic white noise:
```
import random
import math
random.seed(0)
def generateWhiteNoise(width,height):
noise = [[r for r in range(width)] for i in range(height)]
for i in range(0,height):
for j in range(0,width):
noise[i][j] = random.randint(0,1)
return noise
noise = generateWhiteNoise(50,12)
for i in noise:
print()
for o in i:
if(o == 0):
print('-',end='')
else:
print('#',end='')
```
This code produces this result:
```
##-######--#--#-#--##-###-###---#-##-#-----#--##-#
#-#-##-##-#----##------##--#####-#-##---#--#-##---
-------#-#------#---#-#---###--#--#-###-----##-#--
######--#-#-#--####-###---#---###-##--#-#-##--####
-#----###--------##--##--##-#-#--#----###-####--##
---####-#--#--###-#-#--#--#####--####-#-##-##--#--
----#--####-#-#-#-#-#---#--###------###--#-######-
--###--#-###-------#-##--###---#-####----###-#####
#----##--##-#--##-###--#----#-#-##--##-#-##---###-
##---##----##--##--#--#--###-###-#--#-##---#------
-##----#-###---######---#-#---#---###---#---###-##
#--##-##-###-###---#--##-##--##-##-#-#-##--#-#-##-
```
I'm wanting it to eventually produce something like this:
```
--------------------------------------------------
------------------####----------------------------
-----------------#####----------------------------
----------------#####-----------------------------
---------------#####--------------###-------------
---------------#####--------------###-------------
---------------------------------####-------------
---######------------------------####-------------
---######------------###--------------------------
----########---------###--------------------------
-----#######---------###--------------------------
------###-----------------------------------------
```
How can I manage to smooth out the white-noise I generate, and turn it into islands? Can anyone explain it in a very simplistic way for me?
I may be thinking about all of this very wrong. | 2013/07/22 | [
"https://Stackoverflow.com/questions/17779480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2577669/"
] | This is a fun little problem; you can solve it with this sort of algorithm:
1. generate a small uniform noise
2. resample it to a higher resolution (giving you a smooth noise image)
3. Apply threshold to get a False/True array
4. Map False/True to '-'/'#'
And with a bit of printing formatting it works well. Demonstration:
```
import numpy as np
import sys
np.set_printoptions(threshold=sys.maxsize)
from scipy.ndimage import zoom  # zoom lives directly in scipy.ndimage

arr = np.random.uniform(size=(4, 4))   # 1. small uniform noise
arr = zoom(arr, 8)                     # 2. resample to a smooth 32x32 image
arr = arr > 0.5                        # 3. threshold into a boolean array
arr = np.where(arr, '-', '#')          # 4. map False/True to '#'/'-'
arr = np.array_str(arr, max_line_width=500)
print(arr)
```
output:
```
[['-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-']
['-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-']
['-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-']
['-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-']
['-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-']
['-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['-' '-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['-' '-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#' '#']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#' '#']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#' '#']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#' '#']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#' '#' '#']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '#' '#' '#']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['#' '#' '#' '#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['#' '#' '#' '#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['#' '#' '#' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']
['-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-' '-']]
```
Of course, Perlin or Simplex noise, as other answerers indicated, would give a slightly better look. If you want to try that, replace steps 1 and 2 with Perlin/Simplex or any other noise you can grab and try again. | Rather, use cellular automata. The algorithm that you find [here](http://www.roguebasin.com/index.php?title=Cellular_Automata_Method_for_Generating_Random_Cave-Like_Levels) creates patterns similar to the ones you would like to see:
```
. . . . . . . . . . . . . . .
. . . . . # # . . . . . # . .
. . . . # # # # . . . # # # .
. . . . . # # # # . . # # # .
. . . . . . # # # # # # # . .
. . . . . . # # # # # # # . .
. . . . # # # # # # # # # . .
. . . # # # # # # # # # # . .
. . # # # # # # . # . # # . .
. . # # # # # . . # . . . . .
. . . # # # # . . . # # # . .
. . . # # # # . . . # # # # .
. . # # # # . . . . . # # # .
. . # # # # . . . . . # # . .
. . . . . . . . . . . . . . .
``` |
48,166,183 | I have a problem which my novice knowledge cannot solve.
I'm trying to copy some python-2.x code (which is working) to python-3.x. Now it gives me an error.
Here's a snippet of the code:
```
def littleUglyDataCollectionInTheSourceCode():
a = {
'Aabenraa': [842.86917819535, 25.58264089252],
'Aalborg': [706.92644963185, 27.22746146366],
'Aarhus': [696.60346488317, 25.67540525994],
'Albertslund': [632.49007681987, 27.70499807418],
'Allerød': [674.10474259426, 27.91964123274],
'Assens': [697.02257492453, 25.83386400960],
'Ballerup': [647.05121493736, 27.72466920284],
'Billund': [906.63431520239, 26.23136823557],
'Bornholm': [696.05765684503, 28.98396327957],
'Brøndby': [644.89390717471, 28.18974127413],
}
return a
```
and:
```
def calcComponent(data):
# Todo: implement inteface to set these values by
# the corresponding 'Kommune'
T = float(data.period)
k = 1.1
rH = 1.0
# import with s/\([^\s-].*?\)\t\([0-9.]*\)$/'\1':'\2',/
myDict = littleUglyDataCollectionInTheSourceCode();
#if data.kommune in myDict:
# https://docs.djangoproject.com/en/1.10/ref/unicode/
key = data.kommune.encode("utf-8")
rd = myDict.get(key.strip(), 0)
laP = float(rd[0]) # average precipitation
midV = float(rd[1]) # Middelværdi Klimagrid
print(("lap " + str(laP)))
print(("mid V" + str(midV)))
```
It gives the error:
```
line 14, in calcComponent
laP = float(rd[0]) # average precipitation
TypeError: 'int' object is not subscriptable
```
I've tried different approaches and read dozens of articles with no luck. Being a novice, it is like tumbling in the dark. | 2018/01/09 | [
"https://Stackoverflow.com/questions/48166183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6551344/"
] | In your example `myDict` is a dictionary with strings as keys and lists as values.
```
key = data.kommune.encode("utf-8")
```
will be a bytes object, so there can't ever be any corresponding value for that key in the dictionary. This worked in python2 where automatic conversion was performed, but not anymore in python3, you need to use the correct type for lookups.
```
rd = myDict.get(key.strip(), 0)
```
will always return the integer `0`, which means that `rd[0]` can not work because integers are not indexable, as the error message tells you.
Generally the default value in a `get()` call should be compatible with what is returned in all other cases. Returning `0` as default where all non-default cases return lists can only lead to problems. | You are using `0` as a default value for `rd`, whereas the values in the dict are lists, so if the key is not found, `rd[0]` or `rd[1]` will fail. Instead, use a list or tuple as default, then it should work.
```
rd = myDict.get(key.strip(), [0, 0])
``` |
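Putting both answers together, a sketch of the corrected lookup (keeping the original variable names) would be:
```
# no .encode(): in Python 3 both data.kommune and the dict keys are already str
key = data.kommune.strip()
rd = myDict.get(key, [0.0, 0.0])  # default has the same shape as the real values
laP = float(rd[0])   # average precipitation
midV = float(rd[1])  # Middelværdi Klimagrid
```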
48,166,183 | I have a problem which my novice knowledge cannot solve.
I'm trying to copy some python-2.x code (which is working) to python-3.x. Now it gives me an error.
Here's a snippet of the code:
```
def littleUglyDataCollectionInTheSourceCode():
a = {
'Aabenraa': [842.86917819535, 25.58264089252],
'Aalborg': [706.92644963185, 27.22746146366],
'Aarhus': [696.60346488317, 25.67540525994],
'Albertslund': [632.49007681987, 27.70499807418],
'Allerød': [674.10474259426, 27.91964123274],
'Assens': [697.02257492453, 25.83386400960],
'Ballerup': [647.05121493736, 27.72466920284],
'Billund': [906.63431520239, 26.23136823557],
'Bornholm': [696.05765684503, 28.98396327957],
'Brøndby': [644.89390717471, 28.18974127413],
}
return a
```
and:
```
def calcComponent(data):
# Todo: implement inteface to set these values by
# the corresponding 'Kommune'
T = float(data.period)
k = 1.1
rH = 1.0
# import with s/\([^\s-].*?\)\t\([0-9.]*\)$/'\1':'\2',/
myDict = littleUglyDataCollectionInTheSourceCode();
#if data.kommune in myDict:
# https://docs.djangoproject.com/en/1.10/ref/unicode/
key = data.kommune.encode("utf-8")
rd = myDict.get(key.strip(), 0)
laP = float(rd[0]) # average precipitation
midV = float(rd[1]) # Middelværdi Klimagrid
print(("lap " + str(laP)))
print(("mid V" + str(midV)))
```
It gives the error:
```
line 14, in calcComponent
laP = float(rd[0]) # average precipitation
TypeError: 'int' object is not subscriptable
```
I've tried different approaches and read dozens of articles with no luck. Being a novice, it is like tumbling in the dark. | 2018/01/09 | [
"https://Stackoverflow.com/questions/48166183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6551344/"
] | In your example `myDict` is a dictionary with strings as keys and lists as values.
```
key = data.kommune.encode("utf-8")
```
will be a bytes object, so there can't ever be any corresponding value for that key in the dictionary. This worked in python2 where automatic conversion was performed, but not anymore in python3, you need to use the correct type for lookups.
```
rd = myDict.get(key.strip(), 0)
```
will always return the integer `0`, which means that `rd[0]` can not work because integers are not indexable, as the error message tells you.
Generally the default value in a `get()` call should be compatible with what is returned in all other cases. Returning `0` as default where all non-default cases return lists can only lead to problems. | And that is why googling the TypeError text didn't lead me to a solution, as my problem was twofold. I forgot that strings are already Unicode in Python 3, so the explicit encode was unnecessary.
I changed:
```
key = data.kommune.encode("utf-8")
rd = myDict.get(key.strip(), 0)
```
to:
```
key = data.kommune
rd = myDict.get(key.strip(), [0, 0])
```
And now it works:-) |
48,166,183 | I have a problem which my novice knowledge cannot solve.
I'm trying to copy some python-2.x code (which is working) to python-3.x. Now it gives me an error.
Here's a snippet of the code:
```
def littleUglyDataCollectionInTheSourceCode():
a = {
'Aabenraa': [842.86917819535, 25.58264089252],
'Aalborg': [706.92644963185, 27.22746146366],
'Aarhus': [696.60346488317, 25.67540525994],
'Albertslund': [632.49007681987, 27.70499807418],
'Allerød': [674.10474259426, 27.91964123274],
'Assens': [697.02257492453, 25.83386400960],
'Ballerup': [647.05121493736, 27.72466920284],
'Billund': [906.63431520239, 26.23136823557],
'Bornholm': [696.05765684503, 28.98396327957],
'Brøndby': [644.89390717471, 28.18974127413],
}
return a
```
and:
```
def calcComponent(data):
# Todo: implement inteface to set these values by
# the corresponding 'Kommune'
T = float(data.period)
k = 1.1
rH = 1.0
# import with s/\([^\s-].*?\)\t\([0-9.]*\)$/'\1':'\2',/
myDict = littleUglyDataCollectionInTheSourceCode();
#if data.kommune in myDict:
# https://docs.djangoproject.com/en/1.10/ref/unicode/
key = data.kommune.encode("utf-8")
rd = myDict.get(key.strip(), 0)
laP = float(rd[0]) # average precipitation
midV = float(rd[1]) # Middelværdi Klimagrid
print(("lap " + str(laP)))
print(("mid V" + str(midV)))
```
It gives the error:
```
line 14, in calcComponent
laP = float(rd[0]) # average precipitation
TypeError: 'int' object is not subscriptable
```
I've tried different approaches and read dozens of articles with no luck. Being a novice, it is like tumbling in the dark. | 2018/01/09 | [
"https://Stackoverflow.com/questions/48166183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6551344/"
] | You are using `0` as a default value for `rd`, whereas the values in the dict are lists, so if the key is not found, `rd[0]` or `rd[1]` will fail. Instead, use a list or tuple as default, then it should work.
```
rd = myDict.get(key.strip(), [0, 0])
``` | And that is why googling the TypeError text didn't lead me to a solution, as my problem was twofold. I forgot that strings are already Unicode in Python 3, so the explicit encode was unnecessary.
I changed:
```
key = data.kommune.encode("utf-8")
rd = myDict.get(key.strip(), 0)
```
to:
```
key = data.kommune
rd = myDict.get(key.strip(), [0, 0])
```
And now it works:-) |
6,493,681 | I have a list of ids in python. For example:
```
x = [1,2,3,4,5,6]
```
And I want to select a list of records from my MySQL database under the condition that the ids of these records are in x, something like below:
```
SELECT * FROM mytable WHERE id IN x
```
but I don't know how I can do this in Python. I have seen some examples using %s in their SQL string. However, this does not work when the variable is a list. Does anyone know how I can do this?
Thanks | 2011/06/27 | [
"https://Stackoverflow.com/questions/6493681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/313245/"
] | Try something like this:
```
'(%s)' % ','.join(map(str,x))
```
This will give you a string that you could use to send to MySql as a valid `IN` clause:
```
(1,2,3,4,5,6)
``` | Well, if all of those are known to be numbers of good standing, then you can simply call
```
"SELECT * FROM mytable WHERE ID IN ({0})".format(','.join(x))
```
If you know that they are numbers but *any* of them might have been from the user, then I might use:
```
"SELECT * FROM mytable WHERE ID IN ({0})".format(','.join(list(map(int,x))))
```
`format` will perform the replacement at the appropriate index. `join` is used so that you don't get the `[]` brackets of the list literal; note that `join` only accepts strings, which is why each element is passed through `int()` (to validate it) and then `str()` (to turn it back into text). In Python 3, `map` returns a lazy iterator rather than a list, but `join` happily consumes an iterator, so no extra `list()` call is needed. |
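For completeness, a safer pattern is to let the database driver do the quoting via placeholders; a sketch (assuming a DB-API connection object `conn`, e.g. from MySQLdb or mysql-connector):
```
x = [1, 2, 3, 4, 5, 6]

placeholders = ','.join(['%s'] * len(x))   # "%s,%s,%s,%s,%s,%s"
query = "SELECT * FROM mytable WHERE id IN ({0})".format(placeholders)

cursor = conn.cursor()
cursor.execute(query, x)   # the driver escapes each value for you
rows = cursor.fetchall()
```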
11,360,161 | I get this error while running a python script (called by ./waf --run):
TypeError: abspath() takes exactly 1 argument (2 given)
The problem is that it is indeed called with: obj.path.abspath(env).
This is not a python issue, because that code worked perfectly before, and it's part of a huge project (ns3) so I doubt this is broken.
However something must have changed in my settings, because this code worked before, and now it doesn't.
Can you help me to figure out why I get this error ?
Here is the python code: <http://pastebin.com/EbJ50BBt>. The error occurs line 61. | 2012/07/06 | [
"https://Stackoverflow.com/questions/11360161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502564/"
] | The documentation of the method [`Node.abspath()`](http://docs.waf.googlecode.com/git/apidocs_16/Node.html#waflib.Node.Node.abspath) states it does not take an additional `env` parameter, and I confirmed that it never did by checking the git history. I suggest replacing
```
if not (obj.path.abspath().startswith(launch_dir)
or obj.path.abspath(env).startswith(launch_dir)):
continue
```
with
```
if not obj.path.abspath().startswith(launch_dir):
continue
```
If this code worked before, this is probably due to the fact that the first operand of the `or` expression happened to always be `True`, so the second operand was never evaluated. It seems to be a bug in your code anyway. | You should have a file name and line number in the traceback. Go to that file and line and find out what "obj" and "obj.path.abspath" are. A simple solution would be to put the offending line in a try/except block to print (or log) more information, i.e.:
```
# your code here
try:
whatever = obj.path.abspath(env)
except Exception, e:
# if you have a logger
logger.exception("oops : obj is '%s' (%s)" % (obj, type(obj)))
# else
import sys
print >> sys.stderr, "oops, got %s on '%s' (%s)" % (e, obj, type(obj))
# if you can run this code directly from a shell,
# this will send you in the interactive debugger so you can
    # inspect the offending object and the whole call stack.
# else comment out this line
import pdb; pdb.set_trace()
# and re-raise the exception
raise
```
My bet is that "obj.path" is NOT the python 'os.path' module, and that "obj.path.abspath" is a an instance method that only takes "self" as argument. |
11,360,161 | I get this error while running a python script (called by ./waf --run):
TypeError: abspath() takes exactly 1 argument (2 given)
The problem is that it is indeed called with: obj.path.abspath(env).
This is not a python issue, because that code worked perfectly before, and it's part of a huge project (ns3) so I doubt this is broken.
However something must have changed in my settings, because this code worked before, and now it doesn't.
Can you help me to figure out why I get this error ?
Here is the python code: <http://pastebin.com/EbJ50BBt>. The error occurs line 61. | 2012/07/06 | [
"https://Stackoverflow.com/questions/11360161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502564/"
] | The documentation of the method [`Node.abspath()`](http://docs.waf.googlecode.com/git/apidocs_16/Node.html#waflib.Node.Node.abspath) states it does not take an additional `env` parameter, and I confirmed that it never did by checking the git history. I suggest replacing
```
if not (obj.path.abspath().startswith(launch_dir)
or obj.path.abspath(env).startswith(launch_dir)):
continue
```
with
```
if not obj.path.abspath().startswith(launch_dir):
continue
```
If this code worked before, this is probably due to the fact that the first operand of the `or` expression happened to always be `True`, so the second operand was never evaluated. It seems to be a bug in your code anyway. | The problem came from the fact that apparently waf doesn't like symlinks; the Python code must not be prepared for such cases.
Problem solved, thanks for your help everybody |
48,264,720 | I am starting to learn the application of different types of classifiers in python sklearn module. The clf\_LR.predict(X\_predict) predicts the 'Loan\_Status' of the test data. In the training data it is either 1 or 0 depending on loan approval. But the predict gives a numpy array of float values around 0 and 1. I want to convert these values to nearest 1 or 0.
```
#regression
X = np.array(train_data.drop(['Loan_Status'],1))
y = np.array(train_data['Loan_Status'])
X_predict = np.array(test_data)
clf_LR = LinearRegression()
clf_LR.fit(X,y)
accuracy = clf_LR.score(X,y)
clf_LR.predict(X_predict)
```
The output is:
```
array([ 1.0531505 , 0.54463698, 0.66512836, 0.91817899, 0.81084038,
0.4400971 , 0.05132584, 0.5797642 , 0.72760712, 0.78624 ,
0.60043618, 0.79904144, 0.78164806, 0.63140686, 0.66746683,
0.56799806, 0.62462483, -0.27487531, 0.77595855, 0.62112923,
0.42499627, 0.21962665, 0.73747749, 0.62580336, 1.08242647,
0.60546731, 0.58980138, 0.68778534, 0.80729382, -0.25906255,
0.5911749 , 0.57754607, 0.71869494, 0.7414411 , 0.79574657,
1.053294 , 0.77238618, 0.84663303, 0.93977499, 0.39076889,
0.79835196, -0.31202102, 0.57969628, 0.6782184 , 0.62406822,
0.76141175, -0.14311827, 0.87284553, 0.45152395, 0.70505136,
0.80529711, 0.88614397, 0.0036123 , 0.59748637, 1.15082822,
0.6804735 , 0.64551666, -0.28882904, 0.71713245, 0.66373934,
0.5250008 , 0.81825485, 0.71661801, 0.74462875, 0.66047019,
0.62186449, -0.2895147 , 0.78990148, -0.198547 , 0.02752572,
1.0440052 , 0.58668459, 0.82012492, 0.50745345, -0.07448848,
0.56636204, 0.85462188, 0.4723699 , 0.5501792 , 0.91271145,
0.61796331, 0.47130567, 0.74644572, 0.38340698, 0.65640869,
0.75736077, -0.23866258, 0.89198235, 0.74552824, 0.58952803,
0.75363266, 0.44341609, 0.76332621, 0.60706656, 0.548128 ,
-0.05460422, 0.81488009, 0.51959111, 0.91001994, 0.71223763,
0.67600868, 0.79102218, -0.00530356, 0.20135057, 0.73923083,
0.56965262, 0.80045725, 0.67266281, 0.81694555, 0.70263141,
0.38996739, 0.38449832, 0.77388573, 0.92362979, 0.54006616,
0.76432229, 0.61683807, 0.44803386, 0.79751796, 0.55321023,
1.10480386, 1.03004599, 0.54718652, 0.74741632, 0.83907984,
0.86407637, 1.10821273, 0.6227142 , 0.94443767, -0.02906777,
0.68258672, 0.38914101, 0.86936186, -0.17331518, 0.35980983,
-0.32387964, 0.86583445, 0.5480951 , 0.5846661 , 0.96815188,
0.45474766, 0.54342586, 0.41997578, 0.73069535, 0.05828308,
0.4716423 , 0.70579418, 0.76672804, 0.90476146, 0.45363533,
0.78646442, 0.76841914, 0.77227952, 0.75068078, 0.94713967,
0.67417191, -0.16948404, 0.80726176, 1.12127705, 0.74715634,
0.44632464, 0.61668874, 0.6578295 , 0.60631521, 0.42455094,
0.65104766, -0.01636441, 0.87456921, -0.24877682, 0.76791838,
0.85037569, 0.75076961, 0.91323444, 0.27976108, 0.89643734,
0.14388116, 0.7340059 , 0.46372024, 0.91726212, 0.43539411,
0.44859789, -0.04401285, 0.28901989, 0.62105238, 0.56949422,
0.49728522, 0.65641239, 1.11183953, 0.76159204, 0.55822867,
0.79752582, 0.72726221, 0.49171728, -0.32777583, -0.30767082,
0.70702693, 0.91792405, 0.76112155, 0.68748705, 0.6172974 ,
0.70335159, 0.74522648, 1.01560133, 0.62808723, 0.50816819,
0.61760714, 0.55879101, 0.50060645, 0.87832261, 0.73523273,
0.60360986, 0.78153534, -0.2063286 , 0.85540569, 0.59231311,
0.75875401, 0.34422049, 0.58667666, -0.14887532, 0.81458285,
0.90631338, 0.5508966 , 0.93534451, 0.0048111 , 0.66506743,
0.5844512 , 0.67768398, 0.91190474, 0.39758323, 0.44284897,
0.47347625, 0.7603246 , 0.41066447, 0.50419741, 0.74437409,
0.44916515, 0.14160128, 0.72991652, 1.15215444, 0.50707437,
0.61020873, 0.8831041 , 0.78476914, 0.4953215 , 0.71862044,
0.66574986, 0.89547805, 0.93534669, 0.57742771, 0.9225718 ,
0.67209865, 0.34461023, 0.52848926, 0.95846303, 0.88237609,
-0.01603499, 0.94158916, 0.44069838, -0.17133448, 0.35288583,
0.55302018, 0.36446662, 0.62047864, 0.3803367 , 0.60398751,
0.9152663 , 0.48237299, 0.05646119, -0.65950771, 0.52644392,
-0.14182158, 0.65408783, -0.01741803, 0.76022561, 0.70883902,
0.56782191, 0.66484671, 0.79638622, 0.6668274 , 0.94365746,
0.76132423, 0.63407964, 0.43784118, 0.74599199, 0.69594847,
0.96794245, 0.49120557, -0.30985337, 0.48242465, 0.78788 ,
0.74562549, 0.61188416, -0.13990599, 0.59192289, 0.52577439,
0.62118612, 0.47292839, 0.38433912, 0.58535049, 0.61180443,
0.68363366, -0.17158279, -0.16752298, -0.12006642, 0.11420194,
0.54435597, 0.76707794, 0.94712879, 0.90341355, 0.41133755,
0.78063296, 1.06335948, 0.65061658, 0.55463919, -0.16184664,
0.45612831, 0.2974657 , 0.74769718, 0.73568274, 0.91792405,
0.69938454, 0.07815941, 0.73400855, 0.33905491, 0.48330823,
0.76760269, -0.03303408, 0.64432907, 0.44763337, 0.59214243,
0.78339532, 0.74755724, 0.70328769, 0.61766433, -0.34196805,
0.74271219, 0.66617484, 0.75939014, 0.46274977, 0.43760914,
-0.11568388, 1.12101126, 0.65718951, 0.74632966, -0.3918828 ,
0.29915035, 0.6155425 , 0.66089274, 0.8555285 , 0.54121081,
0.74758901, 0.84686185, 0.68150433, 0.44953323, 0.71672738,
0.86416735, 0.97374945, 0.36594854, 0.5508358 , 0.60524084,
-0.04479449, 0.56064679, 0.46826815, 0.75353414, 0.63092004,
0.52340796, 0.36622527, 0.42553235, 0.81877722, -0.03474048,
0.56185539, 0.57384744, 0.86959987, -0.35002778, 0.59209448,
0.43892519, 0.83366299, 0.55630127, 0.68092981, 0.79639642,
0.96289854, -0.15094804, 0.5866888 , 0.88245453, 0.65447514,
1.00194182, 0.45130259, -0.16774169, 0.66529484, 0.87330175,
0.12493249, 0.07427334, 0.79084776, 0.60848656, 0.7706963 ,
0.76846985, 0.74796571, 0.52316893, 0.62116966, 0.52497383,
0.05855483, 0.75575428, -0.20233853, 0.77693886, 0.15845594,
0.88457158, 0.0846857 , 0.7831948 , 0.54955829, 0.71151434,
1.23277406, 0.0153455 , 0.7111069 , 0.64140878, 0.69578766,
0.72386089, 0.3291767 , 0.8414526 , -0.14267676, 0.93841726,
0.94248916, 0.61492774, 0.60835432, -0.05542942, 1.01387972,
0.81980896, 0.39519755, 0.85483256, 0.79124875, 0.46196837,
0.5157149 , -0.2076404 , 0.57935033, 0.86477299, 0.62917312,
0.85446301, 0.40595525, 0.64527099, 0.7452028 , 0.58527638,
0.66419528, 0.49120555, 0.83966651, 0.86063059, 0.85615707,
-0.22704174])
```
I want to convert these values to nearest 1 or 0. Is there any way of doing this inplace? | 2018/01/15 | [
"https://Stackoverflow.com/questions/48264720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8380563/"
] | ```
import numpy as np
np.round(np.clip(clf_LR.predict(X_predict), 0, 1)) # floats
np.round(np.clip(clf_LR.predict(X_predict), 0, 1)).astype(bool) # binary
```
* [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html)
* [numpy.round](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.round_.html)
Technically the above code is not **in-place**, but it can be converted (using `out` arguments)!
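For example, a rough in-place variant (same variable names as the question) could look like:
```
preds = clf_LR.predict(X_predict)   # floats, roughly between 0 and 1
np.clip(preds, 0, 1, out=preds)     # clamp negatives and values above 1, in place
np.round(preds, out=preds)          # round to 0.0 / 1.0, in place
labels = preds.astype(int)          # final 0/1 integers (this last step copies)
```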
(untested: try it!) | As said in @Pault's comment, what you need is a classifier, and sklearn has many classifiers!
The choice of a classifier to use depends on many factors:
The following picture from [sklearn](http://scikit-learn.org/stable/tutorial/machine_learning_map/index.html) can help you choose:
[](https://i.stack.imgur.com/MLhG2.png)
Basically, for a logistic regression classifier, you can do the following:
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1.0, penalty='l1', solver='liblinear')  # 'l1' needs the liblinear or saga solver in newer scikit-learn
clf.fit(X, y)
clf.predict(X_predict) # will give you 0 or 1 as the class
``` |
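If you also want the underlying probabilities rather than only the hard labels, `LogisticRegression` exposes `predict_proba`; a short sketch for a 0/1 target:
```
proba = clf.predict_proba(X_predict)[:, 1]   # P(class == 1) for each row
labels = (proba >= 0.5).astype(int)          # matches clf.predict's default threshold
```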
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out, things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there any good way to do it??
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should be put into which file.
I think a possible way is to organize in `pre-processing`, `processing`, `post-processing`.
UPDATE:
My big jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | There are many ways to organise an IPython research project. I manage a team of 5 Data Scientists and 3 Data Engineers, and I found these tips to work well for our use case:
This is a summary of my PyData London talk:
<http://www.slideshare.net/vladimirkazantsev/clean-code-in-jupyter-notebook>
**1. Create a shared (multi-project) utils library**
You most likely have to reuse/repeat some code across different research projects. Start refactoring those things into a "common utils" package. Create a setup.py file and push the module to GitHub (or similar), so that team members can "pip install" it from VCS.
Examples of functionality to put in there are:
* Data Warehouse or Storage access functions
* common plotting functions
* re-usable math/stats methods
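A minimal `setup.py` for such a shared package could be as small as this (all names below are made up):
```
# setup.py of the hypothetical shared "common utils" package
from setuptools import setup, find_packages

setup(
    name='team-common-utils',
    version='0.1.0',
    packages=find_packages(),
    install_requires=['pandas', 'matplotlib'],
)
```
Team members can then install it straight from the repository, e.g. `pip install git+https://github.com/yourteam/common-utils.git`.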
**2. Split your fat master notebook into smaller notebooks**
In my experience, a good length for a file of code (any language) is only a few screens (100-400 lines). A Jupyter Notebook is still a source file, but with output! Reading a notebook with 20+ cells is very hard. I like my notebooks to have 4-10 cells max.
Ideally, each notebook should have one "hypothesis-data-conclusions" triplet.
Example of splitting the notebook:
1\_data\_preparation.ipynb
2\_data\_validation.ipynb
3\_exploratory\_plotting.ipynb
4\_simple\_linear\_model.ipynb
5\_hierarchical\_model.ipynb
playground.ipynb
Save output of 1\_data\_preparation.ipynb to pickle `df.to_pickle('clean_data.pkl')`, csv or fast DB and use `pd.read_pickle("clean_data.pkl")` at the top of each notebook.
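In other words, the last cell of one notebook and the first cell of the next form a tiny contract, something like:
```
# last cell of 1_data_preparation.ipynb
df.to_pickle('clean_data.pkl')

# first cell of 2_data_validation.ipynb
import pandas as pd
df = pd.read_pickle('clean_data.pkl')
```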
**3. It is not Python - it is IPython Notebook**
What makes a notebook unique is **cells**. Use them well.
Each cell should be an "idea-execution-output" triplet. If a cell does not output anything, combine it with the following cell. The import cell should output nothing; that is its expected output.
If a cell has several outputs, it may be worth splitting it.
Hiding imports may or may not be a good idea:
```
from myimports import *
```
Your reader may want to figure out what exactly you are importing to use the same stuff for her research. So use with caution. We do use it for `pandas, numpy, matplotlib, sql` however.
Hiding "secret sauce" in /helpers/model.py is bad:
```
myutil.fit_model_and_calculate(df)
```
This may save you typing and remove duplicate code, but your collaborator will have to open another file to figure out what's going on. Unfortunately, the notebook (Jupyter) is a quite inflexible and basic environment, but you still don't want to force your reader to leave it for every piece of code. I hope that in the future IDE support will improve, but for now, **keep the "secret sauce" inside a notebook**, while "boring and obvious utils" can go wherever you see fit. DRY still applies - you have to find the balance.
This should not stop you from packaging re-usable code into functions or even small classes. But "flat is better than nested".
**4. Keep notebooks clean**
You should be able to "reset & Run All" at any point in time.
Each re-run should be fast! Which means you may have to invest in writing some caching functions. May be you even want to put those into your "common utils" module.
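A very small caching helper of that kind might look like this (just a sketch; the file naming is up to you):
```
import os
import pandas as pd

def cached_frame(path, build, rebuild=False):
    """Return the DataFrame cached at `path`, building and caching it if needed."""
    if os.path.exists(path) and not rebuild:
        return pd.read_pickle(path)
    df = build()          # `build` is any zero-argument function returning a DataFrame
    df.to_pickle(path)
    return df

# usage in a cell (expensive_query is a placeholder):
# df = cached_frame('clean_data.pkl', lambda: expensive_query(start, end))
```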
Each cell should be executable multiple times, without the need to re-initialise the notebook. This saves you time and keeps the code more robust.
But it may depend on state created by previous cells. Making each cell completely independent from the cells above is an anti-pattern, IMO.
After you are done with research - you are not done with notebook. Refactor.
**5. Create a project module, but be very selective**
If you keep re-using plotting or analytics function - do refactor it into this module. But in my experience, people expect to read and understand a notebook, without opening multiple util sub-modules. So naming your sub-routines well is even more important here, compared to normal Python.
"Clean code reads like well written prose" Grady Booch (developer of UML)
**6. Host Jupyter server in the cloud for the entire team**
You will have one environment, so everyone can quickly review and validate research without the need to match the environment (even though conda makes this pretty easy).
And you can configure defaults, like the mpl style/colors, and make matplotlib inline by default:
In `~/.ipython/profile_default/ipython_config.py`
Add line `c.InteractiveShellApp.matplotlib = 'inline'`
**7. (experimental idea) Run a notebook from another notebook, with different parameters**
Quite often you may want to re-run the whole notebook, but with different input parameters.
To do this, you can structure your research notebook as follows:
Place ***params*** dictionary **in the first cell** of "source notebook".
```
params = dict(platform='iOS',
start_date='2016-05-01',
retention=7)
df = get_data(params ..)
do_analysis(params ..)
```
And in another (higher logical level) notebook, execute it using this function:
```
import io
import nbformat
from IPython import get_ipython

def run_notebook(nbfile, **kwargs):
"""
example:
run_notebook('report.ipynb', platform='google_play', start_date='2016-06-10')
"""
def read_notebook(nbfile):
if not nbfile.endswith('.ipynb'):
nbfile += '.ipynb'
with io.open(nbfile) as f:
nb = nbformat.read(f, as_version=4)
return nb
ip = get_ipython()
gl = ip.ns_table['user_global']
gl['params'] = None
arguments_in_original_state = True
for cell in read_notebook(nbfile).cells:
if cell.cell_type != 'code':
continue
ip.run_cell(cell.source)
if arguments_in_original_state and type(gl['params']) == dict:
gl['params'].update(kwargs)
arguments_in_original_state = False
```
Whether this "design pattern" proves to be useful is yet to be seen. We had some success with it - at least we stopped duplicating notebooks only to change few inputs.
Refactoring the notebook into a class or module break quick feedback loop of "idea-execute-output" that cells provide. And, IMHO, is not "ipythonic"..
**8. Write (unit) tests for shared library in notebooks and run with py.test**
There is a Plugin for py.test that can discover and run tests inside notebooks!
<https://pypi.python.org/pypi/pytest-ipynb> | You should ideally have a library hierarchy. I would organize it as follows:
Package wsautils
----------------
Fundamental, lowest level package [No dependencies]
stringutils.py: Contains the most basic utilities, such as string manipulation helpers
dateutils.py: Date manipulation methods
Package wsadata
---------------
* Parsing data, dataframe manipulations, helper methods for Pandas etc.
* Depends on [wsautils]
+ pandasutils.py
+ parseutils.py
+ jsonutils.py [this could also go in wsautils]
+ etc.
Package wsamath (or wsastats)
-----------------------------
Math related utilities, models, PDF, CDFs [Depends on wsautils, wsadata]
Contains:
- probabilityutils.py
- statutils.py
etc.
Package wsacharts [or wsaplot]
------------------------------
* GUI, Plotting, Matplotlib, GGplot etc
* Depends on [wsautils, wsamath]
+ histogram.py
+ pichart.py
+ etc. Just an idea, you could also just have a single file here called chartutils or something
You get the idea. Create more libraries as necessary without making too many.
Few other tips:
---------------
* Follow the principles of good python package management thoroughly. Read this <http://python-packaging-user-guide.readthedocs.org/en/latest/installing/>
* Enforce strict dependency management via a script or a tool such that there are no circular dependencies between packages
* Define the name and purpose of each library/module well so that other users also can intuitively tell where a method/utility should go
* Follow good python coding standards (see PEP-8)
* Write test cases for every library/package
* Use a good editor (PyCharm is a good one for Python/iPython)
* Document your APIs, methods
Finally, remember that there are many ways to skin a cat and the above is just one that I happen to like.
HTH. |
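With such a hierarchy in place, the top cell of a notebook stays short; a hypothetical example (every name below is a placeholder):
```
from wsadata import pandasutils
from wsacharts import histogram

df = pandasutils.load_clean_frame('data/wind_speed.csv')  # placeholder helper
histogram.plot_distribution(df['speed'])                  # placeholder helper
```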
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out, things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there any good way to do it??
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should be put into which file.
I think a possible way is to organize in `pre-processing`, `processing`, `post-processing`.
UPDATE:
My big jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | While the given answers cover the topic thoroughly, it is still worth mentioning [Cookiecutter](https://cookiecutter.readthedocs.io/en/latest/), which provides a data science boilerplate project structure:
### [Cookiecutter Data Science](https://drivendata.github.io/cookiecutter-data-science/)
provides a data science template for projects in Python with a logical, reasonably standardized, yet flexible project structure for doing and sharing data science work.
Your analysis doesn't have to be in Python, but the template does provide some Python boilerplate (in the src folder for example, and the Sphinx documentation skeleton in docs). However, nothing is binding.
The following quote from the project description sums it up pretty nicely:
>
> Nobody sits around before creating a new Rails project to figure out
> where they want to put their views; they just run `rails new` to get a
> standard project skeleton like everybody else.
>
>
>
### Requirements:
* Python 2.7 or 3.5
* cookiecutter Python package >= 1.4.0: `pip install cookiecutter`
### Getting started
>
> Starting a new project is as easy as running this command at the
> command line. No need to create a directory first, the cookiecutter
> will do it for you.
>
>
>
```
cookiecutter https://github.com/drivendata/cookiecutter-data-science
```
### Directory structure
```
├── LICENSE
├── Makefile <- Makefile with commands like `make data` or `make train`
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- A default Sphinx project; see sphinx-doc.org for details
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `1.0-jqp-initial-data-exploration`.
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
├── src <- Source code for use in this project.
│ ├── __init__.py <- Makes src a Python module
│ │
│ ├── data <- Scripts to download or generate data
│ │ └── make_dataset.py
│ │
│ ├── features <- Scripts to turn raw data into features for modeling
│ │ └── build_features.py
│ │
│ ├── models <- Scripts to train models and then use trained models to make
│ │ │ predictions
│ │ ├── predict_model.py
│ │ └── train_model.py
│ │
│ └── visualization <- Scripts to create exploratory and results-oriented visualizations
│ └── visualize.py
│
└── tox.ini <- tox file with settings for running tox; see tox.testrun.org
```
### Related:
[ProjectTemplate](http://projecttemplate.net/index.html) - provides a similar system for R data analysis. | You should ideally have a library hierarchy. I would organize it as follows:
Package wsautils
----------------
Fundamental, lowest level package [No dependencies]
stringutils.py: Contains the most basic utilities, such as string manipulation helpers
dateutils.py: Date manipulation methods
Package wsadata
---------------
* Parsing data, dataframe manipulations, helper methods for Pandas etc.
* Depends on [wsautils]
+ pandasutils.py
+ parseutils.py
+ jsonutils.py [this could also go in wsautils]
+ etc.
Package wsamath (or wsastats)
-----------------------------
Math related utilities, models, PDF, CDFs [Depends on wsautils, wsadata]
Contains:
- probabilityutils.py
- statutils.py
etc.
Package wsacharts [or wsaplot]
------------------------------
* GUI, Plotting, Matplotlib, GGplot etc
* Depends on [wsautils, wsamath]
+ histogram.py
+ pichart.py
+ etc. Just an idea, you could also just have a single file here called chartutils or something
You get the idea. Create more libraries as necessary without making too many.
Few other tips:
---------------
* Follow the principles of good python package management thoroughly. Read this <http://python-packaging-user-guide.readthedocs.org/en/latest/installing/>
* Enforce strict dependency management via a script or a tool such that there are no circular dependencies between packages
* Define the name and purpose of each library/module well so that other users also can intuitively tell where a method/utility should go
* Follow good python coding standards (see PEP-8)
* Write test cases for every library/package
* Use a good editor (PyCharm is a good one for Python/iPython)
* Document your APIs, methods
Finally, remember that there are many ways to skin a cat and the above is just one that I happen to like.
HTH. |
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out, things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there any good way to do it??
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should be put into which file.
I think a possible way is to organize in `pre-processing`, `processing`, `post-processing`.
UPDATE:
My big jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | You should ideally have a library hierarchy. I would organize it as follows:
Package wsautils
----------------
Fundamental, lowest level package [No dependencies]
stringutils.py: Contains the most basic utilities, such as string manipulation helpers
dateutils.py: Date manipulation methods
Package wsadata
---------------
* Parsing data, dataframe manipulations, helper methods for Pandas etc.
* Depends on [wsautils]
+ pandasutils.py
+ parseutils.py
+ jsonutils.py [this could also go in wsautils]
+ etc.
Package wsamath (or wsastats)
-----------------------------
Math related utilities, models, PDF, CDFs [Depends on wsautils, wsadata]
Contains:
- probabilityutils.py
- statutils.py
etc.
Package wsacharts [or wsaplot]
------------------------------
* GUI, Plotting, Matplotlib, GGplot etc
* Depends on [wsautils, wsamath]
+ histogram.py
+ pichart.py
+ etc. Just an idea, you could also just have a single file here called chartutils or something
You get the idea. Create more libraries as necessary without making too many.
Few other tips:
---------------
* Follow the principles of good python package management thoroughly. Read this <http://python-packaging-user-guide.readthedocs.org/en/latest/installing/>
* Enforce strict dependency management via a script or a tool such that there are no circular dependencies between packages
* Define the name and purpose of each library/module well so that other users also can intuitively tell where a method/utility should go
* Follow good python coding standards (see PEP-8)
* Write test cases for every library/package
* Use a good editor (PyCharm is a good one for Python/iPython)
* Document your APIs, methods
Finally, remember that there are many ways to skin a cat and the above is just one that I happen to like.
HTH. | Strange that no one mentioned this. Write out your next project using [nbdev](https://github.com/fastai/nbdev/tree/master/). From the [docs](https://nbdev.fast.ai/), we have
Features of Nbdev
-----------------
`nbdev` provides the following tools for developers:
* **Automatically generate docs** from Jupyter notebooks. These docs are searchable and automatically hyperlinked to appropriate documentation pages by introspecting keywords you surround in backticks.
* Utilities to **automate the publishing of PyPI and conda packages** including version number management.
* A robust, **two-way sync between notebooks and source code**, which allow you to use your IDE for code navigation or quick edits if desired.
* **Fine-grained control on hiding/showing cells**: you can choose to hide entire cells, just the output, or just the input. Furthermore, you can embed cells in collapsible elements that are open or closed by default.
* Ability to **write tests directly in notebooks** without having to learn special APIs. These tests get executed in parallel with a single CLI command. You can even define specific groups of tests such that you don't always have to run long-running tests.
* Tools for **merge/conflict resolution** with notebooks in a **human readable format**.
* **Continuous integration (CI) comes with [GitHub Actions](https://github.com/features/actions)** set up for you out of the box, that will run tests automatically for you. Even if you are not familiar with CI or GitHub Actions, this starts working right away for you without any manual intervention.
* **Integration With GitHub Pages for docs hosting**: nbdev allows you to easily host your documentation for free, using GitHub pages.
* Create Python modules, following **best practices such as automatically defining `__all__`** ([more details](http://xion.io/post/code/python-all-wild-imports.html)) with your exported functions, classes, and variables.
* **Math equation support** with LaTeX.
* ... and much more! See the [Getting Started](https://nbdev.fast.ai/#Getting-Started) section for more information.
For a quick start
* The [tutorial](https://nbdev.fast.ai/tutorial.html).
* A [minimal, end-to-end example](https://nbdev.fast.ai/example.html) of using nbdev. I suggest replicating this example after reading through the tutorial to solidify your understanding.
* use the [nbdev\_template](https://github.com/fastai/nbdev_template)
* Wonder why you did not try this out earlier, even after knowing about it 1.5 years ago (like me).
If you like videos
* [nbdev tutorial by Jeremy Howard and Sylvain gugger](https://www.dropbox.com/s/bm6jgn9a26hzmk8/nbdev%20tutorial.mp4?dl=0)
* [nbdev live coding with Hamel Husain](https://www.dropbox.com/s/ipsbipun7pv9jyc/nbdev%20live%20coding%20with%20Hamel%20Husain.mp4?dl=0)
If the video links fail, search the titles on Youtube to get them.
Also, follow all guidelines as in [volodymyr's answer above](https://stackoverflow.com/a/38192558/7952027)
All these comments were specific to notebooks.
For any code, you have to
* Write tests ( before or at least after you write code)
* add documentation for functions (preferably numpy style since this is a scientific package)
* Share it when you publish a paper so that others need not reinvent the wheel (Especially those who work in Physics). |
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out, things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there any good way to do it??
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should be put into which file.
I think a possible way is to organize in `pre-processing`, `processing`, `post-processing`.
UPDATE:
My big jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | You should ideally have a library hierarchy. I would organize it as follows:
Package wsautils
----------------
Fundamental, lowest level package [No dependencies]
stringutils.py: Contains the most basic files such string manipulation
dateutils.py: Date manipulation methods
Package wsadata
---------------
* Parsing data, dataframe manipulations, helper methods for Pandas etc.
* Depends on [wsautils]
+ pandasutils.py
+ parseutils.py
+ jsonutils.py [this could also go in wsautils]
+ etc.
Package wsamath (or wsastats)
-----------------------------
Math related utilities, models, PDF, CDFs [Depends on wsautils, wsadata]
Contains:
- probabilityutils.py
- statutils.py
etc.
Package wsacharts [or wsaplot]
------------------------------
* GUI, Plotting, Matplotlib, GGplot etc
* Depends on [wsautils, wsamath]
+ histogram.py
+ pichart.py
+ etc. Just an idea, you could also just have a single file here called chartutils or something
You get the idea. Create more libraries as necessary without making too many.
Few other tips:
---------------
* Follow the principles of good python package management thoroughly. Read this <http://python-packaging-user-guide.readthedocs.org/en/latest/installing/>
* Enforce strict dependency management via a script or a tool such that there are no circular dependencies between packages
* Define the name and purpose of each library/module well so that other users also can intuitively tell where a method/utility should go
* Follow good python coding standards (see PEP-8)
* Write test cases for every library/package
* Use a good editor (PyCharm is a good one for Python/iPython)
* Document your APIs, methods
Finally, remember that there are many ways to skin a cat and the above is just one that I happen to like.
HTH. | If you hate notebooks, try out these cookiecutters
* [Dr Michael Goerz's cookiecutter](https://github.com/goerz/cookiecutter-pypackage)
* [Ionel Cristian Mărieș](https://github.com/ionelmc/cookiecutter-pylibrary)
* [University of Washington Escience institute's shablona](https://github.com/uwescience/shablona) |
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out: things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there a good way to do it?
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should go into which file.
I think a possible way is to organize them into `pre-processing`, `processing`, and `post-processing`.
UPDATE:
My big Jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | There are many ways to organise an IPython research project. I am managing a team of 5 Data Scientists and 3 Data Engineers, and I found these tips to be working well for our use case:
This is a summary of my PyData London talk:
<http://www.slideshare.net/vladimirkazantsev/clean-code-in-jupyter-notebook>
**1. Create a shared (multi-project) utils library**
You most likely have to reuse/repeat some code in different research projects. Start refactoring those things into a "common utils" package. Make a setup.py file and push the module to GitHub (or similar), so that team members can "pip install" it from VCS.
Examples of functionality to put in there are:
* Data Warehouse or Storage access functions
* common plotting functions
* re-usable math/stats methods
**2. Split your fat master notebook into smaller notebooks**
In my experience, a good length for a file with code (any language) is only a few screens (100-400 lines). A Jupyter Notebook is still a source file, but with output! Reading a notebook with 20+ cells is very hard. I like my notebooks to have 4-10 cells max.
Ideally, each notebook should have one "hypothesis-data-conclusions" triplet.
Example of splitting the notebook:
1\_data\_preparation.ipynb
2\_data\_validation.ipynb
3\_exploratory\_plotting.ipynb
4\_simple\_linear\_model.ipynb
5\_hierarchical\_model.ipynb
playground.ipynb
Save output of 1\_data\_preparation.ipynb to pickle `df.to_pickle('clean_data.pkl')`, csv or fast DB and use `pd.read_pickle("clean_data.pkl")` at the top of each notebook.
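As a rough sketch of that hand-off (file names are illustrative), the end of the preparation notebook writes the cleaned frame and every downstream notebook simply reloads it:

```
import pandas as pd

# last cell of 1_data_preparation.ipynb
df.to_pickle('clean_data.pkl')

# first cells of the other notebooks
df = pd.read_pickle('clean_data.pkl')
```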
**3. It is not Python - it is IPython Notebook**
What makes notebook unique is **cells**. Use them well.
Each cell should be an "idea-execution-output" triplet. If a cell does not output anything, combine it with the following cell. An import cell should output nothing; that is the expected output for it.
If a cell has several outputs, it may be worth splitting it.
Hiding imports may or may not be a good idea:
```
from myimports import *
```
Your reader may want to figure out what exactly you are importing to use the same stuff for her research. So use with caution. We do use it for `pandas, numpy, matplotlib, sql` however.
Hiding "secret sauce" in /helpers/model.py is bad:
```
myutil.fit_model_and_calculate(df)
```
This may save you typing and remove duplicate code, but your collaborator will have to open another file to figure out what's going on. Unfortunately, the notebook (Jupyter) is quite an inflexible and basic environment, but you still don't want to force your reader to leave it for every piece of code. I hope that IDE support will improve in the future, but for now, **keep the "secret sauce" inside a notebook**, while "boring and obvious utils" can go wherever you see fit. DRY still applies - you have to find the balance.
This should not stop you from packaging re-usable code into functions or even small classes. But "flat is better than nested".
**4. Keep notebooks clean**
You should be able to "reset & Run All" at any point in time.
Each re-run should be fast! This means you may have to invest in writing some caching functions. Maybe you even want to put those into your "common utils" module.
Each cell should be executable multiple times, without the need to re-initialise the notebook. This saves you time and keeps the code more robust.
But it may depend on state created by previous cells. Making each cell completely independent from the cells above is an anti-pattern, IMO.
After you are done with research - you are not done with notebook. Refactor.
**5. Create a project module, but be very selective**
If you keep re-using plotting or analytics function - do refactor it into this module. But in my experience, people expect to read and understand a notebook, without opening multiple util sub-modules. So naming your sub-routines well is even more important here, compared to normal Python.
"Clean code reads like well written prose" Grady Booch (developer of UML)
**6. Host Jupyter server in the cloud for the entire team**
You will have one environment, so everyone can quickly review and validate research without the need to match the environment (even though conda makes this pretty easy).
And you can configure defaults, like mpl style/colors and make matplot lib inline, by default:
In `~/.ipython/profile_default/ipython_config.py`
Add line `c.InteractiveShellApp.matplotlib = 'inline'`
**7. (experimental idea) Run a notebook from another notebook, with different parameters**
Quite often you may want to re-run the whole notebook, but with a different input parameters.
To do this, you can structure your research notebook as following:
Place ***params*** dictionary **in the first cell** of "source notebook".
```
params = dict(platform='iOS',
start_date='2016-05-01',
retention=7)
df = get_data(params ..)
do_analysis(params ..)
```
And in another (higher logical level) notebook, execute it using this function:
```
import io
import nbformat  # these imports are needed by read_notebook below

def run_notebook(nbfile, **kwargs):
    """
    example:
    run_notebook('report.ipynb', platform='google_play', start_date='2016-06-10')
    """
    def read_notebook(nbfile):
        if not nbfile.endswith('.ipynb'):
            nbfile += '.ipynb'
        with io.open(nbfile) as f:
            nb = nbformat.read(f, as_version=4)
        return nb

    ip = get_ipython()  # available inside an IPython/Jupyter session
    gl = ip.ns_table['user_global']
    gl['params'] = None
    arguments_in_original_state = True
    for cell in read_notebook(nbfile).cells:
        if cell.cell_type != 'code':
            continue
        ip.run_cell(cell.source)
        if arguments_in_original_state and type(gl['params']) == dict:
            gl['params'].update(kwargs)
            arguments_in_original_state = False
```
Whether this "design pattern" proves to be useful is yet to be seen. We had some success with it - at least we stopped duplicating notebooks only to change few inputs.
Refactoring the notebook into a class or module breaks the quick feedback loop of "idea-execute-output" that cells provide. And, IMHO, it is not "ipythonic".
**8. Write (unit) tests for shared library in notebooks and run with py.test**
There is a Plugin for py.test that can discover and run tests inside notebooks!
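As a rough illustration (the function name is hypothetical), such a test can be an ordinary notebook cell with plain asserts, which the plugin linked below can then discover and run:

```
# a cell in tests_utils.ipynb
from common_utils import fit_weibull  # hypothetical function from the shared library

shape, scale = fit_weibull([5.1, 6.3, 4.8, 7.2])
assert shape > 0 and scale > 0
```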
<https://pypi.python.org/pypi/pytest-ipynb> | While the given answers cover the topic thoroughly it is still worth mentioning [Cookiecutter](https://cookiecutter.readthedocs.io/en/latest/) which provides a data science boilerplate project structure:
### [Cookiecutter Data Science](https://drivendata.github.io/cookiecutter-data-science/)
provides a data science template for Python projects with a logical, reasonably standardized, yet flexible project structure for doing and sharing data science work.
Your analysis doesn't have to be in Python, but the template does provide some Python boilerplate (in the src folder for example, and the Sphinx documentation skeleton in docs). However, nothing is binding.
The following quote from the project description sums it up pretty nicely:
>
> Nobody sits around before creating a new Rails project to figure out
> where they want to put their views; they just run `rails new` to get a
> standard project skeleton like everybody else.
>
>
>
### Requirements:
* Python 2.7 or 3.5
* cookiecutter Python package >= 1.4.0: `pip install cookiecutter`
### Getting started
>
> Starting a new project is as easy as running this command at the
> command line. No need to create a directory first, the cookiecutter
> will do it for you.
>
>
>
```
cookiecutter https://github.com/drivendata/cookiecutter-data-science
```
### Directory structure
```
├── LICENSE
├── Makefile <- Makefile with commands like `make data` or `make train`
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- A default Sphinx project; see sphinx-doc.org for details
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `1.0-jqp-initial-data-exploration`.
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
├── src <- Source code for use in this project.
│ ├── __init__.py <- Makes src a Python module
│ │
│ ├── data <- Scripts to download or generate data
│ │ └── make_dataset.py
│ │
│ ├── features <- Scripts to turn raw data into features for modeling
│ │ └── build_features.py
│ │
│ ├── models <- Scripts to train models and then use trained models to make
│ │ │ predictions
│ │ ├── predict_model.py
│ │ └── train_model.py
│ │
│ └── visualization <- Scripts to create exploratory and results-oriented visualizations
│ └── visualize.py
│
└── tox.ini <- tox file with settings for running tox; see tox.testrun.org
```
### Related:
[ProjectTemplate](http://projecttemplate.net/index.html) - provides a similar system for R data analysis. |
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out: things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there a good way to do it?
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should go into which file.
I think a possible way is to organize them into `pre-processing`, `processing`, and `post-processing`.
UPDATE:
My big Jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | There are many ways to organise an IPython research project. I am managing a team of 5 Data Scientists and 3 Data Engineers, and I found these tips to be working well for our use case:
This is a summary of my PyData London talk:
<http://www.slideshare.net/vladimirkazantsev/clean-code-in-jupyter-notebook>
**1. Create a shared (multi-project) utils library**
You most likely have to reuse/repeat some code in different research projects. Start refactoring those things into a "common utils" package. Make a setup.py file and push the module to GitHub (or similar), so that team members can "pip install" it from VCS; a minimal sketch follows after the list below.
Examples of functionality to put in there are:
* Data Warehouse or Storage access functions
* common plotting functions
* re-usable math/stats methods
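A minimal `setup.py` for such a shared utils package might look like this (the package name and dependencies are illustrative):

```
from setuptools import setup, find_packages

setup(
    name='team-common-utils',  # illustrative name
    version='0.1.0',
    packages=find_packages(),
    install_requires=['pandas', 'matplotlib'],
)
```

Team members can then install it straight from version control, e.g. `pip install git+https://github.com/yourorg/common-utils.git` (the URL is a placeholder).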
**2. Split your fat master notebook into smaller notebooks**
In my experience, a good length for a file with code (any language) is only a few screens (100-400 lines). A Jupyter Notebook is still a source file, but with output! Reading a notebook with 20+ cells is very hard. I like my notebooks to have 4-10 cells max.
Ideally, each notebook should have one "hypothesis-data-conclusions" triplet.
Example of splitting the notebook:
1\_data\_preparation.ipynb
2\_data\_validation.ipynb
3\_exploratory\_plotting.ipynb
4\_simple\_linear\_model.ipynb
5\_hierarchical\_model.ipynb
playground.ipynb
Save output of 1\_data\_preparation.ipynb to pickle `df.to_pickle('clean_data.pkl')`, csv or fast DB and use `pd.read_pickle("clean_data.pkl")` at the top of each notebook.
**3. It is not Python - it is IPython Notebook**
What makes notebook unique is **cells**. Use them well.
Each cell should be an "idea-execution-output" triplet. If a cell does not output anything, combine it with the following cell. An import cell should output nothing; that is the expected output for it.
If a cell has several outputs, it may be worth splitting it.
Hiding imports may or may not be a good idea:
```
from myimports import *
```
Your reader may want to figure out what exactly you are importing to use the same stuff for her research. So use with caution. We do use it for `pandas, numpy, matplotlib, sql` however.
Hiding "secret sauce" in /helpers/model.py is bad:
```
myutil.fit_model_and_calculate(df)
```
This may save you typing and remove duplicate code, but your collaborator will have to open another file to figure out what's going on. Unfortunately, the notebook (Jupyter) is quite an inflexible and basic environment, but you still don't want to force your reader to leave it for every piece of code. I hope that IDE support will improve in the future, but for now, **keep the "secret sauce" inside a notebook**, while "boring and obvious utils" can go wherever you see fit. DRY still applies - you have to find the balance.
This should not stop you from packaging re-usable code into functions or even small classes. But "flat is better than nested".
**4. Keep notebooks clean**
You should be able to "reset & Run All" at any point in time.
Each re-run should be fast! This means you may have to invest in writing some caching functions. Maybe you even want to put those into your "common utils" module.
Each cell should be executable multiple times, without the need to re-initialise the notebook. This saves you time and keeps the code more robust.
But it may depend on state created by previous cells. Making each cell completely independent from the cells above is an anti-pattern, IMO.
After you are done with research - you are not done with notebook. Refactor.
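One possible shape for such a caching helper, shown only as a hedged sketch of what could live in the "common utils" module mentioned above:

```
import os
import pandas as pd

def cached_frame(path, build_fn):
    """Return the DataFrame cached at `path`, rebuilding it with `build_fn()` if missing."""
    if os.path.exists(path):
        return pd.read_pickle(path)
    df = build_fn()        # the slow part runs only once
    df.to_pickle(path)
    return df
```

A notebook cell can then call `df = cached_frame('clean_data.pkl', load_raw_data)` (the loader name is hypothetical) and stay fast on re-runs.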
**5. Create a project module, but be very selective**
If you keep re-using plotting or analytics function - do refactor it into this module. But in my experience, people expect to read and understand a notebook, without opening multiple util sub-modules. So naming your sub-routines well is even more important here, compared to normal Python.
"Clean code reads like well written prose" Grady Booch (developer of UML)
**6. Host Jupyter server in the cloud for the entire team**
You will have one environment, so everyone can quickly review and validate research without the need to match the environment (even though conda makes this pretty easy).
And you can configure defaults, like mpl style/colors and make matplot lib inline, by default:
In `~/.ipython/profile_default/ipython_config.py`
Add line `c.InteractiveShellApp.matplotlib = 'inline'`
**7. (experimental idea) Run a notebook from another notebook, with different parameters**
Quite often you may want to re-run the whole notebook, but with a different input parameters.
To do this, you can structure your research notebook as following:
Place ***params*** dictionary **in the first cell** of "source notebook".
```
params = dict(platform='iOS',
start_date='2016-05-01',
retention=7)
df = get_data(params ..)
do_analysis(params ..)
```
And in another (higher logical level) notebook, execute it using this function:
```
import io
import nbformat  # these imports are needed by read_notebook below

def run_notebook(nbfile, **kwargs):
    """
    example:
    run_notebook('report.ipynb', platform='google_play', start_date='2016-06-10')
    """
    def read_notebook(nbfile):
        if not nbfile.endswith('.ipynb'):
            nbfile += '.ipynb'
        with io.open(nbfile) as f:
            nb = nbformat.read(f, as_version=4)
        return nb

    ip = get_ipython()  # available inside an IPython/Jupyter session
    gl = ip.ns_table['user_global']
    gl['params'] = None
    arguments_in_original_state = True
    for cell in read_notebook(nbfile).cells:
        if cell.cell_type != 'code':
            continue
        ip.run_cell(cell.source)
        if arguments_in_original_state and type(gl['params']) == dict:
            gl['params'].update(kwargs)
            arguments_in_original_state = False
```
Whether this "design pattern" proves to be useful is yet to be seen. We had some success with it - at least we stopped duplicating notebooks only to change few inputs.
Refactoring the notebook into a class or module breaks the quick feedback loop of "idea-execute-output" that cells provide. And, IMHO, it is not "ipythonic".
**8. Write (unit) tests for shared library in notebooks and run with py.test**
There is a Plugin for py.test that can discover and run tests inside notebooks!
<https://pypi.python.org/pypi/pytest-ipynb> | Strange that no one mentioned this. Write out your next project using [nbdev](https://github.com/fastai/nbdev/tree/master/). From the [docs](https://nbdev.fast.ai/), we have
Features of Nbdev
-----------------
`nbdev` provides the following tools for developers:
* **Automatically generate docs** from Jupyter notebooks. These docs are searchable and automatically hyperlinked to appropriate documentation pages by introspecting keywords you surround in backticks.
* Utilities to **automate the publishing of PyPI and conda packages** including version number management.
* A robust, **two-way sync between notebooks and source code**, which allow you to use your IDE for code navigation or quick edits if desired.
* **Fine-grained control on hiding/showing cells**: you can choose to hide entire cells, just the output, or just the input. Furthermore, you can embed cells in collapsible elements that are open or closed by default.
* Ability to **write tests directly in notebooks** without having to learn special APIs. These tests get executed in parallel with a single CLI command. You can even define specific groups of tests such that you don't always have to run long-running tests.
* Tools for **merge/conflict resolution** with notebooks in a **human readable format**.
* **Continuous integration (CI) comes with [GitHub Actions](https://github.com/features/actions)** set up for you out of the box, that will run tests automatically for you. Even if you are not familiar with CI or GitHub Actions, this starts working right away for you without any manual intervention.
* **Integration With GitHub Pages for docs hosting**: nbdev allows you to easily host your documentation for free, using GitHub pages.
* Create Python modules, following **best practices such as automatically defining `__all__`** ([more details](http://xion.io/post/code/python-all-wild-imports.html)) with your exported functions, classes, and variables.
* **Math equation support** with LaTeX.
* ... and much more! See the [Getting Started](https://nbdev.fast.ai/#Getting-Started) section for more information; a small test-writing sketch follows below.
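The feature list above mentions writing tests directly in notebooks; as a rough, hedged sketch, a test is just an ordinary cell (plain `assert` works, though fastai projects often use `fastcore.test.test_eq`):

```
# export
def multiply(a, b):
    "Multiply two numbers."
    return a * b

# this cell is not exported; nbdev's test command simply executes it
assert multiply(3, 4) == 12
```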
For a quick start
* The [tutorial](https://nbdev.fast.ai/tutorial.html).
* A [minimal, end-to-end example](https://nbdev.fast.ai/example.html) of using nbdev. I suggest replicating this example after reading through the tutorial to solidify your understanding.
* use the [nbdev\_template](https://github.com/fastai/nbdev_template)
* Wonder why you did not try this out earlier, even after knowing about it 1.5 years ago (like me).
If you like videos
* [nbdev tutorial by Jeremy Howard and Sylvain Gugger](https://www.dropbox.com/s/bm6jgn9a26hzmk8/nbdev%20tutorial.mp4?dl=0)
* [nbdev live coding with Hamel Husain](https://www.dropbox.com/s/ipsbipun7pv9jyc/nbdev%20live%20coding%20with%20Hamel%20Husain.mp4?dl=0)
If the video links fail, search the titles on YouTube to find them.
Also, follow all guidelines as in [volodymyr's answer above](https://stackoverflow.com/a/38192558/7952027)
All these comments were specific to notebooks.
For any code, you have to
* Write tests (before or at least after you write code)
* Add documentation for functions (preferably NumPy style since this is a scientific package)
* Share it when you publish a paper so that others need not reinvent the wheel (Especially those who work in Physics). |
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out: things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there a good way to do it?
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should go into which file.
I think a possible way is to organize them into `pre-processing`, `processing`, and `post-processing`.
UPDATE:
My big Jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | There are many ways to organise an IPython research project. I am managing a team of 5 Data Scientists and 3 Data Engineers, and I found these tips to be working well for our use case:
This is a summary of my PyData London talk:
<http://www.slideshare.net/vladimirkazantsev/clean-code-in-jupyter-notebook>
**1. Create a shared (multi-project) utils library**
You most likely have to reuse/repeat some code in different research projects. Start refactoring those things into a "common utils" package. Make a setup.py file and push the module to GitHub (or similar), so that team members can "pip install" it from VCS.
Examples of functionality to put in there are:
* Data Warehouse or Storage access functions
* common plotting functions
* re-usable math/stats methods
**2. Split your fat master notebook into smaller notebooks**
In my experience, a good length for a file with code (any language) is only a few screens (100-400 lines). A Jupyter Notebook is still a source file, but with output! Reading a notebook with 20+ cells is very hard. I like my notebooks to have 4-10 cells max.
Ideally, each notebook should have one "hypothesis-data-conclusions" triplet.
Example of splitting the notebook:
1\_data\_preparation.ipynb
2\_data\_validation.ipynb
3\_exploratory\_plotting.ipynb
4\_simple\_linear\_model.ipynb
5\_hierarchical\_model.ipynb
playground.ipynb
Save output of 1\_data\_preparation.ipynb to pickle `df.to_pickle('clean_data.pkl')`, csv or fast DB and use `pd.read_pickle("clean_data.pkl")` at the top of each notebook.
**3. It is not Python - it is IPython Notebook**
What makes notebook unique is **cells**. Use them well.
Each cell should be an "idea-execution-output" triplet. If a cell does not output anything, combine it with the following cell. An import cell should output nothing; that is the expected output for it.
If a cell has several outputs, it may be worth splitting it.
Hiding imports may or may not be a good idea:
```
from myimports import *
```
Your reader may want to figure out what exactly you are importing to use the same stuff for her research. So use with caution. We do use it for `pandas, numpy, matplotlib, sql` however.
Hiding "secret sauce" in /helpers/model.py is bad:
```
myutil.fit_model_and_calculate(df)
```
This may save you typing and remove duplicate code, but your collaborator will have to open another file to figure out what's going on. Unfortunately, the notebook (Jupyter) is quite an inflexible and basic environment, but you still don't want to force your reader to leave it for every piece of code. I hope that IDE support will improve in the future, but for now, **keep the "secret sauce" inside a notebook**, while "boring and obvious utils" can go wherever you see fit. DRY still applies - you have to find the balance.
This should not stop you from packaging re-usable code into functions or even small classes. But "flat is better than nested".
**4. Keep notebooks clean**
You should be able to "reset & Run All" at any point in time.
Each re-run should be fast! This means you may have to invest in writing some caching functions. Maybe you even want to put those into your "common utils" module.
Each cell should be executable multiple times, without the need to re-initialise the notebook. This saves you time and keeps the code more robust.
But it may depend on state created by previous cells. Making each cell completely independent from the cells above is an anti-pattern, IMO.
After you are done with research - you are not done with notebook. Refactor.
**5. Create a project module, but be very selective**
If you keep re-using plotting or analytics function - do refactor it into this module. But in my experience, people expect to read and understand a notebook, without opening multiple util sub-modules. So naming your sub-routines well is even more important here, compared to normal Python.
"Clean code reads like well written prose" Grady Booch (developer of UML)
**6. Host Jupyter server in the cloud for the entire team**
You will have one environment, so everyone can quickly review and validate research without the need to match the environment (even though conda makes this pretty easy).
And you can configure defaults, like mpl style/colors and make matplot lib inline, by default:
In `~/.ipython/profile_default/ipython_config.py`
Add line `c.InteractiveShellApp.matplotlib = 'inline'`
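A hedged sketch of what that profile file could contain (the autoreload lines are an optional extra, not something the answer prescribes):

```
# ~/.ipython/profile_default/ipython_config.py
c = get_config()

c.InteractiveShellApp.matplotlib = 'inline'
# optionally preload autoreload for everyone:
c.InteractiveShellApp.extensions = ['autoreload']
c.InteractiveShellApp.exec_lines = ['%autoreload 2']
```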
**7. (experimental idea) Run a notebook from another notebook, with different parameters**
Quite often you may want to re-run the whole notebook, but with a different input parameters.
To do this, you can structure your research notebook as following:
Place ***params*** dictionary **in the first cell** of "source notebook".
```
params = dict(platform='iOS',
start_date='2016-05-01',
retention=7)
df = get_data(params ..)
do_analysis(params ..)
```
And in another (higher logical level) notebook, execute it using this function:
```
import io
import nbformat  # these imports are needed by read_notebook below

def run_notebook(nbfile, **kwargs):
    """
    example:
    run_notebook('report.ipynb', platform='google_play', start_date='2016-06-10')
    """
    def read_notebook(nbfile):
        if not nbfile.endswith('.ipynb'):
            nbfile += '.ipynb'
        with io.open(nbfile) as f:
            nb = nbformat.read(f, as_version=4)
        return nb

    ip = get_ipython()  # available inside an IPython/Jupyter session
    gl = ip.ns_table['user_global']
    gl['params'] = None
    arguments_in_original_state = True
    for cell in read_notebook(nbfile).cells:
        if cell.cell_type != 'code':
            continue
        ip.run_cell(cell.source)
        if arguments_in_original_state and type(gl['params']) == dict:
            gl['params'].update(kwargs)
            arguments_in_original_state = False
```
Whether this "design pattern" proves to be useful is yet to be seen. We had some success with it - at least we stopped duplicating notebooks only to change few inputs.
Refactoring the notebook into a class or module break quick feedback loop of "idea-execute-output" that cells provide. And, IMHO, is not "ipythonic"..
**8. Write (unit) tests for shared library in notebooks and run with py.test**
There is a Plugin for py.test that can discover and run tests inside notebooks!
<https://pypi.python.org/pypi/pytest-ipynb> | If you hate notebooks, try out these cookiecutters
* [Dr Michael Goerz's cookiecutter](https://github.com/goerz/cookiecutter-pypackage)
* [Ionel Cristian Mărieș](https://github.com/ionelmc/cookiecutter-pylibrary)
* [University of Washington Escience institute's shablona](https://github.com/uwescience/shablona) |
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out: things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there a good way to do it?
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should go into which file.
I think a possible way is to organize them into `pre-processing`, `processing`, and `post-processing`.
UPDATE:
My big Jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | While the given answers cover the topic thoroughly it is still worth mentioning [Cookiecutter](https://cookiecutter.readthedocs.io/en/latest/) which provides a data science boilerplate project structure:
### [Cookiecutter Data Science](https://drivendata.github.io/cookiecutter-data-science/)
provides a data science template for Python projects with a logical, reasonably standardized, yet flexible project structure for doing and sharing data science work.
Your analysis doesn't have to be in Python, but the template does provide some Python boilerplate (in the src folder for example, and the Sphinx documentation skeleton in docs). However, nothing is binding.
The following quote from the project description sums it up pretty nicely:
>
> Nobody sits around before creating a new Rails project to figure out
> where they want to put their views; they just run `rails new` to get a
> standard project skeleton like everybody else.
>
>
>
### Requirements:
* Python 2.7 or 3.5
* cookiecutter Python package >= 1.4.0: `pip install cookiecutter`
### Getting started
>
> Starting a new project is as easy as running this command at the
> command line. No need to create a directory first, the cookiecutter
> will do it for you.
>
>
>
```
cookiecutter https://github.com/drivendata/cookiecutter-data-science
```
### Directory structure
```
├── LICENSE
├── Makefile <- Makefile with commands like `make data` or `make train`
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- A default Sphinx project; see sphinx-doc.org for details
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `1.0-jqp-initial-data-exploration`.
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
├── src <- Source code for use in this project.
│ ├── __init__.py <- Makes src a Python module
│ │
│ ├── data <- Scripts to download or generate data
│ │ └── make_dataset.py
│ │
│ ├── features <- Scripts to turn raw data into features for modeling
│ │ └── build_features.py
│ │
│ ├── models <- Scripts to train models and then use trained models to make
│ │ │ predictions
│ │ ├── predict_model.py
│ │ └── train_model.py
│ │
│ └── visualization <- Scripts to create exploratory and results-oriented visualizations
│ └── visualize.py
│
└── tox.ini <- tox file with settings for running tox; see tox.testrun.org
```
### Related:
[ProjectTemplate](http://projecttemplate.net/index.html) - provides a similar system for R data analysis. | Strange that no one mentioned this. Write out your next project using [nbdev](https://github.com/fastai/nbdev/tree/master/). From the [docs](https://nbdev.fast.ai/), we have
Features of Nbdev
-----------------
`nbdev` provides the following tools for developers:
* **Automatically generate docs** from Jupyter notebooks. These docs are searchable and automatically hyperlinked to appropriate documentation pages by introspecting keywords you surround in backticks.
* Utilities to **automate the publishing of PyPI and conda packages** including version number management.
* A robust, **two-way sync between notebooks and source code**, which allow you to use your IDE for code navigation or quick edits if desired.
* **Fine-grained control on hiding/showing cells**: you can choose to hide entire cells, just the output, or just the input. Furthermore, you can embed cells in collapsible elements that are open or closed by default.
* Ability to **write tests directly in notebooks** without having to learn special APIs. These tests get executed in parallel with a single CLI command. You can even define specific groups of tests such that you don't always have to run long-running tests.
* Tools for **merge/conflict resolution** with notebooks in a **human readable format**.
* **Continuous integration (CI) comes with [GitHub Actions](https://github.com/features/actions)** set up for you out of the box, that will run tests automatically for you. Even if you are not familiar with CI or GitHub Actions, this starts working right away for you without any manual intervention.
* **Integration With GitHub Pages for docs hosting**: nbdev allows you to easily host your documentation for free, using GitHub pages.
* Create Python modules, following **best practices such as automatically defining `__all__`** ([more details](http://xion.io/post/code/python-all-wild-imports.html)) with your exported functions, classes, and variables.
* **Math equation support** with LaTeX.
* ... and much more! See the [Getting Started](https://nbdev.fast.ai/#Getting-Started) section for more information.
For a quick start
* The [tutorial](https://nbdev.fast.ai/tutorial.html).
* A [minimal, end-to-end example](https://nbdev.fast.ai/example.html) of using nbdev. I suggest replicating this example after reading through the tutorial to solidify your understanding.
* use the [nbdev\_template](https://github.com/fastai/nbdev_template)
* Wonder why you did not try this out earlier, even after knowing about it 1.5 years ago (like me).
If you like videos
* [nbdev tutorial by Jeremy Howard and Sylvain Gugger](https://www.dropbox.com/s/bm6jgn9a26hzmk8/nbdev%20tutorial.mp4?dl=0)
* [nbdev live coding with Hamel Husain](https://www.dropbox.com/s/ipsbipun7pv9jyc/nbdev%20live%20coding%20with%20Hamel%20Husain.mp4?dl=0)
If the video links fail, search the titles on YouTube to find them.
Also, follow all guidelines as in [volodymyr's answer above](https://stackoverflow.com/a/38192558/7952027)
All these comments were specific to notebooks.
For any code, you have to
* Write tests (before or at least after you write code)
* Add documentation for functions (preferably NumPy style since this is a scientific package)
* Share it when you publish a paper so that others need not reinvent the wheel (Especially those who work in Physics). |
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out: things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there a good way to do it?
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should go into which file.
I think a possible way is to organize them into `pre-processing`, `processing`, and `post-processing`.
UPDATE:
My big Jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | While the given answers cover the topic thoroughly it is still worth mentioning [Cookiecutter](https://cookiecutter.readthedocs.io/en/latest/) which provides a data science boilerplate project structure:
### [Cookiecutter Data Science](https://drivendata.github.io/cookiecutter-data-science/)
provides a data science template for Python projects with a logical, reasonably standardized, yet flexible project structure for doing and sharing data science work.
Your analysis doesn't have to be in Python, but the template does provide some Python boilerplate (in the src folder for example, and the Sphinx documentation skeleton in docs). However, nothing is binding.
The following quote from the project description sums it up pretty nicely:
>
> Nobody sits around before creating a new Rails project to figure out
> where they want to put their views; they just run `rails new` to get a
> standard project skeleton like everybody else.
>
>
>
### Requirements:
* Python 2.7 or 3.5
* cookiecutter Python package >= 1.4.0: `pip install cookiecutter`
### Getting started
>
> Starting a new project is as easy as running this command at the
> command line. No need to create a directory first, the cookiecutter
> will do it for you.
>
>
>
```
cookiecutter https://github.com/drivendata/cookiecutter-data-science
```
### Directory structure
```
├── LICENSE
├── Makefile <- Makefile with commands like `make data` or `make train`
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- A default Sphinx project; see sphinx-doc.org for details
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `1.0-jqp-initial-data-exploration`.
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
├── src <- Source code for use in this project.
│ ├── __init__.py <- Makes src a Python module
│ │
│ ├── data <- Scripts to download or generate data
│ │ └── make_dataset.py
│ │
│ ├── features <- Scripts to turn raw data into features for modeling
│ │ └── build_features.py
│ │
│ ├── models <- Scripts to train models and then use trained models to make
│ │ │ predictions
│ │ ├── predict_model.py
│ │ └── train_model.py
│ │
│ └── visualization <- Scripts to create exploratory and results-oriented visualizations
│ └── visualize.py
│
└── tox.ini <- tox file with settings for running tox; see tox.testrun.org
```
### Related:
[ProjectTemplate](http://projecttemplate.net/index.html) - provides a similar system for R data analysis. | If you hate notebooks, try out these cookiecutters
* [Dr Michael Goerz's cookiecutter](https://github.com/goerz/cookiecutter-pypackage)
* [Ionel Cristian Mărieș](https://github.com/ionelmc/cookiecutter-pylibrary)
* [University of Washington Escience institute's shablona](https://github.com/uwescience/shablona) |
36,427,747 | I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out: things like plotting methods, fitting methods, etc.
I think I need a way to organize this. Is there a good way to do it?
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should go into which file.
I think a possible way is to organize them into `pre-processing`, `processing`, and `post-processing`.
UPDATE:
My big Jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | Strange that no one mentioned this. Write out your next project using [nbdev](https://github.com/fastai/nbdev/tree/master/). From the [docs](https://nbdev.fast.ai/), we have
Features of Nbdev
-----------------
`nbdev` provides the following tools for developers:
* **Automatically generate docs** from Jupyter notebooks. These docs are searchable and automatically hyperlinked to appropriate documentation pages by introspecting keywords you surround in backticks.
* Utilities to **automate the publishing of PyPI and conda packages** including version number management.
* A robust, **two-way sync between notebooks and source code**, which allow you to use your IDE for code navigation or quick edits if desired.
* **Fine-grained control on hiding/showing cells**: you can choose to hide entire cells, just the output, or just the input. Furthermore, you can embed cells in collapsible elements that are open or closed by default.
* Ability to **write tests directly in notebooks** without having to learn special APIs. These tests get executed in parallel with a single CLI command. You can even define specific groups of tests such that you don't always have to run long-running tests.
* Tools for **merge/conflict resolution** with notebooks in a **human readable format**.
* **Continuous integration (CI) comes with [GitHub Actions](https://github.com/features/actions)** set up for you out of the box, that will run tests automatically for you. Even if you are not familiar with CI or GitHub Actions, this starts working right away for you without any manual intervention.
* **Integration With GitHub Pages for docs hosting**: nbdev allows you to easily host your documentation for free, using GitHub pages.
* Create Python modules, following **best practices such as automatically defining `__all__`** ([more details](http://xion.io/post/code/python-all-wild-imports.html)) with your exported functions, classes, and variables.
* **Math equation support** with LaTeX.
* ... and much more! See the [Getting Started](https://nbdev.fast.ai/#Getting-Started) section for more information.
For a quick start
* The [tutorial](https://nbdev.fast.ai/tutorial.html).
* A [minimal, end-to-end example](https://nbdev.fast.ai/example.html) of using nbdev. I suggest replicating this example after reading through the tutorial to solidify your understanding.
* use the [nbdev\_template](https://github.com/fastai/nbdev_template)
* Wonder why you did not try this out earlier, even after knowing about it 1.5 years ago (like me).
If you like videos
* [nbdev tutorial by Jeremy Howard and Sylvain Gugger](https://www.dropbox.com/s/bm6jgn9a26hzmk8/nbdev%20tutorial.mp4?dl=0)
* [nbdev live coding with Hamel Husain](https://www.dropbox.com/s/ipsbipun7pv9jyc/nbdev%20live%20coding%20with%20Hamel%20Husain.mp4?dl=0)
If the video links fail, search the titles on YouTube to find them.
Also, follow all guidelines as in [volodymyr's answer above](https://stackoverflow.com/a/38192558/7952027)
All these comments were specific to notebooks.
For any code, you have to
* Write tests (before or at least after you write code)
* Add documentation for functions (preferably NumPy style since this is a scientific package)
* Share it when you publish a paper so that others need not reinvent the wheel (Especially those who work in Physics). | If you hate notebooks, try out these cookiecutters
* [Dr Michael Goerz's cookiecutter](https://github.com/goerz/cookiecutter-pypackage)
* [Ionel Cristian Mărieș](https://github.com/ionelmc/cookiecutter-pylibrary)
* [University of Washington Escience institute's shablona](https://github.com/uwescience/shablona) |
54,292,049 | I play the Hacknet game and I have to guess a word to bypass a firewall.
The key is 6 characters long and contains the letters K, K, K, U, A, N.
What is the simplest way to generate all possible combinations, either in Bash or in Python? (bonus points for Bash) | 2019/01/21 | [
"https://Stackoverflow.com/questions/54292049",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10945277/"
] | Git uses a tree-like organization to which only new nodes (commits) can be added. If you really want to delete a wrongly pushed commit, you must update your repository locally and then force-push to the corresponding remote. I found an issue talking about it.
[How to undo the initial commit on a remote repository in git?](https://stackoverflow.com/questions/18874613/how-to-undo-the-initial-commit-on-a-remote-repository-in-git) | use `git revert <commit_id_to_be_reverted>` |
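For the original question of enumerating the distinct 6-character arrangements of K, K, K, U, A, N, a minimal Python sketch could be:

```
from itertools import permutations

letters = "KKKUAN"
# set() removes duplicates caused by the repeated K's; 120 distinct keys remain
for key in sorted(set("".join(p) for p in permutations(letters))):
    print(key)
```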
54,292,049 | I play the Hacknet game and I have to guess a word to bypass a firewall.
The key is 6 characters long and contains the letters K, K, K, U, A, N.
What is the simplest way to generate all possible combinations, either in Bash or in Python? (bonus points for Bash) | 2019/01/21 | [
"https://Stackoverflow.com/questions/54292049",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10945277/"
] | You should use `git rebase -i --root` and squash the commit removing the `node_modules` folder with the first commit. | use `git revert <commit_id_to_be_reverted>` |
54,292,049 | I play the Hacknet game and I have to guess a word to bypass a firewall.
The key is 6 characters long and contains the letters K, K, K, U, A, N.
What is the simplest way to generate all possible combinations, either in Bash or in Python? (bonus points for Bash) | 2019/01/21 | [
"https://Stackoverflow.com/questions/54292049",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10945277/"
] | You should use `git rebase -i --root` and squash the commit removing the `node_modules` folder with the first commit. | Git uses a tree-like organization to which only new nodes (commits) can be added. If you really want to delete a wrongly pushed commit, you must update your repository locally and then force-push to the corresponding remote. I found an issue talking about it.
[How to undo the initial commit on a remote repository in git?](https://stackoverflow.com/questions/18874613/how-to-undo-the-initial-commit-on-a-remote-repository-in-git) |
1,265,078 | I want to use Python to get the executable file's version, and I know about [pefile.py](http://code.google.com/p/pefile/).
How can I use it to do this?
Note: the executable file may be incomplete. | 2009/08/12 | [
"https://Stackoverflow.com/questions/1265078",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/154106/"
] | This is the best answer I think you can find:
```
import pefile
pe = pefile.PE("/path/to/something.exe")
print hex(pe.VS_VERSIONINFO.Length)
print hex(pe.VS_VERSIONINFO.Type)
print hex(pe.VS_VERSIONINFO.ValueLength)
print hex(pe.VS_FIXEDFILEINFO.Signature)
print hex(pe.VS_FIXEDFILEINFO.FileFlags)
print hex(pe.VS_FIXEDFILEINFO.FileOS)
for fileinfo in pe.FileInfo:
  if fileinfo.Key == 'StringFileInfo':
    for st in fileinfo.StringTable:
      for entry in st.entries.items():
        print '%s: %s' % (entry[0], entry[1])
  if fileinfo.Key == 'VarFileInfo':
    for var in fileinfo.Var:
      print '%s: %s' % var.entry.items()[0]
```
[From Ero Carrera's (the author of `pefile.py`) own blog](http://blog.dkbza.org/2007/02/pefile-parsing-version-information-from.html) | I'm not sure that I understand your problem correctly, but if it's something along the lines of using pefile to retrieve the version of a provided executable, then perhaps the following (taken from the pefile tutorial)
```
import pefile
pe = pefile.PE("/path/to/pefile.exe")
print pe.dump_info()
```
will provide you with the version information. I have no idea how gracefully pefile handles incomplete files, but conjecturing that the version information is somewhere in the header and that pefile uses a generator to read the file, it should then be possible to read the information as long as the header is parseable. |
62,017,437 | I am new to programming. I have made a Python script. It runs without errors in PyCharm. Using PyInstaller I tried to make an exe. When I run the exe in the build or dist folder, or even through the command prompt, it gives me the error 'Failed to execute script Main'.
I am attaching the warnings file link:
<https://drive.google.com/open?id=1cDQ2KGId0B8K9Qi1bWPIhL55hQO0dM-z>
Kindly help! | 2020/05/26 | [
"https://Stackoverflow.com/questions/62017437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13605404/"
] | There is one pip script for each virtual environment. So when you install a Python module, it gets installed into the projectname\venv\Lib\site-packages directory.
When you run pyinstaller from the terminal to make the executable, pyinstaller checks for dependencies in sys.path. But that path does not include the projectname\venv\Lib\site-packages directory, so pyinstaller cannot find those particular dependencies. In such cases it gives you warnings; those warnings can be found in 'warnname.txt' near your executable file.
**How to Configure pycharm to run pyinstaller**
1. First you need to add pyinstaller into project interpreter.
[](https://i.stack.imgur.com/eGLSS.png)
2. Then you need to setup running configurations.
[](https://i.stack.imgur.com/3C37D.png)
[](https://i.stack.imgur.com/prRnh.png)
**Script name**: path to your python script
**working path**: Project location
**Leave interpreter options as it is in the image.**
3. Run pyinstaller. You can find your .exe in dist directory.
4. If the "Module not found" error still persists. You can add a hidden import hook and specify the names of the missing modules.Navigate to **Project Path\venv\Lib\site-packages\PyInstaller\hooks** and create a new "**hook-pandas.py**"(hook-modulename.py) script and make a list of hidden import modules like this:
```
hiddenimports = ['pandas._libs.tslibs.np_datetime','pandas._libs.tslibs.nattype','pandas._libs.skiplist']
```
5. Run pyinstaller again, and it should work now. | I know I'm writing this 10 months later, but I ran into the same problem and I know the solution, so maybe some people who have the same problem can get help from it.
If your script has any additional files such as db, csv, png, etc., you should add these files to the same directory. That should solve the problem; at least my problem was solved this way. |
48,021,748 | I have two MySQL databases: one on localhost and another on a server. Now I am going to create a simple app in Python using Flask, and for that application I would like to connect to both databases (local and server).
Can anyone please suggest how to connect multiple databases in Flask?
```
app = Flask(__name__)
client = MongoClient()
client = MongoClient('localhost', 27017)
db = client.sampleDB1
```
Sample code if possible.
Thanks | 2017/12/29 | [
"https://Stackoverflow.com/questions/48021748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5483189/"
] | I had the same issue, finally figured it out.
Instead of using
```
client = MongoClient()
client = MongoClient('localhost', 27017)
db = client.sampleDB1
```
Delete all that and try this:
```
mongo1 = PyMongo(app, uri = 'mongodb://localhost:27017/Database1')
mongo2 = PyMongo(app, uri = 'mongodb://localhost:27017/Database2')
```
Then, when you want to call a particular database you can use:
```
@app.route('/routenamedb1', methods=['GET'])
def get_data_from_Database1():
Database1 = mongo1.db.CollectionName ##Notice I use mongo1,
#If I wanted to access database2 I would use mongo2
#Walk through the Database for DC to
for s in Database1.find():
#Modifying code
return data
``` | create model.py and separate instances of 2 databases inside it, then in app.py:
```
app = Flask(__name__)
app.config['MODEL'] = model.my1st_database()
app.config['MODEL2'] = model.my2nd_database()
```
works for me :) |
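As a rough sketch of what that `model.py` could look like (the helper names mirror the snippet above; the URIs and database names are placeholders), each function can simply hand back its own PyMongo database handle:

```python
# model.py -- one plain PyMongo handle per database (all names are placeholders)
from pymongo import MongoClient

def my1st_database():
    return MongoClient("mongodb://localhost:27017/")["Database1"]

def my2nd_database():
    return MongoClient("mongodb://server-host:27017/")["Database2"]
```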
48,021,748 | I have two MySQL databases: one on localhost and another on a server. I am going to create a simple app in Python using Flask, and for that application I would like to connect to both MySQL DBs (local and server).
Can anyone please suggest how to connect multiple DBs in Flask?
```
app = Flask(__name__)
client = MongoClient()
client = MongoClient('localhost', 27017)
db = client.sampleDB1
```
Sample code if possible.
Thanks | 2017/12/29 | [
"https://Stackoverflow.com/questions/48021748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5483189/"
] | ```
#This technique can be used to connect to multiple databases or database servers:
app = Flask(__name__)
# connect to MongoDB with the defaults
mongo1 = PyMongo(app)
# connect to another MongoDB database on the same host
app.config['MONGO2_DBNAME'] = 'dbname_two'
mongo2 = PyMongo(app, config_prefix='MONGO2')
# connect to another MongoDB server altogether
app.config['MONGO3_HOST'] = 'another.host.example.com'
app.config['MONGO3_PORT'] = 27017
app.config['MONGO3_DBNAME'] = 'dbname_three'
mongo3 = PyMongo(app, config_prefix='MONGO3')
``` | create model.py and separate instances of 2 databases inside it, then in app.py:
```
app = Flask(__name__)
app.config['MODEL'] = model.my1st_database()
app.config['MODEL2'] = model.my2nd_database()
```
works for me :) |
48,021,748 | I have two MySQL databases: one on localhost and another on a server. I am going to create a simple app in Python using Flask, and for that application I would like to connect to both MySQL DBs (local and server).
Can anyone please suggest how to connect multiple DBs in Flask?
```
app = Flask(__name__)
client = MongoClient()
client = MongoClient('localhost', 27017)
db = client.sampleDB1
```
Sample code if possible.
Thanks | 2017/12/29 | [
"https://Stackoverflow.com/questions/48021748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5483189/"
] | I had the same issue, finally figured it out.
Instead of using
```
client = MongoClient()
client = MongoClient('localhost', 27017)
db = client.sampleDB1
```
Delete all that and try this:
```
mongo1 = PyMongo(app, uri = 'mongodb://localhost:27017/Database1')
mongo2 = PyMongo(app, uri = 'mongodb://localhost:27017/Database2')
```
Then, when you want to call a particular database you can use:
```
@app.route('/routenamedb1', methods=['GET'])
def get_data_from_Database1():
Database1 = mongo1.db.CollectionName ##Notice I use mongo1,
#If I wanted to access database2 I would use mongo2
#Walk through the Database for DC to
for s in Database1.find():
#Modifying code
return data
``` | ```
#This technique can be used to connect to multiple databases or database servers:
app = Flask(__name__)
# connect to MongoDB with the defaults
mongo1 = PyMongo(app)
# connect to another MongoDB database on the same host
app.config['MONGO2_DBNAME'] = 'dbname_two'
mongo2 = PyMongo(app, config_prefix='MONGO2')
# connect to another MongoDB server altogether
app.config['MONGO3_HOST'] = 'another.host.example.com'
app.config['MONGO3_PORT'] = 27017
app.config['MONGO3_DBNAME'] = 'dbname_three'
mongo3 = PyMongo(app, config_prefix='MONGO3')
``` |
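A brief usage sketch for the prefixed connections above (the route and collection names are hypothetical, and this assumes the same Flask-PyMongo version that accepts `config_prefix`):

```python
@app.route('/from-second-db')
def from_second_db():
    # Reads from dbname_two via the MONGO2-prefixed connection.
    doc = mongo2.db.some_collection.find_one()
    return str(doc)
```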
57,010,207 | I want to use R to split some chat messages, here is an example:
```
example <- "[29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n[29.01.18, 23:45] Bob: Ja ist es!\n[29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n[29.01.18, 23:46] Alice: PTT-20180129-WA0025.opus (Datei angehängt)\n[29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n[29.01.18, 23:46] Bob: Das ist praktisch!\n[29.01.18, 23:47] Bob: Oder?\n[29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n[29.01.18, 23:47] Alice: und Emojis gehen auch!\n[29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n[29.01.18, 23:49] Alice: Keine Ahnung, lass uns das doch mal ausprobieren\n[29.01.18, 23:50] Bob: Alles klar :) :D\n[29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n[29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?\n[29.01.18, 23:52] Bob: Schick doch mal eine zum ausprobieren!\n[29.01.18, 23:53] Alice: https://github.com/JuKo007\n[29.01.18, 23:58] Alice: Scheint zu funktionieren!\n[29.01.18, 23:59] Alice: Sehr schön!\n[30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n[30.01.18, 00:02] Bob: Alles klar, los gehts |Emoji_Relieved_Face| \n"
```
Basically, I want to split the string right in front of the date-time indicator in the brackets, here is what I tried so far:
```
# Cutting the textblock into individual messages
chat <- strsplit(example,"(?=\\[\\d\\d.\\d\\d.\\d\\d, \\d\\d:\\d\\d\\])",perl=TRUE)
chat <- unlist(chat)
```
The weird thing is, that in the output, it seems that the split occurs *after* the first square bracket, not in front:
```
[1] "["
[2] "29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n"
[3] "["
[4] "29.01.18, 23:45] Bob: Ja ist es!\n"
[5] "["
[6] "29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n"
[7] "["
[8] "29.01.18, 23:46] Alice: PTT-20180129-WA0025.opus (Datei angehängt)\n"
[9] "["
[10] "29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n"
[11] "["
[12] "29.01.18, 23:46] Bob: Das ist praktisch!\n"
[13] "["
[14] "29.01.18, 23:47] Bob: Oder?\n"
[15] "["
[16] "29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n"
[17] "["
[18] "29.01.18, 23:47] Alice: und Emojis gehen auch!\n"
[19] "["
[20] "29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n"
[21] "["
[22] "29.01.18, 23:49] Alice: Keine Ahnung, lass uns das doch mal ausprobieren\n"
[23] "["
[24] "29.01.18, 23:50] Bob: Alles klar :) :D\n"
[25] "["
[26] "29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n"
[27] "["
[28] "29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?\n"
[29] "["
[30] "29.01.18, 23:52] Bob: Schick doch mal eine zum ausprobieren!\n"
[31] "["
[32] "29.01.18, 23:53] Alice: https://github.com/JuKo007\n"
[33] "["
[34] "29.01.18, 23:58] Alice: Scheint zu funktionieren!\n"
[35] "["
[36] "29.01.18, 23:59] Alice: Sehr schön!\n"
[37] "["
[38] "30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n"
[39] "["
[40] "30.01.18, 00:02] Bob: Alles klar, los gehts |Emoji_Relieved_Face| \n"
```
When I try to test the Regex pattern [online](https://regex101.com/r/T2wB2o/1) or use it in python, it works just as intended, so to me it seems that this is a feature of the strsplit function? Any recommendation on how to change my R code to make this work are very welcome! I know that it would be easy to just paste this output back together to get my desired output but I would really like to understand whats going on with strsplit and do it properly instead of patching it back together. What I want is:
```
[1] "[29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n"
[2] "[29.01.18, 23:45] Bob: Ja ist es!\n"
[3] "[29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n"
[4] "[29.01.18, 23:46] Alice: PTT-20180129-WA0025.opus (Datei angehängt)\n"
[5] "[29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n"
[6] "[29.01.18, 23:46] Bob: Das ist praktisch!\n"
[7] "[29.01.18, 23:47] Bob: Oder?\n"
[8] "[29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n"
[9] "[29.01.18, 23:47] Alice: und Emojis gehen auch!\n"
[10] "[29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n"
[11] "[29.01.18, 23:49] Alice: Keine Ahnung, lass uns das doch mal ausprobieren\n"
[12] "[29.01.18, 23:50] Bob: Alles klar :) :D\n"
[13] "[29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n"
[14] "[29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?"
[15] "[29.01.18, 23:52] Bob: Schick doch mal eine zum ausprobieren!\n"
[16] "[29.01.18, 23:53] Alice: https://github.com/JuKo007\n"
[17] "[29.01.18, 23:58] Alice: Scheint zu funktionieren!\n"
[18] "[29.01.18, 23:59] Alice: Sehr schön!\n"
[19] "[30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n"
[20] "[30.01.18, 00:02] Bob: Alles klar, los gehts |Emoji_Relieved_Face| \n"
``` | 2019/07/12 | [
"https://Stackoverflow.com/questions/57010207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6039913/"
] | You could add a negative lookahead `(?!^)` to assert not the start of the string.
Your updated line might look like:
```
chat <- strsplit(example,"(?!^)(?=\\[\\d\\d.\\d\\d.\\d\\d, \\d\\d:\\d\\d\\])",perl=TRUE)
```
[R demo](https://ideone.com/KlRaFp)
Result
```
[1] "[29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n"
[2] "[29.01.18, 23:45] Bob: Ja ist es!\n"
[3] "[29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n"
[4] "[29.01.18, 23:46] Alice: PTT-20180129-WA0025.opus (Datei angehängt)\n"
[5] "[29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n"
[6] "[29.01.18, 23:46] Bob: Das ist praktisch!\n"
[7] "[29.01.18, 23:47] Bob: Oder?\n"
[8] "[29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n"
[9] "[29.01.18, 23:47] Alice: und Emojis gehen auch!\n"
[10] "[29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n"
[11] "[29.01.18, 23:49] Alice: Keine Ahnung, lass uns das doch mal ausprobieren\n"
[12] "[29.01.18, 23:50] Bob: Alles klar :) :D\n"
[13] "[29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n"
[14] "[29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?\n"
[15] "[29.01.18, 23:52] Bob: Schick doch mal eine zum ausprobieren!\n"
[16] "[29.01.18, 23:53] Alice: https://github.com/JuKo007\n"
[17] "[29.01.18, 23:58] Alice: Scheint zu funktionieren!\n"
[18] "[29.01.18, 23:59] Alice: Sehr schön!\n"
[19] "[30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n"
[20] "[30.01.18, 00:02] Bob: Alles klar, los gehts |Emoji_Relieved_Face| \n"
``` | You can use `stringi` and extract the info you want by slightly modifying the end of your pattern (i.e., matching everything until the next `[`). You could include more of your pattern to ensure there aren't any false-matches but this should get your started. Good luck!
```
library(stringi)
stri_extract_all(example, regex = "\\[\\d\\d.\\d\\d.\\d\\d, \\d\\d:\\d\\d\\][^\\[]*")
[[1]]
[1] "[29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n"
[2] "[29.01.18, 23:45] Bob: Ja ist es!\n"
[3] "[29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n"
[4] "[29.01.18, 23:46] Alice: \016PTT-20180129-WA0025.opus (Datei angehängt)\n"
[5] "[29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n"
[6] "[29.01.18, 23:46] Bob: Das ist praktisch!\n"
[7] "[29.01.18, 23:47] Bob: Oder?\n"
[8] "[29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n"
[9] "[29.01.18, 23:47] Alice: und Emojis gehen auch!\n"
[10] "[29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n"
[11] "[29.01.18, 23:49] Alice: \016Keine Ahnung, lass uns das doch mal ausprobieren\n"
[12] "[29.01.18, 23:50] Bob: Alles klar :) :D\n"
[13] "[29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n"
[14] "[29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?\n"
[15] "[29.01.18, 23:52] Bob: \016Schick doch mal eine zum ausprobieren!\n"
[16] "[29.01.18, 23:53] Alice: https://github.com/JuKo007\n"
[17] "[29.01.18, 23:58] Alice: \016Scheint zu funktionieren!\n"
[18] "[29.01.18, 23:59] Alice: Sehr schön!\n"
[19] "[30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n"
[20] "[30.01.18, 00:02] Bob: \016Alles klar, los gehts |Emoji_Relieved_Face| \n"
``` |
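Since the question mentions that the pattern already behaves as intended in Python, here is the equivalent split there for comparison; it is a sketch that needs Python 3.7+ (where `re.split` accepts zero-width matches) and uses a shortened stand-in for the chat string:

```python
import re

# Shortened stand-in for the `example` string from the question.
example = "[29.01.18, 23:33] Alice: Hallo\n[29.01.18, 23:45] Bob: Hi\n"

# Same anchored lookahead as the accepted answer, with the dots escaped.
pattern = r"(?!^)(?=\[\d\d\.\d\d\.\d\d, \d\d:\d\d\])"
for message in re.split(pattern, example):
    print(repr(message))
```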
25,567,791 | I've been trying for several days now to send a python array by i2c.
```
data = [x,x,x,x] # `x` is a number from 0 to 127.
bus.write_i2c_block_data(i2c_address, 0, data)
bus.write_i2c_block_data(addr, cmd, array)
```
In the function above: addr - Arduino I2C address; cmd - not sure what this is; array - Python array of int numbers.
Can this be done? What is actually the cmd?
---
FWIW, Arduino code, where I receive the array and put it on the `byteArray`:
>
>
> ```
> void receiveData(int numByte){
> int i = 0;
> while(wire.available()){
> if(i < 4){
> byteArray[i] = wire.read();
> i++;
> }
> }
> }
>
> ```
>
>
---
It gives me this error:
`bus.write_i2c_block_data(i2c_adress, 0, decodedArray) IOError: [Errno 5] Input/output error.`
I tried with this: `bus.write_byte(i2c_address, value)`, and it worked, but only for a `value` that goes from 0 to 127, but, I need to pass not only a value, but a full array. | 2014/08/29 | [
"https://Stackoverflow.com/questions/25567791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3866306/"
] | The function you are using is the right one.
But you should take care of some points:
* bus.write\_i2c\_block\_data(addr, cmd, []) sends the value of cmd AND the values in the list on the I2C bus.
So
```
bus.write_i2c_block_data(0x20, 42, [12, 23, 34, 45])
```
doesn't send 4 bytes but 5 bytes to the device.
I don't know exactly how the Wire library works on Arduino, but since the device only reads 4 bytes, it does not send the ACK for the last byte and the sender detects an output error.
* Two conventions exist for I2C device addresses. The I2C bus has 7 bits for the device address and one bit to indicate a read or a write. Another (wrong) convention is to write the address in 8 bits and say that you have one address for reading and another for writing. The smbus package uses the correct convention (7 bits).
Example: 0x23 in the 7-bit convention becomes 0x46 for writing and 0x47 for reading. | It took me a while, but I got it working.
On the arduino side:
```
int count = 0;
...
...
void receiveData(int numByte){
while(Wire.available()){
if(count < 4){
byteArray[count] = Wire.read();
count++;
}
else{
count = 0;
byteArray[count] = Wire.read();
}
}
}
```
On the raspberry side:
```
def writeData(arrayValue):
for i in arrayValue:
bus.write_byte(i2c_address, i)
```
And that's it. |
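Tying the two answers together, a hedged sketch of sending all four values in one block write by letting the first data byte occupy the `cmd` slot, so that exactly four bytes reach the Arduino (the bus number and address are placeholders, and the same `smbus` module as in the question is assumed):

```python
import smbus

bus = smbus.SMBus(1)      # placeholder bus number
i2c_address = 0x08        # placeholder 7-bit Arduino address
data = [12, 23, 34, 45]

# write_i2c_block_data sends cmd followed by the list, so pass the first
# value as cmd and the rest as the block: four bytes total on the wire.
bus.write_i2c_block_data(i2c_address, data[0], data[1:])
```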
25,567,791 | I've been trying for several days now to send a python array by i2c.
```
data = [x,x,x,x] # `x` is a number from 0 to 127.
bus.write_i2c_block_data(i2c_address, 0, data)
bus.write_i2c_block_data(addr, cmd, array)
```
In the function above: addr - Arduino I2C address; cmd - not sure what this is; array - Python array of int numbers.
Can this be done? What is actually the cmd?
---
FWIW, Arduino code, where I receive the array and put it on the `byteArray`:
>
>
> ```
> void receiveData(int numByte){
> int i = 0;
> while(wire.available()){
> if(i < 4){
> byteArray[i] = wire.read();
> i++;
> }
> }
> }
>
> ```
>
>
---
It gives me this error:
`bus.write_i2c_block_data(i2c_adress, 0, decodedArray) IOError: [Errno 5] Input/output error.`
I tried with this: `bus.write_byte(i2c_address, value)`, and it worked, but only for a `value` that goes from 0 to 127, but, I need to pass not only a value, but a full array. | 2014/08/29 | [
"https://Stackoverflow.com/questions/25567791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3866306/"
] | The function you are using is the right one.
But you should take care of some points:
* bus.write\_i2c\_block\_data(addr, cmd, []) sends the value of cmd AND the values in the list on the I2C bus.
So
```
bus.write_i2c_block_data(0x20, 42, [12, 23, 34, 45])
```
doesn't send 4 bytes but 5 bytes to the device.
I don't know exactly how the Wire library works on Arduino, but since the device only reads 4 bytes, it does not send the ACK for the last byte and the sender detects an output error.
* Two conventions exist for I2C device addresses. The I2C bus has 7 bits for the device address and one bit to indicate a read or a write. Another (wrong) convention is to write the address in 8 bits and say that you have one address for reading and another for writing. The smbus package uses the correct convention (7 bits).
Example: 0x23 in the 7-bit convention becomes 0x46 for writing and 0x47 for reading. | cmd is the register offset at which you want to write the data,
so it is like
```
bus.write_byte_data(i2c_address, offset, byte)
```
but if you want to write an array of bytes then you need a block write, so your code will look like this
```
bus.write_i2c_block_data(i2c_address, offset, [array_of_bytes])
``` |
25,567,791 | I've been trying for several days now to send a python array by i2c.
```
data = [x,x,x,x] # `x` is a number from 0 to 127.
bus.write_i2c_block_data(i2c_address, 0, data)
bus.write_i2c_block_data(addr, cmd, array)
```
In the function above: addr - Arduino I2C address; cmd - not sure what this is; array - Python array of int numbers.
Can this be done? What is actually the cmd?
---
FWIW, Arduino code, where I receive the array and put it on the `byteArray`:
>
>
> ```
> void receiveData(int numByte){
> int i = 0;
> while(wire.available()){
> if(i < 4){
> byteArray[i] = wire.read();
> i++;
> }
> }
> }
>
> ```
>
>
---
It gives me this error:
`bus.write_i2c_block_data(i2c_adress, 0, decodedArray) IOError: [Errno 5] Input/output error.`
I tried with this: `bus.write_byte(i2c_address, value)`, and it worked, but only for a `value` that goes from 0 to 127, but, I need to pass not only a value, but a full array. | 2014/08/29 | [
"https://Stackoverflow.com/questions/25567791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3866306/"
] | It took me a while, but I got it working.
On the arduino side:
```
int count = 0;
...
...
void receiveData(int numByte){
while(Wire.available()){
if(count < 4){
byteArray[count] = Wire.read();
count++;
}
else{
count = 0;
byteArray[count] = Wire.read();
}
}
}
```
On the raspberry side:
```
def writeData(arrayValue):
for i in arrayValue:
bus.write_byte(i2c_address, i)
```
And that's it. | cmd is the register offset at which you want to write the data,
so it is like
```
bus.write_byte_data(i2c_address, offset, byte)
```
but if you want to write an array of bytes then you need a block write, so your code will look like this
```
bus.write_i2c_block_data(i2c_address, offset, [array_of_bytes])
``` |
12,758,591 | Are there any libraries in Python that does or allows Text To Speech Conversion using Mac Lion's built in text to speech engine?
I did google but most are windows based. I tried pyttx.
I tried to run
```
import pyttsx
engine = pyttsx.init()
engine.say('Sally sells seashells by the seashore.')
engine.say('The quick brown fox jumped over the lazy dog.')
engine.runAndWait()
```
But I get these errors
```
File "/Users/manabchetia/Documents/Codes/Speech.py", line 2, in <module>
engine = pyttsx.init()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/__init__.py", line 39, in init
eng = Engine(driverName, debug)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/engine.py", line 45, in __init__
self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/driver.py", line 64, in __init__
self._module = __import__(name, globals(), locals(), [driverName])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/drivers/nsss.py", line 18, in <module>
ImportError: No module named Foundation
```
How do I solve these errors? | 2012/10/06 | [
"https://Stackoverflow.com/questions/12758591",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1657827/"
] | Wouldn't it be much simpler to do this?
```
from os import system
system('say Hello world!')
```
You can enter `man say` to see other things you can do with the `say` command.
However, if you want some more advanced features, importing `AppKit` would also be a possibility, although some Cocoa/Objective C knowledge is needed.
```
from AppKit import NSSpeechSynthesizer
speechSynthesizer = NSSpeechSynthesizer.alloc().initWithVoice_("com.apple.speech.synthesis.voice.Bruce")
speechSynthesizer.startSpeakingString_('Hi! Nice to meet you!')
```
If you would like to see more things you can do with NSSpeechSynthesizer take a look at Apple's documentation: <https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/ApplicationKit/Classes/NSSpeechSynthesizer_Class/Reference/Reference.html> | If you are targeting Mac OS X as your platform - PyObjC and NSSpeechSynthesizer is your best bet.
Here is a quick example for you
```
#!/usr/bin/env python
from AppKit import NSSpeechSynthesizer
import time
import sys
if len(sys.argv) < 2:
text = raw_input('type text to speak> ')
else:
text = sys.argv[1]
nssp = NSSpeechSynthesizer
ve = nssp.alloc().init()
for voice in nssp.availableVoices():
ve.setVoice_(voice)
print voice
ve.startSpeakingString_(text)
while not ve.isSpeaking():
time.sleep(0.1)
while ve.isSpeaking():
time.sleep(0.1)
```
Please note that AppKit module is part of PyObjC bridge and should be already installed on your Mac. No need to install it if you are using OS provided python (/usr/bin/python) |
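If you want a little more control over the built-in `say` command than the snippets above show, a small Python 3 sketch using `subprocess` with the voice and rate flags (the voice name is only an example and may not be installed on every system):

```python
import subprocess

# -v selects a voice, -r sets the speaking rate in words per minute.
subprocess.run(["say", "-v", "Samantha", "-r", "180",
                "Sally sells seashells by the seashore."])
```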
12,758,591 | Are there any libraries in Python that does or allows Text To Speech Conversion using Mac Lion's built in text to speech engine?
I did google but most are windows based. I tried pyttx.
I tried to run
```
import pyttsx
engine = pyttsx.init()
engine.say('Sally sells seashells by the seashore.')
engine.say('The quick brown fox jumped over the lazy dog.')
engine.runAndWait()
```
But I get these errors
```
File "/Users/manabchetia/Documents/Codes/Speech.py", line 2, in <module>
engine = pyttsx.init()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/__init__.py", line 39, in init
eng = Engine(driverName, debug)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/engine.py", line 45, in __init__
self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/driver.py", line 64, in __init__
self._module = __import__(name, globals(), locals(), [driverName])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/drivers/nsss.py", line 18, in <module>
ImportError: No module named Foundation
```
How do I solve these errors? | 2012/10/06 | [
"https://Stackoverflow.com/questions/12758591",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1657827/"
] | Wouldn't it be much simpler to do this?
```
from os import system
system('say Hello world!')
```
You can enter `man say` to see other things you can do with the `say` command.
However, if you want some more advanced features, importing `AppKit` would also be a possibility, although some Cocoa/Objective C knowledge is needed.
```
from AppKit import NSSpeechSynthesizer
speechSynthesizer = NSSpeechSynthesizer.alloc().initWithVoice_("com.apple.speech.synthesis.voice.Bruce")
speechSynthesizer.startSpeakingString_('Hi! Nice to meet you!')
```
If you would like to see more things you can do with NSSpeechSynthesizer take a look at Apple's documentation: <https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/ApplicationKit/Classes/NSSpeechSynthesizer_Class/Reference/Reference.html> | This might work:
```
import subprocess
subprocess.call(["say","Hello World! (MESSAGE)"])
``` |
12,758,591 | Are there any libraries in Python that does or allows Text To Speech Conversion using Mac Lion's built in text to speech engine?
I did google but most are windows based. I tried pyttx.
I tried to run
```
import pyttsx
engine = pyttsx.init()
engine.say('Sally sells seashells by the seashore.')
engine.say('The quick brown fox jumped over the lazy dog.')
engine.runAndWait()
```
But I get these errors
```
File "/Users/manabchetia/Documents/Codes/Speech.py", line 2, in <module>
engine = pyttsx.init()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/__init__.py", line 39, in init
eng = Engine(driverName, debug)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/engine.py", line 45, in __init__
self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/driver.py", line 64, in __init__
self._module = __import__(name, globals(), locals(), [driverName])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/drivers/nsss.py", line 18, in <module>
ImportError: No module named Foundation
```
How do I solve these errors? | 2012/10/06 | [
"https://Stackoverflow.com/questions/12758591",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1657827/"
] | If you are targeting Mac OS X as your platform - PyObjC and NSSpeechSynthesizer is your best bet.
Here is a quick example for you
```
#!/usr/bin/env python
from AppKit import NSSpeechSynthesizer
import time
import sys
if len(sys.argv) < 2:
text = raw_input('type text to speak> ')
else:
text = sys.argv[1]
nssp = NSSpeechSynthesizer
ve = nssp.alloc().init()
for voice in nssp.availableVoices():
ve.setVoice_(voice)
print voice
ve.startSpeakingString_(text)
while not ve.isSpeaking():
time.sleep(0.1)
while ve.isSpeaking():
time.sleep(0.1)
```
Please note that AppKit module is part of PyObjC bridge and should be already installed on your Mac. No need to install it if you are using OS provided python (/usr/bin/python) | This might work:
```
import subprocess
subprocess.call(["say","Hello World! (MESSAGE)"])
``` |
53,622,737 | I have a Pandas Dataframe which has columns which look something like this:
```
df:
Column0 Column1 Column2
'MSC' '1' 'R2'
'MIS' 'Tuesday' '22'
'13' 'Finance' 'Monday'
```
So overall, in these columns are actual strings but also numeric values (integers) which are in string format.
I found [this](https://stackoverflow.com/questions/15891038/change-data-type-of-columns-in-pandas) nice post about the `pd.to_numeric` and `astype()` methods, but I can't see if or how I could use them in my case.
Using:
```
pd.to_numeric(df, errors = 'ignore')
```
just results in skiping the whole columns. Instead of skipping the whole columns, I only want to skip the strings in those columns which can't be converted, move on to the next entry and try to convert the next string.
So in the end, my dataframe would look like this:
```
df:
Column0 Column1 Column2
'MSC' 1 'R2'
'MIS' 'Tuesday' 22
13 'Finance' 'Monday'
```
Is there maybe an efficient way to loop over these columns and achieve that?
Best regards,
Jan
**EDIT:**
Thanks for all your suggestions! Since I am still a python beginner, @coldspeed and @sacul 's answers are easier to understand for me so I will go with one of them! | 2018/12/04 | [
"https://Stackoverflow.com/questions/53622737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10027078/"
] | 100% agree with the comments—mixing dtypes in columns is a terrible idea, performance wise.
For reference, however, I would do this with `pd.to_numeric` and `fillna`:
```
df2 = df.apply(pd.to_numeric, errors='coerce').fillna(df)
print(df2)
Column0 Column1 Column2
0 MSC 1 R2
1 MIS Tuesday 22
2 13 Finance Monday
```
Columns are cast to `object` dtype to prevent coercion. You can see this when you extract the `values`:
```
print(df2.values.tolist())
[['MSC', 1.0, 'R2'], ['MIS', 'Tuesday', 22.0], [13.0, 'Finance', 'Monday']]
``` | I would apply `pd.to_numeric` with `errors='coerce'`, and `update` the original dataframe according to the results (see caveats in comments):
```
# show original string type:
df.loc[0,'Column1']
# '1'
df.update(df.apply(pd.to_numeric, errors='coerce'))
>>> df
Column0 Column1 Column2
0 MSC 1 R2
1 MIS Tuesday 22
2 13 Finance Monday
# show updated float type:
df.loc[0,'Column1']
# 1.0
``` |
53,622,737 | I have a Pandas Dataframe which has columns which look something like this:
```
df:
Column0 Column1 Column2
'MSC' '1' 'R2'
'MIS' 'Tuesday' '22'
'13' 'Finance' 'Monday'
```
So overall, in these columns are actual strings but also numeric values (integers) which are in string format.
I found [this](https://stackoverflow.com/questions/15891038/change-data-type-of-columns-in-pandas) nice post about the `pd.to_numeric` and `astype()` methods, but I can't see if or how I could use them in my case.
Using:
```
pd.to_numeric(df, errors = 'ignore')
```
just results in skiping the whole columns. Instead of skipping the whole columns, I only want to skip the strings in those columns which can't be converted, move on to the next entry and try to convert the next string.
So in the end, my dataframe would look like this:
```
df:
Column0 Column1 Column2
'MSC' 1 'R2'
'MIS' 'Tuesday' 22
13 'Finance' 'Monday'
```
Is there maybe an efficient way to loop over these columns and achieve that?
Best regards,
Jan
**EDIT:**
Thanks for all your suggestions! Since I am still a python beginner, @coldspeed and @sacul 's answers are easier to understand for me so I will go with one of them! | 2018/12/04 | [
"https://Stackoverflow.com/questions/53622737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10027078/"
] | 100% agree with the comments—mixing dtypes in columns is a terrible idea, performance wise.
For reference, however, I would do this with `pd.to_numeric` and `fillna`:
```
df2 = df.apply(pd.to_numeric, errors='coerce').fillna(df)
print(df2)
Column0 Column1 Column2
0 MSC 1 R2
1 MIS Tuesday 22
2 13 Finance Monday
```
Columns are cast to `object` dtype to prevent coercion. You can see this when you extract the `values`:
```
print(df2.values.tolist())
[['MSC', 1.0, 'R2'], ['MIS', 'Tuesday', 22.0], [13.0, 'Finance', 'Monday']]
``` | Or you could simply use the `isnumeric()` method of `str`. I like it because the syntax is clear, although according to coldspeed's comment, this can become very slow on large df.
>
> `df = df.applymap(lambda x: int(x) if x.isnumeric() else x)`
>
>
>
Example:
```
In [1]: import pandas as pd
In [2]: df = pd.DataFrame([['a','b','c'],['1','1a','c']],columns=['Col1','Col2','Col3'])
In [3]: df
Out[3]:
Col1 Col2 Col3
0 a b c
1 1 1a c
In [4]: df.Col1.map(lambda x: int(x) if x.isnumeric() else x)
Out[4]:
0 a
1 1
Name: Col1, dtype: object
``` |
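One caveat worth keeping in mind with this approach (an addition, not from the answer above): `str.isnumeric()` rejects signs and decimal points, so negative numbers and floats stored as strings are left untouched by the lambda:

```python
# str.isnumeric() only accepts unsigned, integer-like strings.
print("7".isnumeric())    # True
print("-7".isnumeric())   # False -> stays a string
print("7.5".isnumeric())  # False -> stays a string
```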
53,622,737 | I have a Pandas Dataframe which has columns which look something like this:
```
df:
Column0 Column1 Column2
'MSC' '1' 'R2'
'MIS' 'Tuesday' '22'
'13' 'Finance' 'Monday'
```
So overall, in these columns are actual strings but also numeric values (integers) which are in string format.
I found [this](https://stackoverflow.com/questions/15891038/change-data-type-of-columns-in-pandas) nice post about the `pd.to_numeric` and `astype()` methods, but I can't see if or how I could use them in my case.
Using:
```
pd.to_numeric(df, errors = 'ignore')
```
just results in skiping the whole columns. Instead of skipping the whole columns, I only want to skip the strings in those columns which can't be converted, move on to the next entry and try to convert the next string.
So in the end, my dataframe would look like this:
```
df:
Column0 Column1 Column2
'MSC' 1 'R2'
'MIS' 'Tuesday' 22
13 'Finance' 'Monday'
```
Is there maybe an efficient way to loop over these columns and achieve that?
Best regards,
Jan
**EDIT:**
Thanks for all your suggestions! Since I am still a python beginner, @coldspeed and @sacul 's answers are easier to understand for me so I will go with one of them! | 2018/12/04 | [
"https://Stackoverflow.com/questions/53622737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10027078/"
] | 100% agree with the comments—mixing dtypes in columns is a terrible idea, performance wise.
For reference, however, I would do this with `pd.to_numeric` and `fillna`:
```
df2 = df.apply(pd.to_numeric, errors='coerce').fillna(df)
print(df2)
Column0 Column1 Column2
0 MSC 1 R2
1 MIS Tuesday 22
2 13 Finance Monday
```
Columns are cast to `object` dtype to prevent coercion. You can see this when you extract the `values`:
```
print(df2.values.tolist())
[['MSC', 1.0, 'R2'], ['MIS', 'Tuesday', 22.0], [13.0, 'Finance', 'Monday']]
``` | Using `to_numeric` + `ignore`
```
df=df.applymap(lambda x : pd.to_numeric(x,errors='ignore'))
df
Column0 Column1 Column2
0 MSC 1 R2
1 MIS Tuesday 22
2 13 Finance Monday
df.applymap(type)
Column0 Column1 Column2
0 <class 'str'> <class 'numpy.int64'> <class 'str'>
1 <class 'str'> <class 'str'> <class 'numpy.int64'>
2 <class 'numpy.int64'> <class 'str'> <class 'str'>
``` |
7,504,129 | I have a variable, `fulltext`, which contains the full text of what I want the description of a new changelist in P4V to be. There are already files in the default changelist.
I want to use python to populate the description of a new changelist (based on default) with the contents of `fulltext`.
How can this be done. I've tried this:
```
os.sytem("p4 change -i")
print fulltext
```
But that doesn't create any new change list at all. I don't know how to tell p4 that I'm done editing the description. | 2011/09/21 | [
"https://Stackoverflow.com/questions/7504129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343381/"
] | If you're trying to write Python programs that work against Perforce, you might find P4Python helpful: <http://www.perforce.com/perforce/doc.current/manuals/p4script/03_python.html> | It is easiest if you have the changelist numbers that you know you are going to change.
```
#changeListIDNumber is the desired changelist to edit
import P4
p4 = P4.connect()
cl = p4.fetch_changelist(changeListIDNumber)
cl['Description'] = 'your description here'
p4.save_change(cl)
```
If you are using this for your default changelist, and you do not pre populate your description with anything, you will get an error as there will be no 'Description' key in your changelist dictionary. |
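A hedged sketch of the same idea written against the P4Python calls as I understand them (the connection setup differs slightly from the snippet above; the changelist number and description are placeholders), in case the shorthand above does not run as-is:

```python
from P4 import P4

p4 = P4()
p4.connect()
try:
    change = p4.fetch_change(12345)            # placeholder changelist number
    change["Description"] = "your description here"
    p4.save_change(change)
finally:
    p4.disconnect()
```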
7,504,129 | I have a variable, `fulltext`, which contains the full text of what I want the description of a new changelist in P4V to be. There are already files in the default changelist.
I want to use python to populate the description of a new changelist (based on default) with the contents of `fulltext`.
How can this be done. I've tried this:
```
os.sytem("p4 change -i")
print fulltext
```
But that doesn't create any new change list at all. I don't know how to tell p4 that I'm done editing the description. | 2011/09/21 | [
"https://Stackoverflow.com/questions/7504129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343381/"
] | If you're trying to write Python programs that work against Perforce, you might find P4Python helpful: <http://www.perforce.com/perforce/doc.current/manuals/p4script/03_python.html> | on shell this works, you may use in any language
echo "Change:new\nClient:myclient\nUser:me\nStatus:new\nDescription:test" | p4 change -i |
7,504,129 | I have a variable, `fulltext`, which contains the full text of what I want the description of a new changelist in P4V to be. There are already files in the default changelist.
I want to use python to populate the description of a new changelist (based on default) with the contents of `fulltext`.
How can this be done. I've tried this:
```
os.sytem("p4 change -i")
print fulltext
```
But that doesn't create any new change list at all. I don't know how to tell p4 that I'm done editing the description. | 2011/09/21 | [
"https://Stackoverflow.com/questions/7504129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343381/"
] | It is easiest if you have the changelist numbers that you know you are going to change.
```
#changeListIDNumber is the desired changelist to edit
import P4
p4 = P4.connect()
cl = p4.fetch_changelist(changeListIDNumber)
cl['Description'] = 'your description here'
p4.save_change(cl)
```
If you are using this for your default changelist, and you do not pre populate your description with anything, you will get an error as there will be no 'Description' key in your changelist dictionary. | on shell this works, you may use in any language
echo "Change:new\nClient:myclient\nUser:me\nStatus:new\nDescription:test" | p4 change -i |
45,406,847 | I use Django to send email. Everything is OK when running in the development environment, which uses the command "python manage.py runserver 0.0.0.0:8100", but it does not work in the production environment, which is deployed with nginx+uwsgi+Django.
Here is the code:
```
#Email settings
EMAIL_HOST='smtp.exmail.qq.com'
EMAIL_PORT='465'
EMAIL_HOST_USER='sender@qq.cn'
EMAIL_HOST_PASSWORD='password'
EMAIL_USE_SSL=True
RECEIVE_EMIAL_LIST=['receiver@qq.com']
send_mail('subject','content',setting.EMAIL_HOST_USER,setting.RECEIVE_EMIAL_LIST, fail_silently=False)
``` | 2017/07/31 | [
"https://Stackoverflow.com/questions/45406847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6133601/"
] | You could wrapping the check in a `setTimeout`:
```
$(".menu-toggle").first().click(function () {
setTimeout(function() {
if (!$("#wrapper").hasClass("menu-active")) {
$("#wrapper").find("div:first").addClass("overlay");
}
if ($("#wrapper").hasClass("menu-active")) {
$("#wrapper").find("div:first").removeClass("overlay");
}
}, 1);
});
```
That *should* cause the check to happen after the browser has updated the DOM. | Make the following,
```
<link rel="preload" href="path-to-your-script.js" as="script">
<script>
var scriptPriority =
document.createElement('script');
scriptPriority.src = 'path-to-your-script.js';
document.body.appendChild(scriptPriority);
</script>
```
About: Link rel Preload
Link rel preload is method that is used by a few developers, its something that almost nobody known and its use to give priority to a script or link css.
More info in:
<https://developers.google.com/web/updates/2016/03/link-rel-preload>
<https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content> |
71,461,517 | We have just updated our jenkins (2.337) and the python console output has gone weird:
[](https://i.stack.imgur.com/n2Yxn.png)
I've searched the jenkins settings (ANSI plugin etc) and I can change the inner colours but the gray background and line breaks remain. Does anyone know the settings to get it back to the plain old black and white it used to be? | 2022/03/13 | [
"https://Stackoverflow.com/questions/71461517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2325752/"
] | We had a similar problem ... we had an almost Black Background with Black Text
We found that the Extra CSS in the Theme section of the Jenkins Configuration has changed.
After putting it through a code formatter (there are no new lines or whitespace in the field) we had the following for the console-output:
```
.console-output .error-inline{
color:#f44336
}
.console-output,.console-output *{
position:relative;
font-family:Roboto Mono,monospace!important;
font-size:14px;
background:#263238;
color:#e9eded;
cursor:text
}
.console-output{
padding:10px 20px
}
```
The "background:#263238;" was giving the Dark Gray background, while the output didn't specify a colour for the text.
Once I updated the '.console-output' CSS to be:
```
.console-output{
color:#fff!important;
padding:10px 20px
}
```
it was all resolved.
Looking at your picture, your console output is going to have a lighter gray background while the text block will be a type specified in your CSS, and will have a darker grey background.
Changing both to have the same colour background will resolve your issue. You could use the HTML Code Examiner (F12 in Chrome) to examine the page setup and track down exactly which items are giving the look you don't like. | When you have broken console colors (black font on black screen) after a Jenkins update,
* Go to Manage Jenkins -> configure system
* scroll to theme
* click add -> extra CSS
put this in the new field:
```
.console-output{
color:#fff!important;
}
```
You can also add any other CSS to please your eye. |
14,081,949 | How to turn off collisions for some objects and then again turn it on using pymunk lib in python?
Let me show you the example, based on the code below. I want all red balls to go through first border of lines and stop on the lower border. Blue balls should still collide with upper border.
What needs to be changed in the code?
```
import pygame
from pygame.locals import *
from pygame.color import *
import pymunk as pm
from pymunk import Vec2d
import math, sys, random
def to_pygame(p):
"""Small hack to convert pymunk to pygame coordinates"""
return int(p.x), int(-p.y+600)
pygame.init()
screen = pygame.display.set_mode((600, 600))
clock = pygame.time.Clock()
running = True
### Physics stuff
space = pm.Space()
space.gravity = (0.0, -900.0)
## Balls
balls = []
### walls
static_body = pm.Body()
static_lines = [pm.Segment(static_body, (111.0, 280.0), (407.0, 246.0), 0.0),
pm.Segment(static_body, (407.0, 246.0), (407.0, 343.0), 0.0),
pm.Segment(static_body, (111.0, 420.0), (407.0, 386.0), 0.0),
pm.Segment(static_body, (407.0, 386.0), (407.0, 493.0), 0.0)]
for line in static_lines:
line.elasticity = 0.95
space.add(static_lines)
ticks_to_next_ball = 10
while running:
for event in pygame.event.get():
if event.type == QUIT:
running = False
elif event.type == KEYDOWN and event.key == K_ESCAPE:
running = False
ticks_to_next_ball -= 1
if ticks_to_next_ball <= 0:
ticks_to_next_ball = 100
mass = 10
radius = random.randint(10,40)
inertia = pm.moment_for_circle(mass, 0, radius, (0,0))
body = pm.Body(mass, inertia)
x = random.randint(115,350)
body.position = x, 600
shape = pm.Circle(body, radius, (0,0))
shape.elasticity = 0.95
space.add(body, shape)
balls.append(shape)
### Clear screen
screen.fill(THECOLORS["white"])
### Draw stuff
balls_to_remove = []
for ball in balls:
if ball.body.position.y < 200: balls_to_remove.append(ball)
p = to_pygame(ball.body.position)
if ball.radius > 25:
color = THECOLORS["blue"]
else:
color = THECOLORS["red"]
pygame.draw.circle(screen, color, p, int(ball.radius), 2)
for ball in balls_to_remove:
space.remove(ball, ball.body)
balls.remove(ball)
for line in static_lines:
body = line.body
pv1 = body.position + line.a.rotated(body.angle)
pv2 = body.position + line.b.rotated(body.angle)
p1 = to_pygame(pv1)
p2 = to_pygame(pv2)
pygame.draw.lines(screen, THECOLORS["lightgray"], False, [p1,p2])
### Update physics
dt = 1.0/60.0
for x in range(1):
space.step(dt)
### Flip screen
pygame.display.flip()
clock.tick(50)
pygame.display.set_caption("fps: " + str(clock.get_fps()))
``` | 2012/12/29 | [
"https://Stackoverflow.com/questions/14081949",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/789021/"
] | Chipmunk has a few options for filtering collisions:
<http://chipmunk-physics.net/release/ChipmunkLatest-Docs/#cpShape-Filtering>
It sounds like you just need to use a layers bitmask though.
ex:
```
# This layer bit is for balls colliding with other balls
# I'm only guessing that you want this though.
ball_layer = 1
# This layer bit is for things that collide with red balls only.
red_ball_layer = 2
# This layer bit is for things that collide with blue balls only.
blue_ball_layer = 4
# Bitwise OR the layer bits together
red_ball_shape.layers = ball_layer | red_ball_layer
blue_ball_shape.layers = ball_layer | blue_ball_layer
# Lower border should collide with red only
upper_border_shape.layers = red_ball_layer
#Upper border with blue balls only
lower_border_shape.layers = blue_ball_layer
```
I've never actually used Pymunk personally, but I'm guessing that it exposes the Chipmunk layers property simply as .layers | In Pymunk you can use the [ShapeFilter](http://www.pymunk.org/en/latest/pymunk.html#pymunk.ShapeFilter) class to set the categories (layers) with which an object can collide. I put the upper and lower lines into the categories 1 and 2 and then set the masks of the balls so that they ignore these layers. You need to understand how [bitmasking](https://en.wikipedia.org/wiki/Mask_(computing)) works.
Here's the complete example based on the code in the original question (press left and right mouse button to spawn the balls).
```
import sys
import pygame as pg
from pygame.color import THECOLORS
import pymunk as pm
def to_pygame(p):
"""Small hack to convert pymunk to pygame coordinates"""
return int(p[0]), int(-p[1]+600)
pg.init()
screen = pg.display.set_mode((600, 600))
clock = pg.time.Clock()
space = pm.Space()
space.gravity = (0.0, -900.0)
# Walls
static_body = space.static_body
static_lines = [
pm.Segment(static_body, (111.0, 280.0), (407.0, 246.0), 0.0),
pm.Segment(static_body, (407.0, 246.0), (407.0, 343.0), 0.0),
pm.Segment(static_body, (111.0, 420.0), (407.0, 386.0), 0.0),
pm.Segment(static_body, (407.0, 386.0), (407.0, 493.0), 0.0),
]
for idx, line in enumerate(static_lines):
line.elasticity = 0.95
if idx < 2: # Lower lines.
# The lower lines are in category 2, in binary 0b10.
line.filter = pm.ShapeFilter(categories=2)
else: # Upper lines.
# The upper lines are in category 1, in binary 0b1.
line.filter = pm.ShapeFilter(categories=1)
space.add(static_lines)
balls = []
running = True
while running:
for event in pg.event.get():
if event.type == pg.QUIT:
running = False
elif event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE:
running = False
if event.type == pg.MOUSEBUTTONDOWN:
radius = 15 if event.button == 1 else 30
mass = 10
inertia = pm.moment_for_circle(mass, 0, radius, (0,0))
body = pm.Body(mass, inertia)
body.position = to_pygame(event.pos)
shape = pm.Circle(body, radius, (0,0))
shape.elasticity = 0.95
if shape.radius > 25:
# bin(pm.ShapeFilter.ALL_MASKS ^ 1) is '0b11111111111111111111111111111110'
# That means all categories are checked for collisions except
# bit 1 (the upper lines) which are ignored.
shape.filter = pm.ShapeFilter(mask=pm.ShapeFilter.ALL_MASKS ^ 1)
else:
# Ignores category bin(2), '0b11111111111111111111111111111101'
# All categories are checked for collisions except bit 2 (the lower lines).
shape.filter = pm.ShapeFilter(mask=pm.ShapeFilter.ALL_MASKS ^ 2)
space.add(body, shape)
balls.append(shape)
screen.fill(THECOLORS["white"])
balls_to_remove = []
for ball in balls:
if ball.body.position.y < 100:
balls_to_remove.append(ball)
p = to_pygame(ball.body.position)
if ball.radius > 25:
color = THECOLORS["red"]
else:
color = THECOLORS["blue"]
pg.draw.circle(screen, color, p, int(ball.radius), 2)
for ball in balls_to_remove:
space.remove(ball, ball.body)
balls.remove(ball)
for line in static_lines:
body = line.body
pv1 = body.position + line.a.rotated(body.angle)
pv2 = body.position + line.b.rotated(body.angle)
p1 = to_pygame(pv1)
p2 = to_pygame(pv2)
pg.draw.lines(screen, THECOLORS["gray29"], False, [p1, p2])
# Update physics.
dt = 1.0/60.0
for x in range(1):
space.step(dt)
pg.display.flip()
clock.tick(50)
pg.quit()
sys.exit()
``` |
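As a small, standalone illustration of the mask arithmetic used above (plain integers stand in for the categories and masks passed to `pm.ShapeFilter`):

```python
UPPER_LINES = 0b01   # category bit used for the upper lines
LOWER_LINES = 0b10   # category bit used for the lower lines
ALL_MASKS = 0xFFFFFFFF

red_ball_mask = ALL_MASKS ^ UPPER_LINES    # red balls ignore the upper lines
blue_ball_mask = ALL_MASKS ^ LOWER_LINES   # blue balls ignore the lower lines

assert red_ball_mask & UPPER_LINES == 0
assert blue_ball_mask & LOWER_LINES == 0
```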
44,705,385 | I have this BT speaker with a built-in mic: <http://www.intex.in/speakers/bluetooth-speakers/it-11s-bt>
I want to build something like Google Home with it, using Python. Please guide me. | 2017/06/22 | [
"https://Stackoverflow.com/questions/44705385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8071763/"
] | Try with that :
```
function cari($d,$p)
{
$this->db->select('cf_pakar,gejala');
$this->db->from('gejalapenyakit');
$this->db->where('id_penyakit',$p);
$this->db->where_in('id_gejala',$d);
return $this->db->get()->result();
}
```
And your `$d = ('1','2','3','4','5')` should be `$d = ['1','2','3','4','5']`
try to do that :
```
function cari($d,$p)
{
//load $d here with that :
$d = ['1','2','3','4','5'];
//or that :
$d = [1,2,3,4,5]
$this->db->select('cf_pakar,gejala');
$this->db->from('gejalapenyakit');
$this->db->where('id_penyakit',$p);
$this->db->where_in('id_gejala',$d);
return $this->db->get()->result();
}
```
if it works then your problem must come from your db | You need to send ','-separated values in the query.
$d = implode(",",$d);
This will work. |
44,705,385 | I have this BT speaker with a built-in mic: <http://www.intex.in/speakers/bluetooth-speakers/it-11s-bt>
I want to build something like Google Home with it, using Python. Please guide me. | 2017/06/22 | [
"https://Stackoverflow.com/questions/44705385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8071763/"
] | Try with that :
```
function cari($d,$p)
{
$this->db->select('cf_pakar,gejala');
$this->db->from('gejalapenyakit');
$this->db->where('id_penyakit',$p);
$this->db->where_in('id_gejala',$d);
return $this->db->get()->result();
}
```
And your `$d = ('1','2','3','4','5')` should be `$d = ['1','2','3','4','5']`
try to do that :
```
function cari($d,$p)
{
//load $d here with that :
$d = ['1','2','3','4','5'];
//or that :
$d = [1,2,3,4,5]
$this->db->select('cf_pakar,gejala');
$this->db->from('gejalapenyakit');
$this->db->where('id_penyakit',$p);
$this->db->where_in('id_gejala',$d);
return $this->db->get()->result();
}
```
if it works then your problem comes from your $d. If it's not, your problem must come from your db | Answered :
Initially I used a foreach loop to build my list:
```
$input = $this->input->post('input');
$i = 0;
foreach($input as $i){
    $i++;
    $d = $d.$i.',';
}
// intended result: $d = ('1','2','3','4','5');
```
and when I use $input as the parameter, it works perfectly |
64,902,105 | I have a requirement below but I am getting some error:
Write a separate Privileges class. The class should have one attribute, privileges, that stores a list of strings.Move the show\_privileges() method to this class. Make a Privileges instance as an attribute in the Admin class. Create a new instance of Admin and use your method to show its privileges.
```
class User:
def __init__(self, first_name, last_name):
"""Initiating attributes of user class"""
self.first_name = first_name
self.last_name = last_name
self.login_attempts = 1
def describe_user(self):
"""Print summary of the user info"""
print(f"User's info is {self.first_name} {self.last_name}")
def greet_user(self):
""" Method greets user"""
print(f"Goodday, {self.first_name} {self.last_name}!")
def increment_login_attempts(self, login):
"""Method increments login attempts"""
self.login_attempts += login
print(f"This user has {self.login_attempts} login attempts")
def reset_login_attempts(self):
"""Method resets login attempts"""
self.login_attempts = 0
class Privileges:
def __init__(self, privileges):
""" Shows admin privileges"""
self.privileges = privileges
def show_privileges(self):
"""Lists admin privileges"""
print(f"This user {', '.join(self.privileges)}")
class Admin(User):
""" Represents Admin privileges of a user"""
def __init__(self, first_name, last_name, privileges):
super().__init__(first_name, last_name)
self.privileges = Privileges()
my_Admin_user = Admin('john', 'olode', ["Can Add Post", "Can Delete Post", "Can Ban User"])
my_Admin_user.describe_user()
my_Admin_user.greet_user()
my_Admin_user.Privileges.show_privileges()
```
Error Below
```
Traceback (most recent call last):
File "userPrivilegeClass.py", line 43, in <module>
my_Admin_user = Admin('john', 'olode', ["Can Add Post", "Can Delete Post", "Can Ban User"])
File "userPrivilegeClass.py", line 39, in __init__
self.privileges = Privileges()
TypeError: __init__() missing 1 required positional argument: 'privileges'
```
Would appreciate any help.
FYI, I am very new to python (about a month), please bear with me whuile you dumb it down for me.. Thank you | 2020/11/18 | [
"https://Stackoverflow.com/questions/64902105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14179096/"
] | As stated in the related questions, the easiest thing to do is to use an index instead as it requires no unsafe code. I might write it like this:
```
pub fn insert<'a, K: Eq, V>(this: &'a mut Vec<(K, V)>, key: K, val: V) -> &'a mut V {
let idx = this
.iter()
.enumerate()
.find_map(|(i, (k, _))| if key == *k { Some(i) } else { None });
let idx = idx.unwrap_or_else(|| {
this.push((key, val));
this.len() - 1
});
&mut this[idx].1
}
```
You should *perform benchmarking* to know if this is not fast enough for some reason. Only in that case should you opt in to `unsafe` code to get the last bit of speed. You should then benchmark *again* to see if the code is measurably faster.
For example, you might be able to get the speedup by using [`slice::get_unchecked_mut`](https://doc.rust-lang.org/std/primitive.slice.html#method.get_unchecked_mut) instead of `&mut this[idx].1`, which is a much easier bit of unsafe code to rationalize.
The nice thing about using indices in our safe code is that they directly translate into pointer offset logic. We can take this safe example and make minimal modifications to it to get a version using `unsafe` code:
```rust
pub fn insert<'a, K: Eq, V>(this: &'a mut Vec<(K, V)>, key: K, val: V) -> &'a mut V {
// I copied this code from Stack Overflow without reading the surrounding
// text which explained why this code is or is not safe.
unsafe {
let found = this
.iter_mut()
.find_map(|(k, v)| if key == *k { Some(v as *mut V) } else { None });
match found {
Some(v) => &mut *v,
None => {
this.push((key, val));
&mut this.last_mut().unwrap().1
}
}
}
}
```
The main points of safety revolve around the pointer to the value in `found`. It started as a mutable reference, so we know that it was valid when we were iterating. We know that `find_map` stops iterating on the first `Some`, and we know that iterating using `iter_mut()` shouldn't change our values anyway. Since `this` cannot change between the binding of `found` and the usage of it in the `match`, I believe that this piece of code is safe.
It's always valuable to exercise your code through Miri. You must actually *exercise* the code, as Miri only flags code that causes undefined behavior, ignoring any dormant code paths. This code is Miri-clean:
```
fn main() {
let mut things = vec![(1, 2), (3, 4)];
let v = insert(&mut things, 1, 2);
println!("{} ({:p})", v, v);
let v = insert(&mut things, 1, 2);
println!("{} ({:p})", v, v);
let v = insert(&mut things, 5, 6);
println!("{} ({:p})", v, v);
let v = insert(&mut things, 5, 6);
println!("{} ({:p})", v, v);
}
```
```none
2 (0x2829c)
2 (0x2829c)
6 (0x41054)
6 (0x41054)
```
---
>
> Is [the original implementation] actually safe?
>
>
>
Miri reports no issues for the same test code I used above, and I don't see anything obviously wrong.
>
> Is this the recommended way to express the unsafe operations performed? Should I use pointers instead?
>
>
>
When it's possible to avoid `mem::transmute`, it *generally* should be avoided. `transmute` is The Big Hammer and can do quite a lot of things that you might not intend (changing *types* is a key one). Using pointers feels much simpler in this case.
I agree with the usage of a comment to demonstrate why the unsafe code is safe. Even if it's wrong it still demonstrates the mindset of the original author. A future reviewer may be able to say "ah, they didn't think about checklist item #42, let me test that!".
Specifically for the reasoning in your comment, it's overly dense / academic *to me*. I don't see why there's talk about multiple lifetimes or double borrows.
>
> Will the new Polonius borrow checker be able to reason about patterns like this?
>
>
>
Yes:
```none
% cargo +nightly rustc --
Compiling example v0.1.0 (/private/tmp/example)
error[E0499]: cannot borrow `*this` as mutable more than once at a time
--> src/main.rs:8:16
|
2 | pub fn insert<'a, K: Eq, V>(this: &'a mut Vec<(K, V)>, key: K, val: V) -> &'a mut V {
| -- lifetime `'a` defined here
3 | for (key1, val1) in &mut *this {
| ---------- first mutable borrow occurs here
4 | if key == *key1 {
5 | return val1;
| ---- returning this value requires that `*this` is borrowed for `'a`
...
8 | let this = &mut *this;
| ^^^^^^^^^^ second mutable borrow occurs here
% cargo +nightly rustc -- -Zpolonius
Compiling example v0.1.0 (/private/tmp/example)
Finished dev [unoptimized + debuginfo] target(s) in 0.86s
% ./target/debug/example
2 (0x7f97ea405b24)
2 (0x7f97ea405b24)
6 (0x7f97ea405ba4)
6 (0x7f97ea405ba4)
```
See also:
* [How to update-or-insert on a Vec?](https://stackoverflow.com/q/47395171/155423)
* [Double mutable borrow error in a loop happens even with NLL on](https://stackoverflow.com/q/50519147/155423)
* [Returning a reference from a HashMap or Vec causes a borrow to last beyond the scope it's in?](https://stackoverflow.com/q/38023871/155423)
* [When is it necessary to circumvent Rust's borrow checker?](https://stackoverflow.com/q/50440074/155423) | Safe alternative
----------------
Firstly, here is what I would suggest instead. You can iterate over the `Vec` once to get the index of the target value via `position(|x| x == y)`. You can then match on the resulting owned `Option<usize>` and continue as before. This should have very similar performance to your previous version (in fact, LLVM might even make the two identical when built in release mode).
```rust
/// Insert a new data element at a given key.
pub fn insert<K: Eq, V>(this: &mut Vec<(K, V)>, key: K, val: V) -> &mut V {
match this.iter().position(|(key1, _)| &key == key1) {
Some(idx) => &mut this[idx].1,
None => {
this.push((key, val));
&mut this.last_mut().unwrap().1
}
}
}
```
[Playground Link](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=a8688d354b5a460c263a9686bef775f7)
Explanation of Error
--------------------
Here is a quick explanation of why the compiler is getting confused. It is easier to view if I first rewrite it to separate the creation of the iterator. I also added a second lifetime to the function signature to make it less restrictive and easier to show the error. To be honest it kind of feels like a mistake on the part of the borrow checker, but I can see how it got there.
```rust
use std::slice::IterMut;
// Returns a reference of borrowed value 'a of lifetime 'b. Since this reference
// may exist up to the end of 'a, we know that 'b <= 'a.
pub fn insert<'a: 'b, 'b, K: Eq, V>(this: &'a mut Vec<(K, V)>, key: K, val: V) -> &'b mut V {
// The problem comes from trying to identify an appropriate lifetime for 'c.
// While iterating, each item also more or less shares the lifetime 'c.
let iterator: IterMut<'c, (K, V)> = this.into_iter();
for (ref mut key1, ref mut val1) in iterator {
if key == *key1 {
// Since this is the returned value, it must have lifetime 'b to match
// the function signature. But at the same time it must also live for 'c.
// Therefore 'b <= 'c.
return val1
}
}
// So at this point the constraints we have so far are as follows:
// 'b <= 'a
// 'c <= 'a
// 'b <= 'c
// Therefore 'b <= 'c <= 'a
// Due to the next line, 'c mandates the iterator is still alive making this the
// second mutable borrow.
this.push((key, val));
// This lives for 'b, but since 'b <= 'c then 'c still exists
&mut this.last_mut().unwrap().1
}
```
Takeaways
---------
* **"Is this actually safe?"** Does it use `unsafe`? If it uses `unsafe`, then it is not safe. Safe/unsafe is not about whether it should work; just because C code works doesn't make it safe. It is about whether our code has the potential for human error causing the program to act in ways the compiler can't account for. We only deem something unsafe to be safe once we have tried it under a number of conditions and it reliably works as expected with no exceptions. So "is this actually safe?" is more a question of how much trust you have in this code.
* **"Is this the recommended way to express the unsafe operations performed? Should I use pointers instead?"** In terms of unsafe code, my personal preference would be what you have right now and just transmute the lifetimes. Using pointers just hides the transmute by making it implicit in the pointer dereference. Plus it adds pointers into the equation which just adds another layer of complexity.
* **"Will the new Polonius borrow checker be able to reason about patterns like this?"** No idea. Maybe someone with more knowledge on the subject will leave a comment answering this question.
* **Sidenote:** Try to avoid writing functions with `fn foo<'a>(&'a A) -> &'a B` lifetimes. This can be more restrictive because it forces the returned lifetime to be the same as the input. The implicit version looks more like `fn foo<'a: 'b, 'b>(&'a A) -> &'b B` and only requires that the input lifetime is longer than the returned lifetime. |
44,913,971 | I'm coding a little python program for ROT13.
If you don't know what that means: it replaces each letter of the alphabet with the letter 13 places after it, so 'a' would become 'n'.
A user will give an input and I shall replace each character in the sentence with the letter 13 places after it.
This means I need to replace each character; now how would I do that?
I tried importing the re module but it didn't work. This is what I have so far.
```
import re
Alpha = input("Input the word you would like translated")
Alpha = re.sub('[abcdefghijklmnopqrstuvwxyz]', 'nopqrstuvwxyzabcdefghijklm',
Alpha)
print(Alpha)
```
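(For reference, a minimal sketch of the kind of per-character mapping this needs, using the standard-library `str.maketrans`/`str.translate` pair instead of `re.sub`; this is only an illustration, not the attempt above.)
```
import string

# Map each letter to the letter 13 places ahead, wrapping around the alphabet.
lower = string.ascii_lowercase
upper = string.ascii_uppercase
rot13_table = str.maketrans(
    lower + upper,
    lower[13:] + lower[:13] + upper[13:] + upper[:13],
)

word = input("Input the word you would like translated: ")
print(word.translate(rot13_table))
```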
Help would be very much appreciated. | 2017/07/04 | [
"https://Stackoverflow.com/questions/44913971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7637737/"
] | [Vertically concatenate](https://www.mathworks.com/help/matlab/ref/vertcat.html) the matrices inside the cell arrays and use `intersect` with the [`'rows'`](https://www.mathworks.com/help/matlab/ref/intersect.html#btcnv0p-12) flag. i.e.
```
Q1={[1 2 3 4], [3 2 4 1], [4 2 1 3]};
Q2={[2 4 3 1], [1 2 3 4], [1 2 4 3]};
Qout = intersect(vertcat(Q1{:}), vertcat(Q2{:}), 'rows');
%>> Qout
%Qout =
% 1 2 3 4
``` | You can do it by using two loops and checking all of them.
```
q1=[1 2 3 4; 3 2 4 1; 4 2 1 3];
q2=[2 4 3 1; 1 2 3 4; 1 2 4 3];
%find the size of matrix
[m1,n1] = size(q1);
[m2] = size(q2,1);
for (ii=1:m1)
for (jj=1:m2)
%if segments are equal, it will return 1
%if sum of same segment = 4 it means they are same
if ( sum( q1(ii,:) == q2(jj,:) ) == n1)
ii %result of q1
jj %result of q2
break;
end
end
end
``` |
44,092,459 | Undertaking a task to write a function `power` that accepts two arguments, a and b, and calculates a raised to the power b.
Example
```
power(2, 3) => 8
```
Note: Don't use
```
2 ** 3
```
and don't use
```
Math.pow(2, 3)
```
I have tried this
```
def power(a,b):
return eval(((str(a)+"*")*b)[:-1])
```
And it works, but it seems to fail one test, which is to `return_1_when_exp_is_0`,
and I also get the error
```
Unhandled Exception: unexpected EOF while parsing (, line 0)
```
Please, how do I solve this issue, considering that I am new to Python? | 2017/05/21 | [
"https://Stackoverflow.com/questions/44092459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7173798/"
] | You can use a for loop
```
x=1
for i in range(b):
x=x*a
print(x)
``` | ```
def power(a, b):
if b == 0:
return 1
else:
return a ** b
``` |
44,092,459 | Undertaking a task to write a function `power` that accepts two arguments, a and b, and calculates a raised to the power b.
Example
```
power(2, 3) => 8
```
Note: Don't use
```
2 ** 3
```
and don't use
```
Math.pow(2, 3)
```
I have tried this
```
def power(a,b):
return eval(((str(a)+"*")*b)[:-1])
```
And it works, but it seems to fail one test, which is to `return_1_when_exp_is_0`,
and I also get the error
```
Unhandled Exception: unexpected EOF while parsing (, line 0)
```
Please, how do I solve this issue, considering that I am new to Python? | 2017/05/21 | [
"https://Stackoverflow.com/questions/44092459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7173798/"
] | Using eval is a terrible idea, but if you really wanted to then using `join()` would be a better way to create the string:
```
def power(a, b):
return eval('*'.join([str(a)]*b))
>>> power(2, 3)
8
```
If you add `['1']` to the front then the `0` exponent behaves properly:
```
def power(a, b):
return eval('*'.join(['1']+[str(a)]*b))
>>> power(2, 0)
1
```
However, this is simple to implement for integer exponents with a `for` loop:
```
def power(n, e):
t = 1
for _ in range(e):
t *= n
return t
>>> power(2, 3)
8
>>> power(2, 0)
1
```
You could also use `functools.reduce()` to do the same thing:
```
import functools as ft
import operator as op
def power(n, e):
return ft.reduce(op.mul, [n]*e, 1)
``` | ```
def power(a, b):
if b == 0:
return 1
else:
return a ** b
``` |
44,092,459 | Undertaking a task to write a function `power` that accepts two arguments, a and b, and calculates a raised to the power b.
Example
```
power(2, 3) => 8
```
Note: Don't use
```
2 ** 3
```
and don't use
```
Math.pow(2, 3)
```
I have tried this
```
def power(a,b):
return eval(((str(a)+"*")*b)[:-1])
```
And it works, but it seems to fail one test, which is to `return_1_when_exp_is_0`,
and I also get the error
```
Unhandled Exception: unexpected EOF while parsing (, line 0)
```
Please, how do I solve this issue, considering that I am new to Python? | 2017/05/21 | [
"https://Stackoverflow.com/questions/44092459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7173798/"
] | This worked fine
```
def power(a,b):
if b == 0:
return 1
else:
return eval(((str(a)+"*")*b)[:-1])
``` | ```
def power(a, b):
if b == 0:
return 1
else:
return a ** b
``` |
44,092,459 | Undertaking a task to write a function `power` that accepts two arguments, a and b, and calculates a raised to the power b.
Example
```
power(2, 3) => 8
```
Note: Don't use
```
2 ** 3
```
and don't use
```
Math.pow(2, 3)
```
I have tried this
```
def power(a,b):
return eval(((str(a)+"*")*b)[:-1])
```
And it works, but it seems to fail one test, which is to `return_1_when_exp_is_0`,
and I also get the error
```
Unhandled Exception: unexpected EOF while parsing (, line 0)
```
Please, how do I solve this issue, considering that I am new to Python? | 2017/05/21 | [
"https://Stackoverflow.com/questions/44092459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7173798/"
] | ```
def power(theNumber, thePower):
    # Multiply the number by itself thePower times, recursively.
    try:
        theNumber = int(theNumber)
        thePower = int(thePower)
        if theNumber == 0:
            return 0
        elif thePower == 0:
            return 1
        else:
            return theNumber * power(theNumber, thePower - 1)
    except Exception as err:
        return 'Only digits are allowed as input'
``` | ```
def power(a, b):
if b == 0:
return 1
else:
return a ** b
``` |
44,092,459 | Undertaking a task to write a function `power` that accepts two arguments, a and b, and calculates a raised to the power b.
Example
```
power(2, 3) => 8
```
Note: Don't use
```
2 ** 3
```
and don't use
```
Math.pow(2, 3)
```
I have tried this
```
def power(a,b):
return eval(((str(a)+"*")*b)[:-1])
```
And it works, but it seems to fail one test, which is to `return_1_when_exp_is_0`,
and I also get the error
```
Unhandled Exception: unexpected EOF while parsing (, line 0)
```
Please, how do I solve this issue, considering that I am new to Python? | 2017/05/21 | [
"https://Stackoverflow.com/questions/44092459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7173798/"
] | You should avoid `eval` by all costs, especially when it's very simple to implement pure algorithmic efficient solution. Classic efficient algorithm is [Exponentiation\_by\_squaring](https://en.wikipedia.org/wiki/Exponentiation_by_squaring). Instead of computing and multiplying numbers `n` times, you can always divide it to squares to archive logarithmic\* complexity.
For example, for calculating `x^15`:
```
x^15 = (x^7)*(x^7)*x
x^7 = (x^3)*(x^3)*x
x^3 = x*x*x
```
Thus taking 6 multiplications instead of 14.
```
def pow3(x, n):
    # Exponentiation by squaring: O(log n) multiplications.
    r = 1
    while n:
        if n % 2 == 1:
            r *= x
            n -= 1
        x *= x
        n //= 2  # integer division keeps n an int in Python 3
    return r
```
Source: <https://helloacm.com/exponentiation-by-squaring/>
**Note**: it was not mentioned in the question, but everything above assumes N is a positive integer. If your question also covers fractional or negative exponents, the suggested approach will not work "as is".
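If negative integer exponents ever need to be supported, one common extension is to compute the positive power and take the reciprocal. A minimal sketch (assuming integer `n`; `pow_int` is just an illustrative name):
```
def pow_int(x, n):
    # Exponentiation by squaring, extended to negative integer exponents.
    # For n < 0 the result is a float: 1 / x**abs(n).
    if n < 0:
        return 1 / pow_int(x, -n)
    r = 1
    while n:
        if n % 2 == 1:
            r *= x
            n -= 1
        x *= x
        n //= 2
    return r
```
Fractional exponents are a different problem entirely and need roots or logarithms rather than repeated multiplication.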
\* Of course depends on length of x and complexity of multiplying, see [Wikipedia](https://en.wikipedia.org/wiki/Exponentiation_by_squaring) for detailed complexity analysis.
Also may be interesting to check out following questions: [C solution](https://stackoverflow.com/a/108959/1657819) or [Python implementing pow() for exponentiation by squaring for very large integers](https://stackoverflow.com/questions/16421311/python-implementing-pow-for-exponentiation-by-squaring-for-very-large-integers) | ```
def power(a, b):
if b == 0:
return 1
else:
return a ** b
``` |
898,091 | I have previously read Spolsky's article on character-encoding, as well as [this from dive into python 3](http://diveintopython3.org/strings.html). I know php is getting Unicode at some point, but I am having trouble understanding why this is such a big deal.
If php-CLI is being used, OK, it makes sense. However, in the web server world, isn't it up to the browser to take this integer and turn it into a character (based on the character encoding)?
What am I not getting? | 2009/05/22 | [
"https://Stackoverflow.com/questions/898091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Well, for one thing you need to somehow generate the strings the browser displays :-) | There's an awesome FAQ section on Unicode and the Web [here.](http://unicode.org/faq/unicode_web.html) See if it answers some of your questions. |
898,091 | I have previously read Spolsky's article on character-encoding, as well as [this from dive into python 3](http://diveintopython3.org/strings.html). I know php is getting Unicode at some point, but I am having trouble understanding why this is such a big deal.
If php-CLI is being used, OK, it makes sense. However, in the web server world, isn't it up to the browser to take this integer and turn it into a character (based on the character encoding)?
What am I not getting? | 2009/05/22 | [
"https://Stackoverflow.com/questions/898091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | PHP does "support" UTF8, look at the mbstring[1](http://uk2.php.net/mbstring) extension. Most of the problem comes from PHP developers who don't use the mb\* functions when dealing with UTF8 data.
UTF8 characters are often more than one byte, so you need to use functions which appreciate that fact, like mb\_strpos[2](http://uk2.php.net/manual/en/function.mb-strpos.php) rather than strpos[3](http://uk2.php.net/manual/en/function.strpos.php).
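The byte-versus-character distinction is the same one Python makes explicit with its separate `bytes` and `str` types; a short sketch of the pitfall, for illustration only (not PHP code):
```
s = "héllo"            # 5 characters
b = s.encode("utf-8")  # 6 bytes: 'é' takes two bytes in UTF-8
print(len(s), len(b))  # 5 6

# Slicing by byte offsets can cut a multi-byte character in half:
try:
    print(b[:2].decode("utf-8"))
except UnicodeDecodeError as err:
    print("broke a character in half:", err)
```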
It works fine if you are getting UTF8 from the browser -> putting in database -> getting it back out -> displaying it to the user. If you are doing something more involved with UTF8 data (or indeed any major text processing) you should probably consider using an alternative language. | Well, for one thing you need to somehow generate the strings the browser displays :-) |
898,091 | I have previously read Spolsky's article on character-encoding, as well as [this from dive into python 3](http://diveintopython3.org/strings.html). I know php is getting Unicode at some point, but I am having trouble understanding why this is such a big deal.
If php-CLI is being used, OK, it makes sense. However, in the web server world, isn't it up to the browser to take this integer and turn it into a character (based on the character encoding)?
What am I not getting? | 2009/05/22 | [
"https://Stackoverflow.com/questions/898091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | The PHP string functions often treat strings as sequences of 8-bit (single-byte) characters. I've had all sorts of issues with Chinese text going through the string functions. `substr()`, for example, can cut a multi-byte character in half, which causes all manner of problems for XML parsers. | There's an awesome FAQ section on Unicode and the Web [here.](http://unicode.org/faq/unicode_web.html) See if it answers some of your questions. |
898,091 | I have previously read Spolsky's article on character-encoding, as well as [this from dive into python 3](http://diveintopython3.org/strings.html). I know php is getting Unicode at some point, but I am having trouble understanding why this is such a big deal.
If php-CLI is being used, OK, it makes sense. However, in the web server world, isn't it up to the browser to take this integer and turn it into a character (based on the character encoding)?
What am I not getting? | 2009/05/22 | [
"https://Stackoverflow.com/questions/898091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | PHP does "support" UTF8, look at the mbstring[1](http://uk2.php.net/mbstring) extension. Most of the problem comes from PHP developers who don't use the mb\* functions when dealing with UTF8 data.
UTF8 characters are often more than one byte, so you need to use functions which appreciate that fact, like mb\_strpos[2](http://uk2.php.net/manual/en/function.mb-strpos.php) rather than strpos[3](http://uk2.php.net/manual/en/function.strpos.php).
It works fine if you are getting UTF8 from the browser -> putting in database -> getting it back out -> displaying it to the user. If you are doing something more involved with UTF8 data (or indeed any major text processing) you should probably consider using an alternative language. | There's an awesome FAQ section on Unicode and the Web [here.](http://unicode.org/faq/unicode_web.html) See if it answers some of your questions. |
898,091 | I have previously read Spolsky's article on character-encoding, as well as [this from dive into python 3](http://diveintopython3.org/strings.html). I know php is getting Unicode at some point, but I am having trouble understanding why this is such a big deal.
If php-CLI is being used, OK, it makes sense. However, in the web server world, isn't it up to the browser to take this integer and turn it into a character (based on the character encoding)?
What am I not getting? | 2009/05/22 | [
"https://Stackoverflow.com/questions/898091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | PHP does "support" UTF8, look at the mbstring[1](http://uk2.php.net/mbstring) extension. Most of the problem comes from PHP developers who don't use the mb\* functions when dealing with UTF8 data.
UTF8 characters are often more than one byte, so you need to use functions which appreciate that fact, like mb\_strpos[2](http://uk2.php.net/manual/en/function.mb-strpos.php) rather than strpos[3](http://uk2.php.net/manual/en/function.strpos.php).
It works fine if you are getting UTF8 from the browser -> putting in database -> getting it back out -> displaying it to the user. If you are doing something more involved with UTF8 data (or indeed any major text processing) you should probably consider using an alternative language. | The PHP string functions often treat strings as sequences of 8-bit (single-byte) characters. I've had all sorts of issues with Chinese text going through the string functions. `substr()`, for example, can cut a multi-byte character in half, which causes all manner of problems for XML parsers. |
42,512,141 | I have written the following simple program which should print out all events detected by `pygame.event.get()`.
```
import pygame, sys
from pygame.locals import *
display = pygame.display.set_mode((300, 300))
pygame.init()
while True:
for event in pygame.event.get():
print(event)
if event.type == QUIT:
pygame.quit()
sys.exit()
```
But when I run this, the only events printed in the terminal are mouse events, plus a KEYDOWN and a KEYUP event when I hit caps lock twice. When I press any other key, it just prints to the terminal as if I were typing in the terminal window.
```
<Event(4-MouseMotion {'pos': (102, 15), 'buttons': (0, 0, 0), 'rel': (-197, -284)})>
<Event(2-KeyDown {'unicode': '', 'scancode': 0, 'key': 301, 'mod': 8192})>
<Event(3-KeyUp {'key': 301, 'scancode': 0, 'mod': 0})>
wasd
```
I am using Mac OS X 10.12.1, Python 3.5.2, and pygame 1.9.4.dev0.
I assume I'm missing something straightforward, but I found nothing similar online. Any help would be much appreciated. | 2017/02/28 | [
"https://Stackoverflow.com/questions/42512141",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4191155/"
] | If you're working in a virtualenv, don't use the `virtualenv` command.
Use `python3 -m venv`.
Then install pygame (*e.g.* `pip3 install hg+http://bitbucket.org/pygame/pygame`).
See [this thread](https://bitbucket.org/pygame/pygame/issues/203/window-does-not-get-focus-on-os-x-with#comment-32656108) for more details on this issue. | Firstly, I doubt you are, but pygame only registers inputs when you're focused on the pygame screen, so there's that. I don't have a direct answer to your question, so sorry, but I do have my solution, or workaround, for it. Because I dislike the normal event system I use pygame.key.get\_pressed() (<https://www.pygame.org/docs/ref/key.html>), just because I think it looks better and is more readable. This is probably just a bad habit of mine though sooo..... |
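A minimal sketch of the polling approach described above (the window size and the keys checked here are just examples):
```
import pygame

pygame.init()
screen = pygame.display.set_mode((300, 300))
clock = pygame.time.Clock()

running = True
while running:
    # The event queue still has to be drained so the window stays responsive.
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Snapshot of the held/released state of every key.
    keys = pygame.key.get_pressed()
    if keys[pygame.K_w]:
        print("w is held down")
    if keys[pygame.K_ESCAPE]:
        running = False

    clock.tick(60)  # poll at most 60 times per second

pygame.quit()
```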