Dataset schema: qid (int64, 46k–74.7M) · question (string, 54–37.8k chars) · date (string, 10 chars) · metadata (sequence of 3) · response_j (string, 17–26k chars) · response_k (string, 26–26k chars)
54,064,946
I am working in jupyter with python in order to clean a set of data that I have retrieved from an analysis software and I would like to have an equal number of samples that pass and fail. Basically my dataframe in pandas looks like this: ``` grade section area_steel Nx Myy utilisation Accceptable 0 C16/20 STD R 700 350 4534 -310000 240000 0.313 0 1 C90/105 STD R 400 600 4248 -490000 270000 0.618 0 3 C35/45 STD R 550 400 1282 580000 810000 7.049 1 4 C12/15 STD R 350 750 2386 960000 610000 5.180 1 ``` However the results which are not acceptable (1) are double the results which are acceptable. no\_pass = 8589 no\_fail = 16999 ratio = 1.979159389917336 I would like a new dataframe with the same ratio of pass and fails I tried the following but it doesn't seem to work: ``` import random new_data = data[data.Accceptable <= random.random()*1/ratio] ``` It would seem that only one random value is computed and all the rows checked against that, while I would like one value per row. Any suggestion?
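A note on why the attempt fails: `random.random()` is evaluated once, so every row is compared against the same threshold; the fix is one random draw per row. A pandas-free sketch of the idea (the pass/fail counts are taken from the question; the seed is pinned only to make the illustration repeatable):

```python
import random

random.seed(0)  # for a repeatable illustration only

no_pass, no_fail = 8589, 16999
ratio = no_fail / no_pass  # ~1.979

rows = [0] * no_pass + [1] * no_fail  # 0 = pass, 1 = fail, as in the question
# Keep every pass; keep each fail with probability 1/ratio -- a fresh
# random draw per row, not one draw for the whole column.
balanced = [r for r in rows if r == 0 or random.random() < 1 / ratio]

kept_fails = sum(balanced)
print(no_pass, kept_fails)  # kept_fails should land close to 8589
```

The pandas analogue would be something along the lines of `data[(data.Accceptable == 0) | (np.random.rand(len(data)) < 1/ratio)]`, i.e. a vector of random draws, one per row (assuming `1` marks the over-represented class, as in the question).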
2019/01/06
[ "https://Stackoverflow.com/questions/54064946", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10876004/" ]
Using formatting string and assuming that `optimal_system` is your dictionary: ``` with open('output.txt', 'w') as f: for k in optimal_system.keys(): f.write("{}: {}\n".format(k, optimal_system[k])) ``` **EDIT** As pointed by @wwii, the code above can be also written as: ``` with open('output.txt', 'w') as f: for k, v in optimal_system.items(): f.write("{}: {}\n".format(k, v)) ``` And the string can be formatted using [formatted string literals](https://docs.python.org/3.7/reference/lexical_analysis.html#f-strings), available since python 3.6, hence `f'{k}: {v}\n'` instead of `"{}: {}\n".format(k, v)`.
You can use the [`pprint` module](https://docs.python.org/3/library/pprint.html) -- it also works for all other data structures. To force every entry on a new line, set the `width` argument to something low. The `stream` argument lets you directly write to the file. ``` import pprint mydata = {'Optimal Temperature (K)': 425, 'Optimal Pressure (kPa)': 100, 'other stuff': [1, 2, ...]} with open('output.txt', 'w') as f: pprint.pprint(mydata, stream=f, width=1) ``` will produce: ``` {'Optimal Pressure (kPa)': 100, 'Optimal Temperature (K)': 425, 'other stuff': [1, 2, Ellipsis]} ```
9,164,176
> > **Possible Duplicate:** > > [Good Primer for Python Slice Notation](https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation) > > > I have a string that I'm splitting on the `;` character and I would like to assign the pieces to variables, but only the first few are useful to me; the rest are redundant. I wanted to use the code below, but if there are more than 4 separators it raises an exception. Is there any simple way? ``` az1, el1, az2, el2, rfsspe = data_point.split(";") ```
2012/02/06
[ "https://Stackoverflow.com/questions/9164176", "https://Stackoverflow.com", "https://Stackoverflow.com/users/737640/" ]
Yes! Use [slicing](https://stackoverflow.com/q/509211/21475): ``` az1, el1, az2, el2, rfsspe = data_point.split(";")[:5] ``` That "slices" the list to get the first 5 elements only.
The way I usually do this is to add all the variables to a list (var\_list) and then, when I'm processing the list, do something like ``` for x in var_list[:5]: print x #or do something ```
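Both approaches above truncate the split result; on Python 3, starred unpacking is an equivalent alternative that absorbs any number of trailing fields (the field values here are made up):

```python
data_point = "10.5;45.2;11.0;46.1;-80.3;extra;fields"

# Slice, as in the accepted answer:
az1, el1, az2, el2, rfsspe = data_point.split(";")[:5]

# Or let starred unpacking absorb the redundant tail (Python 3 only):
az1, el1, az2, el2, rfsspe, *rest = data_point.split(";")

print(az1, rfsspe, rest)
```

The starred form also raises a clear error when there are *fewer* than five fields, which the slice version silently does not.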
58,414,350
Is there a way for Airflow to skip current task from the PythonOperator? For example: ```py def execute(): if condition: skip_current_task() task = PythonOperator(task_id='task', python_callable=execute, dag=some_dag) ``` And also marking the task as "Skipped" in Airflow UI?
2019/10/16
[ "https://Stackoverflow.com/questions/58414350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7730549/" ]
Figured it out! Skipping a task is as easy as raising `AirflowSkipException` (importable from `airflow.exceptions`): ```py from airflow.exceptions import AirflowSkipException def execute(): if condition: raise AirflowSkipException task = PythonOperator(task_id='task', python_callable=execute, dag=some_dag) ```
The easiest solution to end a task early: ```py def execute(): if condition: return task = PythonOperator(task_id='task', python_callable=execute, dag=some_dag) ``` Unfortunately, returning normally marks the task as a success, not as "Skipped".
49,145,328
I am new to using google colaboratory (colab) and pydrive along with it. I am trying to load data in 'CAS\_num\_strings' which was written in a pickle file in a specific directory on my google drive using colab as: ``` pickle.dump(CAS_num_strings,open('CAS_num_strings.p', 'wb')) dump_meta = {'title': 'CAS.pkl', 'parents': [{'id':'1UEqIADV_tHic1Le0zlT25iYB7T6dBpBj'}]} pkl_dump = drive.CreateFile(dump_meta) pkl_dump.SetContentFile('CAS_num_strings.p') pkl_dump.Upload() print(pkl_dump.get('id')) ``` Where 'id':'1UEqIADV\_tHic1Le0zlT25iYB7T6dBpBj' makes sure that it has a specific parent folder with this given by this id. The last print command gives me the output: ``` '1ZgZfEaKgqGnuBD40CY8zg0MCiqKmi1vH' ``` Hence, I am able to create and dump the pickle file whose id is '1ZgZfEaKgqGnuBD40CY8zg0MCiqKmi1vH'. Now, I want to load this pickle file in another colab script for a different purpose. In order to load, I use the command set: ``` cas_strings = drive.CreateFile({'id':'1ZgZfEaKgqGnuBD40CY8zg0MCiqKmi1vH'}) print('title: %s, mimeType: %s' % (cas_strings['title'], cas_strings['mimeType'])) print('Downloaded content "{}"'.format(cas_strings.GetContentString())) ``` This gives me the output: ``` title: CAS.pkl, mimeType: text/x-pascal --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-9-a80d9de0fecf> in <module>() 30 cas_strings = drive.CreateFile({'id':'1ZgZfEaKgqGnuBD40CY8zg0MCiqKmi1vH'}) 31 print('title: %s, mimeType: %s' % (cas_strings['title'], cas_strings['mimeType'])) ---> 32 print('Downloaded content "{}"'.format(cas_strings.GetContentString())) 33 34 /usr/local/lib/python3.6/dist-packages/pydrive/files.py in GetContentString(self, mimetype, encoding, remove_bom) 192 self.has_bom == remove_bom: 193 self.FetchContent(mimetype, remove_bom) --> 194 return self.content.getvalue().decode(encoding) 195 196 def GetContentFile(self, filename, mimetype=None, remove_bom=False): 
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte ``` As you can see, it finds the file CAS.pkl but cannot decode the data. I want to be able to resolve this error. I understand that the normal utf-8 encoding/decoding works smoothly during normal pickle dumping and loading with the 'wb' and 'rb' options. However in the present case, after dumping I can't seem to load it from the pickle file in google drive created in the previous step. The error exists somewhere in me not being able to specify how to decode the data at "return self.content.getvalue().decode(encoding)". I can't seem to find from here (<https://developers.google.com/drive/v2/reference/files#resource-representations>) which keywords/metadata tags to modify. Any help is appreciated. Thanks
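The underlying issue is that pickle output is binary, while `GetContentString()` decodes the content as UTF-8 text. With PyDrive the binary-safe route is `GetContentFile()`, which writes the raw bytes to a local file that can then be opened with `'rb'`. The local half of that round trip looks like this (the Drive upload/download calls are omitted, and the CAS numbers are placeholders):

```python
import os
import pickle
import tempfile

CAS_num_strings = ["50-00-0", "64-17-5", "7732-18-5"]  # placeholder data

path = os.path.join(tempfile.mkdtemp(), "CAS_num_strings.p")

# Dump in binary mode ('wb')...
with open(path, "wb") as f:
    pickle.dump(CAS_num_strings, f)

# ...and load in binary mode ('rb'); no text decoding is involved.
with open(path, "rb") as f:
    loaded = pickle.load(f)

print(loaded == CAS_num_strings)
```

On the Drive side the download step would be `pkl_file.GetContentFile('CAS_num_strings.p')` instead of `GetContentString()`, after which the `'rb'` load above applies unchanged.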
2018/03/07
[ "https://Stackoverflow.com/questions/49145328", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9407842/" ]
Apply a `click` event to each `<tr>` and pass the current reference `this` to the handler, like `<tr onclick="callme(this)">`. In the JavaScript, take the row reference and find all the `td` elements inside it. Then read the values using `innerHTML` and assign them to the respective input fields (`id_type`, `event_category`, `description`). Look at the following example. ```js function callme(e) { var tds=e.getElementsByTagName('td'); document.getElementById("id_type").value = tds[0].innerHTML.trim(); document.getElementById("event_category").value = tds[1].innerHTML.trim(); document.getElementById("description").value = tds[2].innerHTML.trim(); } ``` ```html <table> <tr onclick="callme(this)"> <td>test1</td> <td>something1</td> <td>content1</td> </tr> <tr onclick="callme(this)"> <td>test2</td> <td>something2</td> <td>content2</td> </tr> <tr onclick="callme(this)"> <td>test3</td> <td>something3</td> <td>content3</td> </tr> </table> <input type="text" id="id_type" /> <input type="text" id="event_category" /> <input type="text" id="description" /> ``` **Note:** As per my comment, don't use the same `id` for all your `td` elements; use a `class` instead. It happens not to matter for this particular solution, but in the future duplicate `id`s will give you wrong results, as in your code. It is important that every `id` be unique.
According to the HTML spec, the `id` attribute must be unique within a page, so if you have multiple elements with the same `id`, your HTML is not valid. `getElementById()` only ever returns one element; you can't make it return several. So either use a unique `id` for each row, or use a `class` instead.
62,328,382
I'm new to python and plotly.graph\_objects. I created some maps similar to the example found here: [United States Choropleth Map](https://plotly.com/python/choropleth-maps/#united-states-choropleth-map) I'd like to combine the maps into one figure with a common color scale. I've looked at lots of examples of people using shared scales on subplots but they are using different graphing libraries. Is the functionality I want supported? If so, how is it done? Here is the code I am using: ``` import plotly.graph_objects as go import pandas as pd df_shootings = pd.read_csv('https://raw.githubusercontent.com/washingtonpost/data-police-shootings/master/fatal-police-shootings-data.csv') state_count = df_shootings.groupby(['state', 'race']).size().reset_index(name='total') races = pd.DataFrame({'W': 'White, non-Hispanic', 'B': 'Black, non-Hispanic', 'A': 'Asian', 'N': 'Native American', 'H': 'Hispanic'}, index=[0]) for race in races: result = state_count[['state', 'total']][state_count.race == race] fig = go.Figure(data=go.Choropleth( locations=result.state, z = result.total, locationmode = 'USA-states', # set of locations match entries in `locations` marker_line_color='white', colorbar_title = "Shooting deaths", )) fig.update_layout( title_text = races[race][0], geo_scope='usa', # limite map scope to USA ) fig.data[0].hovertemplate = 'State: %{location}<br>Shooting deaths: %{z:.2f}<extra></extra>' fig.show() ``` This is what I would like to get: [![enter image description here](https://i.stack.imgur.com/Hfmo6.png)](https://i.stack.imgur.com/Hfmo6.png) Right now I get individual maps with their own color scale which is different for each map.
2020/06/11
[ "https://Stackoverflow.com/questions/62328382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1373313/" ]
The solution is to not use the `$_COOKIE` array, but a variable ```php <?php // Use a variable $cookieValue = 1; // Check the cookie if ((isset($_COOKIE["i"])) && !empty($_COOKIE["i"])) { $cookieValue = (int)$_COOKIE["i"] + 1; } // Push the cookie setcookie("i", $cookieValue); // Use the variable echo $cookieValue; ```
``` else{ setcookie("i",1); header("Refresh:0"); } ```
57,464,273
I have a dataframe with a columns that contain GPS coordinates. I want to convert the columns that are in degree seconds to degree decimals. For example, I have a 2 columns named "lat\_sec" and "long\_sec" that are formatted with values like 186780.8954N. I tried to write a function that saves the last character in the string as the direction, divide the number part of it to get the degree decimal, and then concatenate the two together to have the new format. I then tried to find the column by its name in the data frame and apply the function to it. New to python and can't find other resources on this. I don't think I created my function properly. I have the word 'coordinate' in it because I did not know what to call the value that I am breaking down. My data looks like this: ``` long_sec 635912.9277W 555057.2000W 581375.9850W 581166.2780W df = pd.DataFrame(my_array) def convertDec(coordinate): decimal = float(coordinate[:-1]/3600) direction = coordinate[-1:] return str(decimal) + str(direction) df['lat_sec'] = df['lat_sec'].apply(lambda x: x.convertDec()) My error looks like this: Traceback (most recent call last): File "code.py", line 44, in <module> df['lat_sec'] = df['lat_sec'].apply(lambda x: x.convertDec()) File "C:\Python\Python37\lib\site-packages\pandas\core\frame.py", line 2917, in __getitem__ indexer = self.columns.get_loc(key) File "C:\Python\Python37\lib\site-packages\pandas\core\indexes\base.py", line 2604, in get_loc return self._engine.get_loc(self._maybe_cast_indexer(key)) File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index.pyx", line 129, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index_class_helper.pxi", line 91, in pandas._libs.index.Int64Engine._check_type KeyError: 'lat_sec' ```
2019/08/12
[ "https://Stackoverflow.com/questions/57464273", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11771163/" ]
By doing `float(coordinate[:-1]/3600)` you are dividing `str` by `int` which is not possible, what you can do is convert the `str` into `float` than divide it by integer `3600` which gives you `float` output. Second you are not using `apply` properly and there is no `lat_sec` column to which you are applying your function ``` import pandas as pd df = pd.DataFrame(['635912.9277W','555057.2000W','581375.9850W','581166.2780W'],columns=['long_sec']) #function creation def convertDec(coordinate): decimal = float(coordinate[:-1])/3600 direction = coordinate[-1:] return str(decimal) + str(direction) #if you just want to update the existing column df['long_sec'] = df.apply(lambda row: convertDec(row['long_sec']), axis=1) #if you want to create a new column, just change to the name that you want df['lat_sec'] = df.apply(lambda row: convertDec(row['long_sec']), axis=1) #OUTPUT long_sec 0 176.64247991666667W 1 154.18255555555555W 2 161.49332916666665W 3 161.43507722222225W ``` if you don't want output in float but in integer just change `float(coordinate[:-1])/3600` to `int(float(coordinate[:-1])/3600)`
In your code above, inside `convertDec` method, there is also an error in : ``` decimal = float(coordinate[:-1]/3600) ``` You need to convert the `coordinate` to float first before divide it with 3600. So, your code above should look like this : ``` import pandas as pd # Your example dataset dictCoordinates = { "long_sec" : ["111111.1111W", "222222.2222W", "333333.3333W", "444444.4444W"], "lat_sec" : ["555555.5555N", "666666.6666N", "777777.7777N", "888888.8888N"] } # Insert your dataset into Pandas DataFrame df = pd.DataFrame(data = dictCoordinates) # Your conversion method here def convertDec(coordinate): decimal = float(coordinate[:-1]) / 3600 # Eliminate last character, then convert to float, then divide it with 3600 decimal = format(decimal, ".4f") # To make sure the output has 4 digits after decimal point direction = coordinate[-1] # Extract direction (N or W) from content return str(decimal) + direction # Return your desired output # Do the conversion for your "long_sec" df["long_sec"] = df.apply(lambda x : convertDec(x["long_sec"]), axis = 1) # Do the conversion for your "lat_sec" df["lat_sec"] = df.apply(lambda x : convertDec(x["lat_sec"]), axis = 1) print(df) ``` That's it. Hope this helps.
37,947,178
I am using python and I have to write a program to create files of a total of 160 GB. I ran the program overnight and it was able to create files of 100 GB. However, after that it stopped running and gave an error saying "No space left on device". QUESTION : I wanted to ask if it was possible to start running the program from where it stopped so I don't have to create those 100 GB files again.
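One possible resume pattern, assuming each file is written sequentially from the start: check how many bytes already exist, then append only the remainder. A sketch (the function name and the dummy byte pattern are illustrative, not from the question):

```python
import os
import tempfile

def write_payload(path, total_bytes, chunk=b"x" * 65536):
    """Grow `path` to total_bytes, resuming from its current size."""
    written = os.path.getsize(path) if os.path.exists(path) else 0
    with open(path, "ab") as f:  # append mode: existing bytes are kept
        while written < total_bytes:
            part = chunk[: total_bytes - written]
            f.write(part)
            written += len(part)

# Simulate an interrupted run followed by a resumed one:
path = os.path.join(tempfile.mkdtemp(), "big.bin")
write_payload(path, 10_000)   # "crashes" after 10 kB
write_payload(path, 50_000)   # resumes and finishes
print(os.path.getsize(path))
```

The same size check also tells the outer loop which files are already complete, so finished files are skipped entirely on the second run.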
2016/06/21
[ "https://Stackoverflow.com/questions/37947178", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6465134/" ]
Steps to fix this error in Windows 10/8/7: 1. Find your javac path in Windows Explorer, e.g. C:\Program Files\Java\jdk1.7.0\_02\bin, and copy the address. 2. Go to Control Panel > Environment Variables and insert the address at the beginning of the Path variable, followed by a semicolon, i.e. C:\Program Files\Java\jdk1.7.0\_02\bin; . Do not delete the existing Path; just click into it, go to the left end, and paste the line above. Nothing else is needed: you just have to let the command prompt locate "javac.exe". 3. Close your command prompt, reopen it, and run your compile and execution commands. [![enter image description here](https://i.stack.imgur.com/w7GDm.png)](https://i.stack.imgur.com/w7GDm.png)
You need to add the location of your JDK to your PATH variable, if you wish to call javac.exe without the path. ``` set PATH=%PATH%;C:\path\to\your\JDK\bin\dir ``` Then... ``` javac.exe MyFirstProgram.java ``` OR, you can simply call it via the full path to javac.exe from your JDK installation e.g. ``` C:\path\to\your\JDK\bin\javac.exe MyFirstProgram.java ```
74,188,813
In practicing python, I've come across the sliding window technique but don't quite understand the implementation. Given a string k and integer N, the code is to loop through, thereby moving the window from left to right. However, the capture of the windowed elements as well as how the window grows is fuzzy to me. These sliding window questions on Leetcode are similar but do not have the alphabetic aspect. 1. Fruits into baskets : <https://leetcode.com/problems/fruit-into-baskets/> 2. Longest substring without repeating characters : <https://leetcode.com/problems/longest-substring-without-repeating-characters/> 3. Longest substring after k replacements : <https://leetcode.com/problems/longest-repeating-character-replacement/> 4. Permutation in string: <https://leetcode.com/problems/permutation-in-string/> 5. String anagrams: <https://leetcode.com/problems/find-all-anagrams-in-a-string/> 6. Average of any contiguous subarray of size k : <https://leetcode.com/problems/maximum-average-subarray-i/> 7. Maximum sum of any contiguous subarray of size k : <https://leetcode.com/problems/maximum-subarray/> 8. Smallest subarray with a given sum : <https://leetcode.com/problems/minimum-size-subarray-sum/> 9. Longest substring with k distinct characters : <https://leetcode.com/problems/longest-substring-with-at-most-k-distinct-characters/> Most occurring contiguous sub-string here defined as three letters in growing sequence. For example, for an input string k of 'cdegoxyzcga' and length N of 3, the output would be [cde, xyz].
2022/10/25
[ "https://Stackoverflow.com/questions/74188813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10047888/" ]
Instead of trying to solve ![log](https://chart.googleapis.com/chart?cht=tx&chl=log_p(a)=b) it might be easier to look at ![pow](https://chart.googleapis.com/chart?cht=tx&chl=a=p%5Eb) and just solve this iteratively, taking advantage of Python's integer type. This way you can avoid the float domain, and its associated precision loss, entirely. Here's a rough attempt: ``` def ilog(a: int, p: int) -> tuple[int, bool]: """ find the largest b such that p ** b <= a return tuple of (b, exact) """ if p == 1: return a, True b = 0 x = 1 while x < a: x *= p b += 1 if x == a: return b, True else: return b - 1, False ``` There are plenty of opportunities for optimization if this is too slow (consider Newton's method, binary search...)
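The linear loop above performs one multiplication per unit of the answer; the binary-search optimization it hints at can be sketched as follows (same contract: the largest `b` with `p ** b <= a`):

```python
def ilog_fast(a: int, p: int) -> int:
    """Largest b such that p ** b <= a, for a >= 1, p >= 2."""
    lo, hi = 0, 1
    while p ** hi <= a:      # grow an upper bound by repeated doubling
        lo, hi = hi, hi * 2
    while hi - lo > 1:       # invariant: p**lo <= a < p**hi
        mid = (lo + hi) // 2
        if p ** mid <= a:
            lo = mid
        else:
            hi = mid
    return lo

print(ilog_fast(243, 3), ilog_fast(3 ** 31, 3), ilog_fast(8, 2))
```

This stays entirely in the integer domain, so like the linear version it is exact for arbitrarily large arguments, but it needs only O(log b) comparisons.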
You can use decimals and play with precision and rounding instead of floats in this case. Like this: ``` from decimal import Decimal, Context, ROUND_HALF_UP, ROUND_HALF_DOWN ctx1 = Context(prec=20, rounding=ROUND_HALF_UP) ctx2 = Context(prec=20, rounding=ROUND_HALF_DOWN) ctx1.divide(Decimal(243).ln(ctx1), Decimal(3).ln(ctx2)) ``` Output: ``` Decimal('5') ``` First, the rounding works like an epsilon: the numerator is rounded up and the denominator down, so you always get a slightly higher answer. Second, you can adjust the precision you need. Fundamentally, however, the problem is unsolvable in floating point.
74,188,813
In practicing python, I've come across the sliding window technique but don't quite understand the implementation. Given a string k and integer N, the code is to loop through, thereby moving the window from left to right. However, the capture of the windowed elements as well as how the window grows is fuzzy to me. These sliding window questions on Leetcode are similar but do not have the alphabetic aspect. 1. Fruits into baskets : <https://leetcode.com/problems/fruit-into-baskets/> 2. Longest substring without repeating characters : <https://leetcode.com/problems/longest-substring-without-repeating-characters/> 3. Longest substring after k replacements : <https://leetcode.com/problems/longest-repeating-character-replacement/> 4. Permutation in string: <https://leetcode.com/problems/permutation-in-string/> 5. String anagrams: <https://leetcode.com/problems/find-all-anagrams-in-a-string/> 6. Average of any contiguous subarray of size k : <https://leetcode.com/problems/maximum-average-subarray-i/> 7. Maximum sum of any contiguous subarray of size k : <https://leetcode.com/problems/maximum-subarray/> 8. Smallest subarray with a given sum : <https://leetcode.com/problems/minimum-size-subarray-sum/> 9. Longest substring with k distinct characters : <https://leetcode.com/problems/longest-substring-with-at-most-k-distinct-characters/> Most occurring contiguous sub-string here defined as three letters in growing sequence. For example, for an input string k of 'cdegoxyzcga' and length N of 3, the output would be [cde, xyz].
2022/10/25
[ "https://Stackoverflow.com/questions/74188813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10047888/" ]
How about this? Is this what you are looking for? ``` import math def ilog(a: int, p:int) -> int: """ find the largest b such that p ** b <= a """ float_log = math.log(a, p) if p ** (candidate := math.ceil(float_log)) <= a: return candidate return int(float_log) print(ilog(243, 3)) print(ilog(3**31, 3)) print(ilog(8,2)) ``` Output: ``` 5 31 3 ```
You can use decimals and play with precision and rounding instead of floats in this case. Like this: ``` from decimal import Decimal, Context, ROUND_HALF_UP, ROUND_HALF_DOWN ctx1 = Context(prec=20, rounding=ROUND_HALF_UP) ctx2 = Context(prec=20, rounding=ROUND_HALF_DOWN) ctx1.divide(Decimal(243).ln(ctx1), Decimal(3).ln(ctx2)) ``` Output: ``` Decimal('5') ``` First, the rounding works like an epsilon: the numerator is rounded up and the denominator down, so you always get a slightly higher answer. Second, you can adjust the precision you need. Fundamentally, however, the problem is unsolvable in floating point.
74,188,813
In practicing python, I've come across the sliding window technique but don't quite understand the implementation. Given a string k and integer N, the code is to loop through, thereby moving the window from left to right. However, the capture of the windowed elements as well as how the window grows is fuzzy to me. These sliding window questions on Leetcode are similar but do not have the alphabetic aspect. 1. Fruits into baskets : <https://leetcode.com/problems/fruit-into-baskets/> 2. Longest substring without repeating characters : <https://leetcode.com/problems/longest-substring-without-repeating-characters/> 3. Longest substring after k replacements : <https://leetcode.com/problems/longest-repeating-character-replacement/> 4. Permutation in string: <https://leetcode.com/problems/permutation-in-string/> 5. String anagrams: <https://leetcode.com/problems/find-all-anagrams-in-a-string/> 6. Average of any contiguous subarray of size k : <https://leetcode.com/problems/maximum-average-subarray-i/> 7. Maximum sum of any contiguous subarray of size k : <https://leetcode.com/problems/maximum-subarray/> 8. Smallest subarray with a given sum : <https://leetcode.com/problems/minimum-size-subarray-sum/> 9. Longest substring with k distinct characters : <https://leetcode.com/problems/longest-substring-with-at-most-k-distinct-characters/> Most occurring contiguous sub-string here defined as three letters in growing sequence. For example, for an input string k of 'cdegoxyzcga' and length N of 3, the output would be [cde, xyz].
2022/10/25
[ "https://Stackoverflow.com/questions/74188813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10047888/" ]
Instead of trying to solve ![log](https://chart.googleapis.com/chart?cht=tx&chl=log_p(a)=b) it might be easier to look at ![pow](https://chart.googleapis.com/chart?cht=tx&chl=a=p%5Eb) and just solve this iteratively, taking advantage of Python's integer type. This way you can avoid the float domain, and its associated precision loss, entirely. Here's a rough attempt: ``` def ilog(a: int, p: int) -> tuple[int, bool]: """ find the largest b such that p ** b <= a return tuple of (b, exact) """ if p == 1: return a, True b = 0 x = 1 while x < a: x *= p b += 1 if x == a: return b, True else: return b - 1, False ``` There are plenty of opportunities for optimization if this is too slow (consider Newton's method, binary search...)
How about this? Is this what you are looking for? ``` import math def ilog(a: int, p:int) -> int: """ find the largest b such that p ** b <= a """ float_log = math.log(a, p) if p ** (candidate := math.ceil(float_log)) <= a: return candidate return int(float_log) print(ilog(243, 3)) print(ilog(3**31, 3)) print(ilog(8,2)) ``` Output: ``` 5 31 3 ```
58,877,657
I am learning Python and I have the project structure shown below. ``` i3cmd i3lib __init__.py i3common.py i3sound i3sound.py ``` `__init__.py` is empty. i3common.py (actual code removed to simplify the post): ``` def rangeofdata(cmd, device, index): return ["a", "b", "c"] ``` i3sound.py (actual code removed to simplify the post): ``` from i3lib import i3common def getvolume(rangedata): return rangedata if __name__ == '__main__': rangedata = i3common.rangeofdata(["pactl", "list", "sinks"], "Sink", 2) print(getvolume(rangedata)) ``` When I execute this code in PyCharm it runs and I get this output: ``` /home/vipin/Documents/python/i3cmd/venv/bin/python /home/vipin/Documents/python/i3cmd/i3sound/i3sound.py ['a', 'b', 'c'] Process finished with exit code 0 ``` But when I open a terminal, go to /home/vipin/Documents/python/i3cmd/i3sound ``` cd /home/vipin/Documents/python/i3cmd/i3sound ``` and then execute ``` python i3sound.py ``` I get the error below: ``` Traceback (most recent call last): File "i3sound.py", line 1, in <module> from i3lib import i3common ModuleNotFoundError: No module named 'i3lib' ``` What am I missing?
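For what it's worth, the error in the question is a `sys.path` issue: `python i3sound.py` puts the script's own directory, not the project root, at the front of `sys.path`, so the sibling package `i3lib` is invisible (PyCharm works because it adds the project root for you). Putting the root on `PYTHONPATH` is one fix; here is a self-contained reproduction that builds the same layout in a temporary directory:

```python
import os
import subprocess
import sys
import tempfile

root = tempfile.mkdtemp()

# Recreate the layout from the question.
os.makedirs(os.path.join(root, "i3lib"))
open(os.path.join(root, "i3lib", "__init__.py"), "w").close()
with open(os.path.join(root, "i3lib", "i3common.py"), "w") as f:
    f.write("def rangeofdata(cmd, device, index):\n    return ['a', 'b', 'c']\n")
os.makedirs(os.path.join(root, "i3sound"))
with open(os.path.join(root, "i3sound", "i3sound.py"), "w") as f:
    f.write("from i3lib import i3common\n"
            "print(i3common.rangeofdata(['pactl'], 'Sink', 2))\n")

# With the project root on PYTHONPATH, the import succeeds even though
# the script lives in a sibling directory of i3lib.
env = dict(os.environ, PYTHONPATH=root)
result = subprocess.run(
    [sys.executable, os.path.join(root, "i3sound", "i3sound.py")],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())
```

Without the `PYTHONPATH` entry the same subprocess reproduces the `ModuleNotFoundError` from the question; alternatives are running `python -m i3sound.i3sound` from the project root (after adding an `__init__.py` to `i3sound/`) or appending the root to `sys.path` at the top of the script.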
2019/11/15
[ "https://Stackoverflow.com/questions/58877657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8446934/" ]
I think you are just missing the installation of `Lombok` in `intellij`: double-click `lombok.jar` and choose the IntelliJ IDE. Example config for Lombok annotation processing in your `build.gradle`: ``` dependencies { compileOnly('org.projectlombok:lombok:1.16.20') annotationProcessor 'org.projectlombok:lombok:1.16.20' // compile 'org.projectlombok:lombok:1.16.20' <-- this no longer works! // other dependencies... } ``` > > @Wither is deprecated since 1.18.10. With has been promoted to the main > package, so use that one instead. > > > Please look at [Lombok Wither](https://projectlombok.org/api/lombok/experimental/Wither.html); that's why you are not getting the withA() function. If you downgrade your package you could still use it.
Line `compileOnly 'org.projectlombok:lombok:1.18.8'` shows that you're using Gradle. I think the easiest way to check whether it works is to run the Gradle build directly (without the IDE). Since Lombok is an annotation processor, as long as the code passes compilation it's supposed to work (and chances are it really does, based on that line). So you should check how your IDE (you haven't specified which IDE it actually is) integrates with Lombok. You may need to enable "annotation processing" if you compile with the Java compiler (as in IntelliJ) and configure Lombok. You can also install the Lombok plugin for your IDE. Another useful hint is to use delombok and see whether Lombok has actually generated something or not.
58,877,657
I am learning Python and I have the project structure shown below. ``` i3cmd i3lib __init__.py i3common.py i3sound i3sound.py ``` `__init__.py` is empty. i3common.py (actual code removed to simplify the post): ``` def rangeofdata(cmd, device, index): return ["a", "b", "c"] ``` i3sound.py (actual code removed to simplify the post): ``` from i3lib import i3common def getvolume(rangedata): return rangedata if __name__ == '__main__': rangedata = i3common.rangeofdata(["pactl", "list", "sinks"], "Sink", 2) print(getvolume(rangedata)) ``` When I execute this code in PyCharm it runs and I get this output: ``` /home/vipin/Documents/python/i3cmd/venv/bin/python /home/vipin/Documents/python/i3cmd/i3sound/i3sound.py ['a', 'b', 'c'] Process finished with exit code 0 ``` But when I open a terminal, go to /home/vipin/Documents/python/i3cmd/i3sound ``` cd /home/vipin/Documents/python/i3cmd/i3sound ``` and then execute ``` python i3sound.py ``` I get the error below: ``` Traceback (most recent call last): File "i3sound.py", line 1, in <module> from i3lib import i3common ModuleNotFoundError: No module named 'i3lib' ``` What am I missing?
2019/11/15
[ "https://Stackoverflow.com/questions/58877657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8446934/" ]
### Lombok 1.18.8: @Wither If you look at the actual implementation of `withA()` you will notice that it relies on an all-args constructor. To make your example work, try to add it, as well as a no-arg constructor: ``` @Wither @AllArgsConstructor @NoArgsConstructor public class User { private int a; } ``` The *delombok*'d version is: ``` public class User { private int a; public User withA(int a) { return this.a == a ? this : new User(a); } public User(int a) { this.a = a; } public User() { } } ``` Note: This has been tested with Lombok 1.18.8, IntelliJ IDEA and the Lombok plugin. ### Lombok 1.18.10: @With `@With` has been promoted and `@Wither` deprecated: simply replace `lombok.experimental.Wither` with `lombok.With`. Everything else is similar to `1.18.8`: ``` @With @AllArgsConstructor @NoArgsConstructor public class User { private int a; } ```
I think you are just missing the installation of `Lombok` in `intellij`: double-click `lombok.jar` and choose the IntelliJ IDE. Example config for Lombok annotation processing in your `build.gradle`: ``` dependencies { compileOnly('org.projectlombok:lombok:1.16.20') annotationProcessor 'org.projectlombok:lombok:1.16.20' // compile 'org.projectlombok:lombok:1.16.20' <-- this no longer works! // other dependencies... } ``` > > @Wither is deprecated since 1.18.10. With has been promoted to the main > package, so use that one instead. > > > Please look at [Lombok Wither](https://projectlombok.org/api/lombok/experimental/Wither.html); that's why you are not getting the withA() function. If you downgrade your package you could still use it.
58,877,657
I am learning Python and I have the project structure shown below. ``` i3cmd i3lib __init__.py i3common.py i3sound i3sound.py ``` `__init__.py` is empty. i3common.py (actual code removed to simplify the post): ``` def rangeofdata(cmd, device, index): return ["a", "b", "c"] ``` i3sound.py (actual code removed to simplify the post): ``` from i3lib import i3common def getvolume(rangedata): return rangedata if __name__ == '__main__': rangedata = i3common.rangeofdata(["pactl", "list", "sinks"], "Sink", 2) print(getvolume(rangedata)) ``` When I execute this code in PyCharm it runs and I get this output: ``` /home/vipin/Documents/python/i3cmd/venv/bin/python /home/vipin/Documents/python/i3cmd/i3sound/i3sound.py ['a', 'b', 'c'] Process finished with exit code 0 ``` But when I open a terminal, go to /home/vipin/Documents/python/i3cmd/i3sound ``` cd /home/vipin/Documents/python/i3cmd/i3sound ``` and then execute ``` python i3sound.py ``` I get the error below: ``` Traceback (most recent call last): File "i3sound.py", line 1, in <module> from i3lib import i3common ModuleNotFoundError: No module named 'i3lib' ``` What am I missing?
2019/11/15
[ "https://Stackoverflow.com/questions/58877657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8446934/" ]
### Lombok 1.18.8: @Wither If you look at the actual implementation of `withA()` you will notice that it relies on an all-args constructor. To make your example work, try to add it, as well as a no-arg constructor: ``` @Wither @AllArgsConstructor @NoArgsConstructor public class User { private int a; } ``` The *delombok*'d version is: ``` public class User { private int a; public User withA(int a) { return this.a == a ? this : new User(a); } public User(int a) { this.a = a; } public User() { } } ``` Note: This has been tested with Lombok 1.18.8, IntelliJ IDEA and Lombok plugin. ### Lombok 1.18.10: @With `@With` has been promoted and `@Wither` deprecated: Simply replace `lombok.experimental.Wither` with `lombok.With`. Everything else is similar to `1.18.8`: ``` @With @AllArgsConstructor @NoArgsConstructor public class User { private int a; } ```
Line `compileOnly 'org.projectlombok:lombok:1.18.8'` shows that you're using gradle. I think the easiest way to check whether it works or not can be just running the gradle build (without IDE). Since lombok is an annotation processor, as long as the code passes the compilation, it's supposed to work (and the chances are that it really works based on that line). So you should check how the IDE (you haven't specified which IDE it actually is) integrates with lombok. Maybe you need to enable "annotation processing" if you compile it with the java compiler (like in IntelliJ) and configure lombok. You can also install the Lombok plugin for your IDE. Another useful hint is to use delombok and see whether lombok has actually generated something or not.
8,595,689
I'm trying to send a request to an API that only accepts XML. I've used `elementtree.SimpleXMLWriter` to build the XML tree and it's stored in a StringIO object. That's all fine and dandy. The problem is that I have to urlencode the StringIO object in order to send it to the API. But when I try, I get: ``` File "C:\Python27\lib\urllib.py", line 1279, in urlencode if len(query) and not isinstance(query[0], tuple): AttributeError: StringIO instance has no attribute '__len__' ``` Apparently this has been discussed as [an issue with Python](http://bugs.python.org/issue12327). I'm just wondering if there are any other built-in functions for urlencoding a string, specifically ones that don't need to call `len()` so that I can encode this StringIO object. Thanks! **PS:** I'm open to using something other than StringIO for storing the XML object, if that's an easier solution. I just need some sort of "[file](http://effbot.org/zone/xml-writer.htm)" for `SimpleXMLWriter` to store the XML in.
2011/12/21
[ "https://Stackoverflow.com/questions/8595689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/625840/" ]
As the links you provided point out, php is not a persistent language and there is no way to have persistence across sessions (i.e. page loads). You can create a middle ground though by running a second php script as a daemon, and have your main script (i.e. the one the user hits) connect to that (yes - over a socket...) and get data from it. If you were to do that, and want to avoid the hassle of Web Sockets, try the new HTML5 [EventStream API](http://www.html5rocks.com/en/tutorials/eventsource/basics/), as it gives you the best of both worlds: a comet-like infrastructure without the hackiness of long-polling or the need for a dedicated Web Sockets server.
If you need to keep the connection open, you need to keep the PHP script open. Commonly PHP is just invoked and then closed after the script has run (CGI, CLI), or it's a mixture (mod\_php in apache, FCGI) in which sometimes the PHP interpreter stays in memory after your script has finished (so everything the OS associated with that process would still remain, such as a socket handle). However, this is never safe. Instead you need to make PHP a daemon which can keep your PHP scripts in memory. An existing solution for that is [Appserver-In-PHP](https://github.com/indeyets/appserver-in-php). It will keep your code in memory until you restart the server. Like the code, you can also preserve variables between requests, e.g. a connection handle.
60,780,826
I try to write a python function that counts a specific word in a string. My regex pattern doesn't work when the word I want to count is repeated multiple times in a row. The pattern seems to work well otherwise. Here is my function ``` import re def word_count(word, text): return len(re.findall('(^|\s|\b)'+re.escape(word)+'(\,|\s|\b|\.|$)', text, re.IGNORECASE)) ``` When I test it with a random string ``` >>> word_count('Linux', "Linux, Word, Linux") 2 ``` When the word I want to count is adjacent to itself ``` >>> word_count('Linux', "Linux Linux") 1 ```
2020/03/20
[ "https://Stackoverflow.com/questions/60780826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7915157/" ]
Problem is in your regex. Your regex is using 2 capture groups and `re.findall` will return any capture groups if available. That needs to change to non-capture groups using `(?:...)`. Besides, there is no reason to use `(^|\s|\b)`, as `\b` (word boundary) alone suffices and covers all the cases; moreover, `\b` is zero-width. In the same way, `(\,|\s|\b|\.|$)` can be changed to `\b`. So you can just use: ``` def word_count(word, text): return len(re.findall(r'\b' + re.escape(word) + r'\b', text, re.I)) ``` This will give: ``` >>> word_count('Linux', "Linux, Word, Linux") 2 >>> word_count('Linux', "Linux Linux") 2 ```
I am not sure this is 100% because I don't understand the part about passing the function the word to search for when you are just looking for words that repeat in a string. So maybe consider... ``` import re pattern = r'\b(\w+)( \1\b)+' def word_count(text): split_words = text.split(' ') count = 0 for split_word in split_words: count = count + len(re.findall(pattern, text, re.IGNORECASE)) return count word_count('Linux Linux Linux Linux') ``` Output: ``` 4 ``` Maybe it helps. UPDATE: Based on comment below... ``` def word_count(word, text): count = text.count(word) return count word_count('Linux', "Linux, Word, Linux") ``` Output: ``` 2 ```
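The accepted answer above hinges on `\b` being a zero-width assertion. A minimal self-contained sketch of that idea (not either answerer's exact code, just a runnable consolidation):

```python
import re

def word_count(word, text):
    # \b is a zero-width word-boundary assertion, so back-to-back
    # occurrences ("Linux Linux") are both matched: the pattern
    # consumes nothing around the word itself.
    return len(re.findall(r'\b' + re.escape(word) + r'\b', text, re.IGNORECASE))
```

Because nothing outside the word is consumed, adjacent repeats no longer "steal" each other's separating whitespace, which is exactly the failure mode in the question.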
66,702,514
I am trying to create a function that would take a user inputted number and determine if the number is an integer or a floating-point depending on what the mode is set to. I am very new to python and learning the language and I am getting an invalid syntax error and I don't know what to do. So far I am making the integer tester first. Here is the code: ``` def getNumber(IntF, FloatA, Msg, rsp): print("What would you like to do?") print("Option A = Interger") print("Option B = Floating Point") Rsp = int(input("What number would like to test as an interger?")) A = rsp if rsp == "A": while True: try: userInput = int(input("What number would like to test as an interger")) except ValueError as ve: print("Not an integer! Try again.") continue else: return userInput break ```
2021/03/19
[ "https://Stackoverflow.com/questions/66702514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15429618/" ]
`.testcontainer.properties` in my `$HOME` directory fixed the issue for me. This file is used to override properties, but I am still not sure how that fixes the issue. I saw what we do in my `.gitlab.yml` and just imitated that locally, which solved the issue.
For some it might help to update the version of testcontainers
62,421,333
I have a dataframe like image1. I want to convert it to image2. I have tried r, python, and excel but failed. The Excel formula =INDEX(AV2:AW2,MODE(MATCH(AV2:AW2,AV2:AW2,0))) gives me an N/A output. The "k2" column would be the most common element from the "knumbers" column. Any help? Best, Zillur [![image1](https://i.stack.imgur.com/O7SkM.png)](https://i.stack.imgur.com/O7SkM.png) [![image2](https://i.stack.imgur.com/obB56.png)](https://i.stack.imgur.com/obB56.png)
2020/06/17
[ "https://Stackoverflow.com/questions/62421333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4168405/" ]
In R, you can split the strings on comma, count the frequency using `table` and get the most frequently occurring string. ``` df$k2 <- sapply(strsplit(df$knumbers, ','), function(x) names(sort(table(x), decreasing = TRUE)[1])) ```
Python solution: ``` # Initialise pandas, and mode in session: import pandas as pd from statistics import mode # Scalar denoting the full path to file (including file name): filepath => string scalar filepath = '' # Read in the Excel sheet: df => Data Frame df = pd.read_excel(filepath) # Find modal element per row: k2 => string vector df['k2'] = [*map(lambda x: mode(str(x).split(',')), df['knumbers'])] ``` Base R Solution: ``` # Define a function to retrieve the modal element in a factor/character vector: mode_stat => function mode_stat <- function(chr_vec){names(sort(table(as.character(chr_vec)), decreasing = TRUE)[1])} # Apply the function to a list of split knumber strings: k2 => character vector df$k2 <- sapply(strsplit(df$knumbers, ","), mode_stat) ``` Data (reconstruct in R): ``` df <- structure(list(Total = c(446, 346, 332, 308), knumbers = c("K10401", "K10413,K10413,K10412", "K13844,K13844,K13845", "K19206,K19207,K19207" )), row.names = c(NA, -4L), class = c("tbl_df", "tbl", "data.frame")) ``` In Excel: ``` (goodluck) ```
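For readers without pandas or R at hand, the row-wise mode in the answers above can be sketched with the standard library alone. This is an illustrative reconstruction, not the answerers' code; note the assumption that on Python 3.8+ `statistics.mode` resolves ties to the first value encountered (on 3.7 a multimodal input raises `StatisticsError`):

```python
from statistics import mode

def most_common_token(csv_cell):
    # Split the comma-separated cell and return its most frequent token;
    # ties go to the first-encountered value on Python >= 3.8.
    return mode(csv_cell.split(','))

# Sample rows reconstructed from the question's data
rows = ["K10401", "K10413,K10413,K10412", "K13844,K13844,K13845", "K19206,K19207,K19207"]
k2 = [most_common_token(r) for r in rows]  # one "k2" value per row
```

Wrapping this in a list comprehension (or `df['knumbers'].map(most_common_token)` if pandas is available) gives the new column directly.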
9,905,874
I'm running into a problem that I haven't seen anyone on StackOverflow encounter or even google for that matter. My main goal is to be able to replace occurrences of a string in the file with another string. Is there a way to access all of the lines in the file? The problem is that when I try to read in a large text file (1-2 gb) of text, python only reads a subset of it. For example, I'll do a really simple command such as: ``` newfile = open("newfile.txt","w") f = open("filename.txt","r") for line in f: replaced = line.replace("string1", "string2") newfile.write(replaced) ``` And it only writes the first 382 mb of the original file. Has anyone encountered this problem previously? I tried a few different solutions such as using: ``` import fileinput for i, line in enumerate(fileinput.input("filename.txt", inplace=1) sys.stdout.write(line.replace("string1", "string2") ``` But it has the same effect. Nor does reading the file in chunks such as using ``` f.read(10000) ``` I've narrowed it down to most likely being a reading problem and not a writing problem because it happens for simply printing out lines. I know that there are more lines. When I open it in a full text editor such as Vim, I can see what the last line should be, and it is not the last line that python prints. Can anyone offer any advice or things to try? I'm currently using a 32-bit version of Windows XP with 3.25 gb of ram, and running Python 2.7 \*Edit Solution Found (Thanks Lattyware). Using an Iterator ``` def read_in_chunks(file, chunk_size=1000): while True: data = file.read(chunk_size) if not data: break yield data ```
2012/03/28
[ "https://Stackoverflow.com/questions/9905874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1297872/" ]
Try: ``` f = open("filename.txt", "rb") ``` On Windows, `rb` means open file in binary mode. According to the docs, text mode vs. binary mode only has an impact on end-of-line characters. But (if I remember correctly) I believe opening files in text mode on Windows also does something with EOF (hex 1A). You can also specify the mode when using `fileinput`: ``` fileinput.input("filename.txt", inplace=1, mode="rb") ```
If you use the file like this: ``` with open("filename.txt") as f: for line in f: newfile.write(line.replace("string1", "string2")) ``` It should only read into memory one line at a time, unless you keep a reference to that line in memory. After each line is read it will be up to pythons garbage collector to get rid of it. Give this a try and see if it works for you :)
9,905,874
I'm running into a problem that I haven't seen anyone on StackOverflow encounter or even google for that matter. My main goal is to be able to replace occurrences of a string in the file with another string. Is there a way to access all of the lines in the file? The problem is that when I try to read in a large text file (1-2 gb) of text, python only reads a subset of it. For example, I'll do a really simple command such as: ``` newfile = open("newfile.txt","w") f = open("filename.txt","r") for line in f: replaced = line.replace("string1", "string2") newfile.write(replaced) ``` And it only writes the first 382 mb of the original file. Has anyone encountered this problem previously? I tried a few different solutions such as using: ``` import fileinput for i, line in enumerate(fileinput.input("filename.txt", inplace=1) sys.stdout.write(line.replace("string1", "string2") ``` But it has the same effect. Nor does reading the file in chunks such as using ``` f.read(10000) ``` I've narrowed it down to most likely being a reading problem and not a writing problem because it happens for simply printing out lines. I know that there are more lines. When I open it in a full text editor such as Vim, I can see what the last line should be, and it is not the last line that python prints. Can anyone offer any advice or things to try? I'm currently using a 32-bit version of Windows XP with 3.25 gb of ram, and running Python 2.7 \*Edit Solution Found (Thanks Lattyware). Using an Iterator ``` def read_in_chunks(file, chunk_size=1000): while True: data = file.read(chunk_size) if not data: break yield data ```
2012/03/28
[ "https://Stackoverflow.com/questions/9905874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1297872/" ]
Are you sure the problem is with reading and not with writing out? Do you close the file that is written to, either explicitly `newfile.close()` or using the `with` construct? Not closing the output file is often the source of such problems when buffering is going on somewhere. If that's the case in your setting too, closing should fix your initial solutions.
If you use the file like this: ``` with open("filename.txt") as f: for line in f: newfile.write(line.replace("string1", "string2")) ``` It should only read into memory one line at a time, unless you keep a reference to that line in memory. After each line is read it will be up to pythons garbage collector to get rid of it. Give this a try and see if it works for you :)
9,905,874
I'm running into a problem that I haven't seen anyone on StackOverflow encounter or even google for that matter. My main goal is to be able to replace occurrences of a string in the file with another string. Is there a way to access all of the lines in the file? The problem is that when I try to read in a large text file (1-2 gb) of text, python only reads a subset of it. For example, I'll do a really simple command such as: ``` newfile = open("newfile.txt","w") f = open("filename.txt","r") for line in f: replaced = line.replace("string1", "string2") newfile.write(replaced) ``` And it only writes the first 382 mb of the original file. Has anyone encountered this problem previously? I tried a few different solutions such as using: ``` import fileinput for i, line in enumerate(fileinput.input("filename.txt", inplace=1) sys.stdout.write(line.replace("string1", "string2") ``` But it has the same effect. Nor does reading the file in chunks such as using ``` f.read(10000) ``` I've narrowed it down to most likely being a reading problem and not a writing problem because it happens for simply printing out lines. I know that there are more lines. When I open it in a full text editor such as Vim, I can see what the last line should be, and it is not the last line that python prints. Can anyone offer any advice or things to try? I'm currently using a 32-bit version of Windows XP with 3.25 gb of ram, and running Python 2.7 \*Edit Solution Found (Thanks Lattyware). Using an Iterator ``` def read_in_chunks(file, chunk_size=1000): while True: data = file.read(chunk_size) if not data: break yield data ```
2012/03/28
[ "https://Stackoverflow.com/questions/9905874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1297872/" ]
Try: ``` f = open("filename.txt", "rb") ``` On Windows, `rb` means open file in binary mode. According to the docs, text mode vs. binary mode only has an impact on end-of-line characters. But (if I remember correctly) I believe opening files in text mode on Windows also does something with EOF (hex 1A). You can also specify the mode when using `fileinput`: ``` fileinput.input("filename.txt", inplace=1, mode="rb") ```
Are you sure the problem is with reading and not with writing out? Do you close the file that is written to, either explicitly `newfile.close()` or using the `with` construct? Not closing the output file is often the source of such problems when buffering is going on somewhere. If that's the case in your setting too, closing should fix your initial solutions.
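The questioner's own "Solution Found" generator, made self-contained here as a sketch (demonstrated on an in-memory buffer rather than a 2 GB file). One caveat worth stating: a chunked `replace` can still miss a match that straddles a chunk boundary, so line-by-line iteration remains the safer route for string replacement.

```python
import io

def read_in_chunks(file_obj, chunk_size=1000):
    # Generator version of f.read(): yields fixed-size pieces until EOF,
    # so only chunk_size characters are held in memory at a time.
    while True:
        data = file_obj.read(chunk_size)
        if not data:
            break
        yield data

# Demonstrated on an in-memory buffer instead of a huge file on disk:
chunks = list(read_in_chunks(io.StringIO("abcdefg"), chunk_size=3))
```

Because the generator stops only when `read()` returns an empty string, the final short chunk is yielded as-is rather than padded or dropped.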
68,945,015
I need a simple python library to convert PDF to image (render the PDF as is), but after hours of searching, I keep hitting the same wall, I find libraries like `pdf2image` python library (and many similar ones), which depend on external applications or wrap command-line tools. Although there are workarounds to allow using these libraries in serverless settings, they all would complicate our deployment and require creating the likes of `Execution Environments` or extra lambda layers, which will eat up from the small allowed lambda size. Is there a self-contained, independent mechanism (not dependent on command-line tools) to allow achieving this (seemingly simple) task? Also, I am wondering, is there a reason (licensing or patents) for the scarcity of tools that deal with PDFs (they are mostly commercial or under strict AGPL licenses)?
2021/08/26
[ "https://Stackoverflow.com/questions/68945015", "https://Stackoverflow.com", "https://Stackoverflow.com/users/452748/" ]
You said "Ended up using pdf2image" [pdf2image (MIT)](https://pypi.org/project/pdf2image/). A python (3.6+) module that wraps pdftoppm (GPL?) and pdftocairo (GPL?) to convert PDF to a PIL Image object. Generally [Poppler (GPL)](https://en.wikipedia.org/wiki/Poppler_(software)) spinoffs from Open Source [Xpdf (GPL)](http://www.xpdfreader.com/about.html) which has * pdftopng: * pdftoppm: * pdfimages: and a 3rd party pdftotiff
You can convert PDF's to images without external dependencies using PyMuPDF. I use it for Azure functions. Install with `pip install PyMuPDF` In your python file: ``` import fitz pdfDoc = fitz.open(filepath) img = pdfDoc[0].get_pixmap(matrix=fitz.Matrix(2,2)) bytesimg = img.tobytes() ``` This takes the first page of the PDF and converts it to an image, the matrix is for the resolution. You can also open a stream instead of a file on disk: ``` pdfDoc = fitz.open(stream = pdfstream, filetype="pdf") ```
31,941,951
In my Python code I use a third party shared object, a `.so` file, which I suspect contains a memory leak. During the run of my program I have a loop where I repeatedly call functions of the shared object. While the program is running I can see in `htop` that the memory usage is steadily increasing. When the RAM is full, the program crashes with the terminal output `killed`. My assumption is that the memory leak is produced by the shared object, because otherwise Python would raise a `MemoryError`. I tried using [`reload(modul_name)`](https://stackoverflow.com/questions/437589/how-do-i-unload-reload-a-python-module) followed by a `gc.collect()` but it did not free the memory according to `htop`. What shall I do?
2015/08/11
[ "https://Stackoverflow.com/questions/31941951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380038/" ]
The exact cause of the exception is, that the number `1439284609013` is too big to fit into `Integer`. However, the actual issue lies elsewhere. I have looked at the source code, your parameters seem to be wrong: ``` emp1 ~/KT/bkp 1439284609013 1439284641872 ``` You have given a `String`, another `String` and two `Long`s, these are the * `args[0]`: `tableName` * `args[1]`: `outputDir` * `args[2]`: `startTime` * `args[3]`: `endTime` the problem is, that you are missing an argument: `args[2]` should be an `Integer`,`startTime` should become `args[3]` and `endTime` should become `args[4]`. In the source, that expected third, `Integer` argument is called `versions`, however I don't exactly know what that means. --- ### Official documentation Going through the source is one thing, but the [official docs](http://hbase.apache.org/book.html#_export) also give the syntax of `Export` the following: > > `$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]` > > > By default, the Export tool only exports the newest version of a given cell, regardless of the number of versions stored. To export more than one version, replace `<versions>` with the desired number of versions. > > > --- ### Wrapping it up To achive what you wanted originally, just simple add `1` as the third argument: ``` hbase org.apache.hadoop.hbase.mapreduce.Export emp1 ~/KT/bkp 1 1439284609013 1439284641872 ```
I entered only the start time and end time. Export is expecting versions before the start and end time. So finally I entered the version number and it worked. ``` ./hbase org.apache.hadoop.hbase.mapreduce.Export emp1 ~/KT/bkp 2147483647 1439284609013 1439284646830 ```
31,941,951
In my Python code I use a third party shared object, a `.so` file, which I suspect contains a memory leak. During the run of my program I have a loop where I repeatedly call functions of the shared object. While the program is running I can see in `htop` that the memory usage is steadily increasing. When the RAM is full, the program crashes with the terminal output `killed`. My assumption is that the memory leak is produced by the shared object, because otherwise Python would raise a `MemoryError`. I tried using [`reload(modul_name)`](https://stackoverflow.com/questions/437589/how-do-i-unload-reload-a-python-module) followed by a `gc.collect()` but it did not free the memory according to `htop`. What shall I do?
2015/08/11
[ "https://Stackoverflow.com/questions/31941951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380038/" ]
Timestamps are usually associated with Long types, which have 64 bits. Integers have 32 bits, and in Java their range is only -2,147,483,648 to 2,147,483,647.
I entered only the start time and end time. Export is expecting versions before the start and end time. So finally I entered the version number and it worked. ``` ./hbase org.apache.hadoop.hbase.mapreduce.Export emp1 ~/KT/bkp 2147483647 1439284609013 1439284646830 ```
67,280,726
I want to extract some data from a text file to a dataframe : the text file look like this ``` URL: http://www.nytimes.com/2016/06/30/sports/baseball/washington-nationals-max-scherzer-baffles-mets-completing-a-sweep.html WASHINGTON — Stellar .... stretched thin. “We were going t......e do anything.” Wednesday’s ... starter. “We’re n... work.” The Mets did not scor....their 40-37 record. URL: http://www.nytimes.com/2016/06/30/nyregion/mayor-de-blasios-counsel-to-leave-next-month-to-lead-police-review-board.html Mayor Bill de .... Department. The move.... April. A civil ... conversations. More... administration. URL: http://www.nytimes.com/2016/06/30/nyregion/three-men-charged-in-killing-of-cuomo-administration-lawyer.html In the early..., the Folk Nation. As hundreds ... wounds. For some...residents. On Wednesd...killing. One ...murder. ``` It contains the URL and the text from new york times articles, I want to create a dataframe of 2 columns, the first one being the URL and the second one being the text. The issue I have is that I couldn't deal with the Delimiters as there are two new lines between the URL and the corresponding text. But there are single new lines also in the text itself. I tried using this code, but instead of getting a 2 column dataframe, I got a single column with a new row for each newline used, so it is also separating the text into multiple paragraphs, I am using dask btw : ``` df_csv = dd.read_csv(filename,sep="\n\n",header=None,engine='python') ```
2021/04/27
[ "https://Stackoverflow.com/questions/67280726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10586681/" ]
``` # read file file = open('ny.txt', encoding="utf8").read() url = [] text = [] # split text at every 2-new-lines # elements at 'odd' positions are 'urls' # elements at 'even' positions are 'text/content' for ind, line in enumerate(file.split('\n\n')): if ind%2==0: url.append(line) else: text.append(line) # save to a dataframe df = pd.DataFrame({'url':url, 'text':text}) df url text 0 URL: http://www.nytimes.com/2016/06/30/sports/... WASHINGTON — Stellar .... stretched thin.\n“We... 1 URL: http://www.nytimes.com/2016/06/30/nyregio... Mayor Bill de .... Department.\nThe move.... A... 2 URL: http://www.nytimes.com/2016/06/30/nyregio... In the early..., the Folk Nation.\nAs hundreds... # ADDITIONAL : Remove the characters 'URL: ' with empty string df['url'] = df['url'].str.replace('URL: ', '') df url text 0 http://www.nytimes.com/2016/06/30/sports/baseb... WASHINGTON — Stellar .... stretched thin.\n“We... 1 http://www.nytimes.com/2016/06/30/nyregion/may... Mayor Bill de .... Department.\nThe move.... A... 2 http://www.nytimes.com/2016/06/30/nyregion/thr... In the early..., the Folk Nation.\nAs hundreds... ```
You can do it easily in the following way: ``` import pandas as pd text = '''URL: http://www.nytimes.com/2016/06/30/sports/baseball/washington-nationals-max-scherzer-baffles-mets-completing-a-sweep.html WASHINGTON — Stellar .... stretched thin. “We were going t......e do anything.” Wednesday’s ... starter. “We’re n... work.” The Mets did not scor....their 40-37 record. URL: http://www.nytimes.com/2016/06/30/nyregion/mayor-de-blasios-counsel-to-leave-next-month-to-lead-police-review-board.html Mayor Bill de .... Department. The move.... April. A civil ... conversations. More... administration. URL: http://www.nytimes.com/2016/06/30/nyregion/three-men-charged-in-killing-of-cuomo-administration-lawyer.html In the early..., the Folk Nation. As hundreds ... wounds. For some...residents. On Wednesd...killing. One ...murder. ''' # 1) Extract the text to lines list text = text.replace('\n', '') # delete all the single '\n' text = text.replace('\n\n', '') # delete all the '\n\n' lines = text.split('URL: ')[1:] # to drop the first match of '' # 2) Create pandas.DataFrame object and populate it with the extracted lines list from (1) df = pd.DataFrame(dict(lines=lines)) # 3) Extract the URLs into a new column df.loc[:, 'URL'] = df.loc[:, 'lines'].str.extract(r'(http:[^,]+.html)', expand=False) # 4) Extract the message into a new column df.loc[:, 'Text'] = df.loc[:, 'lines'].str.extract(r'(?<=\.html)([^$]+)', expand=False) # 4) Delete the original lines column df.drop('lines', axis='columns', inplace = True) ``` **Output:** ``` URL Text 0 http://www.nytimes.com/2016/06/30/sports/baseb... WASHINGTON — Stellar .... stretched thin.“We w... 1 http://www.nytimes.com/2016/06/30/nyregion/may... Mayor Bill de .... Department.The move.... Apr... 2 http://www.nytimes.com/2016/06/30/nyregion/thr... In the early..., the Folk Nation.As hundreds .... ``` Cheers!
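Both answers above rely on the same invariant: records are separated by `URL: ` markers, and the first blank line inside a record splits the URL from the article body while single newlines stay inside the body. A minimal dependency-free sketch of that idea (the sample text and URLs are invented for illustration):

```python
# Hypothetical sample mimicking the question's file layout.
SAMPLE = (
    "URL: http://example.com/a.html\n\n"
    "First paragraph.\nSecond paragraph.\n\n"
    "URL: http://example.com/b.html\n\n"
    "Another article body.\n"
)

def parse_articles(raw):
    # Split on the record marker; within each record, partition at the
    # first blank line to separate URL from body.
    records = []
    for block in raw.split("URL: ")[1:]:  # [0] is the empty lead-in piece
        url, _, body = block.partition("\n\n")
        records.append({"url": url.strip(), "text": body.strip()})
    return records

articles = parse_articles(SAMPLE)
```

The resulting list of dicts drops straight into `pd.DataFrame(articles)` if a dataframe is wanted, without needing a multi-character `sep` in `read_csv`.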
63,506,041
I am new to python and am trying to read a PDF file to pull the `ID No.`. I have been successful so far in extracting the text out of the PDF file using `pdfplumber`. Below is the code block: ``` import pdfplumber with pdfplumber.open('ABC.pdf') as pdf_file: firstpage = pdf_file.pages[0] raw_text = firstpage.extract_text() print (raw_text) ``` Here is the text output: ``` Welcome to ABC 01 January, 1991 ID No. : 10101010 Welcome to your ABC portal. Learn More text here.. Even more text here.. Mr Jane Doe Jack & Jill Street Learn more about your www.abc.com .... .... .... ``` However, I am unable to find the optimum way to parse this unstructured text further. The final output I am expecting is just the ID No., i.e. `10101010`. On a side note, the script would be run against a fairly huge set of PDFs, so performance is a concern.
2020/08/20
[ "https://Stackoverflow.com/questions/63506041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7855187/" ]
Try using a regular expression: ``` import pdfplumber import re with pdfplumber.open('ABC.pdf') as pdf_file: firstpage = pdf_file.pages[0] raw_text = firstpage.extract_text() m = re.search(r'ID No\. : (\d+)', raw_text) if m: print(m.group(1)) ``` Of course you'll have to iterate over *all* the PDF's contents - not just the first page! Also ask yourself if it's possible that there's more than one match per page. Anyway: you know the structure of the input better than I do (and we don't have access to the sample file), so I'll leave it as an exercise for you.
If the length of the id number is always the same, I would try to find its location with the find function. `position = raw_text.find('ID No. : ')` should return the position of the I in ID No., and position + 9 should be the first digit of the id. If the number always has a length of 8 you could get it with `int(raw_text[position+9:position+17])`
63,506,041
I am new to python and am trying to read a PDF file to pull the `ID No.`. I have been successful so far in extracting the text out of the PDF file using `pdfplumber`. Below is the code block: ``` import pdfplumber with pdfplumber.open('ABC.pdf') as pdf_file: firstpage = pdf_file.pages[0] raw_text = firstpage.extract_text() print (raw_text) ``` Here is the text output: ``` Welcome to ABC 01 January, 1991 ID No. : 10101010 Welcome to your ABC portal. Learn More text here.. Even more text here.. Mr Jane Doe Jack & Jill Street Learn more about your www.abc.com .... .... .... ``` However, I am unable to find the optimum way to parse this unstructured text further. The final output I am expecting is just the ID No., i.e. `10101010`. On a side note, the script would be run against a fairly huge set of PDFs, so performance is a concern.
2020/08/20
[ "https://Stackoverflow.com/questions/63506041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7855187/" ]
Try using a regular expression: ``` import pdfplumber import re with pdfplumber.open('ABC.pdf') as pdf_file: firstpage = pdf_file.pages[0] raw_text = firstpage.extract_text() m = re.search(r'ID No\. : (\d+)', raw_text) if m: print(m.group(1)) ``` Of course you'll have to iterate over *all* the PDF's contents - not just the first page! Also ask yourself if it's possible that there's more than one match per page. Anyway: you know the structure of the input better than I do (and we don't have access to the sample file), so I'll leave it as an exercise for you.
If you are new to Python and actually need to process serious amounts of data, I suggest that you look at Scala as an alternative. For data processing in general, and regular expression matching in particular, the time it takes to get results is much reduced. Here is an answer to your question in Scala instead of Python: ``` import com.itextpdf.text.pdf.PdfReader import com.itextpdf.text.pdf.parser.PdfTextExtractor val fil = "ABC.pdf" val textFromPage = (1 until (new PdfReader(fil)).getNumberOfPages).par.map(page => PdfTextExtractor.getTextFromPage(new PdfReader(fil), page)).mkString val r = "ID No\\. : (\\d+)".r val res = for (m <- r.findAllMatchIn(textFromPage )) yield m.group(0) res.foreach(println) ```
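Since the question stresses performance over a large batch of PDFs, one small refinement of the regex answer above is to compile the pattern once and keep the extraction logic separate from `pdfplumber`, so it can be unit-tested on plain strings. This is a sketch built on the question's sample output, not the answerer's code:

```python
import re

# Compiled once so repeated calls across many pages/PDFs avoid
# re-parsing the pattern; the 'ID No. : ' format is taken from
# the sample text in the question.
ID_PATTERN = re.compile(r'ID No\. : (\d+)')

def extract_id(page_text):
    # page_text stands in for whatever pdfplumber's extract_text() returned.
    m = ID_PATTERN.search(page_text)
    return m.group(1) if m else None
```

In the real pipeline, `extract_id(page.extract_text() or "")` would be called per page, stopping at the first non-`None` result if only one ID per document is expected.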
14,657,498
I'd like to create a `text/plain` message using Markdown formatting and transform that into a `multipart/alternative` message where the `text/html` part has been generated from the Markdown. I've tried using the filter command to filter this through a python program that creates the message, but it seems that the message doesn't get sent through properly. The code is below (this is just test code to see if I can make `multipart/alternative` messages at all). ``` import sys from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart html = """<html> <body> This is <i>HTML</i> </body> </html> """ msgbody = sys.stdin.read() newmsg = MIMEMultipart("alternative") plain = MIMEText(msgbody, "plain") plain["Content-Disposition"] = "inline" html = MIMEText(html, "html") html["Content-Disposition"] = "inline" newmsg.attach(plain) newmsg.attach(html) print newmsg.as_string() ``` Unfortunately, in mutt, you only get the message body sent to the filter command when you compose (the headers are not included). Once I get this working, I think the markdown part won't be too hard.
2013/02/02
[ "https://Stackoverflow.com/questions/14657498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053149/" ]
Inside your `DialogFragment`, call [`Fragment.setRetainInstance(boolean)`](http://developer.android.com/reference/android/app/Fragment.html#setRetainInstance%28boolean%29) with the value `true`. You don't need to save the fragment manually, the framework already takes care of all of this. Calling this will prevent your fragment from being destroyed on rotation and your network requests will be unaffected. You may have to add this code to stop your dialog from being dismissed on rotation, due to a [bug](https://code.google.com/p/android/issues/detail?id=17423) with the compatibility library: ``` @Override public void onDestroyView() { Dialog dialog = getDialog(); // handles https://code.google.com/p/android/issues/detail?id=17423 if (dialog != null && getRetainInstance()) { dialog.setDismissMessage(null); } super.onDestroyView(); } ```
One of the advantages of using `dialogFragment` compared to just using `alertDialogBuilder` is exactly that a dialog fragment can automatically recreate itself upon rotation without user intervention. However, when the dialog fragment does not recreate itself, it is possible that you overrode `onSaveInstanceState` but forgot to call `super`: ``` @Override protected void onSaveInstanceState(Bundle outState) { super.onSaveInstanceState(outState); // <-- must call this if you want to retain the dialogFragment upon rotation ... } ```
14,657,498
I'd like to create a `text/plain` message using Markdown formatting and transform that into a `multipart/alternative` message where the `text/html` part has been generated from the Markdown. I've tried using the filter command to filter this through a Python program that creates the message, but it seems that the message doesn't get sent through properly. The code is below (this is just test code to see if I can make `multipart/alternative` messages at all). ``` import sys from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart html = """<html> <body> This is <i>HTML</i> </body> </html> """ msgbody = sys.stdin.read() newmsg = MIMEMultipart("alternative") plain = MIMEText(msgbody, "plain") plain["Content-Disposition"] = "inline" html = MIMEText(html, "html") html["Content-Disposition"] = "inline" newmsg.attach(plain) newmsg.attach(html) print newmsg.as_string() ``` Unfortunately, in mutt, you only get the message body sent to the filter command when you compose (the headers are not included). Once I get this working, I think the markdown part won't be too hard.
2013/02/02
[ "https://Stackoverflow.com/questions/14657498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053149/" ]
Inside your `DialogFragment`, call [`Fragment.setRetainInstance(boolean)`](http://developer.android.com/reference/android/app/Fragment.html#setRetainInstance%28boolean%29) with the value `true`. You don't need to save the fragment manually, the framework already takes care of all of this. Calling this will prevent your fragment from being destroyed on rotation and your network requests will be unaffected. You may have to add this code to stop your dialog from being dismissed on rotation, due to a [bug](https://code.google.com/p/android/issues/detail?id=17423) with the compatibility library: ``` @Override public void onDestroyView() { Dialog dialog = getDialog(); // handles https://code.google.com/p/android/issues/detail?id=17423 if (dialog != null && getRetainInstance()) { dialog.setDismissMessage(null); } super.onDestroyView(); } ```
This is a convenience method using the fix from antonyt's answer: ``` public class RetainableDialogFragment extends DialogFragment { public RetainableDialogFragment() { setRetainInstance(true); } @Override public void onDestroyView() { Dialog dialog = getDialog(); // handles https://code.google.com/p/android/issues/detail?id=17423 if (dialog != null && getRetainInstance()) { dialog.setDismissMessage(null); } super.onDestroyView(); } } ``` Just let your `DialogFragment` extend this class and everything will be fine. This becomes especially handy, if you have multiple `DialogFragments` in your project which all need this fix.
14,657,498
I'd like to create a `text/plain` message using Markdown formatting and transform that into a `multipart/alternative` message where the `text/html` part has been generated from the Markdown. I've tried using the filter command to filter this through a Python program that creates the message, but it seems that the message doesn't get sent through properly. The code is below (this is just test code to see if I can make `multipart/alternative` messages at all). ``` import sys from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart html = """<html> <body> This is <i>HTML</i> </body> </html> """ msgbody = sys.stdin.read() newmsg = MIMEMultipart("alternative") plain = MIMEText(msgbody, "plain") plain["Content-Disposition"] = "inline" html = MIMEText(html, "html") html["Content-Disposition"] = "inline" newmsg.attach(plain) newmsg.attach(html) print newmsg.as_string() ``` Unfortunately, in mutt, you only get the message body sent to the filter command when you compose (the headers are not included). Once I get this working, I think the markdown part won't be too hard.
2013/02/02
[ "https://Stackoverflow.com/questions/14657498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053149/" ]
Inside your `DialogFragment`, call [`Fragment.setRetainInstance(boolean)`](http://developer.android.com/reference/android/app/Fragment.html#setRetainInstance%28boolean%29) with the value `true`. You don't need to save the fragment manually, the framework already takes care of all of this. Calling this will prevent your fragment from being destroyed on rotation and your network requests will be unaffected. You may have to add this code to stop your dialog from being dismissed on rotation, due to a [bug](https://code.google.com/p/android/issues/detail?id=17423) with the compatibility library: ``` @Override public void onDestroyView() { Dialog dialog = getDialog(); // handles https://code.google.com/p/android/issues/detail?id=17423 if (dialog != null && getRetainInstance()) { dialog.setDismissMessage(null); } super.onDestroyView(); } ```
In case nothing else helps and you need a solution that works, you can play it safe and, each time you open a dialog, save its basic info to the activity's ViewModel (and remove it from that list when you dismiss the dialog). This basic info could be the dialog type and some id (the information you need in order to reopen the dialog). This ViewModel is not destroyed during changes of the Activity lifecycle. Let's say a user opens a dialog to leave a reference to a restaurant. The dialog type would be LeaveReferenceDialog and the id would be the restaurant id. When opening this dialog, you save this information in an object that you can call DialogInfo, and add this object to the ViewModel of the Activity. This information will allow you to reopen the dialog when the activity's onResume() is called: ``` // On resume in Activity override fun onResume() { super.onResume() // Restore dialogs that were open before the activity went to background restoreDialogs() } ``` Which calls: ``` fun restoreDialogs() { mainActivityViewModel.setIsRestoringDialogs(true) // lock the list in the view model for (dialogInfo in mainActivityViewModel.openDialogs) openDialog(dialogInfo) mainActivityViewModel.setIsRestoringDialogs(false) // open the lock } ``` When isRestoringDialogs in the ViewModel is set to true, dialog info will not be added to the list in the view model, which is important because we're now restoring dialogs that are already in that list. Otherwise, changing the list while iterating over it would cause an exception.
So: ``` // Create new dialog override fun openLeaveReferenceDialog(restaurantId: String) { val dialog = LeaveReferenceDialog() // Add the id to the dialog in a bundle val bundle = Bundle() bundle.putString(Constants.RESTAURANT_ID, restaurantId) dialog.arguments = bundle dialog.show(supportFragmentManager, "") // Add the dialog info to the list of open dialogs addOpenDialogInfo(DialogInfo(LEAVE_REFERENCE_DIALOG, restaurantId)) } ``` Then remove the dialog info when dismissing it: ``` // Dismiss dialog override fun dismissLeaveReferenceDialog(dialog: DialogFragment?, id: String) { if (dialog != null && dialog.isAdded) { dialog.dismiss() mainActivityViewModel.removeOpenDialog(LEAVE_REFERENCE_DIALOG, id) } } ``` And in the ViewModel of the Activity: ``` fun addOpenDialogInfo(dialogInfo: DialogInfo) { if (!isRestoringDialogs) { removeOpenDialog(dialogInfo.type, dialogInfo.id) // drop any stale copy first openDialogs.add(dialogInfo) } } fun removeOpenDialog(type: Int, id: String) { if (!isRestoringDialogs) openDialogs.removeAll { it.type == type && it.id == id } } ``` You actually reopen all the dialogs that were open before, in the same order. But how do they retain their information? Each dialog has a ViewModel of its own, which is also not destroyed during the activity lifecycle. So when you open the dialog, you get its ViewModel and init the UI from it as always.
14,657,498
I'd like to create a `text/plain` message using Markdown formatting and transform that into a `multipart/alternative` message where the `text/html` part has been generated from the Markdown. I've tried using the filter command to filter this through a Python program that creates the message, but it seems that the message doesn't get sent through properly. The code is below (this is just test code to see if I can make `multipart/alternative` messages at all). ``` import sys from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart html = """<html> <body> This is <i>HTML</i> </body> </html> """ msgbody = sys.stdin.read() newmsg = MIMEMultipart("alternative") plain = MIMEText(msgbody, "plain") plain["Content-Disposition"] = "inline" html = MIMEText(html, "html") html["Content-Disposition"] = "inline" newmsg.attach(plain) newmsg.attach(html) print newmsg.as_string() ``` Unfortunately, in mutt, you only get the message body sent to the filter command when you compose (the headers are not included). Once I get this working, I think the markdown part won't be too hard.
2013/02/02
[ "https://Stackoverflow.com/questions/14657498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053149/" ]
Inside your `DialogFragment`, call [`Fragment.setRetainInstance(boolean)`](http://developer.android.com/reference/android/app/Fragment.html#setRetainInstance%28boolean%29) with the value `true`. You don't need to save the fragment manually, the framework already takes care of all of this. Calling this will prevent your fragment from being destroyed on rotation and your network requests will be unaffected. You may have to add this code to stop your dialog from being dismissed on rotation, due to a [bug](https://code.google.com/p/android/issues/detail?id=17423) with the compatibility library: ``` @Override public void onDestroyView() { Dialog dialog = getDialog(); // handles https://code.google.com/p/android/issues/detail?id=17423 if (dialog != null && getRetainInstance()) { dialog.setDismissMessage(null); } super.onDestroyView(); } ```
Most of the answers here are incorrect because they use setRetainInstance(true), but this is now deprecated as of [API 28](https://developer.android.com/reference/android/app/Fragment.html#setRetainInstance%28boolean%29). Here is the solution I am using: ``` fun isDialogVisible(fm: FragmentManager): Boolean { val dialog = fm.findFragmentByTag("<FRAGMENT_TAG>") return dialog?.isResumed ?: false } ``` If the function returns false, then simply call dialog.show(fm, "<FRAGMENT\_TAG>") to show it again.
14,657,498
I'd like to create a `text/plain` message using Markdown formatting and transform that into a `multipart/alternative` message where the `text/html` part has been generated from the Markdown. I've tried using the filter command to filter this through a Python program that creates the message, but it seems that the message doesn't get sent through properly. The code is below (this is just test code to see if I can make `multipart/alternative` messages at all). ``` import sys from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart html = """<html> <body> This is <i>HTML</i> </body> </html> """ msgbody = sys.stdin.read() newmsg = MIMEMultipart("alternative") plain = MIMEText(msgbody, "plain") plain["Content-Disposition"] = "inline" html = MIMEText(html, "html") html["Content-Disposition"] = "inline" newmsg.attach(plain) newmsg.attach(html) print newmsg.as_string() ``` Unfortunately, in mutt, you only get the message body sent to the filter command when you compose (the headers are not included). Once I get this working, I think the markdown part won't be too hard.
2013/02/02
[ "https://Stackoverflow.com/questions/14657498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053149/" ]
One of the advantages of using `dialogFragment` compared to just using `alertDialogBuilder` is exactly that a dialog fragment can automatically recreate itself upon rotation without user intervention. However, when the dialog fragment does not recreate itself, it is possible that you overrode `onSaveInstanceState` but forgot to call `super`: ``` @Override protected void onSaveInstanceState(Bundle outState) { super.onSaveInstanceState(outState); // <-- must call this if you want to retain the dialogFragment upon rotation ... } ```
In case nothing else helps and you need a solution that works, you can play it safe and, each time you open a dialog, save its basic info to the activity's ViewModel (and remove it from that list when you dismiss the dialog). This basic info could be the dialog type and some id (the information you need in order to reopen the dialog). This ViewModel is not destroyed during changes of the Activity lifecycle. Let's say a user opens a dialog to leave a reference to a restaurant. The dialog type would be LeaveReferenceDialog and the id would be the restaurant id. When opening this dialog, you save this information in an object that you can call DialogInfo, and add this object to the ViewModel of the Activity. This information will allow you to reopen the dialog when the activity's onResume() is called: ``` // On resume in Activity override fun onResume() { super.onResume() // Restore dialogs that were open before the activity went to background restoreDialogs() } ``` Which calls: ``` fun restoreDialogs() { mainActivityViewModel.setIsRestoringDialogs(true) // lock the list in the view model for (dialogInfo in mainActivityViewModel.openDialogs) openDialog(dialogInfo) mainActivityViewModel.setIsRestoringDialogs(false) // open the lock } ``` When isRestoringDialogs in the ViewModel is set to true, dialog info will not be added to the list in the view model, which is important because we're now restoring dialogs that are already in that list. Otherwise, changing the list while iterating over it would cause an exception.
So: ``` // Create new dialog override fun openLeaveReferenceDialog(restaurantId: String) { val dialog = LeaveReferenceDialog() // Add the id to the dialog in a bundle val bundle = Bundle() bundle.putString(Constants.RESTAURANT_ID, restaurantId) dialog.arguments = bundle dialog.show(supportFragmentManager, "") // Add the dialog info to the list of open dialogs addOpenDialogInfo(DialogInfo(LEAVE_REFERENCE_DIALOG, restaurantId)) } ``` Then remove the dialog info when dismissing it: ``` // Dismiss dialog override fun dismissLeaveReferenceDialog(dialog: DialogFragment?, id: String) { if (dialog != null && dialog.isAdded) { dialog.dismiss() mainActivityViewModel.removeOpenDialog(LEAVE_REFERENCE_DIALOG, id) } } ``` And in the ViewModel of the Activity: ``` fun addOpenDialogInfo(dialogInfo: DialogInfo) { if (!isRestoringDialogs) { removeOpenDialog(dialogInfo.type, dialogInfo.id) // drop any stale copy first openDialogs.add(dialogInfo) } } fun removeOpenDialog(type: Int, id: String) { if (!isRestoringDialogs) openDialogs.removeAll { it.type == type && it.id == id } } ``` You actually reopen all the dialogs that were open before, in the same order. But how do they retain their information? Each dialog has a ViewModel of its own, which is also not destroyed during the activity lifecycle. So when you open the dialog, you get its ViewModel and init the UI from it as always.
14,657,498
I'd like to create a `text/plain` message using Markdown formatting and transform that into a `multipart/alternative` message where the `text/html` part has been generated from the Markdown. I've tried using the filter command to filter this through a Python program that creates the message, but it seems that the message doesn't get sent through properly. The code is below (this is just test code to see if I can make `multipart/alternative` messages at all). ``` import sys from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart html = """<html> <body> This is <i>HTML</i> </body> </html> """ msgbody = sys.stdin.read() newmsg = MIMEMultipart("alternative") plain = MIMEText(msgbody, "plain") plain["Content-Disposition"] = "inline" html = MIMEText(html, "html") html["Content-Disposition"] = "inline" newmsg.attach(plain) newmsg.attach(html) print newmsg.as_string() ``` Unfortunately, in mutt, you only get the message body sent to the filter command when you compose (the headers are not included). Once I get this working, I think the markdown part won't be too hard.
2013/02/02
[ "https://Stackoverflow.com/questions/14657498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053149/" ]
One of the advantages of using `dialogFragment` compared to just using `alertDialogBuilder` is exactly that a dialog fragment can automatically recreate itself upon rotation without user intervention. However, when the dialog fragment does not recreate itself, it is possible that you overrode `onSaveInstanceState` but forgot to call `super`: ``` @Override protected void onSaveInstanceState(Bundle outState) { super.onSaveInstanceState(outState); // <-- must call this if you want to retain the dialogFragment upon rotation ... } ```
Most of the answers here are incorrect because they use setRetainInstance(true), but this is now deprecated as of [API 28](https://developer.android.com/reference/android/app/Fragment.html#setRetainInstance%28boolean%29). Here is the solution I am using: ``` fun isDialogVisible(fm: FragmentManager): Boolean { val dialog = fm.findFragmentByTag("<FRAGMENT_TAG>") return dialog?.isResumed ?: false } ``` If the function returns false, then simply call dialog.show(fm, "<FRAGMENT\_TAG>") to show it again.
14,657,498
I'd like to create a `text/plain` message using Markdown formatting and transform that into a `multipart/alternative` message where the `text/html` part has been generated from the Markdown. I've tried using the filter command to filter this through a Python program that creates the message, but it seems that the message doesn't get sent through properly. The code is below (this is just test code to see if I can make `multipart/alternative` messages at all). ``` import sys from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart html = """<html> <body> This is <i>HTML</i> </body> </html> """ msgbody = sys.stdin.read() newmsg = MIMEMultipart("alternative") plain = MIMEText(msgbody, "plain") plain["Content-Disposition"] = "inline" html = MIMEText(html, "html") html["Content-Disposition"] = "inline" newmsg.attach(plain) newmsg.attach(html) print newmsg.as_string() ``` Unfortunately, in mutt, you only get the message body sent to the filter command when you compose (the headers are not included). Once I get this working, I think the markdown part won't be too hard.
2013/02/02
[ "https://Stackoverflow.com/questions/14657498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053149/" ]
This is a convenience method using the fix from antonyt's answer: ``` public class RetainableDialogFragment extends DialogFragment { public RetainableDialogFragment() { setRetainInstance(true); } @Override public void onDestroyView() { Dialog dialog = getDialog(); // handles https://code.google.com/p/android/issues/detail?id=17423 if (dialog != null && getRetainInstance()) { dialog.setDismissMessage(null); } super.onDestroyView(); } } ``` Just let your `DialogFragment` extend this class and everything will be fine. This becomes especially handy, if you have multiple `DialogFragments` in your project which all need this fix.
In case nothing else helps and you need a solution that works, you can play it safe and, each time you open a dialog, save its basic info to the activity's ViewModel (and remove it from that list when you dismiss the dialog). This basic info could be the dialog type and some id (the information you need in order to reopen the dialog). This ViewModel is not destroyed during changes of the Activity lifecycle. Let's say a user opens a dialog to leave a reference to a restaurant. The dialog type would be LeaveReferenceDialog and the id would be the restaurant id. When opening this dialog, you save this information in an object that you can call DialogInfo, and add this object to the ViewModel of the Activity. This information will allow you to reopen the dialog when the activity's onResume() is called: ``` // On resume in Activity override fun onResume() { super.onResume() // Restore dialogs that were open before the activity went to background restoreDialogs() } ``` Which calls: ``` fun restoreDialogs() { mainActivityViewModel.setIsRestoringDialogs(true) // lock the list in the view model for (dialogInfo in mainActivityViewModel.openDialogs) openDialog(dialogInfo) mainActivityViewModel.setIsRestoringDialogs(false) // open the lock } ``` When isRestoringDialogs in the ViewModel is set to true, dialog info will not be added to the list in the view model, which is important because we're now restoring dialogs that are already in that list. Otherwise, changing the list while iterating over it would cause an exception.
So: ``` // Create new dialog override fun openLeaveReferenceDialog(restaurantId: String) { val dialog = LeaveReferenceDialog() // Add the id to the dialog in a bundle val bundle = Bundle() bundle.putString(Constants.RESTAURANT_ID, restaurantId) dialog.arguments = bundle dialog.show(supportFragmentManager, "") // Add the dialog info to the list of open dialogs addOpenDialogInfo(DialogInfo(LEAVE_REFERENCE_DIALOG, restaurantId)) } ``` Then remove the dialog info when dismissing it: ``` // Dismiss dialog override fun dismissLeaveReferenceDialog(dialog: DialogFragment?, id: String) { if (dialog != null && dialog.isAdded) { dialog.dismiss() mainActivityViewModel.removeOpenDialog(LEAVE_REFERENCE_DIALOG, id) } } ``` And in the ViewModel of the Activity: ``` fun addOpenDialogInfo(dialogInfo: DialogInfo) { if (!isRestoringDialogs) { removeOpenDialog(dialogInfo.type, dialogInfo.id) // drop any stale copy first openDialogs.add(dialogInfo) } } fun removeOpenDialog(type: Int, id: String) { if (!isRestoringDialogs) openDialogs.removeAll { it.type == type && it.id == id } } ``` You actually reopen all the dialogs that were open before, in the same order. But how do they retain their information? Each dialog has a ViewModel of its own, which is also not destroyed during the activity lifecycle. So when you open the dialog, you get its ViewModel and init the UI from it as always.
14,657,498
I'd like to create a `text/plain` message using Markdown formatting and transform that into a `multipart/alternative` message where the `text/html` part has been generated from the Markdown. I've tried using the filter command to filter this through a Python program that creates the message, but it seems that the message doesn't get sent through properly. The code is below (this is just test code to see if I can make `multipart/alternative` messages at all). ``` import sys from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart html = """<html> <body> This is <i>HTML</i> </body> </html> """ msgbody = sys.stdin.read() newmsg = MIMEMultipart("alternative") plain = MIMEText(msgbody, "plain") plain["Content-Disposition"] = "inline" html = MIMEText(html, "html") html["Content-Disposition"] = "inline" newmsg.attach(plain) newmsg.attach(html) print newmsg.as_string() ``` Unfortunately, in mutt, you only get the message body sent to the filter command when you compose (the headers are not included). Once I get this working, I think the markdown part won't be too hard.
2013/02/02
[ "https://Stackoverflow.com/questions/14657498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053149/" ]
This is a convenience method using the fix from antonyt's answer: ``` public class RetainableDialogFragment extends DialogFragment { public RetainableDialogFragment() { setRetainInstance(true); } @Override public void onDestroyView() { Dialog dialog = getDialog(); // handles https://code.google.com/p/android/issues/detail?id=17423 if (dialog != null && getRetainInstance()) { dialog.setDismissMessage(null); } super.onDestroyView(); } } ``` Just let your `DialogFragment` extend this class and everything will be fine. This becomes especially handy, if you have multiple `DialogFragments` in your project which all need this fix.
Most of the answers here are incorrect because they use setRetainInstance(true), but this is now deprecated as of [API 28](https://developer.android.com/reference/android/app/Fragment.html#setRetainInstance%28boolean%29). Here is the solution I am using: ``` fun isDialogVisible(fm: FragmentManager): Boolean { val dialog = fm.findFragmentByTag("<FRAGMENT_TAG>") return dialog?.isResumed ?: false } ``` If the function returns false, then simply call dialog.show(fm, "<FRAGMENT\_TAG>") to show it again.
40,390,874
So, I'm making a Bank class in Python. It has the basic functions of deposit, withdrawing, and checking your balance. I'm having trouble with a transfer method though. This is my code for the class. ``` class Bank: def __init__(self, customerID): self.ID = customerID self.total = 0 def deposit(self, amount): self.total = self.total + amount return self.total def withdraw(self, amount): self.total = self.total - amount return self.total def balance(self): return self.total def transfer(self, amount, ID): self.total = self.total - amount ID.total = ID.total + amount return ID.balance() ``` Now, it works, but not the way I want it to. If I write a statement like this, it'll work ``` bank1 = Bank(111) bank1.deposit(150) bank2 = Bank(222) bank1.transfer(50, bank2) ``` But I want to be able to use the bank's ID number, not the name I gave it, if that makes any sense? So instead of saying ``` bank1.transfer(50, bank2) ``` I want it to say ``` bank1.transfer(50, 222) ``` I just have no idea how to do this.
2016/11/02
[ "https://Stackoverflow.com/questions/40390874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6023942/" ]
``` class Bank: idents = {} # class-level registry: customer ID -> instance def __init__(self, customerID): self.ID = customerID type(self).idents[self.ID] = self self.total = 0 @classmethod def get_bank(cls, id): return cls.idents.get(id) ``` is one kind of gross way you could do it (the class itself doubles as a registry of every instance; note that the original `self.__class__.__dict__.setdefault(...)` trick fails because a class `__dict__` is a read-only mappingproxy) ``` bank2_found = Bank.get_bank(222) ```
You could store all the ID numbers and their associated objects in a dict with the ID as the key and the object as the value.
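A minimal sketch of that idea — the registry attribute name (`_registry`) and the reworked `transfer` signature are illustrative, not from the original post:

```python
class Bank:
    _registry = {}  # maps customer ID -> Bank instance (illustrative name)

    def __init__(self, customerID):
        self.ID = customerID
        self.total = 0
        Bank._registry[customerID] = self  # register on creation

    def deposit(self, amount):
        self.total += amount
        return self.total

    def transfer(self, amount, other_id):
        other = Bank._registry[other_id]  # look the target up by its ID
        self.total -= amount
        other.total += amount
        return other.total
```

With this in place, `bank1.transfer(50, 222)` works exactly as the question asks.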
31,977,902
How can I calculate the elapsed time between the start time and the end time of an event using Python, when the times are in a format like 00:00:00 and 23:59:59?
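For what the question describes, a minimal sketch (assuming both timestamps fall within the same day; the example times are made up):

```python
from datetime import datetime

FMT = "%H:%M:%S"
start = datetime.strptime("08:15:00", FMT)
end = datetime.strptime("23:59:59", FMT)
elapsed = end - start  # a timedelta
print(elapsed)  # 15:44:59
```

If the event can cross midnight, parse full dates instead, or add a day to `end` whenever `end < start`.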
2015/08/13
[ "https://Stackoverflow.com/questions/31977902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5221453/" ]
Make it easy on yourself and try to make your code easy to read. I personally prefer to write my html cleanly and outside of echo statements like so: **Html** ``` if (strlen($in) > 0 and strlen($in) < 20) { $sql = "select name, entry, displayid from item_template where name like '%{$in}%' LIMIT 10"; // the query foreach ($dbo->query($sql) as $nt) { //$msg.=$nt[name]."->$nt[id]<br>"; ?> <table style="table-layout:fixed;"> <tr> <td>Name</td> <td>Entry ID</td> <td>Display ID</td> </tr> <tr> <td align="center"> <a href="http://wowhead.com/item=<?=$nt['entry'];?>"><?=$nt['name'];?></a> </td> <td><?=$nt['entry'];?></td> <td> <input type="button" class="button" value="<?=$nt['displayid'];?>"> </td> </tr> </table> <?php } } ``` **Javascript** ``` $( document ).ready(function() { var $theButtons = $(".button"); var $theinput = $("#theinput"); $theButtons.click(function() { // $theinput is out of scope here unless you make it a global (remove 'var') // Okay, not out of scope, but I feel it is confusing unless you're using this specific selector more than once or twice. $("#theinput").val(jQuery(this).val()); }); }); ```
Ok, here goes... 1. Use event delegation in your JavaScript to handle the button clicks. This will work for all present and future buttons ``` jQuery(function($) { var $theInput = $('#theinput'); $(document).on('click', '.button', function() { $theInput.val(this.value); }); }); ``` 2. Less important but I have no idea why you're producing a complete table for each record. I'd structure it like this... ``` // snip if (strlen($in)>0 and strlen($in) <20 ) : // you really should be using a prepared statement $sql="select name, entry, displayid from item_template where name like '%$in%' LIMIT 10"; ?> <table style="table-layout:fixed;"> <thead> <tr> <th>Name</th> <th>Entry ID</th> <th>Display ID</th> </tr> </thead> <tbody> <?php foreach ($dbo->query($sql) as $nt) : ?> <tr> <td align="center"> <a href="http://wowhead.com/?item=<?= htmlspecialchars($nt['entry']) ?>"><?= htmlspecialchars($nt['name']) ?></a> </td> <td><?= htmlspecialchars($nt['entry']) ?></td> <td> <button type="button" class="button" value="<?= htmlspecialchars($nt['displayid']) ?>"><?= htmlspecialchars($nt['displayid']) ?></button> </td> </tr> <?php endforeach ?> </tbody> </table> <?php endif; ```
47,717,179
If my Python script is pivoting and I cannot predict how many columns will be output, can this be done with the U-SQL REDUCE statement? e.g. ``` @pythonOutput = REDUCE @filteredBets ON [BetDetailID] PRODUCE [BetDetailID] string, EventID float USING new Extension.Python.Reducer(pyScript:@myScript); ``` There could be multiple columns, so I can't hard-set the names in the PRODUCE part. Any ideas?
2017/12/08
[ "https://Stackoverflow.com/questions/47717179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2725941/" ]
If you have a way to produce a `SqlMap<string,string>` value from within Python (I am not sure if that is supported right now, you can do it with a C# reducer :)), then you could use the map for the dynamic schema part. If it is not supported in Python, please file a feature request at <http://aka.ms/adlfeedback>.
The only way right now is to serialize all the columns into a single column, either as a byte[] or a string, in your Python script. SqlMap/SqlArray are not supported yet as output columns.
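A hedged sketch of that workaround on the Python side of the reducer — the column names and the pivot itself are illustrative (only `BetDetailID` comes from the question), and `usqlml_main` is assumed to be the entry point the U-SQL Python extension expects; the point is just that the unpredictable pivot columns get packed into one fixed string column:

```python
import json
import pandas as pd

def usqlml_main(df):
    # The pivot produces a column set we can't know in advance...
    wide = df.pivot_table(index='BetDetailID', columns='EventID',
                          values='Value', aggfunc='sum').reset_index()
    # ...so serialize each row into a single JSON string column,
    # giving REDUCE a fixed two-column schema to PRODUCE.
    return pd.DataFrame({
        'BetDetailID': wide['BetDetailID'].astype(str),
        'Payload': wide.apply(lambda r: json.dumps(r.to_dict()), axis=1),
    })
```

Downstream consumers can then parse the JSON payload back into whatever columns the pivot happened to produce.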
50,113,683
I try to run train.py in object\_detection from the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File "/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
Try this: ``` python setup.py build python setup.py install ```
I had to do: `sudo pip3 install -e .` ([ref](https://github.com/tensorflow/models/issues/2031#issuecomment-343782858)) `sudo python3 setup.py install` System: OS: Ubuntu 16.04, Anaconda (I guess this is why I need to use `pip3` and `python3` even though I made the virtual environment with Python 3.8)
50,113,683
I try to run train.py in object\_detection under the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File"/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
There are a number of modules in the object\_detection folder, and I have created setup.py in the parent directory(research folder) to import all of them. ``` from setuptools import find_packages from setuptools import setup REQUIRED_PACKAGES = ['Pillow>=1.0', 'Matplotlib>=2.1', 'Cython>=0.28.1'] setup( name='object_detection', version='0.1', install_requires=REQUIRED_PACKAGES, include_package_data=True, packages=[p for p in find_packages() if p.startswith('object_detection')], description='Tensorflow Object Detection Library', ) ```
I had to do: `sudo pip3 install -e .` ([ref](https://github.com/tensorflow/models/issues/2031#issuecomment-343782858)) `sudo python3 setup.py install` System: OS: Ubuntu 16.04, Anaconda (I guess this is why I need to use `pip3` and `python3` even though I made the virtual environment with Python 3.8)
50,113,683
I try to run train.py in object\_detection under the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File"/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
You need to export the environment variables every time you open a new terminal in that environment. Please note that there are back quotes around each `pwd` in the command, as they might not show up below. The back quote is on the same key as the tilde, without pressing the Shift key (US keyboard). From tensorflow/models/research/ ``` export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim ```
Try this: ``` python setup.py build python setup.py install ```
50,113,683
I try to run train.py in object\_detection under the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File"/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
Try installing the packaged **TensorFlow Object Detection Library**: ``` pip install tensorflow-object-detection-api ```
Cause of this error is installing object\_detection library, So one of the solution which can work is running the below command inside models/research ``` sudo python setup.py install ``` If such solution does not work, please execute the below command one by one in the directory models/research ``` export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim sudo python setup.py install ``` I hope this will work. I also faced the same problem while creating model from export\_inference\_graph.py. It worked for me.
50,113,683
I try to run train.py in object\_detection under the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File"/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
The cause of this error is that the object\_detection library is not installed, so one solution which can work is running the below command inside models/research ``` sudo python setup.py install ``` If that solution does not work, please execute the below commands one by one in the directory models/research ``` export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim sudo python setup.py install ``` I hope this will work. I also faced the same problem while creating a model from export\_inference\_graph.py. It worked for me.
There are a number of modules in the object\_detection folder, and I have created setup.py in the parent directory(research folder) to import all of them. ``` from setuptools import find_packages from setuptools import setup REQUIRED_PACKAGES = ['Pillow>=1.0', 'Matplotlib>=2.1', 'Cython>=0.28.1'] setup( name='object_detection', version='0.1', install_requires=REQUIRED_PACKAGES, include_package_data=True, packages=[p for p in find_packages() if p.startswith('object_detection')], description='Tensorflow Object Detection Library', ) ```
50,113,683
I try to run train.py in object\_detection under the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File"/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
You did have "sys.path.append()" before you imported the object detection, so I am surprised that you are facing this error! Please check that the path you have used in sys.path.append() is right. Well, the only and obvious answer for the error is that the path of the module is not added properly. Besides the various ways mentioned here, here is a way in which you can add the "object\_detection" path permanently to the PYTHONPATH variable. If you are using a Linux system, here is how you would go about it: Go to the Home directory. Press Ctrl + H to show hidden files. You will see a file called ".bashrc". Open this file using a code editor (I used Visual Studio). In the last line of .bashrc file, add the line: ``` export PYTHONPATH=/your/module/path:/your/other/module/path:your/someother/module/path ``` Then press "save" in the code editor. Since ".bashrc" is a "Read-only" file the editor will throw a pop-up saying the same. Also in the pop-up there will be an option that says: "Try with sudo". Hit this button and now you are good to go. All your modules are now **permanently** added to the PYTHONPATH. This means that you need not run sys.path.append every time you open your terminal and start a session! Below is the screenshot with no error when I followed the said steps: [![enter image description here](https://i.stack.imgur.com/GyUgN.png)](https://i.stack.imgur.com/GyUgN.png) Try this. I hope it helps.
I had to do: `sudo pip3 install -e .` ([ref](https://github.com/tensorflow/models/issues/2031#issuecomment-343782858)) `sudo python3 setup.py install` System: OS: Ubuntu 16.04, Anaconda (I guess this is why I need to use `pip3` and `python3` even though I made the virtual environment with Python 3.8)
50,113,683
I try to run train.py in object\_detection under the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File"/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
The cause of this error is that the object\_detection library is not installed, so one solution which can work is running the below command inside models/research ``` sudo python setup.py install ``` If that solution does not work, please execute the below commands one by one in the directory models/research ``` export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim sudo python setup.py install ``` I hope this will work. I also faced the same problem while creating a model from export\_inference\_graph.py. It worked for me.
I had to do: `sudo pip3 install -e .` ([ref](https://github.com/tensorflow/models/issues/2031#issuecomment-343782858)) `sudo python3 setup.py install` System: OS: Ubuntu 16.04, Anaconda (I guess this is why I need to use `pip3` and `python3` even though I made the virtual environment with Python 3.8)
50,113,683
I try to run train.py in object\_detection under the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File"/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
Try installing the packaged **TensorFlow Object Detection Library**: ``` pip install tensorflow-object-detection-api ```
There are a number of modules in the object\_detection folder, and I have created setup.py in the parent directory(research folder) to import all of them. ``` from setuptools import find_packages from setuptools import setup REQUIRED_PACKAGES = ['Pillow>=1.0', 'Matplotlib>=2.1', 'Cython>=0.28.1'] setup( name='object_detection', version='0.1', install_requires=REQUIRED_PACKAGES, include_package_data=True, packages=[p for p in find_packages() if p.startswith('object_detection')], description='Tensorflow Object Detection Library', ) ```
50,113,683
I try to run train.py in object\_detection under the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File"/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
And finally, If you've followed all the steps here and are at your wit's end...make sure the file that you're running (the one with your source code in it ya know), isn't named `object_detection.py` - that would preclude it being searched for as a module. Certainly I've never done anything like this that led me to add an embarrassing answer on Stack Overflow...
I had to do: `sudo pip3 install -e .` ([ref](https://github.com/tensorflow/models/issues/2031#issuecomment-343782858)) `sudo python3 setup.py install` System: OS: Ubuntu 16.04, Anaconda (I guess this is why I need to use `pip3` and `python3` even though I made the virtual environment with Python 3.8)
50,113,683
I try to run train.py in object\_detection under the git URL <https://github.com/tensorflow/models/tree/master/research/object_detection>. However, the following error occurs. > > ModuleNotFoundError: No module named 'object\_detection' > > > So I tried to solve the problem by writing the following code. ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This problem has not been solved yet. The directory structure is shown below. ``` ~/object_detection/train.py ~/object_detection/builders/dataset_bulider.py ``` and here is the full error message > > /home/user/anaconda3/lib/python3.6/site-packages/h5py/**init**.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. > > > In future, it will be treated as `np.float64 == np.dtype(float).type`. > from .\_conv import register\_converters as \_register\_converters > > > Traceback (most recent call last): > > > File "train.py", line 52, in > import trainer > > > File"/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in > from object\_detection.builders import optimizer\_builder > > > ModuleNotFoundError: No module named 'object\_detection' > > > How can I import the modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
The cause of this error is that the object\_detection library is not installed, so one solution which can work is running the below command inside models/research ``` sudo python setup.py install ``` If that solution does not work, please execute the below commands one by one in the directory models/research ``` export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim sudo python setup.py install ``` I hope this will work. I also faced the same problem while creating a model from export\_inference\_graph.py. It worked for me.
You did have "sys.path.append()" before you imported the object detection, so I am surprised that you are facing this error! Please check that the path you have used in sys.path.append() is right. Well, the only and obvious answer for the error is that the path of the module is not added properly. Besides the various ways mentioned here, here is a way in which you can add the "object\_detection" path permanently to the PYTHONPATH variable. If you are using a Linux system, here is how you would go about it: Go to the Home directory. Press Ctrl + H to show hidden files. You will see a file called ".bashrc". Open this file using a code editor (I used Visual Studio). In the last line of .bashrc file, add the line: ``` export PYTHONPATH=/your/module/path:/your/other/module/path:your/someother/module/path ``` Then press "save" in the code editor. Since ".bashrc" is a "Read-only" file the editor will throw a pop-up saying the same. Also in the pop-up there will be an option that says: "Try with sudo". Hit this button and now you are good to go. All your modules are now **permanently** added to the PYTHONPATH. This means that you need not run sys.path.append every time you open your terminal and start a session! Below is the screenshot with no error when I followed the said steps: [![enter image description here](https://i.stack.imgur.com/GyUgN.png)](https://i.stack.imgur.com/GyUgN.png) Try this. I hope it helps.
49,191,477
The `hypot` function, introduced into C in the 1999 revision of the language, calculates the hypotenuse of a right triangle given the other sides as arguments, but with care taken to avoid the over/underflow which would result from the naive implementation as ``` double hypot(double a, double b) { return sqrt(a*a + b*b); } ``` I find myself with the need for companion functionality: given a side and the hypotenuse of a triangle, find the third side (avoiding under/overflow). I can think of a few ways to do this, but wondered if there was an existing "best practice"? My target is Python, but really I'm looking for algorithm pointers. --- Thanks for the replies. In case anyone is interested in the result, my C99 implementation can be found [here](https://gitlab.com/jjg/cathetus) and a Python version [here](https://github.com/HypothesisWorks/hypothesis/blob/master/hypothesis-python/src/hypothesis/internal/cathetus.py), part of the [Hypothesis](https://hypothesis.works/) project.
2018/03/09
[ "https://Stackoverflow.com/questions/49191477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/468334/" ]
Assuming IEEE 754 basic 64-bit binary floating-point, I would consider an algorithm such as: * Set *s* (for scale) to be 2^−512 if 2^100 ≤ *a*, 2^+512 if *a* < 2^−100, and 1 otherwise. * Let *a*' be *a*•*s* and *b*' be *b*•*s*. * Compute sqrt(*a*'•*a*' − *b*'•*b*') / *s*. Notes about the reasoning: * If *a* is large (or small), multiplying by *s* decreases (or increases) the values so that the square of *a*' remains in the floating-point range. * The scale factor is a power of two, so multiplying and dividing by it is exact in binary floating-point. * *b* is necessarily smaller than (or equal to) *a*, or else we return NaN, which is appropriate. In the case where we are increasing *a*, no error occurs; *b*' and *b*'•*b*' remain within range. In the case where we are decreasing *a*, *b*' may lose precision or become zero if *b* is small, but then *b* is so much smaller than *a* that the computed result cannot depend on the precise value of *b* in any case. * I partitioned the floating-point range into three intervals because two will not suffice. For example, if you set *s* to be 2^−512 if 1 ≤ *a* and 2^+512 otherwise, then 1 will scale to 2^−512 and then square to 2^−1024, at which point a *b* slightly under 1 will be losing precision relevant to the result. But if you use a less-magnitude power for *s*, such as 2^−511, then 2^1023 will scale to 2^512 and square to 2^1024, which is out of bounds. Therefore, we need different scale factors for *a* = 1 and *a* = 2^1023. Similarly, *a* = 2^−1049 needs a scale factor that would be too large for *a* = 1. So three are needed. * Division is notoriously slow, so one might want to multiply by a prepared *s*^−1 rather than dividing by *s*.
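A Python transliteration of this scaling scheme (a sketch, not a hardened implementation; the three intervals and the power-of-two scale factors follow the description above):

```python
import math

def cathetus(h, a):
    """sqrt(h*h - a*a) with power-of-two scaling to avoid
    spurious overflow/underflow in the intermediate squares."""
    h, a = abs(h), abs(a)
    if h >= 2.0 ** 100:
        s = 2.0 ** -512        # pull huge operands toward unity
    elif h < 2.0 ** -100:
        s = 2.0 ** 512         # push tiny operands toward unity
    else:
        s = 1.0
    hs, as_ = h * s, a * s     # exact: s is a power of two
    return math.sqrt(hs * hs - as_ * as_) / s
```

Without the scaling, `cathetus(1e200, 6e199)` would overflow to `inf` inside the subtraction; with it, the 3-4-5 shape survives at any magnitude.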
`hypot` has its idiosyncrasies in that it's one of a *very select few* C standard library functions that does **not** propagate `NaN`! (Another one is `pow` for the case where the first argument is 1.) Setting that aside, I'd be inclined to write merely ``` return sqrt(h * h - a * a); // h is the hypotenuse ``` as the body of the function, and burden the caller with checking the inputs. If you can't do that then follow the specification of `hypot` faithfully.
49,191,477
The `hypot` function, introduced into C in the 1999 revision of the language, calculates the hypotenuse of a right triangle given the other sides as arguments, but with care taken to avoid the over/underflow which would result from the naive implementation as ``` double hypot(double a, double b) { return sqrt(a*a + b*b); } ``` I find myself with the need for companion functionality: given a side and the hypotenuse of a triangle, find the third side (avoiding under/overflow). I can think of a few ways to do this, but wondered if there was an existing "best practice"? My target is Python, but really I'm looking for algorithm pointers. --- Thanks for the replies. In case anyone is interested in the result, my C99 implementation can be found [here](https://gitlab.com/jjg/cathetus) and a Python version [here](https://github.com/HypothesisWorks/hypothesis/blob/master/hypothesis-python/src/hypothesis/internal/cathetus.py), part of the [Hypothesis](https://hypothesis.works/) project.
2018/03/09
[ "https://Stackoverflow.com/questions/49191477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/468334/" ]
The first thing to do is factorize: ``` b = sqrt(h*h - a*a) = sqrt((h-a)*(h+a)) ``` We have not only avoided some overflow, but also gained accuracy. If any factor is close to `1E+154 = sqrt(1E+308)` (the maximum for IEEE 754 64-bit floats) then we must also avoid overflow: ``` sqrt((h-a)*(h+a)) = sqrt(h-a) * sqrt(h+a) ``` This case is very unlikely, so the two `sqrt`'s are justified, even if it's slower than just one `sqrt`. Notice that if `h ~ 5E+7 * a` then `h ~ b`, which means that there are not enough digits to represent `b` as different from `h`.
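The factorization lends itself to a compact Python sketch (illustrative only; the `1e154` guard is the threshold mentioned above, below which the single square root cannot overflow):

```python
import math

def cathetus(h, a):
    """sqrt((h-a)*(h+a)): factored to regain the bits lost to
    cancellation when h and a are close in magnitude."""
    d, s = h - a, h + a
    if s >= 1e154:                  # d*s could overflow; split the root
        return math.sqrt(d) * math.sqrt(s)
    return math.sqrt(d * s)
```

The split into two square roots trades one extra `sqrt` for immunity to overflow in the rare huge-operand case, matching the trade-off described above.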
`hypot` has its idiosyncrasies in that it's one of a *very select few* C standard library functions that does **not** propagate `NaN`! (Another one is `pow` for the case where the first argument is 1.) Setting that aside, I'd be inclined to write merely ``` return sqrt(h * h - a * a); // h is the hypotenuse ``` as the body of the function, and burden the caller with checking the inputs. If you can't do that then follow the specification of `hypot` faithfully.
49,191,477
The `hypot` function, introduced into C in the 1999 revision of the language, calculates the hypotenuse of a right triangle given the other sides as arguments, but with care taken to avoid the over/underflow which would result from the naive implementation as ``` double hypot(double a, double b) { return sqrt(a*a + b*b); } ``` I find myself with the need for companion functionality: given a side and the hypotenuse of a triangle, find the third side (avoiding under/overflow). I can think of a few ways to do this, but wondered if there was an existing "best practice"? My target is Python, but really I'm looking for algorithm pointers. --- Thanks for the replies. In case anyone is interested in the result, my C99 implementation can be found [here](https://gitlab.com/jjg/cathetus) and a Python version [here](https://github.com/HypothesisWorks/hypothesis/blob/master/hypothesis-python/src/hypothesis/internal/cathetus.py), part of the [Hypothesis](https://hypothesis.works/) project.
2018/03/09
[ "https://Stackoverflow.com/questions/49191477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/468334/" ]
This answer assumes a platform that uses floating-point arithmetic compliant with IEEE-754 (2008) and provides fused multiply-add (FMA) capability. Both conditions are met by common architectures such as x86-64, ARM64, and Power. FMA is exposed in ISO C99 and later C standards as a standard math function `fma()`. On hardware that does not provide an FMA instruction, this requires emulation, which could be slow and [functionally deficient](https://stackoverflow.com/questions/42166563/is-my-fma-broken). Mathematically, the length of one leg (cathetus) in a right triangle, given the length of the hypotenuse and the other leg, is simply computed as `√(h²-a²)`, where `h` is the length of the hypotenuse. But when computed with finite-precision floating-point arithmetic, we face two problems: Overflow or underflow to zero may take place when computing the squares, and subtraction of the squares gives rise to [subtractive cancellation](https://en.wikipedia.org/wiki/Loss_of_significance) when the squares have similar magnitude. The first issue is easily taken care of by scaling by 2^n such that the term larger in magnitude is moved closer to unity. As subnormal numbers may be involved, this cannot be accomplished by manipulating the exponent field, as there may be a need to normalize / denormalize. But we can compute the required scale factors by exponent field bit manipulation, then multiply by the factors. We know that the hypotenuse has to be longer or the same length as the given leg for non-exceptional cases, so can base the scaling on that argument. Dealing with subtractive cancellation is harder, but we are lucky that computation very similar to our computation h²-a² occurs in other important problems. For example, the grandmaster of floating-point computation looked into the accurate computation of the discriminant of the quadratic formula, `b²-4ac`: William Kahan, "On the Cost of Floating-Point Computation Without Extra-Precise Arithmetic", Nov. 21, 2004 ([online](https://people.eecs.berkeley.edu/~wkahan/Qdrtcs.pdf)) More recently, French researchers addressed the more general case of the difference of two products, `ad-bc`: Claude-Pierre Jeannerod, Nicolas Louvet, Jean-Michel Muller, "Further analysis of Kahan's algorithm for the accurate computation of 2 x 2 determinants." *Mathematics of Computation*, Vol. 82, No. 284, Oct. 2013, pp. 2245-2264 ([online](https://hal.inria.fr/ensl-00649347/en)) The FMA-based algorithm in the second paper computes the difference of two products with a proven maximum error of 1.5 [ulp](https://en.wikipedia.org/wiki/Unit_in_the_last_place). With this building block, we arrive at the straightforward ISO C99 implementation of the cathetus computation below. A maximum error of 1.2 ulp was observed in one billion random trials as determined by comparing with the results from an arbitrary-precision library: ```c #include <stdint.h> #include <string.h> #include <float.h> #include <math.h> uint64_t __double_as_uint64 (double a) { uint64_t r; memcpy (&r, &a, sizeof r); return r; } double __uint64_as_double (uint64_t a) { double r; memcpy (&r, &a, sizeof r); return r; } /* diff_of_products() computes a*b-c*d with a maximum error < 1.5 ulp Claude-Pierre Jeannerod, Nicolas Louvet, and Jean-Michel Muller, "Further Analysis of Kahan's Algorithm for the Accurate Computation of 2x2 Determinants". Mathematics of Computation, Vol. 82, No. 284, Oct. 2013, pp. 2245-2264 */ double diff_of_products (double a, double b, double c, double d) { double w = d * c; double e = fma (-d, c, w); double f = fma (a, b, -w); return f + e; } /* compute sqrt (h*h - a*a) accurately, avoiding spurious overflow */ double my_cathetus (double h, double a) { double fh, fa, res, scale_in, scale_out, d, s; uint64_t expo; fh = fabs (h); fa = fabs (a); /* compute scale factors */ expo = __double_as_uint64 (fh) & 0xff80000000000000ULL; scale_in = __uint64_as_double (0x7fc0000000000000ULL - expo); scale_out = __uint64_as_double (expo + 0x0020000000000000ULL); /* scale fh towards unity */ fh = fh * scale_in; fa = fa * scale_in; /* compute sqrt of difference of scaled arguments, avoiding overflow */ d = diff_of_products (fh, fh, fa, fa); s = sqrt (d); /* reverse previous scaling */ res = s * scale_out; /* handle special arguments */ if (isnan (h) || isnan (a)) { res = h + a; } return res; } ```
`hypot` has its idiosyncrasies in that it's one of a *very select few* C standard library functions that does **not** propagate `NaN`! (Another one is `pow` in the case where the first argument is 1.) Setting that aside, I'd be inclined to write merely ``` return sqrt(h * h - a * a); // h is the hypotenuse ``` as the body of the function, and burden the caller with checking the inputs. If you can't do that then follow the specification of `hypot` faithfully.
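A minimal Python sketch of this "simple body, caller checks inputs" approach (the function name `cathetus` and the stated preconditions are my own, not from the answer):

```python
import math

def cathetus(h, a):
    """Naive third-side computation: the caller is responsible for
    ensuring |a| <= |h| and that h*h cannot overflow; no NaN
    handling is attempted here."""
    return math.sqrt(h * h - a * a)

# Classic 3-4-5 right triangle.
print(cathetus(5.0, 4.0))  # 3.0
```

Note that in Python, unlike C, `math.sqrt` raises `ValueError` for a negative argument rather than returning NaN, which is one more reason to validate inputs at the call site.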
49,191,477
The `hypot` function, introduced into C in the 1999 revision of the language, calculates the hypotenuse of a right triangle given the other sides as arguments, but with care taken to avoid the over/underflow which would result from the naive implementation as ``` double hypot(double a, double b) { return sqrt(a*a + b*b); } ``` I find myself with the need for companion functionality: given a side and the hypotenuse of a triangle, find the third side (avoiding under/overflow). I can think of a few ways to do this, but wondered if there was an existing "best practice"? My target is Python, but really I'm looking for algorithm pointers. --- Thanks for the replies. In case anyone is interested in the result, my C99 implementation can be found [here](https://gitlab.com/jjg/cathetus) and a Python version [here](https://github.com/HypothesisWorks/hypothesis/blob/master/hypothesis-python/src/hypothesis/internal/cathetus.py), part of the [Hypothesis](https://hypothesis.works/) project.
2018/03/09
[ "https://Stackoverflow.com/questions/49191477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/468334/" ]
The first thing to do is factorize: ``` b = sqrt(h*h - a*a) = sqrt((h-a)*(h+a)) ``` We have not only avoided some overflow, but also gained accuracy. If any factor is close to `1E+154 = sqrt(1E+308)` (max with IEEE 754 64-bit float) then we must also avoid overflow: ``` sqrt((h-a)*(h+a)) = sqrt(h-a) * sqrt(h+a) ``` This case is very unlikely, so the two `sqrt`'s are justified, even if it's slower than just a `sqrt`. Notice that if `h ~ 5E+7 * a` then `h ~ b` which means that there are not enough digits to represent `b` as different from `h`.
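A hedged Python sketch of this factorization (the `1E+154` threshold is computed here as `sqrt(sys.float_info.max)`, and the function name `cathetus` is my own):

```python
import math
import sys

def cathetus(h, a):
    """Compute sqrt(h*h - a*a) as sqrt((h-a)*(h+a)) to gain accuracy,
    splitting into two square roots when the product could overflow."""
    h, a = abs(h), abs(a)
    big = math.sqrt(sys.float_info.max)  # about 1.34E+154
    if h + a >= big:  # (h-a)*(h+a) might overflow: take the slow path
        return math.sqrt(h - a) * math.sqrt(h + a)
    return math.sqrt((h - a) * (h + a))

print(cathetus(5.0, 3.0))      # 4.0
print(cathetus(1e300, 1e300))  # 0.0, with no intermediate overflow
```

The threshold test is deliberately conservative: `h + a` bounds both factors, so the split path is taken only in the rare huge-argument case the answer describes.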
Assuming IEEE 754 basic 64-bit binary floating-point, I would consider an algorithm such as: * Set *s* (for scale) to be 2^−512 if 2^100 ≤ *a*, 2^+512 if *a* < 2^−100, and 1 otherwise. * Let *a*' be *a*•*s* and *b*' be *b*•*s*. * Compute sqrt(*a*'•*a*' − *b*'•*b*') / *s*. Notes about the reasoning: * If *a* is large (or small), multiplying by *s* decreases (or increases) the values so that the square of *a*' remains in the floating-point range. * The scale factor is a power of two, so multiplying and dividing by it is exact in binary floating-point. * *b* is necessarily smaller than (or equal to) *a*, or else we return NaN, which is appropriate. In the case where we are increasing *a*, no error occurs; *b*' and *b*'•*b*' remain within range. In the case where we are decreasing *a*, *b*' may lose precision or become zero if *b* is small, but then *b* is so much smaller than *a* that the computed result cannot depend on the precise value of *b* in any case. * I partitioned the floating-point range into three intervals because two will not suffice. For example, if you set *s* to be 2^−512 if 1 ≤ *a* and 2^+512 otherwise, then 1 will scale to 2^−512 and then square to 2^−1024, at which point a *b* slightly under 1 will be losing precision relevant to the result. But if you use a less-magnitude power for *s*, such as 2^−511, then 2^1023 will scale to 2^512 and square to 2^1024, which is out of bounds. Therefore, we need different scale factors for *a* = 1 and *a* = 2^1023. Similarly, *a* = 2^−1049 needs a scale factor that would be too large for *a* = 1. So three are needed. * Division is notoriously slow, so one might want to multiply by a prepared *s*^−1 rather than dividing by *s*.
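A Python sketch of this three-interval scaling scheme (the function name is my own; note that in Python the `b > a` case raises `ValueError` from `math.sqrt` rather than returning NaN as in C):

```python
import math

def cathetus(a, b):
    """Third side from hypotenuse a and leg b, using the three-interval
    power-of-two scaling described above."""
    if a >= 2.0 ** 100:
        s = 2.0 ** -512   # shrink large inputs so a*a stays in range
    elif a < 2.0 ** -100:
        s = 2.0 ** 512    # grow tiny inputs away from underflow
    else:
        s = 1.0
    ap, bp = a * s, b * s              # exact: s is a power of two
    return math.sqrt(ap * ap - bp * bp) / s

print(cathetus(5.0, 3.0))    # 4.0
print(cathetus(1e300, 0.0))  # on the order of 1e+300, no overflow
```

Because the scale factor is a power of two, the scaling multiplications and the final division introduce no rounding error of their own, exactly as the answer's notes point out.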
49,191,477
The `hypot` function, introduced into C in the 1999 revision of the language, calculates the hypotenuse of a right triangle given the other sides as arguments, but with care taken to avoid the over/underflow which would result from the naive implementation as ``` double hypot(double a, double b) { return sqrt(a*a + b*b); } ``` I find myself with the need for companion functionality: given a side and the hypotenuse of a triangle, find the third side (avoiding under/overflow). I can think of a few ways to do this, but wondered if there was an existing "best practice"? My target is Python, but really I'm looking for algorithm pointers. --- Thanks for the replies. In case anyone is interested in the result, my C99 implementation can be found [here](https://gitlab.com/jjg/cathetus) and a Python version [here](https://github.com/HypothesisWorks/hypothesis/blob/master/hypothesis-python/src/hypothesis/internal/cathetus.py), part of the [Hypothesis](https://hypothesis.works/) project.
2018/03/09
[ "https://Stackoverflow.com/questions/49191477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/468334/" ]
This answer assumes a platform that uses floating-point arithmetic compliant with IEEE-754 (2008) and provides fused multiply-add (FMA) capability. Both conditions are met by common architectures such as x86-64, ARM64, and Power. FMA is exposed in ISO C99 and later C standards as a standard math function `fma()`. On hardware that does not provide an FMA instruction, this requires emulation, which could be slow and [functionally deficient](https://stackoverflow.com/questions/42166563/is-my-fma-broken). Mathematically, the length of one leg (cathetus) in a right triangle, given the length of the hypotenuse and the other leg, is simply computed as `√(h²-a²)`, where `h` is the length of the hypotenuse. But when computed with finite-precision floating-point arithmetic, we face two problems: Overflow or underflow to zero may take place when computing the squares, and subtraction of the squares gives rise to [subtractive cancellation](https://en.wikipedia.org/wiki/Loss_of_significance) when the squares have similar magnitude. The first issue is easily taken care of by scaling by 2^n such that the term larger in magnitude is moved closer to unity. As subnormal numbers may be involved, this cannot be accomplished by manipulating the exponent field, as there may be a need to normalize / denormalize. But we can compute the required scale factors by exponent field bit manipulation, then multiply by the factors. We know that the hypotenuse has to be longer or the same length as the given leg for non-exceptional cases, so we can base the scaling on that argument. Dealing with subtractive cancellation is harder, but we are lucky that a computation very similar to our computation h²-a² occurs in other important problems. For example, the grandmaster of floating-point computation looked into the accurate computation of the discriminant of the quadratic formula, `b²-4ac`: William Kahan, "On the Cost of Floating-Point Computation Without Extra-Precise Arithmetic", Nov.
21, 2004 ([online](https://people.eecs.berkeley.edu/~wkahan/Qdrtcs.pdf)) More recently, French researchers addressed the more general case of the difference of two products, `ad-bc`: Claude-Pierre Jeannerod, Nicolas Louvet, Jean-Michel Muller, "Further analysis of Kahan's algorithm for the accurate computation of 2 x 2 determinants." *Mathematics of Computation*, Vol. 82, No. 284, Oct. 2013, pp. 2245-2264 ([online](https://hal.inria.fr/ensl-00649347/en)) The FMA-based algorithm in the second paper computes the difference of two products with a proven maximum error of 1.5 [ulp](https://en.wikipedia.org/wiki/Unit_in_the_last_place). With this building block, we arrive at the straightforward ISO C99 implementation of the cathetus computation below. A maximum error of 1.2 ulp was observed in one billion random trials as determined by comparing with the results from an arbitrary-precision library: ```c #include <stdint.h> #include <string.h> #include <float.h> #include <math.h> uint64_t __double_as_uint64 (double a) { uint64_t r; memcpy (&r, &a, sizeof r); return r; } double __uint64_as_double (uint64_t a) { double r; memcpy (&r, &a, sizeof r); return r; } /* diff_of_products() computes a*b-c*d with a maximum error < 1.5 ulp Claude-Pierre Jeannerod, Nicolas Louvet, and Jean-Michel Muller, "Further Analysis of Kahan's Algorithm for the Accurate Computation of 2x2 Determinants". Mathematics of Computation, Vol. 82, No. 284, Oct. 2013, pp. 
2245-2264 */ double diff_of_products (double a, double b, double c, double d) { double w = d * c; double e = fma (-d, c, w); double f = fma (a, b, -w); return f + e; } /* compute sqrt (h*h - a*a) accurately, avoiding spurious overflow */ double my_cathetus (double h, double a) { double fh, fa, res, scale_in, scale_out, d, s; uint64_t expo; fh = fabs (h); fa = fabs (a); /* compute scale factors */ expo = __double_as_uint64 (fh) & 0xff80000000000000ULL; scale_in = __uint64_as_double (0x7fc0000000000000ULL - expo); scale_out = __uint64_as_double (expo + 0x0020000000000000ULL); /* scale fh towards unity */ fh = fh * scale_in; fa = fa * scale_in; /* compute sqrt of difference of scaled arguments, avoiding overflow */ d = diff_of_products (fh, fh, fa, fa); s = sqrt (d); /* reverse previous scaling */ res = s * scale_out; /* handle special arguments */ if (isnan (h) || isnan (a)) { res = h + a; } return res; } ```
Assuming IEEE 754 basic 64-bit binary floating-point, I would consider an algorithm such as: * Set *s* (for scale) to be 2^−512 if 2^100 ≤ *a*, 2^+512 if *a* < 2^−100, and 1 otherwise. * Let *a*' be *a*•*s* and *b*' be *b*•*s*. * Compute sqrt(*a*'•*a*' − *b*'•*b*') / *s*. Notes about the reasoning: * If *a* is large (or small), multiplying by *s* decreases (or increases) the values so that the square of *a*' remains in the floating-point range. * The scale factor is a power of two, so multiplying and dividing by it is exact in binary floating-point. * *b* is necessarily smaller than (or equal to) *a*, or else we return NaN, which is appropriate. In the case where we are increasing *a*, no error occurs; *b*' and *b*'•*b*' remain within range. In the case where we are decreasing *a*, *b*' may lose precision or become zero if *b* is small, but then *b* is so much smaller than *a* that the computed result cannot depend on the precise value of *b* in any case. * I partitioned the floating-point range into three intervals because two will not suffice. For example, if you set *s* to be 2^−512 if 1 ≤ *a* and 2^+512 otherwise, then 1 will scale to 2^−512 and then square to 2^−1024, at which point a *b* slightly under 1 will be losing precision relevant to the result. But if you use a less-magnitude power for *s*, such as 2^−511, then 2^1023 will scale to 2^512 and square to 2^1024, which is out of bounds. Therefore, we need different scale factors for *a* = 1 and *a* = 2^1023. Similarly, *a* = 2^−1049 needs a scale factor that would be too large for *a* = 1. So three are needed. * Division is notoriously slow, so one might want to multiply by a prepared *s*^−1 rather than dividing by *s*.
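A Python sketch of this three-interval scaling scheme (the function name is my own; note that in Python the `b > a` case raises `ValueError` from `math.sqrt` rather than returning NaN as in C):

```python
import math

def cathetus(a, b):
    """Third side from hypotenuse a and leg b, using the three-interval
    power-of-two scaling described above."""
    if a >= 2.0 ** 100:
        s = 2.0 ** -512   # shrink large inputs so a*a stays in range
    elif a < 2.0 ** -100:
        s = 2.0 ** 512    # grow tiny inputs away from underflow
    else:
        s = 1.0
    ap, bp = a * s, b * s              # exact: s is a power of two
    return math.sqrt(ap * ap - bp * bp) / s

print(cathetus(5.0, 3.0))    # 4.0
print(cathetus(1e300, 0.0))  # on the order of 1e+300, no overflow
```

Because the scale factor is a power of two, the scaling multiplications and the final division introduce no rounding error of their own, exactly as the answer's notes point out.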
49,191,477
The `hypot` function, introduced into C in the 1999 revision of the language, calculates the hypotenuse of a right triangle given the other sides as arguments, but with care taken to avoid the over/underflow which would result from the naive implementation as ``` double hypot(double a, double b) { return sqrt(a*a + b*b); } ``` I find myself with the need for companion functionality: given a side and the hypotenuse of a triangle, find the third side (avoiding under/overflow). I can think of a few ways to do this, but wondered if there was an existing "best practice"? My target is Python, but really I'm looking for algorithm pointers. --- Thanks for the replies. In case anyone is interested in the result, my C99 implementation can be found [here](https://gitlab.com/jjg/cathetus) and a Python version [here](https://github.com/HypothesisWorks/hypothesis/blob/master/hypothesis-python/src/hypothesis/internal/cathetus.py), part of the [Hypothesis](https://hypothesis.works/) project.
2018/03/09
[ "https://Stackoverflow.com/questions/49191477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/468334/" ]
The first thing to do is factorize: ``` b = sqrt(h*h - a*a) = sqrt((h-a)*(h+a)) ``` We have not only avoided some overflow, but also gained accuracy. If any factor is close to `1E+154 = sqrt(1E+308)` (max with IEEE 754 64-bit float) then we must also avoid overflow: ``` sqrt((h-a)*(h+a)) = sqrt(h-a) * sqrt(h+a) ``` This case is very unlikely, so the two `sqrt`'s are justified, even if it's slower than just a `sqrt`. Notice that if `h ~ 5E+7 * a` then `h ~ b` which means that there are not enough digits to represent `b` as different from `h`.
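A hedged Python sketch of this factorization (the `1E+154` threshold is computed here as `sqrt(sys.float_info.max)`, and the function name `cathetus` is my own):

```python
import math
import sys

def cathetus(h, a):
    """Compute sqrt(h*h - a*a) as sqrt((h-a)*(h+a)) to gain accuracy,
    splitting into two square roots when the product could overflow."""
    h, a = abs(h), abs(a)
    big = math.sqrt(sys.float_info.max)  # about 1.34E+154
    if h + a >= big:  # (h-a)*(h+a) might overflow: take the slow path
        return math.sqrt(h - a) * math.sqrt(h + a)
    return math.sqrt((h - a) * (h + a))

print(cathetus(5.0, 3.0))      # 4.0
print(cathetus(1e300, 1e300))  # 0.0, with no intermediate overflow
```

The threshold test is deliberately conservative: `h + a` bounds both factors, so the split path is taken only in the rare huge-argument case the answer describes.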
This answer assumes a platform that uses floating-point arithmetic compliant with IEEE-754 (2008) and provides fused multiply-add (FMA) capability. Both conditions are met by common architectures such as x86-64, ARM64, and Power. FMA is exposed in ISO C99 and later C standards as a standard math function `fma()`. On hardware that does not provide an FMA instruction, this requires emulation, which could be slow and [functionally deficient](https://stackoverflow.com/questions/42166563/is-my-fma-broken). Mathematically, the length of one leg (cathetus) in a right triangle, given the length of the hypotenuse and the other leg, is simply computed as `√(h²-a²)`, where `h` is the length of the hypotenuse. But when computed with finite-precision floating-point arithmetic, we face two problems: Overflow or underflow to zero may take place when computing the squares, and subtraction of the squares gives rise to [subtractive cancellation](https://en.wikipedia.org/wiki/Loss_of_significance) when the squares have similar magnitude. The first issue is easily taken care of by scaling by 2^n such that the term larger in magnitude is moved closer to unity. As subnormal numbers may be involved, this cannot be accomplished by manipulating the exponent field, as there may be a need to normalize / denormalize. But we can compute the required scale factors by exponent field bit manipulation, then multiply by the factors. We know that the hypotenuse has to be longer or the same length as the given leg for non-exceptional cases, so we can base the scaling on that argument. Dealing with subtractive cancellation is harder, but we are lucky that a computation very similar to our computation h²-a² occurs in other important problems. For example, the grandmaster of floating-point computation looked into the accurate computation of the discriminant of the quadratic formula, `b²-4ac`: William Kahan, "On the Cost of Floating-Point Computation Without Extra-Precise Arithmetic", Nov.
21, 2004 ([online](https://people.eecs.berkeley.edu/~wkahan/Qdrtcs.pdf)) More recently, French researchers addressed the more general case of the difference of two products, `ad-bc`: Claude-Pierre Jeannerod, Nicolas Louvet, Jean-Michel Muller, "Further analysis of Kahan's algorithm for the accurate computation of 2 x 2 determinants." *Mathematics of Computation*, Vol. 82, No. 284, Oct. 2013, pp. 2245-2264 ([online](https://hal.inria.fr/ensl-00649347/en)) The FMA-based algorithm in the second paper computes the difference of two products with a proven maximum error of 1.5 [ulp](https://en.wikipedia.org/wiki/Unit_in_the_last_place). With this building block, we arrive at the straightforward ISO C99 implementation of the cathetus computation below. A maximum error of 1.2 ulp was observed in one billion random trials as determined by comparing with the results from an arbitrary-precision library: ```c #include <stdint.h> #include <string.h> #include <float.h> #include <math.h> uint64_t __double_as_uint64 (double a) { uint64_t r; memcpy (&r, &a, sizeof r); return r; } double __uint64_as_double (uint64_t a) { double r; memcpy (&r, &a, sizeof r); return r; } /* diff_of_products() computes a*b-c*d with a maximum error < 1.5 ulp Claude-Pierre Jeannerod, Nicolas Louvet, and Jean-Michel Muller, "Further Analysis of Kahan's Algorithm for the Accurate Computation of 2x2 Determinants". Mathematics of Computation, Vol. 82, No. 284, Oct. 2013, pp. 
2245-2264 */ double diff_of_products (double a, double b, double c, double d) { double w = d * c; double e = fma (-d, c, w); double f = fma (a, b, -w); return f + e; } /* compute sqrt (h*h - a*a) accurately, avoiding spurious overflow */ double my_cathetus (double h, double a) { double fh, fa, res, scale_in, scale_out, d, s; uint64_t expo; fh = fabs (h); fa = fabs (a); /* compute scale factors */ expo = __double_as_uint64 (fh) & 0xff80000000000000ULL; scale_in = __uint64_as_double (0x7fc0000000000000ULL - expo); scale_out = __uint64_as_double (expo + 0x0020000000000000ULL); /* scale fh towards unity */ fh = fh * scale_in; fa = fa * scale_in; /* compute sqrt of difference of scaled arguments, avoiding overflow */ d = diff_of_products (fh, fh, fa, fa); s = sqrt (d); /* reverse previous scaling */ res = s * scale_out; /* handle special arguments */ if (isnan (h) || isnan (a)) { res = h + a; } return res; } ```
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googling, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num =
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
Possible solution: <http://groups.google.com/group/django-users/browse_thread/thread/2c7421cdb9b99e48> > > Until recently I was curious to test > this on Django 1.1.1. Will this > exception be thrown again... surprise, > there it was again. It took me some > time to debug this, helpful hint was > that it only shows when (pre)forking. > So for those who getting randomly > those exceptions, I can say... fix > your code :) Ok.. seriously, there > are always few ways of doing this, so > let me firs explain where is a > problem first. If you access database > when any of your modules will import > as, e.g. reading configuration from > database then you will get this error. > When your fastcgi-prefork application > starts, first it imports all modules, > and only after this forks children. > If you have established db connection > during import all children processes > will have an exact copy of that > object. This connection is being > closed at the end of request phase > (request\_finished signal). So first > child which will be called to process > your request, will close this > connection. But what will happen to > the rest of the child processes? They > will believe that they have open and > presumably working connection to the > db, so any db operation will cause an > exception. Why this is not showing in > threaded execution model? I suppose > because threads are using same object > and know when any other thread is > closing connection. How to fix this? > Best way is to fix your code... but > this can be difficult sometimes. > Other option, in my opinion quite > clean, is to write somewhere in your > application small piece of code: > > > ``` from django.db import connection from django.core import signals def close_connection(**kwargs): connection.close() signals.request_started.connect(close_connection) ``` Not ideal though; connecting twice to the DB is a workaround at best.
--- Possible solution: using connection pooling (pgpool, pgbouncer), so you have DB connections pooled and stable, and handed fast to your FCGI daemons. The problem is that this triggers another bug, psycopg2 raising an *InterfaceError* because it's trying to disconnect twice (pgbouncer already handled this). Now the culprit is Django signal *request\_finished* triggering *connection.close()*, and failing loud even if it was already disconnected. I don't think this behavior is desired, as if the request already finished, we don't care about the DB connection anymore. A patch for correcting this should be simple. The relevant traceback: ``` /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/core/handlers/wsgi.py in __call__(self=<django.core.handlers.wsgi.WSGIHandler object at 0x24fb210>, environ={'AUTH_TYPE': 'Basic', 'DOCUMENT_ROOT': '/storage/test', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTPS': 'off', 'HTTP_ACCEPT': 'application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_AUTHORIZATION': 'Basic dGVzdGU6c3VjZXNzbw==', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_COOKIE': '__utma=175602209.1371964931.1269354495.126938948...none); sessionid=a1990f0d8d32c78a285489586c510e8c', 'HTTP_HOST': 'www.rede-colibri.com', ...}, start_response=<function start_response at 0x24f87d0>) 246 response = self.apply_response_fixes(request, response) 247 finally: 248 signals.request_finished.send(sender=self.__class__) 249 250 try: global signals = <module 'django.core.signals' from '/usr/local/l.../Django-1.1.1-py2.6.egg/django/core/signals.pyc'>, signals.request_finished = <django.dispatch.dispatcher.Signal object at 0x1975710>, signals.request_finished.send = <bound method Signal.send of <django.dispatch.dispatcher.Signal object at 0x1975710>>, sender undefined, self = <django.core.handlers.wsgi.WSGIHandler object at 0x24fb210>, self.__class__ = <class 'django.core.handlers.wsgi.WSGIHandler'> 
/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/dispatch/dispatcher.py in send(self=<django.dispatch.dispatcher.Signal object at 0x1975710>, sender=<class 'django.core.handlers.wsgi.WSGIHandler'>, **named={}) 164 165 for receiver in self._live_receivers(_make_id(sender)): 166 response = receiver(signal=self, sender=sender, **named) 167 responses.append((receiver, response)) 168 return responses response undefined, receiver = <function close_connection at 0x197b050>, signal undefined, self = <django.dispatch.dispatcher.Signal object at 0x1975710>, sender = <class 'django.core.handlers.wsgi.WSGIHandler'>, named = {} /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/__init__.py in close_connection(**kwargs={'sender': <class 'django.core.handlers.wsgi.WSGIHandler'>, 'signal': <django.dispatch.dispatcher.Signal object at 0x1975710>}) 63 # when a Django request is finished. 64 def close_connection(**kwargs): 65 connection.close() 66 signals.request_finished.connect(close_connection) 67 global connection = <django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>, connection.close = <bound method DatabaseWrapper.close of <django.d...ycopg2.base.DatabaseWrapper object at 0x17b14c8>> /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/__init__.py in close(self=<django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>) 74 def close(self): 75 if self.connection is not None: 76 self.connection.close() 77 self.connection = None 78 self = <django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>, self.connection = <connection object at 0x1f80870; dsn: 'dbname=co...st=127.0.0.1 port=6432 user=postgres', closed: 2>, self.connection.close = <built-in method close of psycopg2._psycopg.connection object at 0x1f80870> ``` Exception handling here could add more leniency: 
**/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/\_\_init\_\_.py** ``` 63 # when a Django request is finished. 64 def close_connection(**kwargs): 65 connection.close() 66 signals.request_finished.connect(close_connection) ``` Or it could be handled better on psycopg2, so to not throw fatal errors if all we're trying to do is disconnect and it already is: **/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/\_\_init\_\_.py** ``` 74 def close(self): 75 if self.connection is not None: 76 self.connection.close() 77 self.connection = None ``` Other than that, I'm short on ideas.
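To illustrate the "more leniency" idea without requiring Django or psycopg2, here is a self-contained Python sketch; `InterfaceError` and `FlakyConnection` are stand-ins for `psycopg2.InterfaceError` and Django's backend wrapper, and in real code the try/except would live in the `DatabaseWrapper.close()` shown above:

```python
class InterfaceError(Exception):
    """Stand-in for psycopg2.InterfaceError in this sketch."""

class FlakyConnection:
    """Simulates a connection pgbouncer may have already closed:
    a second close() raises, just like the traceback above."""
    def __init__(self):
        self.closed = False

    def close(self):
        if self.closed:
            raise InterfaceError("connection already closed")
        self.closed = True

def close_leniently(conn):
    """The proposed fix: at end of request we only want the connection
    gone, so an 'already closed' error is safe to swallow."""
    try:
        conn.close()
    except InterfaceError:
        pass

conn = FlakyConnection()
close_leniently(conn)  # normal close
close_leniently(conn)  # double close: no longer fatal
print(conn.closed)     # True
```

The same effect could be achieved either in Django's `close_connection` receiver or in the backend's `close()`; the sketch only demonstrates that swallowing the double-disconnect error keeps the request teardown from failing loudly.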
In the end I switched back to Apache + mod\_python (I was having other random errors with fcgi, besides this one) and everything is good and stable now. The question still remains open. In case anybody has this problem in the future and solves it they can record the solution here for future reference. :)
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googling, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num =
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
Change from method=prefork to method=threaded solved the problem for me.
Have you considered downgrading to Python 2.5.x (2.5.4 specifically)? I don't think Django would be considered mature on Python 2.6 since there are some backwards incompatible changes. However, I doubt this will fix your problem. Also, Django 1.0.2 fixed some nefarious little bugs so make sure you're running that. This very well could fix your problem.
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googling, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num =
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
Smells like a possible threading problem. Django is *not* guaranteed thread-safe although the in-file docs seem to indicate that Django/FCGI can be run that way. Try running with prefork and then beat the crap out of the server. If the problem goes away ...
Have you considered downgrading to Python 2.5.x (2.5.4 specifically)? I don't think Django would be considered mature on Python 2.6 since there are some backwards incompatible changes. However, I doubt this will fix your problem. Also, Django 1.0.2 fixed some nefarious little bugs so make sure you're running that. This very well could fix your problem.
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googleing, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look for? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = 
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
In the end I switched back to Apache + mod\_python (I was having other random errors with fcgi, besides this one) and everything is good and stable now. The question still remains open. In case anybody has this problem in the future and solves it they can record the solution here for future reference. :)
Smells like a possible threading problem. Django is *not* guaranteed thread-safe although the in-file docs seem to indicate that Django/FCGI can be run that way. Try running with prefork and then beat the crap out of the server. If the problem goes away ...
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googleing, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look for? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = 
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
Possible solution: <http://groups.google.com/group/django-users/browse_thread/thread/2c7421cdb9b99e48> > > Until recently I was curious to test > this on Django 1.1.1. Will this > exception be thrown again... surprise, > there it was again. It took me some > time to debug this, helpful hint was > that it only shows when (pre)forking. > So for those who getting randomly > those exceptions, I can say... fix > your code :) Ok.. seriously, there > are always few ways of doing this, so > let me first explain where is a > problem first. If you access database > when any of your modules will import > as, e.g. reading configuration from > database then you will get this error. > When your fastcgi-prefork application > starts, first it imports all modules, > and only after this forks children. > If you have established db connection > during import all children processes > will have an exact copy of that > object. This connection is being > closed at the end of request phase > (request\_finished signal). So first > child which will be called to process > your request, will close this > connection. But what will happen to > the rest of the child processes? They > will believe that they have open and > presumably working connection to the > db, so any db operation will cause an > exception. Why this is not showing in > threaded execution model? I suppose > because threads are using same object > and know when any other thread is > closing connection. How to fix this? > Best way is to fix your code... but > this can be difficult sometimes. > Other option, in my opinion quite > clean, is to write somewhere in your > application small piece of code: > > > ``` from django.db import connection from django.core import signals def close_connection(**kwargs): connection.close() signals.request_started.connect(close_connection) ``` Not ideal though; connecting twice to the DB is a workaround at best. 
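To see the mechanics of that workaround without a database, here is a toy sketch (the `Signal` and `FakeConnection` classes below are made up for illustration — they are not Django's) showing why closing on `request_started` leaves each worker reconnecting freshly:

```python
# Illustrative only: a minimal stand-in for Django's signal dispatcher and
# connection wrapper, demonstrating the close-on-request-start workaround.

class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, **kwargs):
        for receiver in self._receivers:
            receiver(**kwargs)

class FakeConnection:
    def __init__(self):
        self.closed = False  # pretend it was opened before the fork

    def close(self):
        self.closed = True

    def ensure_open(self):
        if self.closed:      # lazily reconnect, like Django's wrapper does
            self.closed = False

request_started = Signal()
connection = FakeConnection()

# the workaround: throw away any inherited connection when a request begins
def close_connection(**kwargs):
    connection.close()

request_started.connect(close_connection)

def handle_request():
    request_started.send(sender=None)
    connection.ensure_open()  # the first query triggers a brand-new connection
    return not connection.closed

print(handle_request())  # True: every request starts from a clean connection
```

The cost is exactly what the answer notes: one throwaway reconnect per request.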
--- Possible solution: using connection pooling (pgpool, pgbouncer), so you have DB connections pooled and stable, and handed fast to your FCGI daemons. The problem is that this triggers another bug, psycopg2 raising an *InterfaceError* because it's trying to disconnect twice (pgbouncer already handled this). Now the culprit is Django signal *request\_finished* triggering *connection.close()*, and failing loud even if it was already disconnected. I don't think this behavior is desired, as if the request already finished, we don't care about the DB connection anymore. A patch for correcting this should be simple. The relevant traceback: ``` /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/core/handlers/wsgi.py in __call__(self=<django.core.handlers.wsgi.WSGIHandler object at 0x24fb210>, environ={'AUTH_TYPE': 'Basic', 'DOCUMENT_ROOT': '/storage/test', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTPS': 'off', 'HTTP_ACCEPT': 'application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_AUTHORIZATION': 'Basic dGVzdGU6c3VjZXNzbw==', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_COOKIE': '__utma=175602209.1371964931.1269354495.126938948...none); sessionid=a1990f0d8d32c78a285489586c510e8c', 'HTTP_HOST': 'www.rede-colibri.com', ...}, start_response=<function start_response at 0x24f87d0>) 246 response = self.apply_response_fixes(request, response) 247 finally: 248 signals.request_finished.send(sender=self.__class__) 249 250 try: global signals = <module 'django.core.signals' from '/usr/local/l.../Django-1.1.1-py2.6.egg/django/core/signals.pyc'>, signals.request_finished = <django.dispatch.dispatcher.Signal object at 0x1975710>, signals.request_finished.send = <bound method Signal.send of <django.dispatch.dispatcher.Signal object at 0x1975710>>, sender undefined, self = <django.core.handlers.wsgi.WSGIHandler object at 0x24fb210>, self.__class__ = <class 'django.core.handlers.wsgi.WSGIHandler'> 
/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/dispatch/dispatcher.py in send(self=<django.dispatch.dispatcher.Signal object at 0x1975710>, sender=<class 'django.core.handlers.wsgi.WSGIHandler'>, **named={}) 164 165 for receiver in self._live_receivers(_make_id(sender)): 166 response = receiver(signal=self, sender=sender, **named) 167 responses.append((receiver, response)) 168 return responses response undefined, receiver = <function close_connection at 0x197b050>, signal undefined, self = <django.dispatch.dispatcher.Signal object at 0x1975710>, sender = <class 'django.core.handlers.wsgi.WSGIHandler'>, named = {} /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/__init__.py in close_connection(**kwargs={'sender': <class 'django.core.handlers.wsgi.WSGIHandler'>, 'signal': <django.dispatch.dispatcher.Signal object at 0x1975710>}) 63 # when a Django request is finished. 64 def close_connection(**kwargs): 65 connection.close() 66 signals.request_finished.connect(close_connection) 67 global connection = <django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>, connection.close = <bound method DatabaseWrapper.close of <django.d...ycopg2.base.DatabaseWrapper object at 0x17b14c8>> /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/__init__.py in close(self=<django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>) 74 def close(self): 75 if self.connection is not None: 76 self.connection.close() 77 self.connection = None 78 self = <django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>, self.connection = <connection object at 0x1f80870; dsn: 'dbname=co...st=127.0.0.1 port=6432 user=postgres', closed: 2>, self.connection.close = <built-in method close of psycopg2._psycopg.connection object at 0x1f80870> ``` Exception handling here could add more leniency: 
**/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/\_\_init\_\_.py** ``` 63 # when a Django request is finished. 64 def close_connection(**kwargs): 65 connection.close() 66 signals.request_finished.connect(close_connection) ``` Or it could be handled better on psycopg2, so to not throw fatal errors if all we're trying to do is disconnect and it already is: **/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/\_\_init\_\_.py** ``` 74 def close(self): 75 if self.connection is not None: 76 self.connection.close() 77 self.connection = None ``` Other than that, I'm short on ideas.
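The leniency being asked for can be sketched in isolation (with a stand-in `Conn` class, not psycopg2 itself): swallow the error when the connection is already gone, since at request end a dead connection is exactly the desired state.

```python
# Illustrative sketch: tolerate close() on an already-closed connection.

class AlreadyClosed(Exception):
    """Stand-in for psycopg2's InterfaceError on double disconnect."""

class Conn:
    def __init__(self):
        self._open = True

    def close(self):
        if not self._open:
            raise AlreadyClosed("connection already closed")
        self._open = False

def close_connection_leniently(conn):
    try:
        conn.close()
    except AlreadyClosed:
        pass  # the request is over; "already closed" is not a failure here

conn = Conn()
conn.close()                      # e.g. the pooler already dropped it
close_connection_leniently(conn)  # no exception escapes
print("ok")
```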
Maybe the PYTHONPATH and PATH environment variables are different between the two setups (Apache + mod\_python and Lighttpd + FastCGI).
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googleing, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look for? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = 
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
In the end I switched back to Apache + mod\_python (I was having other random errors with fcgi, besides this one) and everything is good and stable now. The question still remains open. In case anybody has this problem in the future and solves it they can record the solution here for future reference. :)
The problem could mainly be with imports. At least that's what happened to me. I wrote my own solution after finding nothing on the web. Please check my blog post here: [Simple Python Utility to check all Imports in your project](http://nandakishore.posterous.com/simple-djangopython-utility-to-check-all-the) Of course, this will only help you get to the source of the original issue quickly; it is not the actual solution to your problem by itself.
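For reference, a checker along those lines can be sketched in a few lines with the standard library (a hypothetical reconstruction, not the utility from the blog post): walk a package and try importing every submodule, collecting any failures.

```python
import importlib
import pkgutil

def check_imports(package_name):
    """Try importing every module under `package_name`; return {name: error}."""
    failures = {}
    package = importlib.import_module(package_name)
    for info in pkgutil.walk_packages(package.__path__, package_name + "."):
        try:
            importlib.import_module(info.name)
        except Exception as exc:           # record the failure, keep walking
            failures[info.name] = repr(exc)
    return failures

print(check_imports("json"))  # {} when every submodule imports cleanly
```

Running it against your Django project package would surface any module that touches the database (or anything else fragile) at import time.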
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googleing, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look for? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = 
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
In the end I switched back to Apache + mod\_python (I was having other random errors with fcgi, besides this one) and everything is good and stable now. The question still remains open. In case anybody has this problem in the future and solves it they can record the solution here for future reference. :)
An applicable quote: `"2019 anyone?" - half of YouTube comments, circa 2019` If anyone is still dealing with this, make sure your app is "eagerly forking" such that your Python DB driver (`psycopg2` for me) isn't sharing resources between processes. I solved this issue on uWSGI by adding the `lazy-apps = true` option, which causes it to fork app processes right out of the gate, rather than waiting for copy-on-write. I imagine other WSGI / FastCGI hosts have similar options.
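The effect of forking first can be shown in plain Python (a POSIX-only sketch with a dictionary standing in for a real DB connection — not uWSGI code): because the child creates its connection after the fork, no protocol state is inherited from the parent.

```python
import os

def make_connection():
    # stand-in for something like psycopg2.connect(...), tagged with its owner
    return {"owner_pid": os.getpid()}

def forked_owner_check():
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:                          # child: connect *after* the fork
        conn = make_connection()
        ok = conn["owner_pid"] == os.getpid()
        os.write(write_fd, b"1" if ok else b"0")
        os._exit(0)
    os.waitpid(pid, 0)                    # parent: wait, then read the verdict
    os.close(write_fd)
    result = os.read(read_fd, 1) == b"1"
    os.close(read_fd)
    return result

print(forked_owner_check())  # True on POSIX: each worker owns its connection
```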
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googleing, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look for? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = 
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
In the end I switched back to Apache + mod\_python (I was having other random errors with fcgi, besides this one) and everything is good and stable now. The question still remains open. In case anybody has this problem in the future and solves it they can record the solution here for future reference. :)
Changing from method=prefork to method=threaded solved the problem for me.
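That matches the earlier explanation: threads all reference the same connection object, while prefork workers each hold a private stale copy. A tiny illustration of the shared-object half (no database involved):

```python
import threading

connection = {"open": True}   # one shared object, as in the threaded model

seen_ids = []

def worker():
    seen_ids.append(id(connection))  # every thread sees the very same object

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(seen_ids)))  # 1: threads share state; forked workers would not
```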
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googleing, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look for? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = 
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
An applicable quote: `"2019 anyone?" - half of YouTube comments, circa 2019` If anyone is still dealing with this, make sure your app is "eagerly forking" such that your Python DB driver (`psycopg2` for me) isn't sharing resources between processes. I solved this issue on uWSGI by adding the `lazy-apps = true` option, which causes it to fork app processes right out of the gate, rather than waiting for copy-on-write. I imagine other WSGI / FastCGI hosts have similar options.
Have you considered downgrading to Python 2.5.x (2.5.4 specifically)? I don't think Django would be considered mature on Python 2.6 since there are some backwards incompatible changes. However, I doubt this will fix your problem. Also, Django 1.0.2 fixed some nefarious little bugs so make sure you're running that. This very well could fix your problem.
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googleing, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look for? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = 
len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
In the end I switched back to Apache + mod\_python (I was having other random errors with fcgi, besides this one) and everything is good and stable now. The question still remains open. In case anybody has this problem in the future and solves it they can record the solution here for future reference. :)
Maybe the PYTHONPATH and PATH environment variables are different between the two setups (Apache + mod\_python and Lighttpd + FastCGI).
43,893,431
I am new to Python (version 3.4) and I am wondering how I can make code similar to this one: ``` #block letters B1 = ("BBBB ") B2 = ("B B ") B3 = ("B B ") B4 = ("BBBB ") B5 = ("B B ") B6 = ("B B ") B7 = ("BBBB ") B = [B1, B2, B3, B4, B5, B6, B7] E1 = ("EEEEE ") E2 = ("E ") E3 = ("E ") E4 = ("EEEEE ") E5 = ("E ") E6 = ("E ") E7 = ("EEEEE ") E = [E1, E2, E3, E4, E5, E6, E7] N1 = ("N N") N2 = ("NN N") N3 = ("N N N") N4 = ("N N N") N5 = ("N N N") N6 = ("N NN") N7 = ("N N") N = [N1, N2, N3, N4, N5, N6, N7] for i in range(7): print(B[i], E[i], N[i]) ``` The output of my current code looks like this: ``` BBBB EEEEE N N B B E NN N B B E N N N BBBB EEEEE N N N B B E N N N B B E N NN BBBB EEEEE N N ``` But I want to know how to make one that can take user input and print it in the style above. I have been trying for a few hours and can't come up with a solution; it would be great to see how other people could do/have done it. I think it becomes a lot harder when the letters do not fit on the screen, so I only want to be able to print 10 letters. Thanks
2017/05/10
[ "https://Stackoverflow.com/questions/43893431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7991835/" ]
> > **Assumption**: you have **all** the letters constructed and that **all letters have the same number of rows**. > > > In that case you can **construct a dictionary**, like: ``` ascii_art = { 'B': B, 'E': E, 'N': N } ``` of course in real life, you construct a dictionary with all letters, and perhaps spaces, digits, etc. Now you can take an string as input with: ``` text = input('Enter text? ') ``` Next we map the string onto an iterable of letters: ``` chars = map(ascii_art.get,text) ``` and finally we put these into a zip and print that: ``` for d in zip(*chars): print(*d) ``` Or putting it all together: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? ') chars = map(ascii_art.get,text) for d in zip(*chars): print(*d) ``` In case you want to **limit** the output to 10 chars per line, you can alter the code to: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? ') for i in range(0,len(text),10): chars = map(ascii_art.get,text[i:i+10]) for d in zip(*chars): print(*d) ``` This results into: ``` Enter text? BEBEBEBBEBEENNNENNNN BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB B B E B B E B B E B B B B E B B B B E B B E B B E B B B B E B B BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB B B E B B E B B E B B B B E B B B B E B B E B B E B B B B E B B BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB EEEEE EEEEE N N N N N N EEEEE N N N N N N N N E E NN N NN N NN N E NN N NN N NN N NN N E E N N N N N N N N N E N N N N N N N N N N N N EEEEE EEEEE N N N N N N N N N EEEEE N N N N N N N N N N N N E E N N N N N N N N N E N N N N N N N N N N N N E E N NN N NN N NN E N NN N NN N NN N NN EEEEE EEEEE N N N N N N EEEEE N N N N N N N N ``` We can add an empty line per row, by adding a single extra statement: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? 
') for i in range(0,len(text),10): chars = map(ascii_art.get,text[i:i+10]) for d in zip(*chars): print(*d) **print()** ``` this generates: ``` Enter text? BBBEEEEEEENNNNN BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE B B B B B B E E E E E E E B B B B B B E E E E E E E BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE B B B B B B E E E E E E E B B B B B B E E E E E E E BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE N N N N N N N N N N NN N NN N NN N NN N NN N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N NN N NN N NN N NN N NN N N N N N N N N N N ```
First you'd have to manually make the alphabet as you did before, ``` N1 = ("N N") N2 = ("NN N") N3 = ("N N N") N4 = ("N N N") N5 = ("N N N") N6 = ("N NN") N7 = ("N N") N = [N1, N2, N3, N4, N5, N6, N7] ``` Do that for each letter [a-z]. ``` # To let user input print your alphabet we will use a dictionary: # the key is the letter and the value is the printable list d = {'a':A,'b':B, ... , 'z':Z } # Let's ask for user input: line = input('What do you want to print> ') # Now let's print what the user said in our alphabet: # iterate through the input and print it sentence = map(d.get,line) for letter in zip(*sentence): print(*letter) ```
43,893,431
I am new to python(version 3.4.) and I am wondering how I can make a code similar to this one: ``` #block letters B1 = ("BBBB ") B2 = ("B B ") B3 = ("B B ") B4 = ("BBBB ") B5 = ("B B ") B6 = ("B B ") B7 = ("BBBB ") B = [B1, B2, B3, B4, B5, B6, B7] E1 = ("EEEEE ") E2 = ("E ") E3 = ("E ") E4 = ("EEEEE ") E5 = ("E ") E6 = ("E ") E7 = ("EEEEE ") E = [E1, E2, E3, E4, E5, E6, E7] N1 = ("N N") N2 = ("NN N") N3 = ("N N N") N4 = ("N N N") N5 = ("N N N") N6 = ("N NN") N7 = ("N N") N = [N1, N2, N3, N4, N5, N6, N7] for i in range(7): print(B[i], E[i], N[i]) ``` The output of my current code looks like this: ``` BBBB EEEEE N N B B E NN N B B E N N N BBBB EEEEE N N N B B E N N N B B E N NN BBBB EEEEE N N ``` But I want to know how to make one that can take user input and print it in the style above. I have been trying for a few hours and can't come up with a solution, it would be great to see how other people could do/have done it. I think it becomes a lot harder when ther letters do not fit on the screen, so I only want to be able to print 10 letters. Thanks
2017/05/10
[ "https://Stackoverflow.com/questions/43893431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7991835/" ]
> > **Assumption**: you have **all** the letters constructed and that **all letters have the same number of rows**. > > > In that case you can **construct a dictionary**, like: ``` ascii_art = { 'B': B, 'E': E, 'N': N } ``` of course in real life, you construct a dictionary with all letters, and perhaps spaces, digits, etc. Now you can take an string as input with: ``` text = input('Enter text? ') ``` Next we map the string onto an iterable of letters: ``` chars = map(ascii_art.get,text) ``` and finally we put these into a zip and print that: ``` for d in zip(*chars): print(*d) ``` Or putting it all together: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? ') chars = map(ascii_art.get,text) for d in zip(*chars): print(*d) ``` In case you want to **limit** the output to 10 chars per line, you can alter the code to: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? ') for i in range(0,len(text),10): chars = map(ascii_art.get,text[i:i+10]) for d in zip(*chars): print(*d) ``` This results into: ``` Enter text? BEBEBEBBEBEENNNENNNN BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB B B E B B E B B E B B B B E B B B B E B B E B B E B B B B E B B BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB B B E B B E B B E B B B B E B B B B E B B E B B E B B B B E B B BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB EEEEE EEEEE N N N N N N EEEEE N N N N N N N N E E NN N NN N NN N E NN N NN N NN N NN N E E N N N N N N N N N E N N N N N N N N N N N N EEEEE EEEEE N N N N N N N N N EEEEE N N N N N N N N N N N N E E N N N N N N N N N E N N N N N N N N N N N N E E N NN N NN N NN E N NN N NN N NN N NN EEEEE EEEEE N N N N N N EEEEE N N N N N N N N ``` We can add an empty line per row, by adding a single extra statement: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? 
') for i in range(0,len(text),10): chars = map(ascii_art.get,text[i:i+10]) for d in zip(*chars): print(*d) **print()** ``` this generates: ``` Enter text? BBBEEEEEEENNNNN BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE B B B B B B E E E E E E E B B B B B B E E E E E E E BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE B B B B B B E E E E E E E B B B B B B E E E E E E E BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE N N N N N N N N N N NN N NN N NN N NN N NN N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N NN N NN N NN N NN N NN N N N N N N N N N N ```
Without getting super sophisticated, you need to hardcore (i.e. statically define) each letter as a list of strings, similar to how you did it with B, E and N. Then you build a dictionary that maps each letter to the corresponding list: ``` >>> letters = { ... 'b': ["BBBB ", "B B ", "B B ", "BBBB ", "B B ", "B B ", "BBBB "], ... 'e': ["EEEEE ", "E ", "E ", "EEEEE ", "E ", "E ", "EEEEE "], ... 'n': ["N N", "NN N", "N N N", "N N N", "N N N", "N NN", "N N"] ... } ``` This example only contains the definitions for the letters b, e and n, you need to add the definitions of all the others yourself. Make sure all the lists have the same length. Once you are done with that, you can use this dictionary to display any sequence of letters you get from user input. Demo: ``` >>> name = raw_input('enter your name: ') enter your name: Ben >>> >>> for row in zip(*[letters[x.lower()] for x in name]): ... print ''.join(row) ... BBBB EEEEE N N B B E NN N B B E N N N BBBB EEEEE N N N B B E N N N B B E N NN BBBB EEEEE N N ```
43,893,431
I am new to python(version 3.4.) and I am wondering how I can make a code similar to this one: ``` #block letters B1 = ("BBBB ") B2 = ("B B ") B3 = ("B B ") B4 = ("BBBB ") B5 = ("B B ") B6 = ("B B ") B7 = ("BBBB ") B = [B1, B2, B3, B4, B5, B6, B7] E1 = ("EEEEE ") E2 = ("E ") E3 = ("E ") E4 = ("EEEEE ") E5 = ("E ") E6 = ("E ") E7 = ("EEEEE ") E = [E1, E2, E3, E4, E5, E6, E7] N1 = ("N N") N2 = ("NN N") N3 = ("N N N") N4 = ("N N N") N5 = ("N N N") N6 = ("N NN") N7 = ("N N") N = [N1, N2, N3, N4, N5, N6, N7] for i in range(7): print(B[i], E[i], N[i]) ``` The output of my current code looks like this: ``` BBBB EEEEE N N B B E NN N B B E N N N BBBB EEEEE N N N B B E N N N B B E N NN BBBB EEEEE N N ``` But I want to know how to make one that can take user input and print it in the style above. I have been trying for a few hours and can't come up with a solution, it would be great to see how other people could do/have done it. I think it becomes a lot harder when ther letters do not fit on the screen, so I only want to be able to print 10 letters. Thanks
2017/05/10
[ "https://Stackoverflow.com/questions/43893431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7991835/" ]
> > **Assumption**: you have **all** the letters constructed and that **all letters have the same number of rows**. > > > In that case you can **construct a dictionary**, like: ``` ascii_art = { 'B': B, 'E': E, 'N': N } ``` of course in real life, you construct a dictionary with all letters, and perhaps spaces, digits, etc. Now you can take an string as input with: ``` text = input('Enter text? ') ``` Next we map the string onto an iterable of letters: ``` chars = map(ascii_art.get,text) ``` and finally we put these into a zip and print that: ``` for d in zip(*chars): print(*d) ``` Or putting it all together: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? ') chars = map(ascii_art.get,text) for d in zip(*chars): print(*d) ``` In case you want to **limit** the output to 10 chars per line, you can alter the code to: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? ') for i in range(0,len(text),10): chars = map(ascii_art.get,text[i:i+10]) for d in zip(*chars): print(*d) ``` This results into: ``` Enter text? BEBEBEBBEBEENNNENNNN BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB B B E B B E B B E B B B B E B B B B E B B E B B E B B B B E B B BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB B B E B B E B B E B B B B E B B B B E B B E B B E B B B B E B B BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB EEEEE EEEEE N N N N N N EEEEE N N N N N N N N E E NN N NN N NN N E NN N NN N NN N NN N E E N N N N N N N N N E N N N N N N N N N N N N EEEEE EEEEE N N N N N N N N N EEEEE N N N N N N N N N N N N E E N N N N N N N N N E N N N N N N N N N N N N E E N NN N NN N NN E N NN N NN N NN N NN EEEEE EEEEE N N N N N N EEEEE N N N N N N N N ``` We can add an empty line per row, by adding a single extra statement: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? 
') for i in range(0,len(text),10): chars = map(ascii_art.get,text[i:i+10]) for d in zip(*chars): print(*d) **print()** ``` this generates: ``` Enter text? BBBEEEEEEENNNNN BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE B B B B B B E E E E E E E B B B B B B E E E E E E E BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE B B B B B B E E E E E E E B B B B B B E E E E E E E BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE N N N N N N N N N N NN N NN N NN N NN N NN N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N NN N NN N NN N NN N NN N N N N N N N N N N ```
I have found a less complicated (for me) answer. I don't understand how to use zip or map, but this way works without them, and I can understand what the code is doing. ``` A =[" A ", #CREATING LIST OF LETTERS " A A ", " A A ", "AAAAAAA ", "A A ", "A A ", "A A "] ``` Create the alphabet in this manner, then: ``` #PUTTING LETTERS IN A DICT Alphabet = {"A":A,"B": B, "C": C, "D":D, "E":E ,"F":F, "N": N} #ASK USER FOR INPUT text = input('Enter text(Only takes A, B, C, D, E, F, N)? ') #INPUT IN UPPERCASE text = text.upper() for i in range(7): for letter in text: temp = Alphabet[letter] print(temp[i],end="") print('') ```
43,893,431
I am new to python(version 3.4.) and I am wondering how I can make a code similar to this one: ``` #block letters B1 = ("BBBB ") B2 = ("B B ") B3 = ("B B ") B4 = ("BBBB ") B5 = ("B B ") B6 = ("B B ") B7 = ("BBBB ") B = [B1, B2, B3, B4, B5, B6, B7] E1 = ("EEEEE ") E2 = ("E ") E3 = ("E ") E4 = ("EEEEE ") E5 = ("E ") E6 = ("E ") E7 = ("EEEEE ") E = [E1, E2, E3, E4, E5, E6, E7] N1 = ("N N") N2 = ("NN N") N3 = ("N N N") N4 = ("N N N") N5 = ("N N N") N6 = ("N NN") N7 = ("N N") N = [N1, N2, N3, N4, N5, N6, N7] for i in range(7): print(B[i], E[i], N[i]) ``` The output of my current code looks like this: ``` BBBB EEEEE N N B B E NN N B B E N N N BBBB EEEEE N N N B B E N N N B B E N NN BBBB EEEEE N N ``` But I want to know how to make one that can take user input and print it in the style above. I have been trying for a few hours and can't come up with a solution, it would be great to see how other people could do/have done it. I think it becomes a lot harder when ther letters do not fit on the screen, so I only want to be able to print 10 letters. Thanks
2017/05/10
[ "https://Stackoverflow.com/questions/43893431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7991835/" ]
First you'd have to manually make the alphabet as you did before, ``` N1 = ("N N") N2 = ("NN N") N3 = ("N N N") N4 = ("N N N") N5 = ("N N N") N6 = ("N NN") N7 = ("N N") N = [N1, N2, N3, N4, N5, N6, N7] ``` Do that for each letter [a-z]. ``` # To let user input print your alphabet we will use a dictionary: # the key is the letter and the value is the printable list d = {'a':A,'b':B, ... , 'z':Z } # Let's ask for user input: line = input('What do you want to print> ') # Now let's print what the user said in our alphabet: # iterate through the input and print it sentence = map(d.get,line) for letter in zip(*sentence): print(*letter) ```
I have found a less complicated (for me) answer. I don't understand how to use zip or map, but this way works without them, and I can understand what the code is doing. ``` A =[" A ", #CREATING LIST OF LETTERS " A A ", " A A ", "AAAAAAA ", "A A ", "A A ", "A A "] ``` Create the alphabet in this manner, then: ``` #PUTTING LETTERS IN A DICT Alphabet = {"A":A,"B": B, "C": C, "D":D, "E":E ,"F":F, "N": N} #ASK USER FOR INPUT text = input('Enter text(Only takes A, B, C, D, E, F, N)? ') #INPUT IN UPPERCASE text = text.upper() for i in range(7): for letter in text: temp = Alphabet[letter] print(temp[i],end="") print('') ```
43,893,431
I am new to python(version 3.4.) and I am wondering how I can make a code similar to this one: ``` #block letters B1 = ("BBBB ") B2 = ("B B ") B3 = ("B B ") B4 = ("BBBB ") B5 = ("B B ") B6 = ("B B ") B7 = ("BBBB ") B = [B1, B2, B3, B4, B5, B6, B7] E1 = ("EEEEE ") E2 = ("E ") E3 = ("E ") E4 = ("EEEEE ") E5 = ("E ") E6 = ("E ") E7 = ("EEEEE ") E = [E1, E2, E3, E4, E5, E6, E7] N1 = ("N N") N2 = ("NN N") N3 = ("N N N") N4 = ("N N N") N5 = ("N N N") N6 = ("N NN") N7 = ("N N") N = [N1, N2, N3, N4, N5, N6, N7] for i in range(7): print(B[i], E[i], N[i]) ``` The output of my current code looks like this: ``` BBBB EEEEE N N B B E NN N B B E N N N BBBB EEEEE N N N B B E N N N B B E N NN BBBB EEEEE N N ``` But I want to know how to make one that can take user input and print it in the style above. I have been trying for a few hours and can't come up with a solution, it would be great to see how other people could do/have done it. I think it becomes a lot harder when ther letters do not fit on the screen, so I only want to be able to print 10 letters. Thanks
2017/05/10
[ "https://Stackoverflow.com/questions/43893431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7991835/" ]
Without getting super sophisticated, you need to hardcore (i.e. statically define) each letter as a list of strings, similar to how you did it with B, E and N. Then you build a dictionary that maps each letter to the corresponding list: ``` >>> letters = { ... 'b': ["BBBB ", "B B ", "B B ", "BBBB ", "B B ", "B B ", "BBBB "], ... 'e': ["EEEEE ", "E ", "E ", "EEEEE ", "E ", "E ", "EEEEE "], ... 'n': ["N N", "NN N", "N N N", "N N N", "N N N", "N NN", "N N"] ... } ``` This example only contains the definitions for the letters b, e and n, you need to add the definitions of all the others yourself. Make sure all the lists have the same length. Once you are done with that, you can use this dictionary to display any sequence of letters you get from user input. Demo: ``` >>> name = raw_input('enter your name: ') enter your name: Ben >>> >>> for row in zip(*[letters[x.lower()] for x in name]): ... print ''.join(row) ... BBBB EEEEE N N B B E NN N B B E N N N BBBB EEEEE N N N B B E N N N B B E N NN BBBB EEEEE N N ```
I have found a less complicated (for me) answer. I don't understand how to use zip or map, but this way works without them, and I can understand what the code is doing. ``` A =[" A ", #CREATING LIST OF LETTERS " A A ", " A A ", "AAAAAAA ", "A A ", "A A ", "A A "] ``` Create the alphabet in this manner, then: ``` #PUTTING LETTERS IN A DICT Alphabet = {"A":A,"B": B, "C": C, "D":D, "E":E ,"F":F, "N": N} #ASK USER FOR INPUT text = input('Enter text(Only takes A, B, C, D, E, F, N)? ') #INPUT IN UPPERCASE text = text.upper() for i in range(7): for letter in text: temp = Alphabet[letter] print(temp[i],end="") print('') ```
1,839,567
I have a vector consisting of a point, speed and direction. We will call this vector R. And another vector that only consists of a point and a speed. No direction. We will call this one T. Now, what I am trying to do is to find the shortest intersection point of these two vectors. Since T has no direction, this is proving to be difficult. I was able to create a formula that works in CaRMetal but I can not get it working in python. Can someone suggest a more efficient way to solve this problem? Or solve my existing formula for X? Formula: [![Formula](https://i.stack.imgur.com/kGd2H.png)](https://i.stack.imgur.com/kGd2H.png) (source: [bja888.com](http://storage.bja888.com/formula.png)) Key: [![Definitions](https://i.stack.imgur.com/1svrA.png)](https://i.stack.imgur.com/1svrA.png) (source: [bja888.com](http://storage.bja888.com/keys.png)) Where o or k is the speed difference between vectors. R.speed / T.speed
2009/12/03
[ "https://Stackoverflow.com/questions/1839567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/223779/" ]
My math could be a bit rusty, but try this: *p* and *q* are the position vectors, *d* and *e* are the direction vectors. After time *t*, you want them to be at the same place: **(1)** *p+t\*d = q+t\*e* Since you want the direction vector *e*, write it like this **(2)** *e = (p-q)/t + d* Now you don't need the time *t*, which you can calculate using your speed constraint *s* (otherwise you could just travel to the other point directly): The direction vector *e* has to be of the length *s*, so **(3)** *e1^2 + e2^2 = s^2* After some equation solving you end up with **(4)** **I)** *a = sum((p-q)^2)/(s^2-sum(d^2))* **II)** *b = 2\*sum(d\*(p-q))/(s^2-sum(d^2))* **III)** *c = -1* **IV)** *a + b\*t + c\*t^2 = 0* The *sum* goes over your vector components (2 in 2d, 3 in 3d) The last one is a quadratic formula which you should be able to solve on your own ;-)
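Not part of the original answer: a small 2-D numeric sketch of the derivation above (the function name is my own), solving IV) for *t* and then recovering *e* from (2).

```python
import math

def intercept(p, q, d, s):
    """Sketch of the derivation above (2-D).

    p: target start, q: chaser start, d: target velocity,
    s: chaser speed. Returns (t, e) -- time to intercept and the
    chaser's velocity vector e -- or None if no intercept exists.
    """
    pq = (p[0] - q[0], p[1] - q[1])                # p - q
    denom = s * s - (d[0] ** 2 + d[1] ** 2)        # s^2 - sum(d^2)
    if denom == 0:
        return None                                # equal speeds: quadratic degenerates (not handled)
    a = (pq[0] ** 2 + pq[1] ** 2) / denom          # I)
    b = 2 * (d[0] * pq[0] + d[1] * pq[1]) / denom  # II)
    disc = b * b + 4 * a                           # IV) rearranged: t^2 - b*t - a = 0
    if disc < 0:
        return None
    t = (b + math.sqrt(disc)) / 2
    if t <= 0:
        return None
    e = (pq[0] / t + d[0], pq[1] / t + d[1])       # (2)
    return t, e
```

For example, a chaser at the origin with speed 2 and a target at (10, 0) fleeing along +x at speed 1 gives t = 10 and e = (2, 0), matching the closing speed of 2 - 1 = 1 over a gap of 10.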
1. Let's assume that the first point, A, has zero speed. In this case, it should be very simple to find the direction which will give the fastest intersection. 2. Now, A **does** have a speed. We can force it to have zero speed by subtracting its speed vector from the vector of B. Now we can solve as we did in 1. Just a rough idea that came to mind... **Some more thoughts:** If A is standing still, then the direction B needs to travel in is directly towards A. This gives us the direction in the coordinate system in which A is standing still. Let's call it d. Now we only need to convert the direction B needs to travel from the coordinate system in which A is still to the coordinate system in which A is moving at the given speed and direction, d2. This is simply vector addition. d3 = d - d2 We can now find the direction of d3. **And a bit more formal:** *A is stationary*: Sb = speed of B, known, scalar alpha = atan2( a\_y-b\_y, a\_x-b\_x ) Vb\_x = Sb \* cos(alpha) Vb\_y = Sb \* sin(alpha) *A moves at speed Sa, direction beta*: Vb\_x' = Sb \* cos(alpha) + Sa \* cos(beta) Vb\_y' = Sb \* sin(alpha) + Sa \* sin(beta) alpha' = atan2( Vb\_y', Vb\_x' ) Haven't tested the above, but it looks reasonable at first glance...
1,839,567
I have a vector consisting of a point, speed and direction. We will call this vector R. And another vector that only consists of a point and a speed. No direction. We will call this one T. Now, what I am trying to do is to find the shortest intersection point of these two vectors. Since T has no direction, this is proving to be difficult. I was able to create a formula that works in CaRMetal but I can not get it working in python. Can someone suggest a more efficient way to solve this problem? Or solve my existing formula for X? Formula: [![Formula](https://i.stack.imgur.com/kGd2H.png)](https://i.stack.imgur.com/kGd2H.png) (source: [bja888.com](http://storage.bja888.com/formula.png)) Key: [![Definitions](https://i.stack.imgur.com/1svrA.png)](https://i.stack.imgur.com/1svrA.png) (source: [bja888.com](http://storage.bja888.com/keys.png)) Where o or k is the speed difference between vectors. R.speed / T.speed
2009/12/03
[ "https://Stackoverflow.com/questions/1839567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/223779/" ]
My math could be a bit rusty, but try this: *p* and *q* are the position vectors, *d* and *e* are the direction vectors. After time *t*, you want them to be at the same place: **(1)** *p+t\*d = q+t\*e* Since you want the direction vector *e*, write it like this **(2)** *e = (p-q)/t + d* Now you don't need the time *t*, which you can calculate using your speed constraint *s* (otherwise you could just travel to the other point directly): The direction vector *e* has to be of the length *s*, so **(3)** *e1^2 + e2^2 = s^2* After some equation solving you end up with **(4)** **I)** *a = sum((p-q)^2)/(s^2-sum(d^2))* **II)** *b = 2\*sum(d\*(p-q))/(s^2-sum(d^2))* **III)** *c = -1* **IV)** *a + b\*t + c\*t^2 = 0* The *sum* goes over your vector components (2 in 2d, 3 in 3d) The last one is a quadratic formula which you should be able to solve on your own ;-)
OK, if I understand you right, you have R = [ xy0, v, r ] T = [ xy1, v ] If you are concerned about the shortest intersection point, this will be achieved when your positions are identical, and in a Euclidean space this also forces the direction of the second "thing" to be perpendicular to the first. You can write down the equations for this and solve them easily.
1,839,567
I have a vector consisting of a point, speed and direction. We will call this vector R. And another vector that only consists of a point and a speed. No direction. We will call this one T. Now, what I am trying to do is to find the shortest intersection point of these two vectors. Since T has no direction, this is proving to be difficult. I was able to create a formula that works in CaRMetal but I can not get it working in python. Can someone suggest a more efficient way to solve this problem? Or solve my existing formula for X? Formula: [![Formula](https://i.stack.imgur.com/kGd2H.png)](https://i.stack.imgur.com/kGd2H.png) (source: [bja888.com](http://storage.bja888.com/formula.png)) Key: [![Definitions](https://i.stack.imgur.com/1svrA.png)](https://i.stack.imgur.com/1svrA.png) (source: [bja888.com](http://storage.bja888.com/keys.png)) Where o or k is the speed difference between vectors. R.speed / T.speed
2009/12/03
[ "https://Stackoverflow.com/questions/1839567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/223779/" ]
My math could be a bit rusty, but try this: *p* and *q* are the position vectors, *d* and *e* are the direction vectors. After time *t*, you want them to be at the same place: **(1)** *p+t\*d = q+t\*e* Since you want the direction vector *e*, write it like this **(2)** *e = (p-q)/t + d* Now you don't need the time *t*, which you can calculate using your speed constraint *s* (otherwise you could just travel to the other point directly): The direction vector *e* has to be of the length *s*, so **(3)** *e1^2 + e2^2 = s^2* After some equation solving you end up with **(4)** **I)** *a = sum((p-q)^2)/(s^2-sum(d^2))* **II)** *b = 2\*sum(d\*(p-q))/(s^2-sum(d^2))* **III)** *c = -1* **IV)** *a + b\*t + c\*t^2 = 0* The *sum* goes over your vector components (2 in 2d, 3 in 3d) The last one is a quadratic formula which you should be able to solve on your own ;-)
In nature hunters use the constant bearing decreasing range algorithm to catch prey. I like the explanation of how bats do this [link text](http://www.plosbiology.org/article/info:doi%2F10.1371%2Fjournal.pbio.0040108 "Echolocating Bats Use a Nearly Time-Optimal Strategy to Intercept Prey") We need to define a few more terms. ``` Point A - the position associated with vector R. Point B - the position associated with vector T. Vector AB - the vector from point A to point B Angle beta - the angle between vector R and vector AB. Angle theta - the angle between vector T and vector AB ``` The formula is usually given as ``` theta = asin( |R| * sin(beta) / |T| ) ``` where beta = acos( AB.x*R.x + AB.y*R.y ) (with AB and R normalized). You don't want to use this directly, since asin and acos only return angles in a limited range. ``` beta = atan2( R.y, R.x ) - atan2( AB.y, AB.x ) x = |R| * sin(beta) / |T| y = 1 + sqrt( 1 - x*x ) theta = 2*atan2( x, y ) # half-angle form: tan(theta/2) = sin(theta)/(1+cos(theta)) ``` Of course, if |x| > 1, R is too fast and the intersection doesn't exist.
1,839,567
I have a vector consisting of a point, speed and direction. We will call this vector R. And another vector that only consists of a point and a speed. No direction. We will call this one T. Now, what I am trying to do is to find the shortest intersection point of these two vectors. Since T has no direction, this is proving to be difficult. I was able to create a formula that works in CaRMetal but I can not get it working in python. Can someone suggest a more efficient way to solve this problem? Or solve my existing formula for X? Formula: [![Formula](https://i.stack.imgur.com/kGd2H.png)](https://i.stack.imgur.com/kGd2H.png) (source: [bja888.com](http://storage.bja888.com/formula.png)) Key: [![Definitions](https://i.stack.imgur.com/1svrA.png)](https://i.stack.imgur.com/1svrA.png) (source: [bja888.com](http://storage.bja888.com/keys.png)) Where o or k is the speed difference between vectors. R.speed / T.speed
2009/12/03
[ "https://Stackoverflow.com/questions/1839567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/223779/" ]
1. Let's assume that the first point, A, has zero speed. In this case, it should be very simple to find the direction which will give the fastest intersection. 2. Now, A **does** have a speed. We can force it to have zero speed by subtracting its speed vector from the vector of B. Now we can solve as we did in 1. Just a rough idea that came to mind... **Some more thoughts:** If A is standing still, then the direction B needs to travel in is directly towards A. This gives us the direction in the coordinate system in which A is standing still. Let's call it d. Now we only need to convert the direction B needs to travel from the coordinate system in which A is still to the coordinate system in which A is moving at the given speed and direction, d2. This is simply vector addition. d3 = d - d2 We can now find the direction of d3. **And a bit more formal:** *A is stationary*: Sb = speed of B, known, scalar alpha = atan2( a\_y-b\_y, a\_x-b\_x ) Vb\_x = Sb \* cos(alpha) Vb\_y = Sb \* sin(alpha) *A moves at speed Sa, direction beta*: Vb\_x' = Sb \* cos(alpha) + Sa \* cos(beta) Vb\_y' = Sb \* sin(alpha) + Sa \* sin(beta) alpha' = atan2( Vb\_y', Vb\_x' ) Haven't tested the above, but it looks reasonable at first glance...
OK, if I understand you right, you have R = [ xy0, v, r ] T = [ xy1, v ] If you are concerned about the shortest intersection point, this will be achieved when your positions are identical, and in a Euclidean space this also forces the direction of the second "thing" to be perpendicular to the first. You can write down the equations for this and solve them easily.
1,839,567
I have a vector consisting of a point, speed and direction. We will call this vector R. And another vector that only consists of a point and a speed. No direction. We will call this one T. Now, what I am trying to do is to find the shortest intersection point of these two vectors. Since T has no direction, this is proving to be difficult. I was able to create a formula that works in CaRMetal but I can not get it working in python. Can someone suggest a more efficient way to solve this problem? Or solve my existing formula for X? Formula: [![Formula](https://i.stack.imgur.com/kGd2H.png)](https://i.stack.imgur.com/kGd2H.png) (source: [bja888.com](http://storage.bja888.com/formula.png)) Key: [![Definitions](https://i.stack.imgur.com/1svrA.png)](https://i.stack.imgur.com/1svrA.png) (source: [bja888.com](http://storage.bja888.com/keys.png)) Where o or k is the speed difference between vectors. R.speed / T.speed
2009/12/03
[ "https://Stackoverflow.com/questions/1839567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/223779/" ]
In nature hunters use the constant bearing decreasing range algorithm to catch prey. I like the explanation of how bats do this [link text](http://www.plosbiology.org/article/info:doi%2F10.1371%2Fjournal.pbio.0040108 "Echolocating Bats Use a Nearly Time-Optimal Strategy to Intercept Prey") We need to define a few more terms. ``` Point A - the position associated with vector R. Point B - the position associated with vector T. Vector AB - the vector from point A to point B Angle beta - the angle between vector R and vector AB. Angle theta - the angle between vector T and vector AB ``` The formula is usually given as ``` theta = asin( |R| * sin(beta) / |T| ) ``` where beta = acos( AB.x*R.x + AB.y*R.y ) (with AB and R normalized). You don't want to use this directly, since asin and acos only return angles in a limited range. ``` beta = atan2( R.y, R.x ) - atan2( AB.y, AB.x ) x = |R| * sin(beta) / |T| y = 1 + sqrt( 1 - x*x ) theta = 2*atan2( x, y ) # half-angle form: tan(theta/2) = sin(theta)/(1+cos(theta)) ``` Of course, if |x| > 1, R is too fast and the intersection doesn't exist.
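A sketch of mine (not part of the original answer, hypothetical function name and sign conventions) that applies the same |T|·sin(theta) = |R|·sin(beta) relation by equating the two speed components perpendicular to the line of sight:

```python
import math

def pursuit_heading(a, r_vel, b, t_speed):
    """Heading (radians) for an interceptor at point B, speed t_speed,
    to catch a target at point A moving with constant velocity r_vel.

    Matching perpendicular speed components keeps the bearing constant
    (constant bearing, decreasing range). Returns None when the target
    is too fast (the |x| > 1 case above).
    """
    phi = math.atan2(a[1] - b[1], a[0] - b[0])    # bearing from B to A
    # target's speed component perpendicular to the line of sight
    perp = -r_vel[0] * math.sin(phi) + r_vel[1] * math.cos(phi)
    x = perp / t_speed                             # sin of the lead angle
    if abs(x) > 1:
        return None
    return phi + math.asin(x)
```

For instance, a target at the origin moving straight up at speed 1 and an interceptor at (10, 0) with speed 2 gives a heading of 150 degrees: a 30-degree lead off the direct line, as asin(1·sin 90° / 2) = 30° predicts.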
OK, if I understand you right, you have R = [ xy0, v, r ] T = [ xy1, v ] If you are concerned about the shortest intersection point, this will be achieved when your positions are identical, and in a Euclidean space this also forces the direction of the second "thing" to be perpendicular to the first. You can write down the equations for this and solve them easily.
34,278,955
On the linux system I'm using, the scheduler is not very generous giving cpu time to subprocesses spawned from python's multiprocessing module. When using 4 subprocceses on a 4-core machine, I get around 22% CPU according to `ps`. However, if the subprocesses are child processes of the shell, and not the python program, it goes up to near 100% CPU. But multiprocessing is a much nicer interface than manually splitting my data, and running separate python programs for each split, and it would be nice to get the best of both worlds (code organization and high CPU utilization). I tried setting the processes' niceness to -20, but that didn't help. I'm wondering whether recompiling the linux kernel with some option would help the scheduler give more CPU time to python multiprocessing workers. Maybe there is a relevant configuration option? The exact version I'm using is: ``` $ uname -a Linux <hostname> 3.19.0-39-generic #44~14.04.1-Ubuntu SMP Wed Dec 2 10:00:35 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux ``` In case this might be related to the way I'm using multiprocessing, it is of the form: ``` with Pool(4) as p: p.map(function,data) ``` Update: This is not a reproducible problem. The results reported here were from a few days ago, and I ran the test again and the multiprocessing processes were as fast as I hoped for. Maybe this question should get deleted, it wouldn't be good to mislead people about the performance to expect of `multiprocessing`.
2015/12/15
[ "https://Stackoverflow.com/questions/34278955", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1483516/" ]
I don't believe your benchmarks are executing as independent tasks as you might think they do. You didn't show the code of `function` but I suspect it does some synchronization. I wrote the following benchmark. If I run the script with either the `--fork` or the `--mp` option, I always get 400 % CPU utilization (on my quad core machine) and comparable overall execution time of about 18 seconds. If called with the `--threads` option, however, the program effectively runs sequentially, achieving only about 100 % CPU utilization and taking a minute to complete for the reason [mentioned](https://stackoverflow.com/questions/34278955/operating-system-level-changes-to-speed-up-pythons-multiprocessing/34279460#34279460) by [dave](https://stackoverflow.com/users/450609/dave). ``` import multiprocessing import os import random import sys import threading def find_lucky_number(x): prng = random.Random() prng.seed(x) for i in range(100000000): prng.random() return prng.randint(0, 100) def with_threading(inputs): callback = lambda x : print(find_lucky_number(x)) threads = [threading.Thread(target=callback, args=(x,)) for x in inputs] for t in threads: t.start() for t in threads: t.join() def with_multiprocessing(inputs): with multiprocessing.Pool(len(inputs)) as pool: for y in pool.map(find_lucky_number, inputs): print(y) def with_forking(inputs): pids = list() for x in inputs: pid = os.fork() if pid == 0: print(find_lucky_number(x)) sys.exit(0) else: pids.append(pid) for pid in pids: os.waitpid(pid, 0) if __name__ == '__main__': inputs = [1, 2, 3, 4] if sys.argv[1] == '--threads': with_threading(inputs) if sys.argv[1] == '--mp': with_multiprocessing(inputs) elif sys.argv[1] == '--fork': with_forking(inputs) else: print("What should I do?", file=sys.stderr) sys.exit(1) ```
Welcome to the CPython Global Interpreter Lock. Your threads show up as distinct processes to the linux kernel (that is how threads are implemented in Linux in general: each thread gets its own process so the kernel can schedule them). So why isn't Linux scheduling more than one of them to run at a time (that is why your 4 core machine is averaging around 25% minus a bit of overhead)? The python interpreter is holding a lock while interpreting each thread, thus blocking the other threads from running (so they can't be scheduled). To get around this you can either: 1. Use processes rather than threads (as you mention in your question) 2. Use a different python interpreter that doesn't have a Global Interpreter Lock.
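To illustrate option 1, here is a minimal sketch (my own example, not from the answer) of CPU-bound work dispatched through `multiprocessing.Pool` — each worker is a separate interpreter process with its own GIL, so all four cores can be busy at once:

```python
import multiprocessing

def burn(n):
    # Pure CPU work: threads would serialize on the GIL here,
    # but worker processes each run on their own core.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # The __main__ guard matters: on platforms that spawn rather than
    # fork, the worker processes re-import this module.
    with multiprocessing.Pool(4) as pool:
        print(pool.map(burn, [1_000_000] * 4))
```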
49,037,104
So, I am making a login system in python with tkinter and I want it to move to another page after the email and password have been validated. The only way I have found to do this is by using a button click command. I only want it to move on to the next page after the email and password have been validated. Thanks in advance. ``` from tkinter import * class login: def __init__(self, master, *args, **kwargs): self.emailGranted = False self.passwordGranted = False self.attempts = 8 self.label_email = Label(text="email:", font=('Serif', 13)) self.label_email.grid(row=0, column=0, sticky=E) self.label_password = Label(text="password:", font=('Serif', 13)) self.label_password.grid(row=1, column=0, sticky=E) self.entry_email = Entry(width=30) self.entry_email.grid(row=0, column=1, padx=(3, 10)) self.entry_password = Entry(width=30, show="•") self.entry_password.grid(row=1, column=1, padx=(3, 10)) self.login = Button(text="Login", command=self.validate) self.login.grid(row=2, column=1, sticky=E, padx=(0, 10), pady=(2, 2)) self.label_granted = Label(text="") self.label_granted.grid(row=3, columnspan=3, sticky=N+E+S+W) def validate(self): self.email = self.entry_email.get() self.password = self.entry_password.get() if self.email == "email": self.emailGranted = True else: self.emailGranted = False self.label_granted.config(text="wrong email") self.attempts -= 1 self.entry_email.delete(0, END) if self.attempts == 0: root.destroy() if self.password == "password": self.passwordGranted = True else: self.passwordGranted = False self.label_granted.config(text="wrong password") self.attempts -= 1 self.entry_password.delete(0, END) if self.attempts == 0: root.destroy() if self.emailGranted is False and self.passwordGranted is False: self.label_granted.config(text="wrong email and password") if self.emailGranted is True and self.passwordGranted is True: self.label_granted.config(text="access granted") # I want it to move on to PageOne here but I'm not sure how class PageOne: def 
__init__(self, master, *args, **kwargs): Button(text="it works").grid(row=0, column=0) if __name__ == "__main__": root = Tk() root.resizable(False, False) root.title("login") login(root) root.mainloop() ```
2018/02/28
[ "https://Stackoverflow.com/questions/49037104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7698965/" ]
You can split the string and then `Array.includes` to check whether the value exists in the array or not. ```js function check(str, val){ return str.split(", ").includes(val+""); } var str = "1, 13, 112, 12, 1212, 555" console.log(check(str, 12)); console.log(check(str, 121)); console.log(check(str, 1212)); ```
Another possible answer  : ```js var twelve = /(^| )12(,|$)/; var s = "1, 13, 112, 12, 1212, 555"; console.log(twelve.test(s)); // true ``` About the regular expression ---------------------------- Following your comment, let me give you a little help to understand the first line. `/(^| )12(,|$)/` is a regular expression. A regular expression is a sequence of characters that defines a search pattern. This is one of the features embedded in JavaScript, but it's not inherently linked to JavaScript. In other words, you should not learn regular expressions in the scope of JavaScript, but JavaScript remains a good way to experiment on regular expressions. That being said, what does `/(^| )12(,|$)/` mean ? The two `/` are delimiters indicating the boundaries of the expression. What's in between the `/` is the expression itself, `(^| )12(,|$)`, it describes the pattern we are looking for. We can classify the various characters involved in this expression into two categories : * regular characters (`1`, `2`, `,` and ), * metacharacters (`(`, `)`, `|`, `^` and `$`). Regular characters are characters with no special meaning. Example : ```none /cat/.test("cat") // true /cat/.test("concat") // true ``` Metacharacters are characters with a special meaning : * `^` means "beginning of the text", * `$` means "end of the text", * `()` indicates a subexpression, * `|` indicates a logical OR. Example 1, empty text : ```none /^$/.test("") // true /^$/.test("azerty") // false ``` Example 2, exact match : ```none /^zert$/.test("zert") // true /^zert$/.test("azerty") // false ``` Example 3, alternatives : ```none /(az|qw)erty/.test("azerty") // true /(az|qw)erty/.test("qwerty") // true ``` To wrap it up, let's come back to `/(^| )12(,|$)/` : ```none (^| ) start of the text or " " 12 then "12" (,|$) then "," or end of the text ``` Thus, our pattern matches with strings like `12`, `* 12`, `12,*` or `* 12,*`, where `*` means "zero or more characters". 
Last word, in JavaScript you can declare a regular expression using the `new` keyword : ``` var twelve = new RegExp("(^| )12(,|$)"); ``` This is useful when you need to change some part of the expression dynamically : ``` function newNumberPattern (n) { return new RegExp("(^| )" + n + "(,|$)"); } var eleven = newNumberPattern(11); var twelve = newNumberPattern(12); ``` That's it, I hope this is enlightening :-)
49,037,104
So, I am making a login system in python with tkinter and I want it to move to another page after the email and password have been validated. The only way I have found to do this is by using a button click command. I only want it to move on to the next page after the email and password have been validated. Thanks in advance. ``` from tkinter import * class login: def __init__(self, master, *args, **kwargs): self.emailGranted = False self.passwordGranted = False self.attempts = 8 self.label_email = Label(text="email:", font=('Serif', 13)) self.label_email.grid(row=0, column=0, sticky=E) self.label_password = Label(text="password:", font=('Serif', 13)) self.label_password.grid(row=1, column=0, sticky=E) self.entry_email = Entry(width=30) self.entry_email.grid(row=0, column=1, padx=(3, 10)) self.entry_password = Entry(width=30, show="•") self.entry_password.grid(row=1, column=1, padx=(3, 10)) self.login = Button(text="Login", command=self.validate) self.login.grid(row=2, column=1, sticky=E, padx=(0, 10), pady=(2, 2)) self.label_granted = Label(text="") self.label_granted.grid(row=3, columnspan=3, sticky=N+E+S+W) def validate(self): self.email = self.entry_email.get() self.password = self.entry_password.get() if self.email == "email": self.emailGranted = True else: self.emailGranted = False self.label_granted.config(text="wrong email") self.attempts -= 1 self.entry_email.delete(0, END) if self.attempts == 0: root.destroy() if self.password == "password": self.passwordGranted = True else: self.passwordGranted = False self.label_granted.config(text="wrong password") self.attempts -= 1 self.entry_password.delete(0, END) if self.attempts == 0: root.destroy() if self.emailGranted is False and self.passwordGranted is False: self.label_granted.config(text="wrong email and password") if self.emailGranted is True and self.passwordGranted is True: self.label_granted.config(text="access granted") # I want it to move on to PageOne here but I'm not sure how class PageOne: def 
__init__(self, master, *args, **kwargs): Button(text="it works").grid(row=0, column=0) if __name__ == "__main__": root = Tk() root.resizable(False, False) root.title("login") login(root) root.mainloop() ```
2018/02/28
[ "https://Stackoverflow.com/questions/49037104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7698965/" ]
You can split the string and then `Array.includes` to check whether the value exists in the array or not. ```js function check(str, val){ return str.split(", ").includes(val+""); } var str = "1, 13, 112, 12, 1212, 555" console.log(check(str, 12)); console.log(check(str, 121)); console.log(check(str, 1212)); ```
I think `.includes()` isn't supported in Google Sheets. But this seems to work: ``` /** @customfunction */ function AK_CHECK(input){ var split_cell = input.split(",").indexOf("12"); if (split_cell > -1){ return "Bicycle" } else { return "NO"} } ```
49,037,104
So, I am making a login system in python with tkinter and I want it to move to another page after the email and password have been validated. The only way I have found to do this is by using a button click command. I only want it to move on to the next page after the email and password have been validated. Thanks in advance. ``` from tkinter import * class login: def __init__(self, master, *args, **kwargs): self.emailGranted = False self.passwordGranted = False self.attempts = 8 self.label_email = Label(text="email:", font=('Serif', 13)) self.label_email.grid(row=0, column=0, sticky=E) self.label_password = Label(text="password:", font=('Serif', 13)) self.label_password.grid(row=1, column=0, sticky=E) self.entry_email = Entry(width=30) self.entry_email.grid(row=0, column=1, padx=(3, 10)) self.entry_password = Entry(width=30, show="•") self.entry_password.grid(row=1, column=1, padx=(3, 10)) self.login = Button(text="Login", command=self.validate) self.login.grid(row=2, column=1, sticky=E, padx=(0, 10), pady=(2, 2)) self.label_granted = Label(text="") self.label_granted.grid(row=3, columnspan=3, sticky=N+E+S+W) def validate(self): self.email = self.entry_email.get() self.password = self.entry_password.get() if self.email == "email": self.emailGranted = True else: self.emailGranted = False self.label_granted.config(text="wrong email") self.attempts -= 1 self.entry_email.delete(0, END) if self.attempts == 0: root.destroy() if self.password == "password": self.passwordGranted = True else: self.passwordGranted = False self.label_granted.config(text="wrong password") self.attempts -= 1 self.entry_password.delete(0, END) if self.attempts == 0: root.destroy() if self.emailGranted is False and self.passwordGranted is False: self.label_granted.config(text="wrong email and password") if self.emailGranted is True and self.passwordGranted is True: self.label_granted.config(text="access granted") # I want it to move on to PageOne here but I'm not sure how class PageOne: def 
__init__(self, master, *args, **kwargs): Button(text="it works").grid(row=0, column=0) if __name__ == "__main__": root = Tk() root.resizable(False, False) root.title("login") login(root) root.mainloop() ```
2018/02/28
[ "https://Stackoverflow.com/questions/49037104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7698965/" ]
Another possible answer  : ```js var twelve = /(^| )12(,|$)/; var s = "1, 13, 112, 12, 1212, 555"; console.log(twelve.test(s)); // true ``` About the regular expression ---------------------------- Following your comment, let me give you a little help to understand the first line. `/(^| )12(,|$)/` is a regular expression. A regular expression is a sequence of characters that defines a search pattern. This is one of the features embedded in JavaScript, but it's not inherently linked to JavaScript. In other words, you should not learn regular expressions in the scope of JavaScript, but JavaScript remains a good way to experiment on regular expressions. That being said, what does `/(^| )12(,|$)/` mean ? The two `/` are delimiters indicating the boundaries of the expression. What's in between the `/` is the expression itself, `(^| )12(,|$)`, it describes the pattern we are looking for. We can classify the various characters involved in this expression into two categories : * regular characters (`1`, `2`, `,` and ), * metacharacters (`(`, `)`, `|`, `^` and `$`). Regular characters are characters with no special meaning. Example : ```none /cat/.test("cat") // true /cat/.test("concat") // true ``` Metacharacters are characters with a special meaning : * `^` means "beginning of the text", * `$` means "end of the text", * `()` indicates a subexpression, * `|` indicates a logical OR. Example 1, empty text : ```none /^$/.test("") // true /^$/.test("azerty") // false ``` Example 2, exact match : ```none /^zert$/.test("zert") // true /^zert$/.test("azerty") // false ``` Example 3, alternatives : ```none /(az|qw)erty/.test("azerty") // true /(az|qw)erty/.test("qwerty") // true ``` To wrap it up, let's come back to `/(^| )12(,|$)/` : ```none (^| ) start of the text or " " 12 then "12" (,|$) then "," or end of the text ``` Thus, our pattern matches with strings like `12`, `* 12`, `12,*` or `* 12,*`, where `*` means "zero or more characters". 
Last word, in JavaScript you can declare a regular expression using the `new` keyword : ``` var twelve = new RegExp("(^| )12(,|$)"); ``` This is useful when you need to change some part of the expression dynamically : ``` function newNumberPattern (n) { return new RegExp("(^| )" + n + "(,|$)"); } var eleven = newNumberPattern(11); var twelve = newNumberPattern(12); ``` That's it, I hope this is enlightening :-)
I think `.includes()` isn't supported in Google Sheets. But this seems to work: ``` /** @customfunction */ function AK_CHECK(input){ var split_cell = input.split(",").indexOf("12"); if (split_cell > -1){ return "Bicycle" } else { return "NO"} } ```
7,598,159
I am trying to access the Amazon Advertising API through Python, and I created a Python script to automate the authentication process. This file, called amazon.py, is located in ~/PROJECT/APP/amazon.py. I want to be able to play around with the API, so I launched python manage.py shell from the ~/PROJECT directory to enter the Python shell. My goal is to be able to execute the Python script amazon.py within this shell. What command should I be using to execute amazon.py?
2011/09/29
[ "https://Stackoverflow.com/questions/7598159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/971235/" ]
``` int* ptr = (int*)&a; ``` This is dangerous (this itself doesn't invoke UB, though). But this, ``` *ptr = 3; ``` This invokes undefined behavior (UB), because you're attempting to modify the `const` object pointed to by `ptr`. UB means anything could happen. Note that `a` is truly a const object. §7.1.5.1/4 (C++03) says, > > Except that any class member declared mutable (7.1.1) can be modified, **any attempt to modify a const object during its lifetime (3.8) results in undefined behavior.** > > > ``` [Example: [...] const int* ciq = new const int (3); // initialized as required int* iq = const_cast<int*>(ciq); // cast required *iq = 4; // undefined: modifies a const object ```
Don't do things like this. It's undefined behavior. If you lie to the compiler, it will get its revenge (c)
7,598,159
I am trying to access the Amazon Advertising API through Python, and I created a Python script to automate the authentication process. This file, called amazon.py, is located in ~/PROJECT/APP/amazon.py. I want to be able to play around with the API, so I launched python manage.py shell from the ~/PROJECT directory to enter the Python shell. My goal is to be able to execute the Python script amazon.py within this shell. What command should I be using to execute amazon.py?
2011/09/29
[ "https://Stackoverflow.com/questions/7598159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/971235/" ]
``` int* ptr = (int*)&a; ``` This is dangerous (this itself doesn't invoke UB, though). But this, ``` *ptr = 3; ``` This invokes undefined behavior (UB), because you're attempting to modify the `const` object pointed to by `ptr`. UB means anything could happen. Note that `a` is truly a const object. §7.1.5.1/4 (C++03) says, > > Except that any class member declared mutable (7.1.1) can be modified, **any attempt to modify a const object during its lifetime (3.8) results in undefined behavior.** > > > ``` [Example: [...] const int* ciq = new const int (3); // initialized as required int* iq = const_cast<int*>(ciq); // cast required *iq = 4; // undefined: modifies a const object ```
I have a hypothesis that I have not tested: the compiler sets aside an address for `a` (0xbf88d51c) and fills it with 2. `int *ptr` gets set to that address, and `*ptr = 3` puts 3 at that address, so `*ptr` now points to a 3. But whenever it comes across the value `a`, the compiler hard-codes the "2", as though you'd said `#define a 2`. One way to verify this is to pull up the resulting assembly code. By the way, I know it's undefined behavior, but so what? The OP is asking WHY it happens.
7,598,159
I am trying to access the Amazon Advertising API through Python, and I created a Python script to automate the authentication process. This file, called amazon.py, is located in ~/PROJECT/APP/amazon.py. I want to be able to play around with the API, so I launched python manage.py shell from the ~/PROJECT directory to enter the Python shell. My goal is to be able to execute the Python script amazon.py within this shell. What command should I be using to execute amazon.py?
2011/09/29
[ "https://Stackoverflow.com/questions/7598159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/971235/" ]
``` int* ptr = (int*)&a; ``` This is dangerous (this itself doesn't invoke UB, though). But this, ``` *ptr = 3; ``` This invokes undefined behavior (UB), because you're attempting to modify the `const` object pointed to by `ptr`. UB means anything could happen. Note that `a` is truly a const object. §7.1.5.1/4 (C++03) says, > > Except that any class member declared mutable (7.1.1) can be modified, **any attempt to modify a const object during its lifetime (3.8) results in undefined behavior.** > > > ``` [Example: [...] const int* ciq = new const int (3); // initialized as required int* iq = const_cast<int*>(ciq); // cast required *iq = 4; // undefined: modifies a const object ```
That's because the compiler replaces `... " " << a << " " ...` with `... " " << 2 << " " ...`. It does so to avoid reading `a`'s value from memory when that value is already known, constant, and can be embedded right in the assembler instruction.
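What the compiler is doing here is classic constant folding/propagation. As a loose illustration of the same idea in Python (a different language and mechanism, offered only as an analogy): CPython's bytecode compiler also folds constant expressions at compile time, which you can inspect with the `dis` module:

```python
import dis

# The constant expression 2 * 3 is evaluated once at compile time;
# the folded result 6 is baked into the code object's constants,
# so the operands are never "read back" at run time.
code = compile("x = 2 * 3", "<demo>", "exec")
print(code.co_consts)
dis.dis(code)
```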
7,598,159
I am trying to access the Amazon Advertising API through Python, and I created a Python script to automate the authentication process. This file, called amazon.py, is located in ~/PROJECT/APP/amazon.py. I want to be able to play around with the API, so I launched python manage.py shell from the ~/PROJECT directory to enter the Python shell. My goal is to be able to execute the Python script amazon.py within this shell. What command should I be using to execute amazon.py?
2011/09/29
[ "https://Stackoverflow.com/questions/7598159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/971235/" ]
Don't do things like this. It's undefined behavior. If you lie to the compiler, it will get its revenge (c)
I have a hypothesis that I have not tested: the compiler sets aside an address for `a` (0xbf88d51c) and fills it with 2. `int *ptr` gets set to that address, and `*ptr = 3` puts 3 at that address, so `*ptr` now points to a 3. But whenever it comes across the value `a`, the compiler hard-codes the "2", as though you'd said `#define a 2`. One way to verify this is to pull up the resulting assembly code. By the way, I know it's undefined behavior, but so what? The OP is asking WHY it happens.
7,598,159
I am trying to access the Amazon Advertising API through Python, and I created a Python script to automate the authentication process. This file, called amazon.py, is located in ~/PROJECT/APP/amazon.py. I want to be able to play around with the API, so I launched python manage.py shell from the ~/PROJECT directory to enter the Python shell. My goal is to be able to execute the Python script amazon.py within this shell. What command should I be using to execute amazon.py?
2011/09/29
[ "https://Stackoverflow.com/questions/7598159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/971235/" ]
Don't do things like this. It's undefined behavior. If you lie to the compiler, it will get its revenge (c)
That's because the compiler replaces `... " " << a << " " ...` with `... " " << 2 << " " ...`. It does so to avoid reading `a`'s value from memory when that value is already known, constant, and can be embedded right in the assembler instruction.
12,961,475
I am looking for a way to parse the following commandline syntax using the argparse module in python3: ``` myapp.py [folder] [[from] to] ``` Meaning: The user may optionally define a folder, which defaults to cwd. Additionally the user may pass up to two integers. If only one number is given, it should be stored in the `to` variable. This is similar to the syntax of the python builtin `range()`. e.g.: ``` myapp.py folder myapp.py 10 myapp.py 5 10 myapp.py folder 5 10 myapp.py folder 10 ``` Is that possible? If so, how?
2012/10/18
[ "https://Stackoverflow.com/questions/12961475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1110748/" ]
Use options; that's what they're there for (and what `argparse` is good at parsing). Thus, a syntax like ``` myapp.py [-F folder] [[from] to] ``` would make a lot more sense, and be easier to parse.
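A minimal sketch of that interface (the `-F`/`--folder` flag and the remapping that gives one bare number `range()`-style "to" semantics are this answer's suggestion, not anything the asker specified):

```python
import argparse
import os

def parse(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("-F", "--folder", default=os.getcwd())
    # Two optional positionals fill left to right, so a lone number
    # lands in `first`; remap afterwards for range()-style semantics.
    parser.add_argument("first", type=int, nargs="?")
    parser.add_argument("second", type=int, nargs="?")
    args = parser.parse_args(argv)
    if args.second is None:
        args.fr, args.to = None, args.first
    else:
        args.fr, args.to = args.first, args.second
    return args

args = parse(["-F", "somedir", "5", "10"])
print(args.folder, args.fr, args.to)  # somedir 5 10
```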
I couldn't see a way to do it without using a named argument for folder: ``` # usage: argparsetest2.py [-h] [--folder [FOLDER]] [to] [fr] import argparse import os parser = argparse.ArgumentParser() parser.add_argument('--folder', dest='folder', nargs='?', default=os.getcwd()) parser.add_argument('to', type=int, nargs='?') parser.add_argument('fr', type=int, nargs='?') args = parser.parse_args() print(args) ```
12,961,475
I am looking for a way to parse the following commandline syntax using the argparse module in python3: ``` myapp.py [folder] [[from] to] ``` Meaning: The user may optionally define a folder, which defaults to cwd. Additionally the user may pass up to two integers. If only one number is given, it should be stored in the `to` variable. This is similar to the syntax of the python builtin `range()`. e.g.: ``` myapp.py folder myapp.py 10 myapp.py 5 10 myapp.py folder 5 10 myapp.py folder 10 ``` Is that possible? If so, how?
2012/10/18
[ "https://Stackoverflow.com/questions/12961475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1110748/" ]
You can do something quite silly: ``` import argparse import os class MyAction(argparse.Action): def __call__(self,parser,namespace,values,option_string=None): namespace.numbers = [] namespace.path = os.getcwd() for v in values: if os.path.isdir(v): namespace.path = v else: try: namespace.numbers.append(int(v)) if len(namespace.numbers) > 2: parser.error("Barg2!!!") except ValueError: parser.error("Barg!!!") p = argparse.ArgumentParser() p.add_argument('stuff',nargs='*',action=MyAction) n = p.parse_args() print(n) ``` But if you're going to do this, you might as well just process `sys.argv` yourself -- you should really consider using actual options here...
I couldn't see a way to do it without using a named argument for folder: ``` # usage: argparsetest2.py [-h] [--folder [FOLDER]] [to] [fr] import argparse import os parser = argparse.ArgumentParser() parser.add_argument('--folder', dest='folder', nargs='?', default=os.getcwd()) parser.add_argument('to', type=int, nargs='?') parser.add_argument('fr', type=int, nargs='?') args = parser.parse_args() print(args) ```
12,961,475
I am looking for a way to parse the following commandline syntax using the argparse module in python3: ``` myapp.py [folder] [[from] to] ``` Meaning: The user may optionally define a folder, which defaults to cwd. Additionally the user may pass up to two integers. If only one number is given, it should be stored in the `to` variable. This is similar to the syntax of the python builtin `range()`. e.g.: ``` myapp.py folder myapp.py 10 myapp.py 5 10 myapp.py folder 5 10 myapp.py folder 10 ``` Is that possible? If so, how?
2012/10/18
[ "https://Stackoverflow.com/questions/12961475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1110748/" ]
I'd also suggest parsing sys.argv yourself. FWIW, I parse sys.argv even on projects where argparse or similar would work, because parsing sys.argv yourself plays nicely with pylint or flake8.
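For the question's `[folder] [[from] to]` syntax, hand-rolled parsing can be as small as this sketch (classifying each argument as a number or a folder is my reading of what "parse `sys.argv` yourself" would look like here):

```python
import os
import sys

def parse(argv):
    folder = os.getcwd()      # default, as the question asks
    numbers = []
    for arg in argv:
        try:
            numbers.append(int(arg))
        except ValueError:
            folder = arg      # non-numeric argument -> the folder
    if len(numbers) == 1:
        fr, to = None, numbers[0]   # one number means `to`, like range()
    elif len(numbers) == 2:
        fr, to = numbers
    else:
        fr, to = None, None
    return folder, fr, to

print(parse(["somedir", "5", "10"]))  # ('somedir', 5, 10)
```

In a real script you would call `parse(sys.argv[1:])`.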
I couldn't see a way to do it without using a named argument for folder: ``` # usage: argparsetest2.py [-h] [--folder [FOLDER]] [to] [fr] import argparse import os parser = argparse.ArgumentParser() parser.add_argument('--folder', dest='folder', nargs='?', default=os.getcwd()) parser.add_argument('to', type=int, nargs='?') parser.add_argument('fr', type=int, nargs='?') args = parser.parse_args() print(args) ```
46,341,816
I'm working on a Python project using PyCharm and now I need to generate the corresponding API documentation. I'm documenting the code methods and classes using `docstrings`. I read about Sphinx and Doxygen, with Sphinx being the most recommended right now. I tried to configure Sphinx within PyCharm but I had no luck in getting it working. This is the **project structure**: [![Project Structure](https://i.stack.imgur.com/Zf0pI.png)](https://i.stack.imgur.com/Zf0pI.png) and this was the I/O interaction with the command **Sphinx Quickstart** ``` C:\Python\Python36\Scripts\sphinx-quickstart.exe Welcome to the Sphinx 1.6.3 quickstart utility. Please enter values for the following settings (just press Enter to accept a default value, if one is given in brackets). Enter the root path for documentation. > Root path for the documentation [.]: You have two options for placing the build directory for Sphinx output. Either, you use a directory "_build" within the root path, or you separate "source" and "build" directories within the root path. > Separate source and build directories (y/n) [n]: Inside the root directory, two more directories will be created; "_templates" for custom HTML templates and "_static" for custom stylesheets and other static files. You can enter another prefix (such as ".") to replace the underscore. > Name prefix for templates and static dir [_]: . The project name will occur in several places in the built documentation. > Project name: Attributed Graph Profiler > Author name(s): M.C & D.A.T. Sphinx has the notion of a "version" and a "release" for the software. Each version can have multiple releases. For example, for Python the version is something like 2.5 or 3.0, while the release is something like 2.5.1 or 3.0a1. If you don't need this dual structure, just set both to the same value. 
> Project version []: 0.0.1 > Project release [0.0.1]: If the documents are to be written in a language other than English, you can select a language here by its language code. Sphinx will then translate text that it generates into that language. For a list of supported codes, see http://sphinx-doc.org/config.html#confval-language. > Project language [en]: The file name suffix for source files. Commonly, this is either ".txt" or ".rst". Only files with this suffix are considered documents. > Source file suffix [.rst]: One document is special in that it is considered the top node of the "contents tree", that is, it is the root of the hierarchical structure of the documents. Normally, this is "index", but if your "index" document is a custom template, you can also set this to another filename. > Name of your master document (without suffix) [index]: Sphinx can also add configuration for epub output: > Do you want to use the epub builder (y/n) [n]: Please indicate if you want to use one of the following Sphinx extensions: > autodoc: automatically insert docstrings from modules (y/n) [n]: y > doctest: automatically test code snippets in doctest blocks (y/n) [n]: > intersphinx: link between Sphinx documentation of different projects (y/n) [n]: > todo: write "todo" entries that can be shown or hidden on build (y/n) [n]: > coverage: checks for documentation coverage (y/n) [n]: > imgmath: include math, rendered as PNG or SVG images (y/n) [n]: > mathjax: include math, rendered in the browser by MathJax (y/n) [n]: > ifconfig: conditional inclusion of content based on config values (y/n) [n]: > viewcode: include links to the source code of documented Python objects (y/n) [n]: y > githubpages: create .nojekyll file to publish the document on GitHub pages (y/n) [n]: y A Makefile and a Windows command file can be generated for you so that you only have to run e.g. `make html' instead of invoking sphinx-build directly. > Create Makefile? (y/n) [y]: > Create Windows command file? 
(y/n) [y]: Creating file .\conf.py. Creating file .\index.rst. Creating file .\Makefile. Creating file .\make.bat. Finished: An initial directory structure has been created. You should now populate your master file .\index.rst and create other documentation source files. Use the Makefile to build the docs, like so: make builder where "builder" is one of the supported builders, e.g. html, latex or linkcheck. Process finished with exit code 0 ``` Then I moved to the `/docs` folder [![enter image description here](https://i.stack.imgur.com/8hZMB.png)](https://i.stack.imgur.com/8hZMB.png) , edited the **conf.py** file: ``` #!/usr/bin/env python3 # -*- coding: utf-8 -*- # # "Query Rewriter" documentation build configuration file, created by # sphinx-quickstart on Thu Sep 21 14:56:19 2017. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # import os import sys sys.path.append(os.path.abspath("../../query_rewriter")) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.ifconfig', 'sphinx.ext.viewcode', 'sphinx.ext.githubpages'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix(es) of source filenames. 
# You can specify multiple suffix as a list of string: # # source_suffix = ['.rst', '.md'] source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = '"Query Rewriter"' copyright = '2017, M.C & D.A.T' author = 'M.C & D.A.T' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '0.0.1' # The full version, including alpha/beta/rc tags. release = '0.0.1' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. language = "en" # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This patterns also effect to html_static_path and html_extra_path exclude_patterns = [] # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_theme = 'alabaster' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # # html_theme_options = {} # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". 
html_static_path = ['_static'] # Custom sidebar templates, must be a dictionary that maps document names # to template names. # # This is required for the alabaster theme # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars html_sidebars = { '**': [ 'about.html', 'navigation.html', 'relations.html', # needs 'show_related': True theme option to display 'searchbox.html', 'donate.html', ] } # -- Options for HTMLHelp output ------------------------------------------ # Output file base name for HTML help builder. htmlhelp_basename = 'QueryRewriterdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # # 'preamble': '', # Latex figure (float) alignment # # 'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ (master_doc, 'QueryRewriter.tex', '"Query Rewriter" Documentation', 'M.C \\& D.A.T', 'manual'), ] # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ (master_doc, 'queryrewriter', '"Query Rewriter" Documentation', [author], 1) ] # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ (master_doc, 'QueryRewriter', '"Query Rewriter" Documentation', author, 'QueryRewriter', 'One line description of project.', 'Miscellaneous'), ] # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = {'https://docs.python.org/': None} ``` and ran the following command: ``` B:\_Python_Workspace\AttributedGraphProfiler\docs>make html Running Sphinx v1.6.3 making output directory... loading pickled environment... not yet created building [mo]: targets for 0 po files that are out of date building [html]: targets for 1 source files that are out of date updating environment: 1 added, 0 changed, 0 removed reading sources... [100%] index looking for now-outdated files... none found pickling environment... done checking consistency... done preparing documents... done writing output... [100%] index generating indices... genindex writing additional pages... search copying static files... done copying extra files... done dumping search index in English (code: en) ... done dumping object inventory... done build succeeded. Build finished. The HTML pages are in .build\html. B:\_Python_Workspace\AttributedGraphProfiler\docs> ``` I thought I was done, but this is the poor result I got, without any documentation for classes and modules. **index.html** [![enter image description here](https://i.stack.imgur.com/6pAuT.png)](https://i.stack.imgur.com/6pAuT.png) **genindex.html** [![enter image description here](https://i.stack.imgur.com/9f0Xk.png)](https://i.stack.imgur.com/9f0Xk.png) Am I doing something wrong? Thanks in advance for your time.
2017/09/21
[ "https://Stackoverflow.com/questions/46341816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8078050/" ]
Just solved exactly the same problem, Juan. **Sphinx unfortunately is not a fully automated doc generator from code comments** like doxygen, JAutodoc etc. As in the link mentioned in mzjn's [comment](https://stackoverflow.com/a/25555982/1980180), some steps are necessary for it to work properly. Since you are working in PyCharm, I will touch on PyCharm-Sphinx integration. I hope you will not have to change anything manually, like conf.py. 1. In PyCharm "File/Settings/Tools/Python Integrated Tools" define the Sphinx working directory as codebase/Docs. (Just for clarity; choose wherever you want.) Your Sphinx scripts will run at this path. --- 2. Run "Tools/Sphinx Quickstart" in PyCharm. As you wrote above, select the proper options, but "autodoc" is a must (y) and "Separate source and build directories" is recommended (y) to understand what is going on. *This script will generate the skeleton of the Sphinx project.* --- 3. Create a Python task in Run/Edit Configurations... in PyCharm like below. Be careful with the Python interpreter and your script (if you use a Python environment like me). *This script will generate the rst files for your modules.* `source/` shows the Docs/Source directory created in step 1. It has the .rst files for our modules. `../` shows our modules' py files. UPDATE 1: --------- > > A-) Run this task to generate the rst files. > > > B-) Add the "modules" term to the index.rst file, like; > > > ``` bla bla .. toctree:: :maxdepth: 2 :caption: Contents: modules bla bla ``` There is no need to run this and add the "modules" term again on every doc creation. Step A is necessary only when new modules are introduced in the project. [![Sphinx create Rst Files for modules](https://i.stack.imgur.com/iOZfg.png)](https://i.stack.imgur.com/iOZfg.png) 4. Create a Python Docs task in Run/Edit Configurations... in PyCharm like below. `command` shows the documentation type. `Input` shows our rst files path. `output` shows our document output directory.
(FINALLY :)) [![enter image description here](https://i.stack.imgur.com/MJhqo.png)](https://i.stack.imgur.com/MJhqo.png) 5. Run and enjoy... UPDATE 2: --------- If you integrate with Read the Docs (after signing up and authorizing GitHub): 1. You have to add the modules' path to sys.path. Read the Docs calls sphinx-build, so it should know the path of the py files. Basically, uncomment the line in conf.py `sys.path.insert(0, os.path.abspath('..//..//'))` (if the working directory is pyfiles/Docs/source) 2. Always check the build results on Read the Docs; simply click on lines like these: You will see if there is an error or a warning. [![enter image description here](https://i.stack.imgur.com/um6ox.png)](https://i.stack.imgur.com/um6ox.png) 3. If you didn't do 1 & 2, you probably will not have a working "Indices and tables" section. **genindex.html** and **py-modindex.html** will be respectively an *empty template* and a *non-existing page*.
Prior to `make html` you should read the apidoc comments from your code and create the .rst files. You should run something like this from your project root folder: ``` sphinx-apidoc . -o ./docs -f tests ``` This will rewrite the .rst files (hence `-f`) in the `docs` folder and skip the apidoc reading in the `tests` package. More on `sphinx-apidoc` can be found [here](http://www.sphinx-doc.org/en/stable/man/sphinx-apidoc.html). After this, run the `make html` command.
13,788,349
Reading Guido's infamous answer to the question [Sorting a million 32-bit integers in 2MB of RAM using Python](http://neopythonic.blogspot.fr/2008/10/sorting-million-32-bit-integers-in-2mb.html), I discovered the module [heapq](http://docs.python.org/2.7/library/heapq.html). I also discovered I didn't understand jack about it, nor did I know what I could do with it. Can you explain to me (with the proverbial 6-year-old target) what the heap queue algorithm is for and what you can do with it? Can you provide a **simple** Python snippet where using it (with the `heapq` module) solves a problem that is better solved with it than with something else?
2012/12/09
[ "https://Stackoverflow.com/questions/13788349", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9951/" ]
`heapq` implements [binary heaps](https://en.wikipedia.org/wiki/Binary_heap), which are a partially sorted data structure. In particular, they have three interesting operations: * `heapify` turns a list into a heap, in-place, in O(*n*) time; * `heappush` adds an element to the heap in O(lg *n*) time; * `heappop` retrieves the *smallest* element off the heap in O(lg *n*) time. Many interesting algorithms rely on heaps for performance. The simplest one is probably partial sorting: getting the *k* smallest (or largest) elements of a list without sorting the entire list. `heapq.nsmallest` (`nlargest`) does that. The [implementation of `nlargest`](http://hg.python.org/cpython/file/70ee0b76239d/Lib/heapq.py#l183) can be paraphrased as: ``` def nlargest(n, l): # make a heap of the first n elements heap = l[:n] heapify(heap) # loop over the other len(l)-n elements of l for i in xrange(n, len(l)): # push the current element onto the heap, so its size becomes n+1 heappush(heap, l[i]) # pop the smallest element off, so that the heap will contain # the largest n elements of l seen so far heappop(heap) return sorted(heap, reverse=True) ``` Analysis: let N be the number of elements in `l`. `heapify` is run once, for a cost of O(n); that's negligible. Then, in a loop running N-n = O(N) times, we perform a `heappop` and a `heappush` at O(lg n) cost each, giving a total running time of O(N lg n). When N >> n, this is a big win compared to the other obvious algorithm, `sorted(l)[:n]`, which takes O(N lg N) time.
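A quick way to convince yourself that the paraphrase above matches the real implementation is to compare its output with `heapq.nlargest` directly. This is a small self-contained check (written for Python 3, so `range` replaces `xrange`); the function body is the same as the paraphrase above:

```python
import heapq
import random

def nlargest_paraphrase(n, l):
    # make a heap of the first n elements
    heap = l[:n]
    heapq.heapify(heap)
    # for every remaining element, push it on and pop the smallest back off,
    # so the heap always holds the n largest elements seen so far
    for i in range(n, len(l)):
        heapq.heappush(heap, l[i])
        heapq.heappop(heap)
    return sorted(heap, reverse=True)

random.seed(42)
data = [random.randint(0, 1000) for _ in range(200)]
print(nlargest_paraphrase(5, data) == heapq.nlargest(5, data))  # True
```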
For example: you have a set of 1000 floating-point numbers. You want to repeatedly remove the smallest item from the set and replace it with a random number between 0 and 1. The fastest way to do it is with the heapq module: ``` heap = [0.0] * 1000 # heapify(heap) # usually you need this, but not if the list is initially sorted while True: x = heappop(heap) heappush(heap, random.random()) ``` This takes a time per iteration that is logarithmic in the length of the heap (i.e. around 7 units, for a list of length 1000). Other solutions take a linear time (i.e. around 1000 units, which is 140 times slower, and gets slower and slower when the length increases): ``` lst = [0.0] * 1000 while True: x = min(lst) # linear lst.remove(x) # linear lst.append(random.random()) ``` or: ``` lst = [0.0] * 1000 while True: x = lst.pop() # get the largest one in this example lst.append(random.random()) lst.sort() # linear (in this case) ``` or even: ``` lst = [0.0] * 1000 while True: x = lst.pop() # get the largest one in this example bisect.insort(lst, random.random()) # linear ```
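As a side note, `heapq` also provides `heapreplace`, which performs the pop-then-push of the loop above as a single call. A minimal sketch (using a small heap and a fixed number of iterations so it terminates):

```python
import heapq
import random

random.seed(0)
heap = [0.0] * 10  # a list of equal values is already a valid heap
for _ in range(1000):
    # pop the smallest item and push a new random one in a single O(log n) step
    heapq.heapreplace(heap, random.random())

# the heap invariant still holds: the first element is the minimum
print(heap[0] == min(heap))  # True
```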
51,347,732
I am trying to replace a block of text which spans multiple lines of a text file using Python. Here is what my input file looks like. input.txt: ``` ABCD abcd ( . X (x), .Y (y) ); ABCD1 abcd1 ( . X1 (x1), .Y1 (y1) ); ``` I am reading the above file with the below code and trying to replace the text, but failed to do so. Below is my code. ``` fo = open(input.txt, 'r') input_str = fo.read() find_str = '''ABCD abcd ( .X (x), .Y (y) );''' replace_str = '''ABCDE abcde ( . XX (xx), .YY (yy) );''' input_str = re.sub(find_str, replace_str, input_str) ``` But input_str seems to be unchanged. Not sure what I am missing. Any clues?
2018/07/15
[ "https://Stackoverflow.com/questions/51347732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3843912/" ]
Running `oc whoami --show-console` returns the link to the console app.
Thanks, `oc login` helped me to get the web console url
51,347,732
I am trying to replace a block of text which spans multiple lines of a text file using Python. Here is what my input file looks like. input.txt: ``` ABCD abcd ( . X (x), .Y (y) ); ABCD1 abcd1 ( . X1 (x1), .Y1 (y1) ); ``` I am reading the above file with the below code and trying to replace the text, but failed to do so. Below is my code. ``` fo = open(input.txt, 'r') input_str = fo.read() find_str = '''ABCD abcd ( .X (x), .Y (y) );''' replace_str = '''ABCDE abcde ( . XX (xx), .YY (yy) );''' input_str = re.sub(find_str, replace_str, input_str) ``` But input_str seems to be unchanged. Not sure what I am missing. Any clues?
2018/07/15
[ "https://Stackoverflow.com/questions/51347732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3843912/" ]
You can obtain the console URL in OpenShift Container Platform 4 as follows: ``` $ oc get routes -n openshift-console ```
Thanks, `oc login` helped me to get the web console url
51,347,732
I am trying to replace a block of text which spans multiple lines of a text file using Python. Here is what my input file looks like. input.txt: ``` ABCD abcd ( . X (x), .Y (y) ); ABCD1 abcd1 ( . X1 (x1), .Y1 (y1) ); ``` I am reading the above file with the below code and trying to replace the text, but failed to do so. Below is my code. ``` fo = open(input.txt, 'r') input_str = fo.read() find_str = '''ABCD abcd ( .X (x), .Y (y) );''' replace_str = '''ABCDE abcde ( . XX (xx), .YY (yy) );''' input_str = re.sub(find_str, replace_str, input_str) ``` But input_str seems to be unchanged. Not sure what I am missing. Any clues?
2018/07/15
[ "https://Stackoverflow.com/questions/51347732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3843912/" ]
Running `oc whoami --show-console` returns the link to the console app.
You can obtain the console URL in OpenShift Container Platform 4 as follows: ``` $ oc get routes -n openshift-console ```
34,394,650
I have Python pexpect code that sends some commands listed in a file. Say I store some commands in a file named `commandbase` ``` ls -l /dev/ ls -l /home/ramana ls -l /home/ramana/xyz ls -l /home/ramana/xxx ls -l /home/ramana/xyz/abc ls -l /home/ramana/xxx/def ls -l /home/dir/ ``` and so on. Observe here that after `/` I have `dev` and `home` as variables. If I'm in `home`, again `ramana` and `dir` are variables. If I enter into `ramana`, there are again `xyz` and `xxx`. So basically it is of the form ``` ls -l /variable1/variable2/variable3/ ``` and so on. Here I need to build a tree for every variable and its specific secondary variables. Now I should have a list/array/file where I will store the first variable, and its secondary variables in another list, and so on. So I need a function like this in the main script: ``` for line in database: child.sendline(line+"\r") child.expect("\$",timeout) ``` The database file should be something like: ``` def commands(): return "ls -l <some variable>/<second variable and so on>" ``` This function should return all commands with all the combinations. How do I return variable commands here instead of defining all the commands? Is it possible with arrays or lists? **[EDIT] Editing as it was less clear. Hope I'm clear this time**
2015/12/21
[ "https://Stackoverflow.com/questions/34394650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4894197/" ]
This can be done with a list comprehension... ``` paths = ['/dev/', '/dev/ramana/', ...] command = 'ls -l' commandsandpaths = [command + ' ' + x for x in paths] ``` `commandsandpaths` will be a list with... ``` ls -l /dev/ ls -l /dev/ramana/ ``` Personally, I prefer to use string formatting rather than string concatenation... ``` commandsandpaths = ['{0} {1}'.format(command, x) for x in paths] ``` But it may be less readable if you're not familiar with the syntax
Your requirements are a little more complicated than it appears at first glance. Below I have adopted a convention to use lists `[...]` to indicate things to concatenate, and tuples `(...)` for things to choose from, i.e. optionals. Your list of path names can now be expressed as this:- ``` database = ( 'dev', ['home', ( 'dir', ['ramana', ( '', ['xyz', ( '', 'abc' ) ], ['xxx', ( '', 'def' ) ] ) ] ) ] ) ``` The above syntax avoids redundancy as much as possible. The whitespace is not necessary but helps here to illustrate which parts are on the same nested level. Next we need a way to transform this into a list of commands:- ``` def permute(prefix, tree): def flatten(branch): #print 'flatten', branch results = [ ] if type(branch) is list: parts = [ ] for part in branch: if type(part) is basestring: if part: parts.append([part]) else: parts.append(flatten(part)) index = map(lambda x: 0, parts) count = map(len, parts) #print 'combining', parts, index, count while True: line = map(lambda i: parts[i][index[i]], range(len(parts))) line = '/'.join(line) #print '1:', line results.append( line ) curIndex = len(parts)-1 while curIndex >= 0: index[curIndex] += 1 if index[curIndex] < count[curIndex]: break index[curIndex] = 0 curIndex -= 1 if curIndex < 0: break elif type(branch) is tuple: for option in branch: if type(option) is basestring: if len(option): #print '2:', option results.append( option ) else: results.extend(flatten(option)) else: #print '3:', branch results.append( branch ) return results return map(lambda x: prefix + x, flatten(tree)) ``` So now if we call `permute('ls -l /', database)` it returns the following list:- ``` [ 'ls -l /dev', 'ls -l /home/dir', 'ls -l /home/ramana/', 'ls -l /home/ramana/xyz/', 'ls -l /home/ramana/xyz/abc', 'ls -l /home/ramana/xxx/', 'ls -l /home/ramana/xxx/def' ] ``` From here it is now trivial to write these strings to a file named `commandbase` or execute it line by line.
34,394,650
I have Python pexpect code that sends some commands listed in a file. Say I store some commands in a file named `commandbase` ``` ls -l /dev/ ls -l /home/ramana ls -l /home/ramana/xyz ls -l /home/ramana/xxx ls -l /home/ramana/xyz/abc ls -l /home/ramana/xxx/def ls -l /home/dir/ ``` and so on. Observe here that after `/` I have `dev` and `home` as variables. If I'm in `home`, again `ramana` and `dir` are variables. If I enter into `ramana`, there are again `xyz` and `xxx`. So basically it is of the form ``` ls -l /variable1/variable2/variable3/ ``` and so on. Here I need to build a tree for every variable and its specific secondary variables. Now I should have a list/array/file where I will store the first variable, and its secondary variables in another list, and so on. So I need a function like this in the main script: ``` for line in database: child.sendline(line+"\r") child.expect("\$",timeout) ``` The database file should be something like: ``` def commands(): return "ls -l <some variable>/<second variable and so on>" ``` This function should return all commands with all the combinations. How do I return variable commands here instead of defining all the commands? Is it possible with arrays or lists? **[EDIT] Editing as it was less clear. Hope I'm clear this time**
2015/12/21
[ "https://Stackoverflow.com/questions/34394650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4894197/" ]
This can be done with a list comprehension... ``` paths = ['/dev/', '/dev/ramana/', ...] command = 'ls -l' commandsandpaths = [command + ' ' + x for x in paths] ``` `commandsandpaths` will be a list with... ``` ls -l /dev/ ls -l /dev/ramana/ ``` Personally, I prefer to use string formatting rather than string concatenation... ``` commandsandpaths = ['{0} {1}'.format(command, x) for x in paths] ``` But it may be less readable if you're not familiar with the syntax
Not sure what these variables of which you speak are. They look like path segments to me. Assuming you have a tree data structure consisting of nodes where each node is a tuple of a path segment, and a list of subtrees: ``` tree = [ ('dev', []), ('home', [ ('ramana', [ ('xyz', [ ('abc', []), ]), ('xxx', [ ('def', []), ]), ]), ('dir', []), ]), ] ``` You can write a recursive generator function to walk the tree to generate all paths: ``` import os import os.path def walk_tree(tree): for name, subtree in tree: yield name for path in walk_tree(subtree): yield os.path.join(name, path) ``` You can then generate all commands: ``` commands = ['ls -l {path}'.format(path=os.sep + path) for path in walk_tree(tree)] ``` Note, this code is not the database, it is code that can generate the database. Also, the code shown walks the tree in a depth-first order, which is different from the order you show (which appears to be some combination of depth-first and breadth-first).
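Putting the pieces together, here is a self-contained run of the generator above. Note that, as pointed out, it also emits intermediate paths (such as `/home` itself) and walks in depth-first order; the path separator comes from `os.sep`, so on a POSIX system the commands use `/`:

```python
import os
import os.path

def walk_tree(tree):
    # depth-first walk: yield each node's name, then all paths beneath it
    for name, subtree in tree:
        yield name
        for path in walk_tree(subtree):
            yield os.path.join(name, path)

tree = [
    ('dev', []),
    ('home', [
        ('ramana', [
            ('xyz', [('abc', [])]),
            ('xxx', [('def', [])]),
        ]),
        ('dir', []),
    ]),
]

commands = ['ls -l {path}'.format(path=os.sep + path) for path in walk_tree(tree)]
for command in commands:
    print(command)
```

On POSIX this prints eight commands, starting with `ls -l /dev` and ending with `ls -l /home/dir`.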
35,205,173
I am trying to learn numpy array slicing, but this is a syntax I cannot seem to understand. What does `a[:1]` do? I ran it in Python. ``` a = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]) a = a.reshape(2,2,2,2) a[:1] ``` **Output:** ``` array([[[ 5, 6], [ 7, 8]], [[13, 14], [15, 16]]]) ``` Can someone explain the slicing and how it works? The documentation doesn't seem to answer this question. Another question: would there be a way to generate the `a` array using something like `np.array(1:16)`, or something like in Python where ``` x = [x for x in range(16)] ```
2016/02/04
[ "https://Stackoverflow.com/questions/35205173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1939166/" ]
The commas in slicing are to separate the various dimensions you may have. In your first example you are reshaping the data to have 4 dimensions each of length 2. This may be a little difficult to visualize so if you start with a 2D structure it might make more sense: ``` >>> a = np.arange(16).reshape((4, 4)) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) >>> a[0] # access the first "row" of data array([0, 1, 2, 3]) >>> a[0, 2] # access the 3rd column (index 2) in the first row of the data 2 ``` If you want to access multiple values using slicing you can use the colon to express a range: ``` >>> a[:, 1] # get the entire 2nd (index 1) column array([ 1, 5, 9, 13]) >>> a[1:3, -1] # get the second and third elements from the last column array([ 7, 11]) >>> a[1:3, 1:3] # get the data in the second and third rows and columns array([[ 5, 6], [ 9, 10]]) ``` You can do steps too: ``` >>> a[::2, ::2] # get every other element (column-wise and row-wise) array([[ 0, 2], [ 8, 10]]) ``` Hope that helps. Once that makes more sense you can look into stuff like adding dimensions by using `None` or `np.newaxis` or using the `...` ellipsis: ``` >>> a[:, None].shape (4, 1, 4) ``` You can find more here: <http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html>
It might pay to explore the `shape` and individual entries as we go along. Let's start with ``` >>> a = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]) >>> a.shape (16, ) ``` This is a one-dimensional array of length 16. Now let's try ``` >>> a = a.reshape(2,2,2,2) >>> a.shape (2, 2, 2, 2) ``` It's a multi-dimensional array with 4 dimensions. Let's see the 0, 1 element: ``` >>> a[0, 1] array([[5, 6], [7, 8]]) ``` Since there are two dimensions left, it's a matrix of two dimensions. --- Now `a[:, 1]` says: take `a[i, 1` for all possible values of `i`: ``` >>> a[:, 1] array([[[ 5, 6], [ 7, 8]], [[13, 14], [15, 16]]]) ``` It gives you an array where the first item is `a[0, 1]`, and the second item is `a[1, 1]`.
35,205,173
I am trying to learn numpy array slicing, but this is a syntax I cannot seem to understand. What does `a[:1]` do? I ran it in Python. ``` a = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]) a = a.reshape(2,2,2,2) a[:1] ``` **Output:** ``` array([[[ 5, 6], [ 7, 8]], [[13, 14], [15, 16]]]) ``` Can someone explain the slicing and how it works? The documentation doesn't seem to answer this question. Another question: would there be a way to generate the `a` array using something like `np.array(1:16)`, or something like in Python where ``` x = [x for x in range(16)] ```
2016/02/04
[ "https://Stackoverflow.com/questions/35205173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1939166/" ]
The commas in slicing are to separate the various dimensions you may have. In your first example you are reshaping the data to have 4 dimensions each of length 2. This may be a little difficult to visualize so if you start with a 2D structure it might make more sense: ``` >>> a = np.arange(16).reshape((4, 4)) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) >>> a[0] # access the first "row" of data array([0, 1, 2, 3]) >>> a[0, 2] # access the 3rd column (index 2) in the first row of the data 2 ``` If you want to access multiple values using slicing you can use the colon to express a range: ``` >>> a[:, 1] # get the entire 2nd (index 1) column array([ 1, 5, 9, 13]) >>> a[1:3, -1] # get the second and third elements from the last column array([ 7, 11]) >>> a[1:3, 1:3] # get the data in the second and third rows and columns array([[ 5, 6], [ 9, 10]]) ``` You can do steps too: ``` >>> a[::2, ::2] # get every other element (column-wise and row-wise) array([[ 0, 2], [ 8, 10]]) ``` Hope that helps. Once that makes more sense you can look into stuff like adding dimensions by using `None` or `np.newaxis` or using the `...` ellipsis: ``` >>> a[:, None].shape (4, 1, 4) ``` You can find more here: <http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html>
To answer the second part of your question (generating arrays of sequential values) you can use [`np.arange(start, stop, step)`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.arange.html) or [`np.linspace(start, stop, num_elements)`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.linspace.html). Both of these return a numpy array with the corresponding range of values.
62,719,356
Hi, I'm coding a bot in Python for the Zoom download API, but now I'm going through this. I need to know the name of the file I am downloading through that URL, but the URL does not contain the name of the file. It is just downloaded automatically through it. Example of a download URL: <https://zztop.us/rec/download/6cUsf-r5pjo3GNfGtgSDAv9xIXbzy9vms0iRKq6YNn0m8UHILNlKiMrMWMecDkmKyv5o675Hp1ZrKPF16> How can I code in Python a way to know the filename being downloaded?
2020/07/03
[ "https://Stackoverflow.com/questions/62719356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13860212/" ]
With the help of Mostafa Labib I managed to get where I wanted. Here is the working code for those who want to know the filename of a file downloaded via Zoom's download_url: ``` from urllib.request import urlopen from os.path import basename url="https://zztop.us/rec/download/6cUsfr5pjo3GNfGtgSDAv9xIXbzy9vms0iRKq6YNn0m8UHILNlKiMrMWMecDkmKyv5o675Hp1ZrKPF16" token = "XXXXXXXXXXXXXXXXXXXXXXX" url = (url + token) response = urlopen(url) arq_name = basename(response.url) arq, tsh = arq_name.split("?", 1) print(arq) ```
You can use urllib to parse the link then get the filename from the headers. ``` from urllib.request import urlopen url = "https://zztop.us/rec/download/6cUsf-r5pjo3GNfGtgSDAv9xIXbzy9vms0iRKq6YNn0m8UHILNlKiMrMWMecDkmKyv5o675Hp1ZrKPF16" response = urlopen(url) filename = response.headers.get_filename() print(filename) ```
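If the server does not send a `Content-Disposition` header, `get_filename()` returns `None`. A fallback sketch (the URL below is a hypothetical example) is to take the last path segment of the final, post-redirect URL:

```python
from urllib.parse import urlparse
import posixpath

def filename_from_url(url):
    """Best-effort filename: the last segment of the URL path, query string excluded."""
    path = urlparse(url).path
    return posixpath.basename(path)

# hypothetical example URL
print(filename_from_url("https://example.com/rec/download/abc123?token=xyz"))  # abc123
```

For a redirecting download link, you would pass `response.url` (the resolved URL after redirects) to this function rather than the original link.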
11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones the duplicates are, but editdist can only work with 2 strings, not files. Anyone know how I can use that module with files instead of strings?
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
If you want to filter out exact duplicates, you can use the `set` Python built-in type. As an example : ``` a = ["tccggatcc", "actcctgct", "tccggatcc"] # You have a list of sequences s = set(a) # Put that into a set ``` `s` is then equal to `['tccggatcc', 'actcctgct']`, without duplicates.
Don't be afraid of files! ;-) I'm posting an example by assuming the following: 1. it's a text file 2. one sequence per line - ``` filename = 'sequence.txt' with open(filename, 'r') as sqfile: sequences = sqfile.readlines() # now we have a list of strings #discarding the duplicates: uniques = list(set(sequences)) ``` That's it - by using Python's set type we eliminate all duplicates automagically. If you have the id and the sequence in the same line, like: ``` 423401 ttacguactg ``` you may want to eliminate the ids, like: ``` sequences = [s.strip().split()[-1] for s in sequences] ``` With strip we strip the string of leading and trailing whitespace, and with split we split the line/string into 2 components: the id and the sequence. With the [-1] we select the last component (= the sequence-string) and repack it into our sequence list.
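With millions of sequences it may be worth avoiding `readlines()` (which loads the whole file into memory at once) and streaming line by line instead. A sketch that also keeps first-seen order, which a plain `set()` does not guarantee:

```python
def unique_sequences(lines):
    """Yield each sequence once, keeping first-seen order; skips blank lines."""
    seen = set()
    for line in lines:
        seq = line.strip()
        if seq and seq not in seen:
            seen.add(seq)
            yield seq

# works the same on a list of strings or on an open file object
sample = ["tccggatcc\n", "actcctgct\n", "tccggatcc\n", "actcctgct\n"]
print(list(unique_sequences(sample)))  # ['tccggatcc', 'actcctgct']
```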
11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones the duplicates are, but editdist can only work with 2 strings, not files. Anyone know how I can use that module with files instead of strings?
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
Assuming your file consists solely of sequences arranged one sequence per line, I would suggest the following: ``` seq_file = open('your_file.txt') sequences = [seq for seq in seq_file] uniques = list(set(sequences)) ``` Assuming you have the memory for it. How many millions? ETA: Was reading the comments above (but don't have comment privs) - assuming the sequence IDs are the same for any duplicates, this will work. If duplicate sequences can have different sequence IDs, then I would need to know which comes first and what is between them in the file.
If you want to filter out exact duplicates, you can use the `set` Python built-in type. As an example : ``` a = ["tccggatcc", "actcctgct", "tccggatcc"] # You have a list of sequences s = set(a) # Put that into a set ``` `s` is then equal to `['tccggatcc', 'actcctgct']`, without duplicates.
11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones the duplicates are, but editdist can only work with 2 strings, not files. Anyone know how I can use that module with files instead of strings?
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
If you want to filter out exact duplicates, you can use the `set` Python built-in type. As an example: ``` a = ["tccggatcc", "actcctgct", "tccggatcc"] # You have a list of sequences s = set(a) # Put that into a set ``` `s` is then equal to `{'tccggatcc', 'actcctgct'}`, without duplicates.
Four things come to mind: 1. You can use a `set()`, as described by F.X. - assuming the unique strings will all fit in memory 2. You can use one file per sequence, and feed the files to a program like equivs3e: <http://stromberg.dnsalias.org/~strombrg/equivalence-classes.html#python-3e> 3. You could perhaps use gdbm as a set, instead of its usual key-value store use. This is good if you need something that's 100% accurate, but you have too much data to fit all the uniques in virtual memory. 4. You could perhaps use a Bloom filter to cut the data down to a more manageable size, if you have a truly huge number of strings to check and lots of duplicates. Basically a Bloom filter can say "This is definitely not in the set" or "This is almost definitely in the set". In this way, you can eliminate most of the obvious duplicates before using a more common means to operate on the remaining elements. <http://stromberg.dnsalias.org/~strombrg/drs-bloom-filter/>
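Option 3 can be sketched with the standard-library `dbm` module: the database's keys act as an on-disk set, so the uniques never need to fit in memory all at once. This is only an illustrative sketch; the filenames and demo data are made up:

```python
import dbm

# tiny demo input file (placeholder data)
with open('seqs_in.txt', 'w') as f:
    f.write('tccggatcc\nactcctgct\ntccggatcc\nactcctgct\n')

# the dbm keys serve as the "set"; the stored value is irrelevant
with dbm.open('seen_db', 'c') as seen, open('seqs_out.txt', 'w') as out:
    with open('seqs_in.txt') as infile:
        for line in infile:
            seq = line.strip()
            if seq and seq not in seen:   # membership test hits disk, not RAM
                seen[seq] = ''
                out.write(seq + '\n')
```

Because the set lives on disk, this is slower than an in-memory `set()` but keeps working when the number of unique sequences exceeds available memory.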
11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones the duplicates are, but editdist can only work with 2 strings, not files. Anyone know how I can use that module with files instead of strings?
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
Assuming your file consists solely of sequences, one sequence per line, I would suggest the following: ``` seq_file = open('your_file.txt') # your file here sequences = [seq for seq in seq_file] uniques = list(set(sequences)) ``` Assuming you have the memory for it. How many millions? ETA: Was reading the comments above (but don't have comment privs) - assuming the sequence IDs are the same for any duplicates, this will work. If duplicate sequences can have different sequence IDs, then I would need to know which comes first and what lies between them in the file.
Don't be afraid of files! ;-) I'm posting an example by assuming the following: 1. it's a text file 2. one sequence per line - ``` filename = 'sequence.txt' with open(filename, 'r') as sqfile: sequences = sqfile.readlines() # now we have a list of strings # discarding the duplicates: uniques = list(set(sequences)) ``` That's it - by using Python's set type we eliminate all duplicates automagically. If you have the ID and the sequence on the same line, like: ``` 423401 ttacguactg ``` you may want to eliminate the IDs like this: ``` sequences = [s.strip().split()[-1] for s in sequences] ``` With `strip` we remove leading and trailing whitespace from the string, and with `split` we split the line/string into 2 components: the ID and the sequence. With the `[-1]` we select the last component (= the sequence string) and repack it into our sequence list.
11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones the duplicates are, but editdist can only work with 2 strings, not files. Anyone know how I can use that module with files instead of strings?
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
Does it have to be Python? If the sequences are simply text strings, one per line, then a shell script will be very efficient: ``` sort input-file-name | uniq > output-file-name ``` This will do the job on files up to 2 GB on 32-bit Linux. If you are on Windows then install the GNU utils <http://gnuwin32.sourceforge.net/summary.html>.
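With `sort` specifically, the `-u` flag folds the `uniq` step into the sort itself. A quick sketch on demo data (the real file names are placeholders):

```shell
# demo input (placeholder data)
printf 'tccggatcc\nactcctgct\ntccggatcc\n' > input.txt

# sort and deduplicate in one pass
sort -u input.txt > output.txt
```

Note that, like the `sort | uniq` pipeline, this changes the order of the sequences, which may matter for downstream tools.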
Don't be afraid of files! ;-) I'm posting an example by assuming the following: 1. it's a text file 2. one sequence per line - ``` filename = 'sequence.txt' with open(filename, 'r') as sqfile: sequences = sqfile.readlines() # now we have a list of strings # discarding the duplicates: uniques = list(set(sequences)) ``` That's it - by using Python's set type we eliminate all duplicates automagically. If you have the ID and the sequence on the same line, like: ``` 423401 ttacguactg ``` you may want to eliminate the IDs like this: ``` sequences = [s.strip().split()[-1] for s in sequences] ``` With `strip` we remove leading and trailing whitespace from the string, and with `split` we split the line/string into 2 components: the ID and the sequence. With the `[-1]` we select the last component (= the sequence string) and repack it into our sequence list.