Dataset schema:

| column | dtype | value / length range |
| --- | --- | --- |
| qid | int64 | 469 to 74.7M |
| question | string | lengths 36 to 37.8k |
| date | string | lengths 10 to 10 |
| metadata | sequence | |
| response_j | string | lengths 5 to 31.5k |
| response_k | string | lengths 10 to 31.6k |
18,827,951
With Python 2.7 I am reading as unicode and writing as utf-16-le. Most characters are correctly interpreted. But some are not, for example, u'\u810a', also known as [unichr(33034)](http://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint=810A&useutf8=false). The following code does not write correctly: ``` import codecs with open('temp.txt','w') as temp: temp.write(codecs.BOM_UTF16_LE) text = unichr(33034) # text = u'\u810a' temp.write(text.encode('utf-16-le')) ``` But either of these changes, when substituted above, makes the code work: 1. unichr(33033) and unichr(33035) work correctly. 2. 'utf-8' encoding (without BOM, [byte-order mark](http://en.wikipedia.org/wiki/Byte_order_mark)). How can I recognize characters that won't write correctly, and how can I write a 'utf-16-le' encoded file with BOM that either prints these characters or some replacement?
2013/09/16
[ "https://Stackoverflow.com/questions/18827951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2327328/" ]
You are opening the file in text mode, which means that [line-break characters/bytes will be translated to the local convention](http://docs.python.org/2/library/functions.html#open). Unfortunately the character you are trying to write includes a byte, `0A`, that is interpreted as a line break and does not make it to the file correctly. Open the file in binary mode instead: ``` open('temp.txt','wb') ```
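As a minimal sketch, this is the question's own snippet with only that one change applied (Python 2, as in the question):

```py
import codecs

# 'wb' = binary mode: the 0x0A byte inside the UTF-16 encoding of
# u'\u810a' is written as-is instead of being treated as a line break.
with open('temp.txt', 'wb') as temp:
    temp.write(codecs.BOM_UTF16_LE)
    text = unichr(33034)  # text = u'\u810a'
    temp.write(text.encode('utf-16-le'))
```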
@Joni's answer is the root of the problem, but if you use `codecs.open` instead it always opens in binary mode, even if not specified. Using the `utf16` codec also automatically writes the BOM using native endian-ness: ``` import codecs with codecs.open('temp.txt','w','utf16') as temp: temp.write(u'\u810a') ``` Hex dump of `temp.txt`: ``` FF FE 0A 81 ``` Reference: [codecs.open](http://docs.python.org/3/library/codecs.html?highlight=codecs.open#codecs.open)
57,968,983
If I comment out the `baz = 4` statement in the `if False:` block, I get `NameError: name 'baz' is not defined`; otherwise I get `NameError: free variable 'baz' referenced before assignment in enclosing scope`. I am running `Python 3.7.2+`. ``` def foo(): def bar(): return baz + 1 if False: baz = 4 pass return bar() foo() ``` I expected to always get `NameError: name 'baz' is not defined`, as if `baz = 4` were commented out, because the `baz = 4` statement is never executed; but the actual message is `NameError: free variable 'baz' referenced before assignment in enclosing scope`.
2019/09/17
[ "https://Stackoverflow.com/questions/57968983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9377221/" ]
The `to_Description` property starts off as `List<Dictionary<string,string>>` and you want to take the first element from the `List`. So, given 2 classes ``` public class Source { public string Product {get;set;} public List<Dictionary<string,string>> To_Description{get;set;} } public class Destination { public string Product {get;set;} public Dictionary<string,string> To_Description{get;set;} } ``` You could do it like this: ``` var src = JsonConvert.DeserializeObject<Source>(jsonString); var dest = new Destination { Product = src.Product, To_Description = src.To_Description[0] }; var newJson = JsonConvert.SerializeObject(dest); ``` Note: You might want to check there *really is* just 1 item in the list! Live example: <https://dotnetfiddle.net/vxqumd>
Why not: ``` public class EntityDescription { public string ProductDescription { get; set; } } public class Entity { public string Product { get; set; } } public class Source : Entity { [JsonProperty("to_Description")] public EntityDescription[] Description { get; set; } } public class Target : Entity { [JsonProperty("to_Description")] public EntityDescription Description { get; set; } } var raw = File.ReadAllText(@"output.json"); var source = JsonConvert.DeserializeObject<Source>(raw); var target = new Target { Product = source.Product, Description = source.Description.FirstOrDefault() }; var rawResult = JsonConvert.SerializeObject(target); ``` **Update** For dynamic JSON ``` var jObject = JObject.Parse(File.ReadAllText(@"output.json")); var newjObject = new JObject(); foreach(var jToken in jObject) { if(jToken.Value is JArray) { List<JToken> l = jToken.Value.ToObject<List<JToken>>(); if(l != null && l.Count > 0) { newjObject.Add(jToken.Key, l.First()); } } else { newjObject.Add(jToken.Key, jToken.Value); } } var newTxt = newjObject.ToString(); ```
57,968,983
If I comment out the `baz = 4` statement in the `if False:` block, I get `NameError: name 'baz' is not defined`; otherwise I get `NameError: free variable 'baz' referenced before assignment in enclosing scope`. I am running `Python 3.7.2+`. ``` def foo(): def bar(): return baz + 1 if False: baz = 4 pass return bar() foo() ``` I expected to always get `NameError: name 'baz' is not defined`, as if `baz = 4` were commented out, because the `baz = 4` statement is never executed; but the actual message is `NameError: free variable 'baz' referenced before assignment in enclosing scope`.
2019/09/17
[ "https://Stackoverflow.com/questions/57968983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9377221/" ]
You do not need to create classes for this task. You can modify your object like this: ``` // Load the JSON from a file into a JObject JObject o1 = JObject.Parse(File.ReadAllText(@"output.json")); // Get the desired property whose value is to be replaced var prop = o1.Property("to_Description"); // Replace the property value with the first child JObject of the existing value prop.Value = prop.Value.Children<JObject>().FirstOrDefault(); // write the changed JSON back to the original file File.WriteAllText(@"output.json", o1.ToString()); ``` Fiddle: <https://dotnetfiddle.net/M83zv3>
Why not: ``` public class EntityDescription { public string ProductDescription { get; set; } } public class Entity { public string Product { get; set; } } public class Source : Entity { [JsonProperty("to_Description")] public EntityDescription[] Description { get; set; } } public class Target : Entity { [JsonProperty("to_Description")] public EntityDescription Description { get; set; } } var raw = File.ReadAllText(@"output.json"); var source = JsonConvert.DeserializeObject<Source>(raw); var target = new Target { Product = source.Product, Description = source.Description.FirstOrDefault() }; var rawResult = JsonConvert.SerializeObject(target); ``` **Update** For dynamic JSON ``` var jObject = JObject.Parse(File.ReadAllText(@"output.json")); var newjObject = new JObject(); foreach(var jToken in jObject) { if(jToken.Value is JArray) { List<JToken> l = jToken.Value.ToObject<List<JToken>>(); if(l != null && l.Count > 0) { newjObject.Add(jToken.Key, l.First()); } } else { newjObject.Add(jToken.Key, jToken.Value); } } var newTxt = newjObject.ToString(); ```
57,968,983
If I comment out the `baz = 4` statement in the `if False:` block, I get `NameError: name 'baz' is not defined`; otherwise I get `NameError: free variable 'baz' referenced before assignment in enclosing scope`. I am running `Python 3.7.2+`. ``` def foo(): def bar(): return baz + 1 if False: baz = 4 pass return bar() foo() ``` I expected to always get `NameError: name 'baz' is not defined`, as if `baz = 4` were commented out, because the `baz = 4` statement is never executed; but the actual message is `NameError: free variable 'baz' referenced before assignment in enclosing scope`.
2019/09/17
[ "https://Stackoverflow.com/questions/57968983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9377221/" ]
The `to_Description` property starts off as `List<Dictionary<string,string>>` and you want to take the first element from the `List`. So, given 2 classes ``` public class Source { public string Product {get;set;} public List<Dictionary<string,string>> To_Description{get;set;} } public class Destination { public string Product {get;set;} public Dictionary<string,string> To_Description{get;set;} } ``` You could do it like this: ``` var src = JsonConvert.DeserializeObject<Source>(jsonString); var dest = new Destination { Product = src.Product, To_Description = src.To_Description[0] }; var newJson = JsonConvert.SerializeObject(dest); ``` Note: You might want to check there *really is* just 1 item in the list! Live example: <https://dotnetfiddle.net/vxqumd>
I have used [json2csharp](http://json2csharp.com/) to convert the actual and desired output to classes and manipulated the input JSON; this will help with maintenance in the future. First, define the model: ``` public class ToDescription { public string ProductDescription { get; set; } } public class ActualObject { public string Product { get; set; } public List<ToDescription> to_Description { get; set; } } public class ChangedObject { public string Product { get; set; } public ToDescription to_Description { get; set; } } ``` Then add the logic: ``` static void Main(string[] args) { string json = "{\"Product\": \"123\", \"to_Description\": [ { \"ProductDescription\": \"Product 1\" } ]} "; ActualObject actualObject = JsonConvert.DeserializeObject<ActualObject>(json); ChangedObject changedObject = new ChangedObject(); changedObject.Product = actualObject.Product; changedObject.to_Description = actualObject.to_Description[0]; string formattedjson = JsonConvert.SerializeObject(changedObject); Console.WriteLine(formattedjson); } ```
57,968,983
If I comment out the `baz = 4` statement in the `if False:` block, I get `NameError: name 'baz' is not defined`; otherwise I get `NameError: free variable 'baz' referenced before assignment in enclosing scope`. I am running `Python 3.7.2+`. ``` def foo(): def bar(): return baz + 1 if False: baz = 4 pass return bar() foo() ``` I expected to always get `NameError: name 'baz' is not defined`, as if `baz = 4` were commented out, because the `baz = 4` statement is never executed; but the actual message is `NameError: free variable 'baz' referenced before assignment in enclosing scope`.
2019/09/17
[ "https://Stackoverflow.com/questions/57968983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9377221/" ]
You do not need to create classes for this task. You can modify your object like this: ``` // Load the JSON from a file into a JObject JObject o1 = JObject.Parse(File.ReadAllText(@"output.json")); // Get the desired property whose value is to be replaced var prop = o1.Property("to_Description"); // Replace the property value with the first child JObject of the existing value prop.Value = prop.Value.Children<JObject>().FirstOrDefault(); // write the changed JSON back to the original file File.WriteAllText(@"output.json", o1.ToString()); ``` Fiddle: <https://dotnetfiddle.net/M83zv3>
I have used [json2csharp](http://json2csharp.com/) to convert the actual and desired output to classes and manipulated the input JSON; this will help with maintenance in the future. First, define the model: ``` public class ToDescription { public string ProductDescription { get; set; } } public class ActualObject { public string Product { get; set; } public List<ToDescription> to_Description { get; set; } } public class ChangedObject { public string Product { get; set; } public ToDescription to_Description { get; set; } } ``` Then add the logic: ``` static void Main(string[] args) { string json = "{\"Product\": \"123\", \"to_Description\": [ { \"ProductDescription\": \"Product 1\" } ]} "; ActualObject actualObject = JsonConvert.DeserializeObject<ActualObject>(json); ChangedObject changedObject = new ChangedObject(); changedObject.Product = actualObject.Product; changedObject.to_Description = actualObject.to_Description[0]; string formattedjson = JsonConvert.SerializeObject(changedObject); Console.WriteLine(formattedjson); } ```
67,044,019
I'm doing a beginner Python course and created a very simple function that tells you how much your holiday costs, given the return flight cost, hotel cost (per day) and car rental (weekly). This was fine; however, the next step is to find out how long you can stay in a country given a budget of 1000. My logic is to start the duration at 1, get the function to run, and check if the `cost_of_trip` is less than 1000. If it is, it will increment duration and run again. Once the `cost_of_trip` exceeds 1000 it will stop and return the previous duration value. I've tried several while loops but can't get it to increment more than once. This is the basic function: ``` import math def duration_function (return_flight,hotel_cost,car_rental,): duration = 1 car_rental = math.ceil((duration/7)) * car_rental cost_of_trip = return_flight + (hotel_cost * duration) + car_rental while cost_of_trip <=1000: duration += 1 return duration ``` I've tried several versions of this while loop. An example input would be: ``` london_duration = duration_function( hotel_cost = 30, car_rental = 120, return_flight = 250, ) print ("London: " + str(london_duration)) ``` With the function listed above, this just runs as an infinite loop.
2021/04/11
[ "https://Stackoverflow.com/questions/67044019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15591183/" ]
The error message is telling you that you forgot to execute your function definition: `example` happens to be the name of a built-in function that has an argument `topic`. But R allows you to override built-in functions, so that’s not an issue. The fact that you’re getting the error message means that *you didn’t override the function*. When you call `example()` it’s calling the built-in function rather than yours, which means that your function wasn’t defined.
You can access those inner functions you put into a list like this. ``` example <- function(){ prnt1 <- function(){ x=1 print(x) } prnt2 <- function(){ print("my name is x") } list(Print = prnt1, Print2 = prnt2) } example2 <- example() example2$Print() [1] 1 example2$Print2() [1] "my name is x" ```
22,218,557
I have to call a PHP script, "build.php", from a button action; the PHP will call a Python script and should return immediately. The action: ``` <form method="get" action="build.php"> <input type="hidden" name="branch" value="master"> <input type="submit" value="Build Master" id="btnMaster"> </form> ``` The build.php: ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' 2>&1 &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' 2>&1 &'); } ?> ``` So the Python script is executed in the background (`&`) and the PHP should return immediately to the main page. Is this possible, and how?
2014/03/06
[ "https://Stackoverflow.com/questions/22218557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/492034/" ]
If you just want to redirect through PHP, add `header('Location: /path/to/index.php');` after the `if` clause and it will return. Logically, this should work like this: ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' 2>&1 &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' 2>&1 &'); } /* add here, appending $output for visibility; double quotes so PHP interpolates it */ header("Location: /path/to/index.php?status=$output"); ?> ```
``` set_time_limit(0); ignore_user_abort(true); header("Connection: close\r\n"); header("Content-Encoding: none\r\n"); ob_start(); echo json_encode($response); $size = ob_get_length(); header("Content-Length: $size",TRUE); ob_end_flush(); ob_flush(); flush(); session_write_close(); ``` `$response` is the response that you want to send back to the calling script. Do whatever you want after `session_write_close()` so that those things don't block the execution of your calling script further.
22,218,557
I have to call a PHP script, "build.php", from a button action; the PHP will call a Python script and should return immediately. The action: ``` <form method="get" action="build.php"> <input type="hidden" name="branch" value="master"> <input type="submit" value="Build Master" id="btnMaster"> </form> ``` The build.php: ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' 2>&1 &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' 2>&1 &'); } ?> ``` So the Python script is executed in the background (`&`) and the PHP should return immediately to the main page. Is this possible, and how?
2014/03/06
[ "https://Stackoverflow.com/questions/22218557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/492034/" ]
If you just want to redirect through PHP, add `header('Location: /path/to/index.php');` after the `if` clause and it will return. Logically, this should work like this: ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' 2>&1 &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' 2>&1 &'); } /* add here, appending $output for visibility; double quotes so PHP interpolates it */ header("Location: /path/to/index.php?status=$output"); ?> ```
Try redirecting the output to `/dev/null`; we do something almost identical, and it works perfectly for what you're trying to do. ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' > /dev/null 2>/dev/null &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' > /dev/null 2>/dev/null &'); } ?> ```
22,218,557
I have to call a PHP script, "build.php", from a button action; the PHP will call a Python script and should return immediately. The action: ``` <form method="get" action="build.php"> <input type="hidden" name="branch" value="master"> <input type="submit" value="Build Master" id="btnMaster"> </form> ``` The build.php: ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' 2>&1 &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' 2>&1 &'); } ?> ``` So the Python script is executed in the background (`&`) and the PHP should return immediately to the main page. Is this possible, and how?
2014/03/06
[ "https://Stackoverflow.com/questions/22218557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/492034/" ]
If you just want to redirect through PHP, add `header('Location: /path/to/index.php');` after the `if` clause and it will return. Logically, this should work like this: ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' 2>&1 &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' 2>&1 &'); } /* add here, appending $output for visibility; double quotes so PHP interpolates it */ header("Location: /path/to/index.php?status=$output"); ?> ```
If my other answer doesn't work you can try this: the quickest/easiest way would probably be to load the script in an iFrame using JavaScript. This will load it without navigating away from the page. Your page: ``` <form method="get" action="build.php"> <input type="hidden" name="branch" value="master"> <input onclick="runBuild()" type="submit" value="Build Master" id="btnMaster"> </form> <iframe id="launch_script" name="launch_script" style="display:none"></iframe> <script> function runBuild(){ document.getElementById('launch_script').src = 'build.php'; } </script> ``` Note you'll probably need to pass your GET parameters along with the iFrame src.
22,218,557
I have to call a PHP script, "build.php", from a button action; the PHP will call a Python script and should return immediately. The action: ``` <form method="get" action="build.php"> <input type="hidden" name="branch" value="master"> <input type="submit" value="Build Master" id="btnMaster"> </form> ``` The build.php: ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' 2>&1 &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' 2>&1 &'); } ?> ``` So the Python script is executed in the background (`&`) and the PHP should return immediately to the main page. Is this possible, and how?
2014/03/06
[ "https://Stackoverflow.com/questions/22218557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/492034/" ]
Try redirecting the output to `/dev/null`; we do something almost identical, and it works perfectly for what you're trying to do. ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' > /dev/null 2>/dev/null &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' > /dev/null 2>/dev/null &'); } ?> ```
``` set_time_limit(0); ignore_user_abort(true); header("Connection: close\r\n"); header("Content-Encoding: none\r\n"); ob_start(); echo json_encode($response); $size = ob_get_length(); header("Content-Length: $size",TRUE); ob_end_flush(); ob_flush(); flush(); session_write_close(); ``` `$response` is the response that you want to send back to the calling script. Do whatever you want after `session_write_close()` so that those things don't block the execution of your calling script further.
22,218,557
I have to call a PHP script, "build.php", from a button action; the PHP will call a Python script and should return immediately. The action: ``` <form method="get" action="build.php"> <input type="hidden" name="branch" value="master"> <input type="submit" value="Build Master" id="btnMaster"> </form> ``` The build.php: ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' 2>&1 &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' 2>&1 &'); } ?> ``` So the Python script is executed in the background (`&`) and the PHP should return immediately to the main page. Is this possible, and how?
2014/03/06
[ "https://Stackoverflow.com/questions/22218557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/492034/" ]
Try redirecting the output to `/dev/null`; we do something almost identical, and it works perfectly for what you're trying to do. ``` <?php if(isset($_GET['branch'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --branch ' . $_GET['branch'] . ' > /dev/null 2>/dev/null &'); } elseif (isset($_GET['tag'])) { $output = shell_exec('/usr/bin/python /Users/testuser/gitroot/buildOnGuest.py --dest /Users/testuser/Builds --tag ' . $_GET['tag'] . ' > /dev/null 2>/dev/null &'); } ?> ```
If my other answer doesn't work you can try this: the quickest/easiest way would probably be to load the script in an iFrame using JavaScript. This will load it without navigating away from the page. Your page: ``` <form method="get" action="build.php"> <input type="hidden" name="branch" value="master"> <input onclick="runBuild()" type="submit" value="Build Master" id="btnMaster"> </form> <iframe id="launch_script" name="launch_script" style="display:none"></iframe> <script> function runBuild(){ document.getElementById('launch_script').src = 'build.php'; } </script> ``` Note you'll probably need to pass your GET parameters along with the iFrame src.
43,056,088
I'm trying to open a stereo stream and convert it to mono, using the [wave module](https://docs.python.org/2/library/wave.html) in python. So far I was able to write a single (left or right) channel from a 16bit stereo little endian file: ``` LEFT, RIGHT = 0, 1 def mono_single(cont, chan=LEFT): a = iter(cont) mono_cont = '' if chan: a.next(); a.next() while True: try: mono_cont += a.next() + a.next() a.next(); a.next() except StopIteration: return mono_cont stereo = wave.open('stereofile.wav', 'rb') mono = wave.open('monofile.wav', 'wb') mono.setparams(stereo.getparams()) mono.setnchannels(1) mono.writeframes(mono_single(stereo.readframes(stereo.getnframes()))) mono.close() ``` This works as expected. The problem comes when I try to downmix the two stereo channels to a single mono channel. I thought that a simple average between left and right would have been enough, and this is what I tried so far: ``` def mono_mix(cont): a = iter(cont) mono_cont = '' while True: try: left = ord(a.next()) + (ord(a.next()) << 8) right = ord(a.next()) + (ord(a.next()) << 8) value = (left + right) / 2 mono_cont += chr(value & 255) + chr(value >> 8) except StopIteration: return mono_cont stereo = wave.open('stereofile.wav', 'rb') mono = wave.open('monofile.wav', 'wb') mono.setparams(stereo.getparams()) mono.setnchannels(1) mono.writeframes(mono_mix(stereo.readframes(stereo.getnframes()))) mono.close() ``` What I get from this is a "crackled" version of the source. I tried different combinations (I might have misunderstood the whole endianness thing), but with no luck so far.
2017/03/27
[ "https://Stackoverflow.com/questions/43056088", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2001654/" ]
In your first code, when you call `i/0` for the second time in the shell, the `loop/0` process mailbox is not empty (num = 1), meaning that your `stop` message was not matched by the `loop/0` process. Make sure your order of operations is correct: 1. compile 2. spawn 3. `i/0` 4. send the `stop` message 5. `i/0`
Okay, it looks like `stop` is maybe a reserved atom in Erlang. My code works as expected if I substitute the atom `finished` or `xyz`: ``` loop() -> receive {circle, R} -> io:format("The area of a circle with radius ~w is: ~w~n", [R, math:pi()*R*R]), loop(); {rectangle, H, W} -> io:format("The area of a rectangle with sides ~w x ~w is: ~w~n", [H, W, H*W]), loop(); finished -> ok end. ```
11,084,710
I have an algorithm for calculating average speed in pure Python: ``` speed = [...] avg_speed = 0.0 speed_count = 0 for i in speed: if i > 0: # I don't need zeros avg_speed += i speed_count += 1 if speed_count == 0: return 0.0 return avg_speed / speed_count ``` Is there any way to rewrite this function with NumPy?
2012/06/18
[ "https://Stackoverflow.com/questions/11084710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325846/" ]
``` import numpy as np def avg_positive_speed(speed): s = np.array(speed) positives = s > 0 if positives.any(): return s[positives].mean() else: return 0. speed = [1., 2., 0., 3.] print avg_positive_speed(speed) # prints 2.0 print avg_positive_speed([0., 0.]) # prints 0.0 ```
I know you want a `numpy` solution, so this doesn't meet that criteria (@eumiro's earlier post certainly does), but just as an alternative, here's an optimized Python version which surprisingly (to me at least) turned out to be quite speedy! ``` speeds = [i for i in speed if i > 0] return sum(speeds) / (1.0 * len(speeds)) if sum(speeds) > 0 else 0.0 ``` Might be interesting to compare this with the numpy (or the original) implementation in terms of speed. ``` In [14]: timeit original(speed) # original code 1000 loops, best of 3: 1.13 ms per loop In [15]: timeit python_opt(speed) # above Python 2 liner 1000 loops, best of 3: 582 us per loop In [16]: timeit avg_positive_speed(speed) # numpy code 1000 loops, best of 3: 1.2 ms per loop ``` where ``` speed = range(10000) ``` I would have thought that `numpy` would have the edge here .. anyone know why it trails? Update: with `speed = range(100000)`: ``` In [19]: timeit original(speed) 100 loops, best of 3: 12.2 ms per loop In [20]: timeit python_opt(speed) 100 loops, best of 3: 11 ms per loop In [21]: timeit avg_positive_speed(speed) 100 loops, best of 3: 12.5 ms per loop ``` Still not convinced that `numpy` is a good tool for ***this*** particular problem, unless there are a **huge** number of speeds :) How does numpy handle memory? list comprehension will at some point bump into some limitations.
11,084,710
I have an algorithm for calculating average speed in pure Python: ``` speed = [...] avg_speed = 0.0 speed_count = 0 for i in speed: if i > 0: # I don't need zeros avg_speed += i speed_count += 1 if speed_count == 0: return 0.0 return avg_speed / speed_count ``` Is there any way to rewrite this function with NumPy?
2012/06/18
[ "https://Stackoverflow.com/questions/11084710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325846/" ]
The function `numpy.average` can receive a `weights` argument, where you can put a boolean array generated from some condition applied to the array itself - in this case, an element being greater than 0: ``` average_speed = numpy.average(speeds, weights=(speeds > 0)) ``` Hope this helps
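A quick sanity check of the trick (sample values assumed; note `speeds` must be a NumPy array for `speeds > 0` to produce the boolean weights):

```py
import numpy as np

speeds = np.array([1.0, 2.0, 0.0, 3.0])
# Boolean weights zero out the non-positive entries, and the weighted
# average divides by the weight sum, i.e. the count of positive values.
print(np.average(speeds, weights=(speeds > 0)))  # 2.0
```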
``` import numpy as np def avg_positive_speed(speed): s = np.array(speed) positives = s > 0 if positives.any(): return s[positives].mean() else: return 0. speed = [1., 2., 0., 3.] print avg_positive_speed(speed) # prints 2.0 print avg_positive_speed([0., 0.]) # prints 0.0 ```
11,084,710
I have an algorithm for calculating average speed in pure Python: ``` speed = [...] avg_speed = 0.0 speed_count = 0 for i in speed: if i > 0: # I don't need zeros avg_speed += i speed_count += 1 if speed_count == 0: return 0.0 return avg_speed / speed_count ``` Is there any way to rewrite this function with NumPy?
2012/06/18
[ "https://Stackoverflow.com/questions/11084710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325846/" ]
I'm surprised no one has suggested the shortest solution: ``` speeds_np = np.array(speeds) speeds_np[speeds_np>0].mean() ``` **Explanation:** `speeds_np > 0` creates a boolean array of the same size satisfying the (in)equality. Indexing `speeds_np` with it yields only the values of `speeds_np` where the boolean array is `True`. All you need to do then is take the `mean()` of the result.
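For example, with sample values assumed:

```py
import numpy as np

speeds = [1.0, 2.0, 0.0, 3.0]
speeds_np = np.array(speeds)
# The boolean mask drops the zeros before averaging.
print(speeds_np[speeds_np > 0].mean())  # 2.0
```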
``` import numpy as np def avg_positive_speed(speed): s = np.array(speed) positives = s > 0 if positives.any(): return s[positives].mean() else: return 0. speed = [1., 2., 0., 3.] print avg_positive_speed(speed) # prints 2.0 print avg_positive_speed([0., 0.]) # prints 0.0 ```
11,084,710
I have an algorithm for calculating average speed in pure Python: ``` speed = [...] avg_speed = 0.0 speed_count = 0 for i in speed: if i > 0: # I don't need zeros avg_speed += i speed_count += 1 if speed_count == 0: return 0.0 return avg_speed / speed_count ``` Is there any way to rewrite this function with NumPy?
2012/06/18
[ "https://Stackoverflow.com/questions/11084710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325846/" ]
``` import numpy as np def avg_positive_speed(speed): s = np.array(speed) positives = s > 0 if positives.any(): return s[positives].mean() else: return 0. speed = [1., 2., 0., 3.] print avg_positive_speed(speed) # prints 2.0 print avg_positive_speed([0., 0.]) # prints 0.0 ```
As of [v1.20](https://numpy.org/doc/stable/release/1.20.0-notes.html#where-keyword-argument-for-numpy-functions-mean-std-var), NumPy's `mean` and related functions support a `where` argument: ```py speed.mean(where=speed>0) ```
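A slightly fuller sketch (sample values assumed; needs NumPy 1.20+ and `speed` as an array rather than a list):

```py
import numpy as np

speed = np.array([1.0, 2.0, 0.0, 3.0])
# where= restricts the reduction to the masked elements, so zeros are
# excluded from both the sum and the count.
print(speed.mean(where=speed > 0))  # 2.0
```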
11,084,710
I have an algorithm for calculating average speed in pure Python: ``` speed = [...] avg_speed = 0.0 speed_count = 0 for i in speed: if i > 0: # I don't need zeros avg_speed += i speed_count += 1 if speed_count == 0: return 0.0 return avg_speed / speed_count ``` Is there any way to rewrite this function with NumPy?
2012/06/18
[ "https://Stackoverflow.com/questions/11084710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325846/" ]
The function `numpy.average` can receive a `weights` argument, where you can put a boolean array generated from some condition applied to the array itself - in this case, an element being greater than 0: ``` average_speed = numpy.average(speeds, weights=(speeds > 0)) ``` Hope this helps
I know you want a `numpy` solution, so this doesn't meet that criteria (@eumiro's earlier post certainly does), but just as an alternative, here's an optimized Python version which surprisingly (to me at least) turned out to be quite speedy! ``` speeds = [i for i in speed if i > 0] return sum(speeds) / (1.0 * len(speeds)) if sum(speeds) > 0 else 0.0 ``` Might be interesting to compare this with the numpy (or the original) implementation in terms of speed. ``` In [14]: timeit original(speed) # original code 1000 loops, best of 3: 1.13 ms per loop In [15]: timeit python_opt(speed) # above Python 2 liner 1000 loops, best of 3: 582 us per loop In [16]: timeit avg_positive_speed(speed) # numpy code 1000 loops, best of 3: 1.2 ms per loop ``` where ``` speed = range(10000) ``` I would have thought that `numpy` would have the edge here .. anyone know why it trails? Update: with `speed = range(100000)`: ``` In [19]: timeit original(speed) 100 loops, best of 3: 12.2 ms per loop In [20]: timeit python_opt(speed) 100 loops, best of 3: 11 ms per loop In [21]: timeit avg_positive_speed(speed) 100 loops, best of 3: 12.5 ms per loop ``` Still not convinced that `numpy` is a good tool for ***this*** particular problem, unless there are a **huge** number of speeds :) How does numpy handle memory? list comprehension will at some point bump into some limitations.
11,084,710
I have an algorithm for calculating average speed in pure Python: ``` speed = [...] avg_speed = 0.0 speed_count = 0 for i in speed: if i > 0: # I don't need zeros avg_speed += i speed_count += 1 if speed_count == 0: return 0.0 return avg_speed / speed_count ``` Is there any way to rewrite this function with NumPy?
2012/06/18
[ "https://Stackoverflow.com/questions/11084710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325846/" ]
I'm surprised no one has suggested the shortest solution: ``` speeds_np = np.array(speeds) speeds_np[speeds_np>0].mean() ``` **Explanation:** `speeds_np > 0` creates a boolean array of the same size satisfying the (in)equality. Indexing `speeds_np` with it yields only the values of `speeds_np` where the boolean array is `True`. All you need to do then is take the `mean()` of the result.
I know you want a `numpy` solution, so this doesn't meet that criteria (@eumiro's earlier post certainly does), but just as an alternative, here's an optimized Python version which surprisingly (to me at least) turned out to be quite speedy! ``` speeds = [i for i in speed if i > 0] return sum(speeds) / (1.0 * len(speeds)) if sum(speeds) > 0 else 0.0 ``` Might be interesting to compare this with the numpy (or the original) implementation in terms of speed. ``` In [14]: timeit original(speed) # original code 1000 loops, best of 3: 1.13 ms per loop In [15]: timeit python_opt(speed) # above Python 2 liner 1000 loops, best of 3: 582 us per loop In [16]: timeit avg_positive_speed(speed) # numpy code 1000 loops, best of 3: 1.2 ms per loop ``` where ``` speed = range(10000) ``` I would have thought that `numpy` would have the edge here .. anyone know why it trails? Update: with `speed = range(100000)`: ``` In [19]: timeit original(speed) 100 loops, best of 3: 12.2 ms per loop In [20]: timeit python_opt(speed) 100 loops, best of 3: 11 ms per loop In [21]: timeit avg_positive_speed(speed) 100 loops, best of 3: 12.5 ms per loop ``` Still not convinced that `numpy` is a good tool for ***this*** particular problem, unless there are a **huge** number of speeds :) How does numpy handle memory? list comprehension will at some point bump into some limitations.
11,084,710
I have an algorithm for calculating average speed in pure Python: ``` speed = [...] avg_speed = 0.0 speed_count = 0 for i in speed: if i > 0: # I don't need zeros avg_speed += i speed_count += 1 if speed_count == 0: return 0.0 return avg_speed / speed_count ``` Is there any way to rewrite this function with NumPy?
2012/06/18
[ "https://Stackoverflow.com/questions/11084710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325846/" ]
The function `numpy.average` can receive a `weights` argument, where you can put a boolean array generated from some condition applied to the array itself - in this case, an element being greater than 0: ``` average_speed = numpy.average(speeds, weights=(speeds > 0)) ``` Hope this helps
As of [v1.20](https://numpy.org/doc/stable/release/1.20.0-notes.html#where-keyword-argument-for-numpy-functions-mean-std-var), NumPy's `mean` and related functions support a `where` argument: ```py speed.mean(where=speed>0) ```
11,084,710
I have an algorithm for calculating average speed in pure Python: ``` speed = [...] avg_speed = 0.0 speed_count = 0 for i in speed: if i > 0: # I don't need zeros avg_speed += i speed_count += 1 if speed_count == 0: return 0.0 return avg_speed / speed_count ``` Is there any way to rewrite this function with NumPy?
2012/06/18
[ "https://Stackoverflow.com/questions/11084710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325846/" ]
I'm surprised no one has suggested the shortest solution: ``` speeds_np = np.array(speeds) speeds_np[speeds_np>0].mean() ``` **Explanation:** `speeds_np > 0` creates a boolean array of the same size satisfying the (in)equality. Indexing `speeds_np` with it yields only the values of `speeds_np` where the boolean array is `True`. All you need to do then is take the `mean()` of the result.
As of [v1.20](https://numpy.org/doc/stable/release/1.20.0-notes.html#where-keyword-argument-for-numpy-functions-mean-std-var), NumPy's `mean` and related functions support a `where` argument: ```py speed.mean(where=speed>0) ```
71,441,054
For example, this piece of code creates a label with some content, which can be text, a bitmap, etc.: ```py label1 = tk.Label(text="Hello", font=("Arial",32,"bold"), bg="grey", fg="red",width=200,height=200) #and so on... ``` I cannot really refactor how the `tk.Label` constructor handles things, nor change it. In my case I would like to configure text/font in a separate object, all styling in another object, and width/height in yet another object. Like this: ```py size_config = ElementSize(width=200,height=200) styling_config = Style(bg="red",fg="blue") text_config = ... # then create the label with these objects ``` And this is just an example. I very often get frustrated by how badly Python libraries use constructors, and I don't know what to do with these over-9000 parameters that have no structure to them at all. Maybe there is something wrong with me? Is there a way to hide all of this stuff in some sort of scoped objects and then map them to all of this nightmare in the constructors?
2022/03/11
[ "https://Stackoverflow.com/questions/71441054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14586554/" ]
Use `**kwargs` unpacking. ``` label_args = dict( text="Hello", font=("Arial",32,"bold"), bg="grey", fg="red", width=200, height=200, ) #and so on... label1 = tk.Label(**label_args) ``` If you want to apply static typing to these things, you can define `TypedDict`s for each function signature.
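A sketch of the `TypedDict` idea (the `LabelOptions` name and the subset of keys are illustrative only):

```py
import tkinter as tk
from typing import Tuple, TypedDict  # TypedDict needs Python 3.8+

class LabelOptions(TypedDict, total=False):
    text: str
    font: Tuple[str, int, str]
    bg: str
    fg: str
    width: int
    height: int

root = tk.Tk()
label_args: LabelOptions = {"text": "Hello", "bg": "grey", "fg": "red"}
# A type checker now flags misspelled keys or wrongly typed values
# in the literal above, while the call site stays unchanged.
label1 = tk.Label(root, **label_args)
```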
Why don't you use your own function? ``` from inspect import getfullargspec class Label: def __init__(self, text, font, bg, fg, width, height): self.text = text self.font = font self.bg = bg self.fg = fg self.width = width self.height = height print("created a label") class ElementSize: def __init__(self, width, height): self.width = width self.height = height class Background: def __init__(self, bg): self.bg = bg def map_arguments(argument_sources, cls, **kwargs): arguments = dict() for argument_source in argument_sources: argument_source_signature = getfullargspec(argument_source.__init__).args for arg in argument_source_signature: if arg == "self": continue arguments[arg] = argument_source.__getattribute__(arg) arguments.update(kwargs) return cls(**arguments) if __name__ == '__main__': label = map_arguments([ElementSize(2, 4), Background("grey")], Label, fg="red", text="Hello", font=("Arial", 32, "bold")) ```
59,925,284
I have tried other answers on Stack Overflow and GitHub, but none of them are working. When I run the container [I get this as the output (which is the standard `flask run` output)](https://i.stack.imgur.com/EmkR8.png). This is my Dockerfile: ``` FROM alpine:latest RUN apk add --no-cache python3-dev && pip3 install --upgrade pip WORKDIR /app COPY . /app RUN pip3 install Flask && pip3 install requests && pip3 install simplejson EXPOSE 5000 CMD [ "flask", "run" ] ``` I have tried `0.0.0.0:5000` too, but it's not working.
2020/01/27
[ "https://Stackoverflow.com/questions/59925284", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6870913/" ]
Pop from the priority queue here without peeking: ``` let v = d.get_mut(&key).unwrap(); mat[i][j] = v.pop().unwrap().0; ``` Result (output): ``` input: 9 8 7 6 5 4 3 2 1 output: 1 4 7 2 5 8 3 6 9 ``` And you may use [`std::cmp::Reverse`](https://doc.rust-lang.org/std/cmp/struct.Reverse.html) for reverse ordering: ``` use std::cmp::Reverse; use std::collections::BinaryHeap; use std::collections::HashMap; pub fn diagonal_sort(mat: Vec<Vec<i32>>) -> Vec<Vec<i32>> { let mut mat = mat.clone(); let mut mh: HashMap<i32, BinaryHeap<_>> = HashMap::new(); for i in 0..mat.len() { for j in 0..mat[0].len() { let key = i as i32 - j as i32; mh.entry(key) .or_insert(BinaryHeap::new()) .push(Reverse(mat[i][j])); } } for i in 0..mat.len() { for j in 0..mat[0].len() { let key = i as i32 - j as i32; let q = mh.get_mut(&key).unwrap(); match q.pop().unwrap() { Reverse(v) => mat[i][j] = v, } } } mat } fn main() { let m = vec![vec![9, 8, 7], vec![6, 5, 4], vec![3, 2, 1]]; show("input:", &m); let s = diagonal_sort(m); show("output:", &s); } fn show(s: &str, mat: &Vec<Vec<i32>>) { println!("{}", s); let m = mat.len(); let n = mat[0].len(); for i in 0..m { for j in 0..n { print!("{} ", mat[i][j]); } println!(); } } ``` --- Result (output): ``` input: 9 6 3 8 5 2 7 4 1 output: 1 2 3 4 5 6 7 8 9 ``` Try this (for **diagonally distinct** elements): ``` use priority_queue::PriorityQueue; use std::collections::HashMap; fn diagonal_sort(mat: Vec<Vec<i32>>) -> Vec<Vec<i32>> { let mut d: HashMap<i32, PriorityQueue<i32, i32>> = HashMap::new(); let mut mat = mat.clone(); let m = mat.len(); let n = mat[0].len(); for i in 0..m { for j in 0..n { let key = i as i32 - j as i32; let v = -mat[i][j]; d.entry(key).or_insert(PriorityQueue::new()).push(v, v); } } for i in 0..m { for j in 0..n { let key = i as i32 - j as i32; let v = d.get_mut(&key).unwrap(); mat[i][j] = -v.pop().unwrap().0; } } mat } fn main() { let m = vec![vec![9, 6, 3], vec![8, 5, 2], vec![7, 4, 1]]; show("input:", &m); let s = diagonal_sort(m); show("output:", &s); } fn show(s: &str, mat: &Vec<Vec<i32>>) { println!("{}", s); let m = mat.len(); let n = mat[0].len(); for i in 0..m { for j in 0..n { print!("{} ", mat[i][j]); } println!(); } } ```
I figured out that it's actually better to use std::collections::BinaryHeap to solve this problem. ``` use std::collections::BinaryHeap; use std::collections::HashMap; pub fn main() { let mat = vec![vec![3,3,1,1],vec![2,2,1,2],vec![1,1,1,2]]; let res = diagonal_sort(mat); println!("{:?}", res); } pub fn diagonal_sort(mat: Vec<Vec<i32>>) -> Vec<Vec<i32>> { let mut mat = mat.clone(); let mut mh: HashMap<i32, BinaryHeap<i32>> = HashMap::new(); for i in 0..mat.len() { for j in 0..mat[0].len() { let key = i as i32 - j as i32; mh.entry(key) .or_insert(BinaryHeap::new()) .push(-mat[i][j]); } } for i in 0..mat.len() { for j in 0..mat[0].len() { let key = i as i32 - j as i32; let q = mh.get_mut(&key).unwrap(); mat[i][j] = -(q.pop().unwrap()); } } mat } ```
36,152,435
`sys.getsizeof` is returning different sizes for a unicode string on different versions of Python: `sys.getsizeof(u'Hello World')` returns `96` on `Python 2.7.3` and `72` on `Python 2.7.11`.
2016/03/22
[ "https://Stackoverflow.com/questions/36152435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1665862/" ]
`sys.getsizeof` is giving you implementation details by definition, and none of those details are guaranteed to remain stable between versions or even builds. It's unlikely that anything significant changed between 2.7.3 and 2.7.11 though; YOU's comment on character width likely explains the discrepancy; including the internally stored NUL terminator, there are 12 characters in `Hello World`, and UCS4 encoding would require 24 more bytes to store them than UCS2 encoding (but in exchange, it could handle non-[BMP](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane) characters). Other things that could change size (in other circumstances) would be 32 vs. 64 bit builds (all pointers and `ssize_t`s double in size on 64 bit builds, as do `long`s on non-Windows machines), Python 2 vs. Python 3 (Python 3 removed a single pointer width field from the common object header), and for `str`, Python 3.2 (which uses build option specified fixed width UCS2 or UCS4 `str`, same as Py2 `unicode`) vs. Python 3.3+ (which [uses one of three different fixed widths depending on the largest ordinal in the `str`](https://docs.python.org/3/whatsnew/3.3.html#pep-393-flexible-string-representation), so an ASCII/latin-1 `str` uses one byte per character, a BMP `str` uses two, and a non-BMP `str` uses four, but can also cache alternate representations, so the same `str` can grow or shrink in "real" size based on usage).
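On Python 2.7 you can check which build you have with standard-library calls only (the exact byte counts remain implementation details, as noted above):

```py
import sys

# 0xffff on a narrow (UCS2) build, 0x10ffff on a wide (UCS4) build.
print(hex(sys.maxunicode))

# Marginal cost of one extra BMP character: 2 bytes narrow, 4 wide.
print(sys.getsizeof(u'ab') - sys.getsizeof(u'a'))
```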
`sys.getsizeof` can differ on different computers. However, I think this can solve your issue: take the size of the string and subtract the size of an empty string. ``` import sys def get_size_of_string(s): return sys.getsizeof(s) - sys.getsizeof("") a = get_size_of_string("abc") print(a) ```
22,282,316
I'm building a website using the [Flask Framework](http://flask.pocoo.org/), in which I've got a folder in which I have some python files and an `__init__.py` script (I guess you would call this folder a module?). In the `__init__.py` file I've got a line saying: ``` db = Database(app) ``` I now want to use `db` in a different script which is in this folder. Normally I would do this using `from __init__ import db`, but that just doesn't seem right to do, let alone Pythonic. Furthermore, since it is in the `__init__.py` file, I suppose it should somehow be initialised for the whole folder/module. Does anybody know how I can use `db` from the `__init__.py` file? All tips are welcome!
2014/03/09
[ "https://Stackoverflow.com/questions/22282316", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1650012/" ]
Try [relative imports](http://docs.python.org/2/tutorial/modules.html#intra-package-references) ``` from . import db ```
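For example, given a package layout like this (names other than `__init__.py` are hypothetical):

```
yourpackage/
    __init__.py   # contains: db = Database(app)
    views.py      # contains: from . import db
```

Note this works only when the folder is imported as a package (e.g. `import yourpackage`), not when `views.py` is executed directly as a script.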
The `__init__.py` files are automatically imported when you import the package they are in. For example, say the `__init__.py` is in a package called **foo**. When you import foo ``` import foo from foo import * import foo as ... ``` the `__init__` file gets called. And are you using python 2 or 3? If you are using python 3, you will need to use dynamic imports: ``` from * import __init__ ``` In general, it is a bad practice to import from `__init__`; I would suggest trying to place this code in another file in the package.
62,698,250
I am looking for an algorithm that can factorize numbers based on numbers it has already factorized. In other words, I am searching for a fast algorithm to factorize all numbers up to a given number, and store them in (I guess this is the easiest data structure to use) a list/tuple of tuples. I am looking for an "up to n" algorithm because I need all numbers up to "n", and I guess it's faster than just checking one by one. I want this algorithm to work within a reasonable time (less than an hour) for 2*10^8, for a program I am running. I have tried one of the more naive approaches in Python: finding all primes up to "n" first, and then for each number "k" finding its prime factorization by checking each prime until one divides it (we will call it p); then its factorization is the factorization of k/p, plus p. ``` from math import * max=1000000 # We will check all numbers up to this number, lst = [True] * (max - 2) # This is an algorithm I found online that will make the "PRIMES" list all the primes up to "max", very efficient for i in range(2, int(sqrt(max) + 1)): if lst[i - 2]: for j in range(i ** 2, max, i): lst[j - 2] = False PRIMES = tuple([m + 2 for m in range(len(lst)) if lst[m]]) # (all primes up to "max") FACTORS = [(0,),(1,)] #This will be a list of tuples where FACTORS[i] = the prime factors of i for c in range(2,max): #check all numbers until max if c in PRIMES: FACTORS.append((c,)) #If it's a prime just add it in else: #if it's not a prime... i=0 while PRIMES[i]<= c: #Run through all primes until you find one that divides it, if c%PRIMES[i] ==0: FACTORS.append(FACTORS[c//PRIMES[i]] + (PRIMES[i],)) #If it does, add the prime with the factors of the division break i+=1 ``` From testing, the vast majority of the time is spent in the else section AFTER checking whether the candidate is prime. This takes more than an hour for max = 200000000. --- **P.S. - WHAT I'M USING THIS FOR - NOT IMPORTANT** -------------------------------------------------- The program I am running this for is to find the smallest "n" such that for a certain "a", (2n)!/((n+a)!^2) is a whole number. Basically, I defined a_n = the smallest k such that (2k)!/((k+n)!^2) is an integer. It turns out that a_1 = 0, a_2 = 208, a_3 = 3475, a_4 = 8174, a_5 = 252965, a_6 = 3648835, a_7 = 72286092. By the way, I noticed that a_n + n is squarefree, although I can't prove it mathematically. Using Legendre's formula: <https://en.wikipedia.org/wiki/Legendre%27s_formula>, I wrote this code: ``` from math import * from bisect import bisect_right max=100000000 # We will check all numbers up to this number, lst = [True] * (max - 2) # This is an algorithm I found online that will make the "PRIMES" list all the primes up to "max", very efficient for i in range(2, int(sqrt(max) + 1)): if lst[i - 2]: for j in range(i ** 2, max, i): lst[j - 2] = False PRIMES = tuple([m + 2 for m in range(len(lst)) if lst[m]]) # (all primes up to "max") print("START") def v(p,m): return sum([ (floor(m/(p**i))) for i in range(1,1+ceil(log(m,p)))]) #This checks for the max power of prime p, so that p**(v(p,m)) divides factorial(m) def check(a,n): #This function checks if a number n meets the criteria for a certain a if PRIMES[bisect_right(PRIMES, n)]<= n + a: #First, it is obvious that if there is a prime between n+1 and n+a the criteria isn't met return False i=0 while PRIMES[i] <= n: #We will run through the primes smaller than n... THIS IS THE ROOM FOR IMPROVEMENT - instead of checking all the primes, check all primes that divide (n+1),(n+2),...,(n+a) if v(PRIMES[i],2*n)<2*v(PRIMES[i],n+a): # If any prime divides the denominator more than the numerator, the fraction is obviously not a whole number return False i+=1 return True #If for all primes less than n, the numerator has a bigger max power of p than the denominator, the fraction is a whole number. #Next, is a code that will just make sure that the program runs all numbers in order, and won't repeat anything. start = 0 #start checking from this value for a in range(1,20): #check for these values of a. j=start while not check(a,j): if j%100000==0: print("LOADING ", j) #just so i know how far the program has gotten. j+=1 print("a-",a," ",j) #We found a number. great. print the result. start=j #start from this value again, because the check obviously won't work for smaller values with a higher "a" ```
2020/07/02
[ "https://Stackoverflow.com/questions/62698250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13653390/" ]
You can use the first part of your script in order to do that! Code: ```py from math import * import time MAX = 40000000 t = time.time() # factors[i] = all the prime factors of i factors = {} # Running over all the numbers smaller than sqrt(MAX) since they can be the factors of MAX for i in range(2, int(sqrt(MAX) + 1)): # If this number has already been factored - it is not prime if i not in factors: # Find all the future numbers that this number will factor for j in range(i * 2, MAX, i): if j not in factors: factors[j] = [i] else: factors[j].append(i) print(time.time() - t) for i in range(3, 15): if i not in factors: print(f"{i} is prime") else: print(f"{i}: {factors[i]}") ``` Result: > > 3: is prime > > 4: [2] > > 5: is prime > > 6: [2, 3] > > 7: is prime > > 8: [2] > > 9: [3] > > 10: [2, 5] > > 11: is prime > > 12: [2, 3] > > 13: is prime > > 14: [2, 7] > > > Explanation: As mentioned in the comments it is a modification of the [Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) algorithm. For each number we find all the numbers it will factor in the future. If a number does not appear in the result dictionary it is a prime, since no smaller number divides it. We are using a dictionary instead of a list so the prime numbers do not need to be stored at all - which is a bit more memory friendly but also a bit slower. Time: According to a simple check for `MAX = 40000000` with `time.time()`: `110.14351892471313` seconds. For `MAX = 1000000`: `1.0785243511199951` seconds. For `MAX = 200000000` with `time.time()`: Not finished after 1.5 hours... It has reached the 111th item in the main loop out of 6325 items (this is not so bad, since the inner loops get shorter the further along we are). I do believe, however, that well-written C code could do it in half an hour (if you are willing to consider it, I might write another answer). Some more optimizations that could be done are multithreading and a primality test like Miller–Rabin. Of course it is worth mentioning that these results are on my laptop, and maybe on a PC or a dedicated machine it would run faster or slower.
Edit: I actually asked a [question in code review](https://codereview.stackexchange.com/questions/245211/factorize-all-numbers-up-to-a-given-number) about this answer and it has some cool graphs about the runtime! Edit #2: Someone answered my question and now the code can run in 2.5 seconds with some modifications. --- Since the previous answer was written in `Python` it was slow. The following code does the exact same thing but in `C++`; it has a thread that reports every 10 seconds which prime it has reached. ```cpp #include <math.h> #include <unistd.h> #include <list> #include <vector> #include <ctime> #include <thread> #include <iostream> #include <atomic> #ifndef MAX #define MAX 200000000 #define TIME 10 #endif std::atomic<bool> exit_thread_flag{false}; void timer(int *i_ptr) { for (int i = 1; !exit_thread_flag; i++) { sleep(TIME); if (exit_thread_flag) { break; } std::cout << "i = " << *i_ptr << std::endl; std::cout << "Time elapsed since start: " << i * TIME << " Seconds" << std::endl; } } int main(int argc, char const *argv[]) { int i, upper_bound, j; std::time_t start_time; std::thread timer_thread; std::vector< std::list< int > > factors; std::cout << "Initializing" << std::endl; start_time = std::time(nullptr); timer_thread = std::thread(timer, &i); factors.resize(MAX); std::cout << "Initialization took " << std::time(nullptr) - start_time << " Seconds" << std::endl; std::cout << "Starting calculation" << std::endl; start_time = std::time(nullptr); upper_bound = sqrt(MAX) + 1; for (i = 2; i < upper_bound; ++i) { if (factors[i].empty()) { for (j = i * 2; j < MAX; j += i) { factors[j].push_back(i); } } } std::cout << "Calculation took " << std::time(nullptr) - start_time << " Seconds" << std::endl; // Closing timer thread exit_thread_flag = true; std::cout << "Validating results" << std::endl; for (i = 2; i < 20; ++i) { std::cout << i << ": "; if (factors[i].empty()) { std::cout << "Is prime"; } else { for (int v : factors[i]) { std::cout << v << ", "; } } std::cout << std::endl; } timer_thread.join(); return 0; } ``` It needs to be compiled with the line: ``` g++ main.cpp -std=c++0x -pthread ``` If you do not want to turn your entire code into C++ you can use the subprocess library in Python. --- Time: Well, I tried my best, but it still runs in over an hour... it has reached `6619`, which is the 855th prime (much better!), in 1.386111 hours (4990 seconds). So it is an improvement, but there is still some way to go! (It might be faster without the extra thread.)
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
Use the [`UpdateFunctionCode`](https://docs.aws.amazon.com/lambda/latest/dg/API_UpdateFunctionCode.html) API endpoint to force a refresh of all containers. [AWS SDKs](https://aws.amazon.com/tools/#sdk) wrap this up to make it easier for you to call the API using your preferred language.
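For reference, a minimal boto3 sketch of that call (shown in Python purely as an illustration; every AWS SDK exposes the same endpoint, and the function name and zip path here are placeholders):

```py
import boto3

client = boto3.client("lambda")

# Re-uploading the deployment package counts as a code update, so Lambda
# retires all warm containers and cold-starts on the next invocations.
with open("function.zip", "rb") as f:
    client.update_function_code(FunctionName="my-function", ZipFile=f.read())
```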
Following [Renato Byrro's](https://stackoverflow.com/a/50901941/11896010) answer, I made a Lambda function using the [JavaScript AWS SDK](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-lambda/) to restart another Lambda function by updating its description. ``` import { LambdaClient, UpdateFunctionConfigurationCommand } from '@aws-sdk/client-lambda'; const forceLambdaRestart = async event => { try { const client = new LambdaClient({ region: 'your region here', credentials: { accessKeyId: 'your access key id', secretAccessKey: 'your secret access key', }, }); const command = new UpdateFunctionConfigurationCommand({ FunctionName: event.functionName, Description: `forced update ${Date.now()}`, }); const data = await client.send(command); console.log(data); return data; } catch (error) { console.error(error); return error; } }; forceLambdaRestart(); ``` It seems like that is enough to restart the Lambda and clear the in-memory cache.
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
Currently, there is no way to force restarts on running Lambda containers. You can, however, redeploy the function so that it will start using new containers from that point onwards.
The only way to force Lambda to discard existing containers is to redeploy the function with something different. Check out my answer here: [Force Discard AWS Lambda Container](https://stackoverflow.com/questions/47445815/force-discard-aws-lambda-container/47447475#47447475) Good luck, Moe
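A minimal sketch of "something different" using boto3 (the function name is a placeholder, and Python is used only for illustration; any SDK works): bumping a harmless configuration field such as the description is enough to retire the warm containers.

```py
import time

import boto3

client = boto3.client("lambda")

# Touching the description is the least invasive "something different";
# any configuration change causes Lambda to discard existing containers.
client.update_function_configuration(
    FunctionName="my-function",
    Description=f"forced update {time.time()}",
)
```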
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
The easiest way I found was changing something in Basic Settings, like the timeout: [![Basic Settings](https://i.stack.imgur.com/pDdGi.png)](https://i.stack.imgur.com/pDdGi.png) I upped the timeout by one second, saved, and the function got refreshed [![Memory and timeout settings](https://i.stack.imgur.com/fHFCy.png)](https://i.stack.imgur.com/fHFCy.png)
The only way to force Lambda to discard existing containers is to redeploy the function with something different. Check out my answer here: [Force Discard AWS Lambda Container](https://stackoverflow.com/questions/47445815/force-discard-aws-lambda-container/47447475#47447475) Good luck, Moe
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
In addition to some of the valid answers above: I happened to run an experiment on the (average) AWS Lambda instance lifetime. I could not find instances that ran for much longer than (on average) two hours: <https://xebia.com/blog/til-that-aws-lambda-terminates-instances-preemptively/>. TL;DR: AWS Lambda is preemptively terminating instances (even those handling traffic) after two hours, with a standard deviation of 30 minutes.
The simplest answer I found for this question is to make some changes in the function, like adding a simple comment line or removing some whitespace, and then redeploy the function. It will clear the cache while deploying.
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
The only way to force Lambda to discard existing containers is to redeploy the function with something different. Check out my answer here: [Force Discard AWS Lambda Container](https://stackoverflow.com/questions/47445815/force-discard-aws-lambda-container/47447475#47447475) Good luck, Moe
The simplest answer I found for this question is to make some changes in the function, like adding a simple comment line or removing some whitespace, and then redeploy the function. It will clear the cache while deploying.
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
The easiest way I found was changing something in Basic Settings, like the timeout: [![Basic Settings](https://i.stack.imgur.com/pDdGi.png)](https://i.stack.imgur.com/pDdGi.png) I upped the timeout by one second, saved, and the function got refreshed [![Memory and timeout settings](https://i.stack.imgur.com/fHFCy.png)](https://i.stack.imgur.com/fHFCy.png)
If you are using the Lambda versioning system, another way to do this is by publishing a new version and using an alias to direct all traffic to it. Here's an example: [Publish version](https://docs.aws.amazon.com/cli/latest/reference/lambda/publish-version.html): `aws lambda publish-version --function-name your-function-name-here` [Update the alias](https://docs.aws.amazon.com/cli/latest/reference/lambda/update-alias.html) pointing to the new version: `aws lambda update-alias --function-name your-function-name-here --name alias-name-here --function-version 123` (use the function version in the output message from the first command above)
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
If you are using the Lambda versioning system, another way to do this is by publishing a new version and using an alias to direct all traffic to it. Here's an example: [Publish version](https://docs.aws.amazon.com/cli/latest/reference/lambda/publish-version.html): `aws lambda publish-version --function-name your-function-name-here` [Update the alias](https://docs.aws.amazon.com/cli/latest/reference/lambda/update-alias.html) pointing to the new version: `aws lambda update-alias --function-name your-function-name-here --name alias-name-here --function-version 123` (use the function version in the output message from the first command above)
Following [Renato Byrro's](https://stackoverflow.com/a/50901941/11896010) answer, I made a Lambda function using the [JavaScript AWS SDK](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-lambda/) to restart another Lambda function by updating its description. ``` import { LambdaClient, UpdateFunctionConfigurationCommand } from '@aws-sdk/client-lambda'; const forceLambdaRestart = async event => { try { const client = new LambdaClient({ region: 'your region here', credentials: { accessKeyId: 'your access key id', secretAccessKey: 'your secret access key', }, }); const command = new UpdateFunctionConfigurationCommand({ FunctionName: event.functionName, Description: `forced update ${Date.now()}`, }); const data = await client.send(command); console.log(data); return data; } catch (error) { console.error(error); return error; } }; forceLambdaRestart(); ``` It seems like that is enough to restart the Lambda and clear the in-memory cache.
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
I made an answer based on my comment and verification from @DejanVasic ``` aws lambda update-function-configuration --function-name "myLambda" --description "foo" ``` This will force the next invocation of the Lambda to "cold start". To verify: ``` @timestamp, @message | sort @timestamp desc | limit 1000 | filter @message like "cold_start:true" ```
The simplest answer I found for this question is to make some changes in the function, like adding a simple comment line or removing some whitespace, and then redeploy the function. It will clear the cache while deploying.
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
Use the [`UpdateFunctionCode`](https://docs.aws.amazon.com/lambda/latest/dg/API_UpdateFunctionCode.html) API endpoint to force a refresh of all containers. [AWS SDKs](https://aws.amazon.com/tools/#sdk) wrap this up to make it easier for you to call the API using your preferred language.
The only way to force Lambda to discard existing containers is to redeploy the function with something different. Check out my answer here: [Force Discard AWS Lambda Container](https://stackoverflow.com/questions/47445815/force-discard-aws-lambda-container/47447475#47447475) Good luck, Moe
50,866,472
I have an AWS Lambda function that creates an object from an S3 call on cold start. I then hold the object in the cache while the function is warm to keep load times down. When files are changed in S3, I have a trigger to run the Lambda, but not all the running instances of the Lambda restart and pull from S3. Is there a way to bring down all instances of the Lambda, forcing a full cold start? Also, I don't want to use Python.
2018/06/14
[ "https://Stackoverflow.com/questions/50866472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9943906/" ]
Use the [`UpdateFunctionCode`](https://docs.aws.amazon.com/lambda/latest/dg/API_UpdateFunctionCode.html) API endpoint to force a refresh of all containers. [AWS SDKs](https://aws.amazon.com/tools/#sdk) wrap this up to make it easier for you to call the API using your preferred language.
The easiest way I found was changing something in Basic Settings, like the timeout: [![Basic Settings](https://i.stack.imgur.com/pDdGi.png)](https://i.stack.imgur.com/pDdGi.png) I upped the timeout by one second, saved, and the function got refreshed [![Memory and timeout settings](https://i.stack.imgur.com/fHFCy.png)](https://i.stack.imgur.com/fHFCy.png)
28,346,000
I'm looking for a method/way similar to Python's startswith. What I would like to do is link some fields in a table which start with "i-". My steps: 1. I created a filter, which returns True/False: ``` @app.template_filter('startswith') def starts_with(field): if field.startswith("i-"): return True return False ``` then linked it to the template: ``` {% for field in row %} {% if {{ field | startswith }} %} <td><a href="{{ url_for('munin') }}">{{ field | table_field | safe }}</a></td> {% else %} <td>{{ field | table_field | safe}}</td> {% endif %} {% endfor %} ``` Unfortunately, it doesn't work. Second step. I did it without a filter, but in the template: ``` {% for field in row %} {% if field[:2] == 'i-' %} <td><a href="{{ url_for('munin') }}">{{ field | table_field | safe }}</a></td> {% else %} <td>{{ field | table_field | safe}}</td> {% endif %} {% endfor %} ``` That works, but different data are sent to that template, and it works only for this case. I'm thinking that [:2] could be a little buggy. So I tried to write a filter, but maybe there is some method I missed in the documentation.
2015/02/05
[ "https://Stackoverflow.com/questions/28346000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4175690/" ]
A better solution.... You can use startswith directly on *field* because *field* is a Python string. ``` {% if field.startswith('i-') %} ``` Moreover, you can use any string function, including `str.endswith()`, for example.
The expression `{% if {{ field | startswith }} %}` will not work because you cannot nest blocks inside each other. You can probably get away with `{% if (field|startswith) %}`, but a [custom test](http://jinja.pocoo.org/docs/dev/api/#custom-tests), rather than a filter, would be a better solution. Something like ``` def is_link_field(field): return field.startswith("i-") environment.tests['link_field'] = is_link_field ``` Then in your template, you can write `{% if field is link_field %}`
28,346,000
I'm looking for a method/way similar to Python's startswith. What I would like to do is link some fields in a table which start with "i-". My steps: 1. I created a filter, which returns True/False: ``` @app.template_filter('startswith') def starts_with(field): if field.startswith("i-"): return True return False ``` then linked it to the template: ``` {% for field in row %} {% if {{ field | startswith }} %} <td><a href="{{ url_for('munin') }}">{{ field | table_field | safe }}</a></td> {% else %} <td>{{ field | table_field | safe}}</td> {% endif %} {% endfor %} ``` Unfortunately, it doesn't work. Second step. I did it without a filter, but in the template: ``` {% for field in row %} {% if field[:2] == 'i-' %} <td><a href="{{ url_for('munin') }}">{{ field | table_field | safe }}</a></td> {% else %} <td>{{ field | table_field | safe}}</td> {% endif %} {% endfor %} ``` That works, but different data are sent to that template, and it works only for this case. I'm thinking that [:2] could be a little buggy. So I tried to write a filter, but maybe there is some method I missed in the documentation.
2015/02/05
[ "https://Stackoverflow.com/questions/28346000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4175690/" ]
The expression `{% if {{ field | startswith }} %}` will not work because you cannot nest blocks inside each other. You can probably get away with `{% if (field|startswith) %}`, but a [custom test](http://jinja.pocoo.org/docs/dev/api/#custom-tests), rather than a filter, would be a better solution. Something like ``` def is_link_field(field): return field.startswith("i-") environment.tests['link_field'] = is_link_field ``` Then in your template, you can write `{% if field is link_field %}`
In Jinja2, you can use regex_search() to do a regular expression test of your string: ``` field | regex_search("^i-") ``` will return true if your string has "i-" at the beginning of the line, as represented by the caret "^" symbol.
28,346,000
I'm looking for a method/way similar to Python's startswith. What I would like to do is link some fields in a table which start with "i-". My steps: 1. I created a filter, which returns True/False: ``` @app.template_filter('startswith') def starts_with(field): if field.startswith("i-"): return True return False ``` then linked it to the template: ``` {% for field in row %} {% if {{ field | startswith }} %} <td><a href="{{ url_for('munin') }}">{{ field | table_field | safe }}</a></td> {% else %} <td>{{ field | table_field | safe}}</td> {% endif %} {% endfor %} ``` Unfortunately, it doesn't work. Second step. I did it without a filter, but in the template: ``` {% for field in row %} {% if field[:2] == 'i-' %} <td><a href="{{ url_for('munin') }}">{{ field | table_field | safe }}</a></td> {% else %} <td>{{ field | table_field | safe}}</td> {% endif %} {% endfor %} ``` That works, but different data are sent to that template, and it works only for this case. I'm thinking that [:2] could be a little buggy. So I tried to write a filter, but maybe there is some method I missed in the documentation.
2015/02/05
[ "https://Stackoverflow.com/questions/28346000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4175690/" ]
A better solution.... You can use startswith directly on *field* because *field* is a Python string. ``` {% if field.startswith('i-') %} ``` Moreover, you can use any string function, including `str.endswith()`, for example.
In Jinja2, you can use regex_search() to do a regular expression test of your string: ``` field | regex_search("^i-") ``` will return true if your string has "i-" at the beginning of the line, as represented by the caret "^" symbol.
46,310,157
This Python script is supposed to shut down a computer at the desired time, but the problem is that on the 9th line, if 'hour' is equal to the current hour, then the program doesn't look at the minutes. What should I add or change? Thanks. ``` import subprocess import datetime as dt import time hour = dt.datetime.now().hour minute = dt.datetime.now().minute print("Your computer is about to get shutdowned") while hour != 23 and minute != 28: time.sleep(2) print("not yet!") subprocess.call(["shutdown", "/s"]) ```
2017/09/19
[ "https://Stackoverflow.com/questions/46310157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5716810/" ]
As `user2357112` said, you set `hour` and `minute` when you start the program, but never update them again. This should work: ``` while hour != 23 and minute != 28: print("not yet!") time.sleep(50) hour = dt.datetime.now().hour minute = dt.datetime.now().minute print("Your computer is about to get shutdowned") subprocess.call(["shutdown", "/s"]) ``` And you don't need to check every 2 seconds whether it's time to shut down. Twice a minute is enough.
You should get hour and minute inside the loop; otherwise you get just one value for each when you start your script, and the values are never updated... Try something like: ``` while hour != 23 and minute != 28: hour = dt.datetime.now().hour minute = dt.datetime.now().minute time.sleep(2) print("not yet!") ```
46,310,157
This Python script is supposed to shut down a computer at the desired time, but the problem is that on the 9th line, if 'hour' is equal to the current hour, then the program doesn't look at the minutes. What should I add or change? Thanks. ``` import subprocess import datetime as dt import time hour = dt.datetime.now().hour minute = dt.datetime.now().minute print("Your computer is about to get shutdowned") while hour != 23 and minute != 28: time.sleep(2) print("not yet!") subprocess.call(["shutdown", "/s"]) ```
2017/09/19
[ "https://Stackoverflow.com/questions/46310157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5716810/" ]
As `user2357112` said, you set `hour` and `minute` when you start the program, but never update them again. This should work: ``` while hour != 23 and minute != 28: print("not yet!") time.sleep(50) hour = dt.datetime.now().hour minute = dt.datetime.now().minute print("Your computer is about to get shutdowned") subprocess.call(["shutdown", "/s"]) ``` And you don't need to check every 2 seconds whether it's time to shut down. Twice a minute is enough.
Change 'and' to 'or'. Also you have to update hour and minute inside the loop.
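A minimal sketch combining both of those fixes, keeping the 23:28 target from the question:

```py
import datetime as dt
import time

hour = dt.datetime.now().hour
minute = dt.datetime.now().minute

# 'or' keeps the loop waiting until BOTH parts match,
# and the values are refreshed on every pass.
while hour != 23 or minute != 28:
    time.sleep(2)
    hour = dt.datetime.now().hour
    minute = dt.datetime.now().minute
```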
46,310,157
This Python script is supposed to shut down a computer at the desired time, but the problem is that on the 9th line, if 'hour' is equal to the current hour, then the program doesn't look at the minutes. What should I add or change? Thanks. ``` import subprocess import datetime as dt import time hour = dt.datetime.now().hour minute = dt.datetime.now().minute print("Your computer is about to get shutdowned") while hour != 23 and minute != 28: time.sleep(2) print("not yet!") subprocess.call(["shutdown", "/s"]) ```
2017/09/19
[ "https://Stackoverflow.com/questions/46310157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5716810/" ]
As `user2357112` said, you set `hour` and `minute` when you start the program, but never update them again. This should work: ``` while hour != 23 and minute != 28: print("not yet!") time.sleep(50) hour = dt.datetime.now().hour minute = dt.datetime.now().minute print("Your computer is about to get shutdowned") subprocess.call(["shutdown", "/s"]) ``` And you don't need to check every 2 seconds whether it's time to shut down. Twice a minute is enough.
Actually, you need to calculate the next due date at 23:28: ``` import datetime import time now = datetime.datetime.now() due_date = now.replace(hour=23, minute=28) if due_date < now: due_date += datetime.timedelta(days=1) ``` Then, you can do your countdown: ``` print("Your computer is about to get shutdowned...") while datetime.datetime.now() < due_date: time.sleep(2) duration = due_date - datetime.datetime.now() print("... in {:d} seconds".format(int(duration.seconds))) ```
57,268,310
I've made an audio detection model and saved it as an h5 file. But when I load the model and parameters, this error occurs. I tried restarting Spyder and saving the model once again, but nothing changed. This is the line that raises this 'TypeError': ``` model = load_model('.\\Data\\Keras_Model\\(19.07.30).h5') TypeError: tuple indices must be integers or slices, not list ``` Here is my traceback ``` File "<ipython-input-32-5bad2284b610>", line 1, in <module> runfile('C:/Project/Test.py', wdir='C:/Project') File "C:\Users\DSP\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 678, in runfile execfile(filename, namespace) File "C:\Users\DSP\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 106, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Project/Test.py", line 15, in <module> model = load_model('.\\Data\\Keras_Model\\(19.07.30).h5') File "C:\Users\DSP\Anaconda3\lib\site-packages\keras\models.py", line 264, in load_model model = model_from_config(model_config, custom_objects=custom_objects) File "C:\Users\DSP\Anaconda3\lib\site-packages\keras\models.py", line 341, in model_from_config return layer_module.deserialize(config, custom_objects=custom_objects) File "C:\Users\DSP\Anaconda3\lib\site-packages\keras\layers\__init__.py", line 55, in deserialize printable_module_name='layer') File "C:\Users\DSP\Anaconda3\lib\site-packages\keras\utils\generic_utils.py", line 144, in deserialize_keras_object list(custom_objects.items()))) File "C:\Users\DSP\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2535, in from_config process_node(layer, node_data) File "C:\Users\DSP\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2492, in process_node layer(input_tensors[0], **kwargs) File "C:\Users\DSP\Anaconda3\lib\site-packages\keras\engine\topology.py", line 592, in __call__ self.build(input_shapes[0]) File "C:\Users\DSP\Anaconda3\lib\site-packages\keras\layers\normalization.py", line 92, in build dim = input_shape[self.axis] TypeError: tuple indices must be integers or slices, not list ``` This is my model ``` Input_Tr = Input(shape=(600, 128, 1), dtype = 'float', name = 'Input_Tr') conv_layer1 = Conv2D(32, kernel_size = 3, strides = 1, padding = 'SAME')(Input_Tr) batch_layer1 = BatchNormalization(axis=-1)(conv_layer1) conv_layer1_out = Activation('relu')(batch_layer1) pooling_layer1 = MaxPooling2D((1, 4))(conv_layer1_out) dropout_layer1 = Dropout(0.5)(pooling_layer1) conv_layer2 = Conv2D(64, kernel_size = 3, strides = 1, padding = 'SAME')(dropout_layer1) batch_layer2 = BatchNormalization(axis=-1)(conv_layer2) conv_layer2_out = Activation('relu')(batch_layer2) pooling_layer2 = MaxPooling2D((1, 4))(conv_layer2_out) dropout_layer2 = Dropout(0.5)(pooling_layer2) reshape_layer3 = Reshape((600, 64*8))(dropout_layer2) bidir_layer3 = Bidirectional(GRU(64, return_sequences = True, activation = 'tanh'))(reshape_layer3) output = TimeDistributed(Dense(1, activation = 'sigmoid'))(bidir_layer3) model = Model(inputs = [Input_Tr], outputs = [output]) adam = Adam(lr = 0.01, beta_1=0.9, beta_2=0.999,decay=0.0) model.compile(loss="binary_crossentropy", optimizer = adam, metrics=["accuracy"]) model.fit(x_train, x_label, epochs = 30, batch_size = 30, validation_split = 0.1) model.summary() loss_and_metrics = model.evaluate(y_train, y_label, batch_size = 15) print("evaluation Result") print(loss_and_metrics) model.save('.\\Data\\Keras_Model\\(19.07.30).h5') ``` What should I do in this situation?
2019/07/30
[ "https://Stackoverflow.com/questions/57268310", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11789846/" ]
This could happen because you trained the model using `tf.keras` but you are loading it with the standalone `keras`; the two modules aren't compatible, and you should use only one of them for your whole pipeline.
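A minimal sketch of keeping the whole pipeline on one implementation (here `tf.keras`, which is an assumption about how the model was trained; the path is the one from the question):

```py
# Save and load with the same Keras implementation throughout.
from tensorflow.keras.models import load_model

model = load_model('.\\Data\\Keras_Model\\(19.07.30).h5')
```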
``` model = load_model('.\\Data\\Keras_Model/(19.07.30).h5') ```
64,282,248
I have just learned about indexing and slicing in Python. After I learned it, I got a good idea to try. The idea, briefly, is that instead of hard-coding the slice in the code, I want the user to choose a start and an end and print the result. I have written the code and it showed no problems, but when I ran it, it didn't work :( ``` mystring = "Omar Marouf Zaki" print("Choose First Number") x = input() print("Choose Second Number") y = input() print(mystring[x:y]) ```
2020/10/09
[ "https://Stackoverflow.com/questions/64282248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14379623/" ]
I found this hint veeery useful: > > β€œright click -> Show Packages -> Resources -> Mac OS”, and execute your app directly do help, because it shows specific errors on the console > > > Source: <http://uxcrepe.com> It might help you a lot, as it shows you typical python errors you are already familiar with! Enjoy!
It was the encoding, because of course it was. Just open the file like this. ```py fin = open("/absolute/path/to/file.txt", "r", encoding="utf-8") ```
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
I will build my answer on @Sylhare's answer. In python3, the 'U' mode is deprecated. So, the following solution worked for me: ``` import csv reader = csv.reader(open("input.csv", newline=None), delimiter=',') writer = csv.writer(open("output.csv", 'w'), delimiter=':') writer.writerows(reader) ```
Assuming that the CSV is comma delimited, and you want to replace commas in each entry, I believe the issue is replacing the wrong item: ``` for rows in reader: for parsed_item in rows: parsed_item = parsed_item.replace(',', ':') # Change rows to parsed_item writer.writerow(parsed_item) ```
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
If you are looking to read a csv with comma delimiter and write it in **another file** with semicolon delimiters. I think a more straightforward way would be: ```py reader = csv.reader(open("input.csv", "r"), delimiter=',') writer = csv.writer(open("output.csv", 'w'), delimiter=';') writer.writerows(reader) ``` I find this example much easier to understand than with the `with open(...)`. Also if you work with file using comma and semicolon as delimiters. You can use the [Sniffer](https://docs.python.org/2/library/csv.html) of the csv file to detect which delimiter is used before reading the file (example in the link). Also if you want to rewrite in the same file, check this [stackoverflow answer](https://stackoverflow.com/a/49414740/7747942).
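A minimal sketch of the Sniffer approach mentioned above (the file name is a placeholder):

```py
import csv

with open("input.csv", newline="") as f:
    sample = f.read(1024)                           # small sample to guess the dialect
    dialect = csv.Sniffer().sniff(sample, delimiters=",;")
    f.seek(0)                                       # rewind before the real parse
    for row in csv.reader(f, dialect):
        print(row)
```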
Assuming that the CSV is comma delimited, and you want to replace commas in each entry, I believe the issue is replacing the wrong item: ``` for rows in reader: for parsed_item in rows: parsed_item = parsed_item.replace(',', ':') # Change rows to parsed_item writer.writerow(parsed_item) ```
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
The answer is easier than you think. You just need to set the delimiter for `csv.writer`: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile, delimiter=':') writer.writerows(reader) ``` Your line trying to replace `,` with `:` wasn't going to do anything because the row had already been processed by `csv.reader`.
If you are looking to read a csv with comma delimiter and write it in **another file** with semicolon delimiters. I think a more straightforward way would be: ```py reader = csv.reader(open("input.csv", "r"), delimiter=',') writer = csv.writer(open("output.csv", 'w'), delimiter=';') writer.writerows(reader) ``` I find this example much easier to understand than with the `with open(...)`. Also if you work with file using comma and semicolon as delimiters. You can use the [Sniffer](https://docs.python.org/2/library/csv.html) of the csv file to detect which delimiter is used before reading the file (example in the link). Also if you want to rewrite in the same file, check this [stackoverflow answer](https://stackoverflow.com/a/49414740/7747942).
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
The answer is easier than you think. You just need to set the delimiter for `csv.writer`: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile, delimiter=':') writer.writerows(reader) ``` Your line trying to replace `,` with `:` wasn't going to do anything because the row had already been processed by `csv.reader`.
If you're just replacing commas with colons, you don't need to use a csv parser at all. ``` with open("file.csv", 'r') as f: with open("temp.csv", 'w') as t: for line in f: new_line = line.replace(",", ":") t.write(new_line) ``` The only caveat is that you can't have commas elsewhere in the csv file.
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
If you are looking to read a csv with comma delimiter and write it in **another file** with semicolon delimiters. I think a more straightforward way would be: ```py reader = csv.reader(open("input.csv", "r"), delimiter=',') writer = csv.writer(open("output.csv", 'w'), delimiter=';') writer.writerows(reader) ``` I find this example much easier to understand than with the `with open(...)`. Also if you work with file using comma and semicolon as delimiters. You can use the [Sniffer](https://docs.python.org/2/library/csv.html) of the csv file to detect which delimiter is used before reading the file (example in the link). Also if you want to rewrite in the same file, check this [stackoverflow answer](https://stackoverflow.com/a/49414740/7747942).
I'm writing csv files from JSON raw data and noticed that the `DictWriter` module also supports different delimiters. Example: ``` with open('file_1.csv', 'w', encoding="utf-8-sig", newline = '') as myfile: wr = csv.DictWriter(myfile, fieldnames = table_fields, delimiter=';') wr.writeheader() wr.writerows(# my data #) ```
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
I'm writing csv files from JSON raw data and noticed that the `DictWriter` module also supports different delimiters. Example: ``` with open('file_1.csv', 'w', encoding="utf-8-sig", newline = '') as myfile: wr = csv.DictWriter(myfile, fieldnames = table_fields, delimiter=';') wr.writeheader() wr.writerows(# my data #) ```
If you're just replacing commas with colons, you don't need to use a csv parser at all. ``` with open("file.csv", 'r') as f: with open("temp.csv", 'w') as t: for line in f: new_line = line.replace(",", ":") t.write(new_line) ``` The only caveat is that you can't have commas elsewhere in the csv file.
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
The answer is easier than you think. You just need to set the delimiter for `csv.writer`: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile, delimiter=':') writer.writerows(reader) ``` Your line trying to replace `,` with `:` wasn't going to do anything because the row had already been processed by `csv.reader`.
I'm writing csv files from JSON raw data and noticed that the `DictWriter` module also supports different delimiters. Example: ``` with open('file_1.csv', 'w', encoding="utf-8-sig", newline = '') as myfile: wr = csv.DictWriter(myfile, fieldnames = table_fields, delimiter=';') wr.writeheader() wr.writerows(# my data #) ```
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
If you are looking to read a csv with comma delimiter and write it in **another file** with semicolon delimiters. I think a more straightforward way would be: ```py reader = csv.reader(open("input.csv", "r"), delimiter=',') writer = csv.writer(open("output.csv", 'w'), delimiter=';') writer.writerows(reader) ``` I find this example much easier to understand than with the `with open(...)`. Also if you work with file using comma and semicolon as delimiters. You can use the [Sniffer](https://docs.python.org/2/library/csv.html) of the csv file to detect which delimiter is used before reading the file (example in the link). Also if you want to rewrite in the same file, check this [stackoverflow answer](https://stackoverflow.com/a/49414740/7747942).
If you're just replacing commas with colons, you don't need to use a csv parser at all. ``` with open("file.csv", 'r') as f: with open("temp.csv", 'w') as t: for line in f: new_line = line.replace(",", ":") t.write(new_line) ``` The only caveat is that you can't have commas elsewhere in the csv file.
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
I will build my answer on @Sylhare's answer. In python3, the 'U' mode is deprecated. So, the following solution worked for me: ``` import csv reader = csv.reader(open("input.csv", newline=None), delimiter=',') writer = csv.writer(open("output.csv", 'w'), delimiter=':') writer.writerows(reader) ```
I'm writing csv files from JSON raw data and noticed that the `DictWriter` module also supports different delimiters. Example: ``` with open('file_1.csv', 'w', encoding="utf-8-sig", newline = '') as myfile: wr = csv.DictWriter(myfile, fieldnames = table_fields, delimiter=';') wr.writeheader() wr.writerows(# my data #) ```
6,630,170
I suspect this is a common problem, but I couldn't seem to locate the answer. I am trying to remove all commas from a csv file and replace them with colons. I would normally use sed or vi for this, but I need to use a purely Python implementation. Here is what I have come up with so far: ``` import csv with open("temp.csv", mode="rU") as infile: reader = csv.reader(infile, dialect="excel") with open("temp2.txt", mode="w") as outfile: writer = csv.writer(outfile) for rows in reader: for parsed_item in rows: parsed_item = rows.replace(',', ':') # I can't do this with a list! writer.writerow(parsed_item) ``` Can anyone help me out with how to do this? Thanks in advance for your help.
2011/07/08
[ "https://Stackoverflow.com/questions/6630170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
If you are looking to read a csv with comma delimiter and write it in **another file** with semicolon delimiters. I think a more straightforward way would be: ```py reader = csv.reader(open("input.csv", "r"), delimiter=',') writer = csv.writer(open("output.csv", 'w'), delimiter=';') writer.writerows(reader) ``` I find this example much easier to understand than with the `with open(...)`. Also if you work with file using comma and semicolon as delimiters. You can use the [Sniffer](https://docs.python.org/2/library/csv.html) of the csv file to detect which delimiter is used before reading the file (example in the link). Also if you want to rewrite in the same file, check this [stackoverflow answer](https://stackoverflow.com/a/49414740/7747942).
I will build my answer on @Sylhare's answer. In python3, the 'U' mode is deprecated. So, the following solution worked for me: ``` import csv reader = csv.reader(open("input.csv", newline=None), delimiter=',') writer = csv.writer(open("output.csv", 'w'), delimiter=':') writer.writerows(reader) ```
67,788,131
There is an array with some numbers. All numbers are equal except for one. How can I find only that number in Python? uniq([ 1, 1, 1, 2, 1, 1 ]) == 2 I used 'set', but it prints [1, 2]...
2021/06/01
[ "https://Stackoverflow.com/questions/67788131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16094238/" ]
You can convert the list to a set to get all of the items in the list, and then by using the `count` function find out which one of them exists in the list only once. ``` for i in set(arr): if arr.count(i) == 1: print(i) ``` or a list comprehension: ``` [i for i in set(arr) if arr.count(i) == 1] ``` This way you can get all the items in the list that occur once. [this link](https://www.geeksforgeeks.org/find-the-element-that-appears-once/) might also help you.
I would use `collections.Counter` for this as follows: ``` import collections x = [ 1, 1, 1, 2, 1, 1 ] occurs = collections.Counter(x) for k, v in occurs.items(): if v == 1: print(k) ``` output: ``` 2 ``` Explanation: `occurs` is a dict mapping each value to how many times it appears; I print all values which appeared exactly once.
67,788,131
There is an array with some numbers. All numbers are equal except for one. How can I find only that number in Python? uniq([ 1, 1, 1, 2, 1, 1 ]) == 2 I used 'set', but it prints [1, 2]...
2021/06/01
[ "https://Stackoverflow.com/questions/67788131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16094238/" ]
This is the simplest way to implement your problem: ``` def count_unique(s,lst): for i in lst: if(lst.count(i)==1): return i lst=list(map(int,input().split())) s=list(set(lst)) print(count_unique(s,lst)) ```
I would use `collections.Counter` for this as follows: ``` import collections x = [ 1, 1, 1, 2, 1, 1 ] occurs = collections.Counter(x) for k, v in occurs.items(): if v == 1: print(k) ``` output: ``` 2 ``` Explanation: `occurs` is a dict mapping each value to how many times it appears; I print all values which appeared exactly once.
67,788,131
There is an array with some numbers. All numbers are equal except for one. How can I find only that number in Python? uniq([ 1, 1, 1, 2, 1, 1 ]) == 2 I used 'set', but it prints [1, 2]...
2021/06/01
[ "https://Stackoverflow.com/questions/67788131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16094238/" ]
``` from collections import Counter def find_unique(list1): count = Counter(list1) if len(list1) >= 3: for key, value in count.items(): if value == 1: return key ```
I would use `collections.Counter` for this as follows: ``` import collections x = [ 1, 1, 1, 2, 1, 1 ] occurs = collections.Counter(x) for k, v in occurs.items(): if v == 1: print(k) ``` output: ``` 2 ``` Explanation: `occurs` is a dict mapping each value to how many times it appears; I print all values which appeared exactly once.
67,788,131
There is an array with some numbers. All numbers are equal except for one. How can I find only that number in Python? uniq([ 1, 1, 1, 2, 1, 1 ]) == 2 I used 'set', but it prints [1, 2]...
2021/06/01
[ "https://Stackoverflow.com/questions/67788131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16094238/" ]
This is the simplest way to implement your problem: ``` def count_unique(s,lst): for i in lst: if(lst.count(i)==1): return i lst=list(map(int,input().split())) s=list(set(lst)) print(count_unique(s,lst)) ```
You can convert the list to a set to get all of the items in the list, and then by using the `count` function find out which one of them exists in the list only once. ``` for i in set(arr): if arr.count(i) == 1: print(i) ``` or a list comprehension: ``` [i for i in set(arr) if arr.count(i) == 1] ``` This way you can get all the items in the list that occur once. [this link](https://www.geeksforgeeks.org/find-the-element-that-appears-once/) might also help you.
67,788,131
There is an array with some numbers. All numbers are equal except for one. How can I find only that number in Python? uniq([ 1, 1, 1, 2, 1, 1 ]) == 2 I used 'set', but it prints [1, 2]...
2021/06/01
[ "https://Stackoverflow.com/questions/67788131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16094238/" ]
This is the simplest way to implement your problem: ``` def count_unique(s,lst): for i in lst: if(lst.count(i)==1): return i lst=list(map(int,input().split())) s=list(set(lst)) print(count_unique(s,lst)) ```
``` from collections import Counter def find_unique(list1): count = Counter(list1) if len(list1) >= 3: for key, value in count.items(): if value == 1: return key ```
58,214,361
raise ValueError("Missing staticfiles manifest entry for '%s'" % clean_name) ValueError: Missing staticfiles manifest entry for 'favicon.png' when DEBUG = False I only get this error when DEBUG = False; I do not get any error when DEBUG = True. To fix this issue while keeping DEBUG = False, I must add favicon.png (which I had deleted a while back) back into the static_root folder and then run python manage.py collectstatic I checked all my files, and all my html documents have the favicon.png link line commented out, so that is not the issue. settings.py has the following: ``` STATIC_URL = '/static/' STATICFILES_DIRS =[ os.path.join(BASE_DIR, 'static_root'), ] VENV_PATH = os.path.dirname(BASE_DIR) STATIC_ROOT = os.path.join(BASE_DIR, 'static/') ``` urls.py has the following: ``` if settings.DEBUG: urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) ```
2019/10/03
[ "https://Stackoverflow.com/questions/58214361", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8422698/" ]
Yep, I had just the same issue. Try running `manage.py collectstatic`. And by the way, you should turn on console logging even if DEBUG is set to False. There is a documentation article for that: [Django documentation](https://docs.djangoproject.com/en/3.0/topics/logging/#examples). And if you are not using a different staticfiles storage in production, and are serving your staticfiles locally, you should probably remove `if settings.DEBUG:` from your urls.py. Hope this helped! Updated: **Also**, I've removed `STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'` from `settings.py` in the production version of my project.
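For the console-logging suggestion, a minimal sketch following the linked documentation (the handler name and level are just the documented defaults):

```py
# settings.py -- log to the console even when DEBUG = False
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
}
```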
I followed the below configuration settings to resolve the issue. ``` DEBUG = False ALLOWED_HOSTS = ['testnewapp.herokuapp.com'] INSTALLED_APPS = [ 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'whitenoise.runserver_nostatic', 'django.contrib.staticfiles', 'widget_tweaks', 'phonenumber_field', 'django_extensions', ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', ... ] # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/2.2/howto/static-files/ STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') STATIC_URL = '/static/' STATICFILES_DIRS = [ os.path.join(BASE_DIR, "static"), ] # Whitenoise Storage Class - Apply compression but don’t want the caching behaviour STATICFILES_STORAGE = 'whitenoise.storage.CompressedStaticFilesStorage' # Comment the below line # django_heroku.settings(locals()) ``` Things to Remember ------------------ 1. Make sure you’re using the static template tag to refer to your static files, rather than writing the URL directly. For example: ``` {% load static %} <img src="{% static "images/error.jpg" %}" alt="OOps!" /> <!-- DON'T WRITE THIS --> <img src="/static/images/error.jpg" alt="OOps!" /> ``` 2. If you get an error message with collectstatic, simply disable it by instructing Heroku to ignore running the manage.py collectstatic command during the deployment process.
58,214,361
raise ValueError("Missing staticfiles manifest entry for '%s'" % clean_name) ValueError: Missing staticfiles manifest entry for 'favicon.png' when DEBUG = False I only get this error when DEBUG = False; I do not get any error when DEBUG = True. To fix this issue while keeping DEBUG = False, I must add favicon.png (which I had deleted a while back) back into the static_root folder and then run python manage.py collectstatic I checked all my files, and all my html documents have the favicon.png link line commented out, so that is not the issue. settings.py has the following: ``` STATIC_URL = '/static/' STATICFILES_DIRS =[ os.path.join(BASE_DIR, 'static_root'), ] VENV_PATH = os.path.dirname(BASE_DIR) STATIC_ROOT = os.path.join(BASE_DIR, 'static/') ``` urls.py has the following: ``` if settings.DEBUG: urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) ```
2019/10/03
[ "https://Stackoverflow.com/questions/58214361", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8422698/" ]
Yep, I had just the same issue. Try running `manage.py collectstatic`. And by the way, you should turn on console logging even if DEBUG is set to False. There is a documentation article for that: [Django documentation](https://docs.djangoproject.com/en/3.0/topics/logging/#examples). And if you are not using a different staticfiles storage in production, and are serving your staticfiles locally, you should probably remove `if settings.DEBUG:` from your urls.py. Hope this helped! Updated: **Also**, I've removed `STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'` from `settings.py` in the production version of my project.
I was getting a `ValueError: Missing staticfiles manifest entry` so I decided to check the actual manifest of the staticfiles which is in a file named `staticfiles.json` in your staticfiles directory. There you can see the actual entry for the path of the image you're looking for. Example: my src in the img tag was `src="{% static '/images/logo-1.png' %}"` and when I checked the json I found this entry: `"images/logo-1.png": "images/logo-1.cb60e34d7e84.png",` So I updated the src to `src="{% static 'images/logo-1.png' %}"` and it worked.
8,334,614
I know absolutely nothing about Django, but I am needing to get an existing project running in OSX. From the project's directory I run `python manage.py runserver` and get the error: `Error: No module named cms`. Seems like the INSTALLED_APPS constant (in settings.py) defines the required modules... but how do I install the dang things? Is there a standard way of installing dependencies in bulk (like Ruby's Bundler)?
2011/11/30
[ "https://Stackoverflow.com/questions/8334614", "https://Stackoverflow.com", "https://Stackoverflow.com/users/678350/" ]
You can install all dependencies at once if there is a requirements.txt file! You just have to run the following command: ``` pip install -r requirements.txt ``` Otherwise you can install them one by one: ``` pip install django-cms ``` Here is the pip documentation: <http://pypi.python.org/pypi/pip> If you are used to Ruby, you can compare this to Ruby gems.
The entries in [`INSTALLED_APPS`](https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-INSTALLED_APPS) are package designations. [Packages](http://docs.python.org/tutorial/modules.html#packages) are a way of structuring Python’s module namespace. > > When importing a package, Python searches through the directories on `sys.path` looking for the package subdirectory. > > > So Python has some designated places to look for packages. To install packages by name to the right location on your system, you could download some Python source code and run the `setup.py` script (usually provided by libraries and applications). ``` $ cd /tmp $ wget http://pypi.python.org/packages/source/p/pytz/pytz-2011n.tar.bz2 $ tar xvfj pytz-2011n.tar.bz2 $ cd pytz-2011n $ python setup.py install ``` There are, however, shortcuts to this, namely [easy\_install](http://peak.telecommunity.com/DevCenter/EasyInstall) and its successor [pip](http://www.pip-installer.org/en/latest/index.html). With these tools, installation of a third-party package (or Django app) boils down to: ``` $ pip install pytz ``` Or, if you use the system's default Python installation: ``` $ sudo pip install pytz ``` That's it. You can now use this library wherever you want. To check if it installed correctly, just try it in the console: ``` $ python Python 2.7.2 (default, Aug 20 2011, 05:03:24) ... >>> import pytz # you would get an ImportError, if pytz could not be found >>> pytz.__version__ '2011n' ``` Now for the sake of brevity (this post is much too long already), let's assume pytz were some third-party Django application. You would just write: ``` INSTALLED_APPS = ( 'pytz', ) ``` And pytz would be available in your project. Note: If you have the time, please take a look at the [Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip](http://www.clemesha.org/blog/modern-python-hacker-tools-virtualenv-fabric-pip/) blog post, which highlights some great Python infrastructure tools.
28,564,976
When I run the following simple NLOPT example in python : ``` import numpy as np import nlopt n = 2 localopt_feval_max = 10 lb = np.array([-1, -1]) ub = np.array([1, 1]) def myfunc(x, grad): return -1 opt = nlopt.opt(nlopt.LN_NELDERMEAD, n) opt.set_lower_bounds(lb) opt.set_upper_bounds(ub) opt.set_maxeval(localopt_feval_max) opt.set_min_objective(myfunc) opt.set_xtol_rel(1e-8) x0 = np.array([0,0]) x = opt.optimize(x0) ``` I get an error: ``` "ValueError: nlopt invalid argument" ``` The only suggestion given by the reference here: <http://ab-initio.mit.edu/wiki/index.php/NLopt_Python_Reference> is that the lower bounds might be bigger than the upper bounds, or there is an unknown algorithm (neither of which is the case here). I am running the following versions of Python, NLOPT, and NumPy ``` >>> sys.version '3.4.0 (default, Apr 11 2014, 13:05:11) \n[GCC 4.8.2]' >>> nlopt.__version__ '2.4.2' >>> np.__version__ '1.8.2' ```
2015/02/17
[ "https://Stackoverflow.com/questions/28564976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3385432/" ]
By changing the function declaration to ``` def myfunc(x, grad): return -1.0 ``` everything works. So NLopt cannot handle objectives that return a Python `int` instead of a `float`. I feel like NLopt should be able to cast integer objective function values to `float`. If not that, then at least a `TypeError` should be raised instead of a `ValueError: nlopt invalid argument`.
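A defensive pattern, assuming the real objective computes some numeric value, is to cast the result explicitly so NLopt always receives a float:

```python
def myfunc(x, grad):
    value = -1  # placeholder for whatever the objective actually computes
    return float(value)  # guarantees NLopt gets a Python float, never an int
```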
I got this error because my objective function's return type was `numpy.float128`, so I fixed the error by changing the return type to **numpy.float64**.
66,977,307
I am currently new to neovim and still adjusting to all of the keybindings, but something that has me a bit stuck is being able to easily run my code. The text editor I used before trying out vim was Sublime Text 3, and in that text editor all I had to do was press cmd + B and it would use a build system that either came with the text editor or one that I made myself. I haven't found a way to do this within vim, and the closest I have gotten to something similar is by adding this to my init.vim: "command PYrun :!python3 %", but I have only gotten this to work with Python and it is nowhere near as good as the build systems in Sublime. Is there some way that neovim can read the file's extension (e.g., .py, .asm, .cs) and use a preconfigured build system to run the code? Default Python Build System: ``` "cmd": ["/usr/local/bin/python3", "-u", "$file"], "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)", "quiet": true ``` My Customized Assembly Build System: ``` "shell": true, "cmd": ["nasm -f macho64 ${file} && ld -macosx_version_min 10.12 -lSystem -o ${file_base_name} ${file_base_name}.o && ./${file_base_name}"], "file_regex": "^(.+):([0-9]+)()?: error: (.*)$", "working_dir": "${file_path}", "selector": "source.assembly" ```
2021/04/06
[ "https://Stackoverflow.com/questions/66977307", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15362444/" ]
this way ```js const headers = [ { label: 'First Name', field: 'firstName' } , { label: 'Last Name', field: 'lastName' } ] const data = [ { firstName: 'John', lastName: 'Doe' } , { firstName: 'ABC', lastName: 'DEF' } ] const res = data.map( o => headers.reduce((a,c)=> { a[c.label] = o[c.field] return a },{})) console.log( res) ``` ```css .as-console-wrapper {max-height: 100%!important;top:0;} ```
Here's a relatively simple way of achieving this using the map method: ```js const headers = [{ label: 'First Name', field: 'firstName' }, { label: 'Last Name', field: 'lastName' }] const data = [{ firstName: 'John', lastName: 'Doe' }, { firstName: 'ABC', lastName: 'DEF' }] const newData = data.map(e => Object.fromEntries(Object.entries(e).map(f => f.map((g,i) => i ? g : headers.find(h => h.field === g) ? headers.find(h => h.field === g).label : g)))); console.log(newData); ```
7,262,604
Here is the problem... I'm writing a very small plugin for Blender. I have 10 Python scripts that parse different file formats via the command line, and I have a main Python script to run all the other scripts with the proper commands... for example, "Main.py" includes: txt2cfg.py -inFile -outFile... ma2lxo.py -inFile -outFile... Blender already includes Python, so I can run "Main.py" from Blender, but I need it to work on both PC and Mac and not require a separate Python installation, so I can't use: * execfile(' txt2cfg.py -inFile -outFile ') * os.system(' ma2lxo.py -inFile -outFile ') * or even import subprocess because they require a Python installation in order to run \*.py files. Sorry for my language. Thanks
2011/08/31
[ "https://Stackoverflow.com/questions/7262604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/877276/" ]
If you really need to execute a python script in a new process and you don't know where the interpreter you want is located then use the sys module to help out. ``` import sys import subprocess subprocess.Popen((sys.executable, "script.py")) ``` Though importing the module (dynamically if need be) and then running its main method in another script is probably a better idea.
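Building on that, the `-inFile`/`-outFile` arguments from the question can be passed as extra elements of the sequence; the file names below are placeholders:

```python
import sys
import subprocess

# Hypothetical file names for illustration -- adjust to the real scripts.
subprocess.Popen((sys.executable, "txt2cfg.py",
                  "-inFile", "in.txt", "-outFile", "out.cfg"))
```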
Two options: 1. Use py2exe to bundle the interpreter with the scripts. 2. Import the modules and call the functions automatically.
7,262,604
Here is the problem... I'm writing a very small plugin for Blender. I have 10 Python scripts that parse different file formats via the command line, and I have a main Python script to run all the other scripts with the proper commands... for example, "Main.py" includes: txt2cfg.py -inFile -outFile... ma2lxo.py -inFile -outFile... Blender already includes Python, so I can run "Main.py" from Blender, but I need it to work on both PC and Mac and not require a separate Python installation, so I can't use: * execfile(' txt2cfg.py -inFile -outFile ') * os.system(' ma2lxo.py -inFile -outFile ') * or even import subprocess because they require a Python installation in order to run \*.py files. Sorry for my language. Thanks
2011/08/31
[ "https://Stackoverflow.com/questions/7262604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/877276/" ]
> > for example, "Main.py" include: > > > txt2cfg.py -inFile -outFile... > ma2lxo.py -inFile -outFile... > > > Two things. 1. Each other script needs a main() function and a "main-import switch". See <http://docs.python.org/tutorial/modules.html#executing-modules-as-scripts> for hints on how this must look. 2. Import and execute the other scripts. ``` import txt2cfg import ma2lxo txt2cfg.main( inFile, outFile ) ma2lxo.main( inFile, outFile ) ``` This is the simplest way to do things.
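As a sketch of what that "main-import switch" looks like in one of the scripts (argument handling here is illustrative):

```python
# txt2cfg.py
import sys

def main(in_file, out_file):
    print("converting %s -> %s" % (in_file, out_file))  # real parsing goes here

if __name__ == "__main__":
    # Runs only when executed as a script, not when imported by Main.py.
    main(sys.argv[1], sys.argv[2])
```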
Two options: 1. Use py2exe to bundle the interpreter with the scripts. 2. Import the modules and call the functions automatically.
7,262,604
Here is the problem... I'm writing a very small plugin for Blender. I have 10 Python scripts that parse different file formats via the command line, and I have a main Python script to run all the other scripts with the proper commands... for example, "Main.py" includes: txt2cfg.py -inFile -outFile... ma2lxo.py -inFile -outFile... Blender already includes Python, so I can run "Main.py" from Blender, but I need it to work on both PC and Mac and not require a separate Python installation, so I can't use: * execfile(' txt2cfg.py -inFile -outFile ') * os.system(' ma2lxo.py -inFile -outFile ') * or even import subprocess because they require a Python installation in order to run \*.py files. Sorry for my language. Thanks
2011/08/31
[ "https://Stackoverflow.com/questions/7262604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/877276/" ]
> > for example, "Main.py" include: > > > txt2cfg.py -inFile -outFile... > ma2lxo.py -inFile -outFile... > > > Two things. 1. Each other script needs a main() function and a "main-import switch". See <http://docs.python.org/tutorial/modules.html#executing-modules-as-scripts> for hints on how this must look. 2. Import and execute the other scripts. ``` import txt2cfg import ma2lxo txt2cfg.main( inFile, outFile ) ma2lxo.main( inFile, outFile ) ``` This is the simplest way to do things.
If you really need to execute a python script in a new process and you don't know where the interpreter you want is located then use the sys module to help out. ``` import sys import subprocess subprocess.Popen((sys.executable, "script.py")) ``` Though importing the module (dynamically if need be) and then running its main method in another script is probably a better idea.
53,226,642
I have a Flask server Running on Azure provided by Azure App services with sqlite3 as a database. I am unable to update sqlite3 as it is showing that database is locked ``` 2018-11-09T13:21:53.854367947Z [2018-11-09 13:21:53,835] ERROR in app: Exception on /borrow [POST] 2018-11-09T13:21:53.854407246Z Traceback (most recent call last): 2018-11-09T13:21:53.854413046Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app 2018-11-09T13:21:53.854417846Z response = self.full_dispatch_request() 2018-11-09T13:21:53.854422246Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request 2018-11-09T13:21:53.854427146Z rv = self.handle_user_exception(e) 2018-11-09T13:21:53.854431646Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception 2018-11-09T13:21:53.854436146Z reraise(exc_type, exc_value, tb) 2018-11-09T13:21:53.854440346Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise 2018-11-09T13:21:53.854444746Z raise value 2018-11-09T13:21:53.854448846Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request 2018-11-09T13:21:53.854453246Z rv = self.dispatch_request() 2018-11-09T13:21:53.854457546Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request 2018-11-09T13:21:53.854461846Z return self.view_functions[rule.endpoint](**req.view_args) 2018-11-09T13:21:53.854466046Z File "/home/site/wwwroot/application.py", line 282, in borrow 2018-11-09T13:21:53.854480146Z cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) 2018-11-09T13:21:53.854963942Z sqlite3.OperationalError: database is locked ``` Here is the route - ``` @app.route('/borrow',methods=["POST"]) def borrow(): # import pdb; pdb.set_trace() body = request.get_json() user_id = body["userid"] bookid = body["bookid"] conn = sqlite3.connect("database.db") cursor = conn.cursor() date = datetime.now() expiry_date = date + timedelta(days=30) cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) # conn.commit() cursor.execute("INSERT INTO borrowed (issuedate,returndate,memberid,bookid) VALUES (?,?,?,?)",("xxx","xxx",user_id,bookid,)) conn.commit() cursor.close() conn.close() return json.dumps({"status":200,"conn":"working with datess update"}) ``` I tried checking the database integrity using pragma. There was no integrity loss. So I don't know what might be causing that error. Any help is Appreciated :)
2018/11/09
[ "https://Stackoverflow.com/questions/53226642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7624222/" ]
I use Azure App Service on Docker on Linux and have the same issue. If you are using Azure App Service on Windows, the problem is different from mine. The problem is that /home is mounted as a CIFS filesystem, which cannot deal with SQLite3 locks. My workaround is to copy the db.sqlite3 file to some directory other than /home, and properly set the permissions and ownership of the db.sqlite3 file and its directory as well. Then let my project read/write it. However, this workaround is pretty awkward. I don't recommend it.
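A rough sketch of that workaround in Python, with hypothetical paths (note that locations outside /home on App Service are typically not persisted across restarts, which is part of why this is awkward):

```python
import os
import shutil
import sqlite3

# Hypothetical paths for illustration: move the SQLite file off the
# CIFS-mounted /home share so its locking works.
src = "/home/site/wwwroot/database.db"
dst = "/tmp/appdata/database.db"
os.makedirs(os.path.dirname(dst), exist_ok=True)
shutil.copy2(src, dst)
os.chmod(dst, 0o664)

conn = sqlite3.connect(dst)  # point the app at the copied file
```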
I got it working by setting up the Azure mount options with the following configuration: ``` dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,nobrl,cache=strict ``` But the real solution is to add the flag **nobrl** (disable byte-range locks). Here is a StorageClass example for Kubernetes: ``` --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azureclass provisioner: kubernetes.io/azure-file mountOptions: - dir_mode=0777 - file_mode=0777 - uid=0 - gid=0 - mfsymlinks - nobrl - cache=strict parameters: skuName: Standard_LRS ```
53,226,642
I have a Flask server Running on Azure provided by Azure App services with sqlite3 as a database. I am unable to update sqlite3 as it is showing that database is locked ``` 2018-11-09T13:21:53.854367947Z [2018-11-09 13:21:53,835] ERROR in app: Exception on /borrow [POST] 2018-11-09T13:21:53.854407246Z Traceback (most recent call last): 2018-11-09T13:21:53.854413046Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app 2018-11-09T13:21:53.854417846Z response = self.full_dispatch_request() 2018-11-09T13:21:53.854422246Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request 2018-11-09T13:21:53.854427146Z rv = self.handle_user_exception(e) 2018-11-09T13:21:53.854431646Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception 2018-11-09T13:21:53.854436146Z reraise(exc_type, exc_value, tb) 2018-11-09T13:21:53.854440346Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise 2018-11-09T13:21:53.854444746Z raise value 2018-11-09T13:21:53.854448846Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request 2018-11-09T13:21:53.854453246Z rv = self.dispatch_request() 2018-11-09T13:21:53.854457546Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request 2018-11-09T13:21:53.854461846Z return self.view_functions[rule.endpoint](**req.view_args) 2018-11-09T13:21:53.854466046Z File "/home/site/wwwroot/application.py", line 282, in borrow 2018-11-09T13:21:53.854480146Z cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) 2018-11-09T13:21:53.854963942Z sqlite3.OperationalError: database is locked ``` Here is the route - ``` @app.route('/borrow',methods=["POST"]) def borrow(): # import pdb; pdb.set_trace() body = request.get_json() user_id = body["userid"] bookid = body["bookid"] conn = sqlite3.connect("database.db") cursor = conn.cursor() date = datetime.now() expiry_date = date + timedelta(days=30) cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) # conn.commit() cursor.execute("INSERT INTO borrowed (issuedate,returndate,memberid,bookid) VALUES (?,?,?,?)",("xxx","xxx",user_id,bookid,)) conn.commit() cursor.close() conn.close() return json.dumps({"status":200,"conn":"working with datess update"}) ``` I tried checking the database integrity using pragma. There was no integrity loss. So I don't know what might be causing that error. Any help is Appreciated :)
2018/11/09
[ "https://Stackoverflow.com/questions/53226642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7624222/" ]
I use Azure App Service on Docker on Linux and have the same issue. If you are using Azure App Service on Windows, the problem is different from mine. The problem is that /home is mounted as a CIFS filesystem, which cannot deal with SQLite3 locks. My workaround is to copy the db.sqlite3 file to some directory other than /home, and properly set the permissions and ownership of the db.sqlite3 file and its directory as well. Then let my project read/write it. However, this workaround is pretty awkward. I don't recommend it.
This answer appears toward the top of a typical Google search for this issue so I thought I'd add a couple of additional tips: For those running JavaScript and using Sequelize as the interface to your SQLite DB, running `await sequelize.query('PRAGMA journal_mode=WAL;')` prior to creating your database will allow you to read/write the DB file in an Azure web app running under a Linux service plan. I have a separate script that creates one via a call to `sequelize.sync()`. I'm storing the DB file in a separate directory under /home within the file system for the Linux container. It seems to run fine and my workload is expected to be very light. Note that you don't need to set the journal mode again when your app starts and you try to connect to the database, that mode will be set in the file itself (this wasn't obvious from the SQLite docs).
53,226,642
I have a Flask server Running on Azure provided by Azure App services with sqlite3 as a database. I am unable to update sqlite3 as it is showing that database is locked ``` 2018-11-09T13:21:53.854367947Z [2018-11-09 13:21:53,835] ERROR in app: Exception on /borrow [POST] 2018-11-09T13:21:53.854407246Z Traceback (most recent call last): 2018-11-09T13:21:53.854413046Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app 2018-11-09T13:21:53.854417846Z response = self.full_dispatch_request() 2018-11-09T13:21:53.854422246Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request 2018-11-09T13:21:53.854427146Z rv = self.handle_user_exception(e) 2018-11-09T13:21:53.854431646Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception 2018-11-09T13:21:53.854436146Z reraise(exc_type, exc_value, tb) 2018-11-09T13:21:53.854440346Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise 2018-11-09T13:21:53.854444746Z raise value 2018-11-09T13:21:53.854448846Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request 2018-11-09T13:21:53.854453246Z rv = self.dispatch_request() 2018-11-09T13:21:53.854457546Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request 2018-11-09T13:21:53.854461846Z return self.view_functions[rule.endpoint](**req.view_args) 2018-11-09T13:21:53.854466046Z File "/home/site/wwwroot/application.py", line 282, in borrow 2018-11-09T13:21:53.854480146Z cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) 2018-11-09T13:21:53.854963942Z sqlite3.OperationalError: database is locked ``` Here is the route - ``` @app.route('/borrow',methods=["POST"]) def borrow(): # import pdb; pdb.set_trace() body = request.get_json() user_id = body["userid"] bookid = body["bookid"] conn = sqlite3.connect("database.db") cursor = conn.cursor() date = datetime.now() expiry_date = date + timedelta(days=30) cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) # conn.commit() cursor.execute("INSERT INTO borrowed (issuedate,returndate,memberid,bookid) VALUES (?,?,?,?)",("xxx","xxx",user_id,bookid,)) conn.commit() cursor.close() conn.close() return json.dumps({"status":200,"conn":"working with datess update"}) ``` I tried checking the database integrity using pragma. There was no integrity loss. So I don't know what might be causing that error. Any help is Appreciated :)
2018/11/09
[ "https://Stackoverflow.com/questions/53226642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7624222/" ]
Presumably this solution is not safe for production workloads but at least I got it working by executing the following command: ``` sqlite3 <database-file> 'PRAGMA journal_mode=wal;' ``` After running the above command, my database stored on an Azure File share works inside a container Web App.
I got it by setting up the azure mount options with the following configuration: ``` dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,nobrl,cache=strict ``` But the real solution is to add the flag **nobrl** (Byte-Range Lock). Add storageclass example for kubernetes: ``` --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azureclass provisioner: kubernetes.io/azure-file mountOptions: - dir_mode=0777 - file_mode=0777 - uid=0 - gid=0 - mfsymlinks - nobrl - cache=strict parameters: skuName: Standard_LRS ```
53,226,642
I have a Flask server Running on Azure provided by Azure App services with sqlite3 as a database. I am unable to update sqlite3 as it is showing that database is locked ``` 2018-11-09T13:21:53.854367947Z [2018-11-09 13:21:53,835] ERROR in app: Exception on /borrow [POST] 2018-11-09T13:21:53.854407246Z Traceback (most recent call last): 2018-11-09T13:21:53.854413046Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app 2018-11-09T13:21:53.854417846Z response = self.full_dispatch_request() 2018-11-09T13:21:53.854422246Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request 2018-11-09T13:21:53.854427146Z rv = self.handle_user_exception(e) 2018-11-09T13:21:53.854431646Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception 2018-11-09T13:21:53.854436146Z reraise(exc_type, exc_value, tb) 2018-11-09T13:21:53.854440346Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise 2018-11-09T13:21:53.854444746Z raise value 2018-11-09T13:21:53.854448846Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request 2018-11-09T13:21:53.854453246Z rv = self.dispatch_request() 2018-11-09T13:21:53.854457546Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request 2018-11-09T13:21:53.854461846Z return self.view_functions[rule.endpoint](**req.view_args) 2018-11-09T13:21:53.854466046Z File "/home/site/wwwroot/application.py", line 282, in borrow 2018-11-09T13:21:53.854480146Z cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) 2018-11-09T13:21:53.854963942Z sqlite3.OperationalError: database is locked ``` Here is the route - ``` @app.route('/borrow',methods=["POST"]) def borrow(): # import pdb; pdb.set_trace() body = request.get_json() user_id = body["userid"] bookid = body["bookid"] conn = sqlite3.connect("database.db") cursor = conn.cursor() date = datetime.now() expiry_date = date + timedelta(days=30) cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) # conn.commit() cursor.execute("INSERT INTO borrowed (issuedate,returndate,memberid,bookid) VALUES (?,?,?,?)",("xxx","xxx",user_id,bookid,)) conn.commit() cursor.close() conn.close() return json.dumps({"status":200,"conn":"working with datess update"}) ``` I tried checking the database integrity using pragma. There was no integrity loss. So I don't know what might be causing that error. Any help is Appreciated :)
2018/11/09
[ "https://Stackoverflow.com/questions/53226642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7624222/" ]
I got it working by setting up the Azure mount options with the following configuration: ``` dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,nobrl,cache=strict ``` But the real solution is to add the flag **nobrl** (disable byte-range locks). Here is a StorageClass example for Kubernetes: ``` --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azureclass provisioner: kubernetes.io/azure-file mountOptions: - dir_mode=0777 - file_mode=0777 - uid=0 - gid=0 - mfsymlinks - nobrl - cache=strict parameters: skuName: Standard_LRS ```
This answer appears toward the top of a typical Google search for this issue so I thought I'd add a couple of additional tips: For those running JavaScript and using Sequelize as the interface to your SQLite DB, running `await sequelize.query('PRAGMA journal_mode=WAL;')` prior to creating your database will allow you to read/write the DB file in an Azure web app running under a Linux service plan. I have a separate script that creates one via a call to `sequelize.sync()`. I'm storing the DB file in a separate directory under /home within the file system for the Linux container. It seems to run fine and my workload is expected to be very light. Note that you don't need to set the journal mode again when your app starts and you try to connect to the database, that mode will be set in the file itself (this wasn't obvious from the SQLite docs).
53,226,642
I have a Flask server Running on Azure provided by Azure App services with sqlite3 as a database. I am unable to update sqlite3 as it is showing that database is locked ``` 2018-11-09T13:21:53.854367947Z [2018-11-09 13:21:53,835] ERROR in app: Exception on /borrow [POST] 2018-11-09T13:21:53.854407246Z Traceback (most recent call last): 2018-11-09T13:21:53.854413046Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app 2018-11-09T13:21:53.854417846Z response = self.full_dispatch_request() 2018-11-09T13:21:53.854422246Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request 2018-11-09T13:21:53.854427146Z rv = self.handle_user_exception(e) 2018-11-09T13:21:53.854431646Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception 2018-11-09T13:21:53.854436146Z reraise(exc_type, exc_value, tb) 2018-11-09T13:21:53.854440346Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise 2018-11-09T13:21:53.854444746Z raise value 2018-11-09T13:21:53.854448846Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request 2018-11-09T13:21:53.854453246Z rv = self.dispatch_request() 2018-11-09T13:21:53.854457546Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request 2018-11-09T13:21:53.854461846Z return self.view_functions[rule.endpoint](**req.view_args) 2018-11-09T13:21:53.854466046Z File "/home/site/wwwroot/application.py", line 282, in borrow 2018-11-09T13:21:53.854480146Z cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) 2018-11-09T13:21:53.854963942Z sqlite3.OperationalError: database is locked ``` Here is the route - ``` @app.route('/borrow',methods=["POST"]) def borrow(): # import pdb; pdb.set_trace() body = request.get_json() user_id = body["userid"] bookid = body["bookid"] conn = sqlite3.connect("database.db") cursor = conn.cursor() date = datetime.now() expiry_date = date + timedelta(days=30) cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,)) # conn.commit() cursor.execute("INSERT INTO borrowed (issuedate,returndate,memberid,bookid) VALUES (?,?,?,?)",("xxx","xxx",user_id,bookid,)) conn.commit() cursor.close() conn.close() return json.dumps({"status":200,"conn":"working with datess update"}) ``` I tried checking the database integrity using pragma. There was no integrity loss. So I don't know what might be causing that error. Any help is Appreciated :)
2018/11/09
[ "https://Stackoverflow.com/questions/53226642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7624222/" ]
Presumably this solution is not safe for production workloads but at least I got it working by executing the following command: ``` sqlite3 <database-file> 'PRAGMA journal_mode=wal;' ``` After running the above command, my database stored on an Azure File share works inside a container Web App.
This answer appears toward the top of a typical Google search for this issue so I thought I'd add a couple of additional tips: For those running JavaScript and using Sequelize as the interface to your SQLite DB, running `await sequelize.query('PRAGMA journal_mode=WAL;')` prior to creating your database will allow you to read/write the DB file in an Azure web app running under a Linux service plan. I have a separate script that creates one via a call to `sequelize.sync()`. I'm storing the DB file in a separate directory under /home within the file system for the Linux container. It seems to run fine and my workload is expected to be very light. Note that you don't need to set the journal mode again when your app starts and you try to connect to the database, that mode will be set in the file itself (this wasn't obvious from the SQLite docs).
71,237,738
I looked up a lot of threads but none solved my issue I have the following dir in my project ``` my_proj folder_one __init__.py file_one.py folder_two __init__.py file_two.py __init__.py main.py ``` I am trying to import from `file_two.py` the `FileTwo` class: file\_one.py: ``` from folder_two.file_two import FileTwo ``` But i get the following error when I try to run the `file_one.py` file: ``` ModuleNotFoundError: No module named 'folder_two' ``` I tried the following: * Adding `PYTHONPATH` to .env file * Adding `"python.envFile": "${workspaceFolder}/.env"` to settings.json * Adding `"terminal.integrated.env.osx"` to settings.json * Adding `"python.analysis.extraPaths"` to settings.json * Adding `"python.autoComplete.extraPaths"` to settings.json * Adding `__init__.py` to every folder exists in my project When I print the `sys.path` from `file_one.py` I get the path: `/my_proj/folder_one` instead of `/my_proj` How can I add the `/my_proj` root directory of my project to sys.path (not permanently) so I could run any python file in my project and still access all files?
2022/02/23
[ "https://Stackoverflow.com/questions/71237738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9607072/" ]
The import line should be modified to `from folder_two.file_two import FileTwo`; notice that `.py` is removed. Then, in the root directory, `python main.py` should work. Also, if file\_one.py has a main function, you can launch it with `python -m folder_one.file_one`. Notice that `.py` is not used. Be sure to test the problem in the terminal first. There are more tricky parts in IDEs, e.g. `PyCharm`, since they might automatically change working directories or do something else that is not so obvious.
How did you modify the `"terminal.integrated.env.osx"` setting? Like this? ``` "terminal.integrated.env.osx": { "PYTHONPATH": "${workspaceFolder};", }, ``` And what have you added to the `.env` file? Like this (just an example, an absolute path)? ``` PYTHONPATH=c:\\Work\\python3.10 ``` > > The PYTHONPATH environment variable specifies additional locations > where the Python interpreter should look for modules. In VS Code, > PYTHONPATH can be set through the terminal settings > (terminal.integrated.env.\*) and/or within an .env file. > > > When the terminal settings are used, PYTHONPATH affects any tools that > are run within the terminal by a user, as well as any action the > extension performs for a user that is routed through the terminal such > as debugging. However, in this case when the extension is performing > an action that isn't routed through the terminal, such as the use of a > linter or formatter, then this setting will not have an effect on > module look-up. > > > When PYTHONPATH is set using an .env file, it will affect anything the > extension does on your behalf and actions performed by the debugger, > but it will not affect tools run in the terminal. > > > If needed, you can set PYTHONPATH using both methods. > > > You can refer to the [official docs](https://code.visualstudio.com/docs/python/environments#_use-of-the-pythonpath-variable).
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
I tried the code by @alex-martelli but found some discrepancies ``` python -mtimeit -s "xs=range(123456)" "map(hex, xs)" 1000000 loops, best of 5: 218 nsec per loop python -mtimeit -s "xs=range(123456)" "[hex(x) for x in xs]" 10 loops, best of 5: 19.4 msec per loop ``` map takes the same amount of time even for very large ranges while using list comprehension takes a lot of time as is evident from my code. So apart from being considered "unpythonic", I have not faced any performance issues relating to usage of map.
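For what it's worth, the gap above is probably explained by Python 3's lazy `map` (as another answer here notes): the bare `map(hex, xs)` call only builds an iterator. A sketch of a fairer comparison that materializes the result:

```python
import timeit

setup = "xs = range(123456)"
# Bare map() just constructs an iterator, so it looks constant-time.
print(timeit.timeit("map(hex, xs)", setup=setup, number=1000))
# Materializing the iterator does the actual work, making the
# comparison with the list comprehension fair.
print(timeit.timeit("list(map(hex, xs))", setup=setup, number=10))
print(timeit.timeit("[hex(x) for x in xs]", setup=setup, number=10))
```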
My use case: ``` def sum_items(*args): return sum(args) list_a = [1, 2, 3] list_b = [1, 2, 3] list_of_sums = list(map(sum_items, list_a, list_b)) >>> [3, 6, 9] comprehension = [sum(items) for items in iter(zip(list_a, list_b))] ``` I found myself starting to use map more; I thought map could be slower than a comprehension due to passing and returning arguments, which is why I found this post. I believe using map can be much more readable and flexible, especially when I need to construct the values of the list. You actually understand it when you read it if you use map. ``` def pair_list_items(*args): return args packed_list = list(map(pair_list_items, lista, *listb, listc.....listn)) ``` Plus the flexibility bonus. And thanks for all the other answers, plus the performance bonus.
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
Actually, `map` and list comprehensions behave quite differently in the Python 3 language. Take a look at the following Python 3 program: ``` def square(x): return x*x squares = map(square, [1, 2, 3]) print(list(squares)) print(list(squares)) ``` You might expect it to print the line "[1, 4, 9]" twice, but instead it prints "[1, 4, 9]" followed by "[]". The first time you look at `squares` it seems to behave as a sequence of three elements, but the second time as an empty one. In the Python 2 language `map` returns a plain old list, just like list comprehensions do in both languages. The crux is that the return value of `map` in Python 3 (and `imap` in Python 2) is not a list - it's an iterator! The elements are consumed when you iterate over an iterator unlike when you iterate over a list. This is why `squares` looks empty in the last `print(list(squares))` line. To summarize: * **When dealing with iterators you have to remember that they are stateful and that they mutate as you traverse them.** * Lists are more predictable since they only change when you explicitly mutate them; they are *containers*. * And a bonus: numbers, strings, and tuples are even more predictable since they cannot change at all; they are *values*.
[![enter image description here](https://i.stack.imgur.com/ZPBTx.png)](https://i.stack.imgur.com/ZPBTx.png) Image Source: Experfy You can see for yourself which is better between a list comprehension and the map function (the list comprehension takes less time to process 1 million records compared to the map function). Hope it helps! Good luck :)
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
I timed some of the results with [perfplot](https://github.com/nschloe/perfplot) (a project of mine). As others have noted, `map` really only returns an iterator so it's a constant-time operation. When realizing the iterator by `list()`, it's on par with list comprehensions. Depending on the expression, either one might have a slight edge but it's hardly significant. Note that arithmetic operations like `x ** 2` are *much* faster in NumPy, especially if the input data is already a NumPy array. `hex`: [![enter image description here](https://i.stack.imgur.com/5PGSU.png)](https://i.stack.imgur.com/5PGSU.png) `x ** 2`: [![enter image description here](https://i.stack.imgur.com/Ujs5b.png)](https://i.stack.imgur.com/Ujs5b.png) --- Code to reproduce the plots: ```py import perfplot def standalone_map(data): return map(hex, data) def list_map(data): return list(map(hex, data)) def comprehension(data): return [hex(x) for x in data] b = perfplot.bench( setup=lambda n: list(range(n)), kernels=[standalone_map, list_map, comprehension], n_range=[2 ** k for k in range(20)], equality_check=None, ) b.save("out.png") b.show() ``` ```py import perfplot import numpy as np def standalone_map(data): return map(lambda x: x ** 2, data[0]) def list_map(data): return list(map(lambda x: x ** 2, data[0])) def comprehension(data): return [x ** 2 for x in data[0]] def numpy_asarray(data): return np.asarray(data[0]) ** 2 def numpy_direct(data): return data[1] ** 2 b = perfplot.bench( setup=lambda n: (list(range(n)), np.arange(n)), kernels=[standalone_map, list_map, comprehension, numpy_direct, numpy_asarray], n_range=[2 ** k for k in range(20)], equality_check=None, ) b.save("out2.png") b.show() ```
I tried the code by @alex-martelli but found some discrepancies ``` python -mtimeit -s "xs=range(123456)" "map(hex, xs)" 1000000 loops, best of 5: 218 nsec per loop python -mtimeit -s "xs=range(123456)" "[hex(x) for x in xs]" 10 loops, best of 5: 19.4 msec per loop ``` map takes the same amount of time even for very large ranges while using list comprehension takes a lot of time as is evident from my code. So apart from being considered "unpythonic", I have not faced any performance issues relating to usage of map.
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
If you plan on writing any asynchronous, parallel, or distributed code, you will probably prefer `map` over a list comprehension -- as most asynchronous, parallel, or distributed packages provide a `map` function to overload python's `map`. Then by passing the appropriate `map` function to the rest of your code, you may not have to modify your original serial code to have it run in parallel (etc).
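A minimal sketch with the standard library's `multiprocessing.Pool`, whose `map` has the same call shape as the builtin, showing how the surrounding code can stay unchanged:

```python
from multiprocessing import Pool

def square(x):
    return x * x

def process(numbers, map_fn=map):
    # The rest of the code depends only on "some map-like callable",
    # so it runs serially or in parallel without modification.
    return list(map_fn(square, numbers))

if __name__ == "__main__":
    print(process(range(10)))                 # serial: builtin map
    with Pool(4) as pool:
        print(process(range(10), pool.map))   # parallel: same code path
```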
I consider that the most Pythonic way is to use a list comprehension instead of `map` and `filter`. The reason is that list comprehensions are clearer than `map` and `filter`. ``` In [1]: odd_cubes = [x ** 3 for x in range(10) if x % 2 == 1] # using a list comprehension In [2]: odd_cubes_alt = list(map(lambda x: x ** 3, filter(lambda x: x % 2 == 1, range(10)))) # using map and filter In [3]: odd_cubes == odd_cubes_alt Out[3]: True ``` As you an see, a comprehension does not require extra `lambda` expressions as `map` needs. Furthermore, a comprehension also allows filtering easily, while `map` requires `filter` to allow filtering.
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
`map` may be microscopically faster in some cases (when you're NOT making a lambda for the purpose, but using the same function in map and a listcomp). List comprehensions may be faster in other cases and most (not all) pythonistas consider them more direct and clearer. An example of the tiny speed advantage of map when using exactly the same function: ``` $ python -m timeit -s'xs=range(10)' 'map(hex, xs)' 100000 loops, best of 3: 4.86 usec per loop $ python -m timeit -s'xs=range(10)' '[hex(x) for x in xs]' 100000 loops, best of 3: 5.58 usec per loop ``` An example of how performance comparison gets completely reversed when map needs a lambda: ``` $ python -m timeit -s'xs=range(10)' 'map(lambda x: x+2, xs)' 100000 loops, best of 3: 4.24 usec per loop $ python -m timeit -s'xs=range(10)' '[x+2 for x in xs]' 100000 loops, best of 3: 2.32 usec per loop ```
I timed some of the results with [perfplot](https://github.com/nschloe/perfplot) (a project of mine). As others have noted, `map` really only returns an iterator so it's a constant-time operation. When realizing the iterator by `list()`, it's on par with list comprehensions. Depending on the expression, either one might have a slight edge but it's hardly significant. Note that arithmetic operations like `x ** 2` are *much* faster in NumPy, especially if the input data is already a NumPy array. `hex`: [![enter image description here](https://i.stack.imgur.com/5PGSU.png)](https://i.stack.imgur.com/5PGSU.png) `x ** 2`: [![enter image description here](https://i.stack.imgur.com/Ujs5b.png)](https://i.stack.imgur.com/Ujs5b.png) --- Code to reproduce the plots: ```py import perfplot def standalone_map(data): return map(hex, data) def list_map(data): return list(map(hex, data)) def comprehension(data): return [hex(x) for x in data] b = perfplot.bench( setup=lambda n: list(range(n)), kernels=[standalone_map, list_map, comprehension], n_range=[2 ** k for k in range(20)], equality_check=None, ) b.save("out.png") b.show() ``` ```py import perfplot import numpy as np def standalone_map(data): return map(lambda x: x ** 2, data[0]) def list_map(data): return list(map(lambda x: x ** 2, data[0])) def comprehension(data): return [x ** 2 for x in data[0]] def numpy_asarray(data): return np.asarray(data[0]) ** 2 def numpy_direct(data): return data[1] ** 2 b = perfplot.bench( setup=lambda n: (list(range(n)), np.arange(n)), kernels=[standalone_map, list_map, comprehension, numpy_direct, numpy_asarray], n_range=[2 ** k for k in range(20)], equality_check=None, ) b.save("out2.png") b.show() ```
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
Here is one possible case: ``` map(lambda op1,op2: op1*op2, list1, list2) ``` versus: ``` [op1*op2 for op1,op2 in zip(list1,list2)] ``` I am guessing that zip() is an unfortunate and unnecessary overhead you need to indulge in if you insist on using a list comprehension instead of map. It would be great if someone could clarify this, affirmatively or negatively.
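One way to check this empirically -- a sketch, not a verdict, since numbers vary by machine and list size:

```python
import timeit

setup = "list1 = list(range(1000)); list2 = list(range(1000))"
print(timeit.timeit("list(map(lambda a, b: a * b, list1, list2))",
                    setup=setup, number=1000))
print(timeit.timeit("[a * b for a, b in zip(list1, list2)]",
                    setup=setup, number=1000))
```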
[![enter image description here](https://i.stack.imgur.com/ZPBTx.png)](https://i.stack.imgur.com/ZPBTx.png) Image Source: Experfy You can see for yourself which is better between a list comprehension and the map function (the list comprehension takes less time to process 1 million records compared to the map function). Hope it helps! Good luck :)
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
Here is one possible case: ``` map(lambda op1,op2: op1*op2, list1, list2) ``` versus: ``` [op1*op2 for op1,op2 in zip(list1,list2)] ``` I am guessing that zip() is an unfortunate and unnecessary overhead you need to indulge in if you insist on using a list comprehension instead of map. It would be great if someone could clarify this, affirmatively or negatively.
I ran a quick test comparing three methods for invoking the method of an object. The time difference, in this case, is negligible and is a matter of the function in question (see @Alex Martelli's [response](https://stackoverflow.com/a/1247490/6557588)). Here, I looked at the following methods: ```py # map_lambda list(map(lambda x: x.add(), vals)) # map_operator from operator import methodcaller list(map(methodcaller("add"), vals)) # map_comprehension [x.add() for x in vals] ``` I looked at lists (stored in the variable `vals`) of both integers (Python `int`) and floating point numbers (Python `float`) for increasing list sizes. The following dummy class `DummyNum` is considered: ``` class DummyNum(object): """Dummy class""" __slots__ = 'n', def __init__(self, n): self.n = n def add(self): self.n += 5 ``` Specifically, the `add` method. The `__slots__` attribute is a simple optimization in Python to define the total memory needed by the class (attributes), reducing memory size. Here are the resulting plots. [![Performance of mapping Python object methods](https://i.stack.imgur.com/PIm1z.png)](https://i.stack.imgur.com/PIm1z.png) As stated previously, the technique used makes a minimal difference and you should code in a way that is most readable to you, or in the particular circumstance. In this case, the list comprehension (`map_comprehension` technique) is fastest for both types of additions in an object, especially with shorter lists. Visit [this pastebin](https://pastebin.com/cikfyFGh) for the source used to generate the plot and data.
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
Actually, `map` and list comprehensions behave quite differently in the Python 3 language. Take a look at the following Python 3 program: ``` def square(x): return x*x squares = map(square, [1, 2, 3]) print(list(squares)) print(list(squares)) ``` You might expect it to print the line "[1, 4, 9]" twice, but instead it prints "[1, 4, 9]" followed by "[]". The first time you look at `squares` it seems to behave as a sequence of three elements, but the second time as an empty one. In the Python 2 language `map` returns a plain old list, just like list comprehensions do in both languages. The crux is that the return value of `map` in Python 3 (and `imap` in Python 2) is not a list - it's an iterator! The elements are consumed when you iterate over an iterator unlike when you iterate over a list. This is why `squares` looks empty in the last `print(list(squares))` line. To summarize: * **When dealing with iterators you have to remember that they are stateful and that they mutate as you traverse them.** * Lists are more predictable since they only change when you explicitly mutate them; they are *containers*. * And a bonus: numbers, strings, and tuples are even more predictable since they cannot change at all; they are *values*.
Here is one possible case: ``` map(lambda op1,op2: op1*op2, list1, list2) ``` versus: ``` [op1*op2 for op1,op2 in zip(list1,list2)] ``` I am guessing that zip() is an unfortunate and unnecessary overhead you need to indulge in if you insist on using a list comprehension instead of map. It would be great if someone could clarify this, affirmatively or negatively.
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
Python 2: You should use `map` and `filter` instead of list comprehensions. --------------------------------------------------------------------------- An **objective** reason why you should prefer them even though they're not "Pythonic" is this: They require functions/lambdas as arguments, which **introduce a new scope**. I've gotten bitten by this more than once: ``` for x, y in somePoints: # (several lines of code here) squared = [x ** 2 for x in numbers] # Oops, x was silently overwritten! ``` but if instead I had said: ``` for x, y in somePoints: # (several lines of code here) squared = map(lambda x: x ** 2, numbers) ``` then everything would've been fine. You could say I was being silly for using the same variable name in the same scope. I wasn't. The code was fine originally -- the two `x`s weren't in the same scope. It was only after I **moved** the inner block to a different section of the code that the problem came up (read: problem during maintenance, not development), and I didn't expect it. Yes, *if you never make this mistake* then list comprehensions are more elegant. But from personal experience (and from seeing others make the same mistake) I've seen it happen enough times that I think it's not worth the pain you have to go through when these bugs creep into your code. ### Conclusion: Use `map` and `filter`. They prevent subtle hard-to-diagnose scope-related bugs. ### Side note: Don't forget to consider using `imap` and `ifilter` (in `itertools`) if they are appropriate for your situation!
I find list comprehensions are generally more expressive of what I'm trying to do than `map` - they both get it done, but the former saves the mental load of trying to understand what could be a complex `lambda` expression. There's also an interview out there somewhere (I can't find it offhand) where Guido lists `lambda`s and the functional functions as the thing he most regrets about accepting into Python, so you could make the argument that they're un-Pythonic by virtue of that.
1,247,486
Is there a reason to prefer using `map()` over list comprehension or vice versa? Is either of them generally more efficient or considered generally more pythonic than the other?
2009/08/07
[ "https://Stackoverflow.com/questions/1247486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43818/" ]
Python 2: You should use `map` and `filter` instead of list comprehensions. --------------------------------------------------------------------------- An **objective** reason why you should prefer them even though they're not "Pythonic" is this: They require functions/lambdas as arguments, which **introduce a new scope**. I've gotten bitten by this more than once: ``` for x, y in somePoints: # (several lines of code here) squared = [x ** 2 for x in numbers] # Oops, x was silently overwritten! ``` but if instead I had said: ``` for x, y in somePoints: # (several lines of code here) squared = map(lambda x: x ** 2, numbers) ``` then everything would've been fine. You could say I was being silly for using the same variable name in the same scope. I wasn't. The code was fine originally -- the two `x`s weren't in the same scope. It was only after I **moved** the inner block to a different section of the code that the problem came up (read: problem during maintenance, not development), and I didn't expect it. Yes, *if you never make this mistake* then list comprehensions are more elegant. But from personal experience (and from seeing others make the same mistake) I've seen it happen enough times that I think it's not worth the pain you have to go through when these bugs creep into your code. ### Conclusion: Use `map` and `filter`. They prevent subtle hard-to-diagnose scope-related bugs. ### Side note: Don't forget to consider using `imap` and `ifilter` (in `itertools`) if they are appropriate for your situation!
I ran a quick test comparing three methods for invoking the method of an object. The time difference, in this case, is negligible and is a matter of the function in question (see @Alex Martelli's [response](https://stackoverflow.com/a/1247490/6557588)). Here, I looked at the following methods: ```py # map_lambda list(map(lambda x: x.add(), vals)) # map_operator from operator import methodcaller list(map(methodcaller("add"), vals)) # map_comprehension [x.add() for x in vals] ``` I looked at lists (stored in the variable `vals`) of both integers (Python `int`) and floating point numbers (Python `float`) for increasing list sizes. The following dummy class `DummyNum` is considered: ``` class DummyNum(object): """Dummy class""" __slots__ = 'n', def __init__(self, n): self.n = n def add(self): self.n += 5 ``` Specifically, the `add` method. The `__slots__` attribute is a simple optimization in Python to define the total memory needed by the class (attributes), reducing memory size. Here are the resulting plots. [![Performance of mapping Python object methods](https://i.stack.imgur.com/PIm1z.png)](https://i.stack.imgur.com/PIm1z.png) As stated previously, the technique used makes a minimal difference and you should code in a way that is most readable to you, or in the particular circumstance. In this case, the list comprehension (`map_comprehension` technique) is fastest for both types of additions in an object, especially with shorter lists. Visit [this pastebin](https://pastebin.com/cikfyFGh) for the source used to generate the plot and data.
40,852,174
I'm not sure why I get an error message when I click on get button. I've moved things around and I still get this error message. ``` Exception in Tkinter callback Traceback (most recent call last): File "/usr/lib/python3.4/tkinter/__init__.py", line 1536, in __call__ return self.func(*args) File "address.py", line 5, in getName first_name = first_name.get() UnboundLocalError: local variable 'first_name' referenced before assignment ``` Code: ``` from tkinter import * from tkinter import ttk def getName(event): first_name = first_name.get() fieldEntry.insert(0, first_name) root = Tk() root.title('Address Book') Label(root, text='First Name').grid(row=0, sticky=W, padx=4) first_name = Entry(root).grid(row=0, column=1, sticky=E, pady=4) Label(root, text='Last Name').grid(row=1, sticky=W, padx=4) last_name = Entry(root).grid(row=1, column=1, sticky=E, pady=4) #Button(root, text='Submit').grid(row=3) getNameButton = Button(root, text='Get') getNameButton.bind('<Button-1>', getName) getNameButton.grid(row=3, column=1, sticky=E) fieldEntry = Entry(root) fieldEntry.grid(row=4, column=1, sticky=E) root.mainloop() ```
2016/11/28
[ "https://Stackoverflow.com/questions/40852174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6812766/" ]
You've got two problems. When you write: ``` first_name = first_name.get() ``` Python sees `first_name` on the left hand side of an assignment and it assumes that `first_name` is a local variable. Then, when it is run and it looks up `first_name` (on the right hand side), it only looks in the function's locals. Since the local `first_name` hasn't yet been populated in the *local* namespace, it raises an `UnboundLocalError`. I'd recommend changing the name of the global: ``` first_name_entry = Entry(root) first_name_entry.grid(row=0, column=1, sticky=E, pady=4) ``` Notice that to solve the second problem, I split the entry creation from gridding it. The reason is that `Widget.grid` returns `None`, which means that (in your handling code) `first_name` was previously `None` instead of being the entry that you were expecting it to be. --- Here's a full script that works on my computer: ``` from tkinter import * from tkinter import ttk def getName(event): first_name = first_name_entry.get() fieldEntry.insert(0, first_name) root = Tk() root.title('Address Book') Label(root, text='First Name').grid(row=0, sticky=W, padx=4) first_name_entry = Entry(root) first_name_entry.grid(row=0, column=1, sticky=E, pady=4) Label(root, text='Last Name').grid(row=1, sticky=W, padx=4) last_name = Entry(root).grid(row=1, column=1, sticky=E, pady=4) #Button(root, text='Submit').grid(row=3) getNameButton = Button(root, text='Get') getNameButton.bind('<Button-1>', getName) getNameButton.grid(row=3, column=1, sticky=E) fieldEntry = Entry(root) fieldEntry.grid(row=4, column=1, sticky=E) root.mainloop() ```
You're referencing `first_name` in a function before it's been defined later on in the program. You have to define `first_name` before the `getName()` function; in other words, you could put the `getName()` function after you define `first_name`.
70,065,900
I was going to deploy my Django App on Namecheap shared hosting. My app needs Pillow to be able to run perfectly. But while installing pillow using `pip install Pillow` in Namecheap's Terminal, I get the error. I installed Django and other libraries successfully. But while installing Pillow, it gives me this error. ``` Collecting Pillow==8.4.0 Using cached Pillow-8.4.0.tar.gz (49.4 MB) Preparing metadata (setup.py) ... done Building wheels for collected packages: Pillow Building wheel for Pillow (setup.py) ... error ERROR: Command errored out with exit status 1: command: /home/abduxdcv/virtualenv/iffi-store-app/3.8/bin/python3.8_bin -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac/setup.py'"'"'; __file__='"'"'/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-vyxwq5up cwd: /tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac/ Complete output (143 lines): /opt/alt/python38/lib64/python3.8/distutils/dist.py:274: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) /opt/alt/python38/lib64/python3.8/distutils/dist.py:274: UserWarning: Unknown distribution option: 'project_urls' warnings.warn(msg) running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImtImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/BmpImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL running egg_info writing src/Pillow.egg-info/PKG-INFO writing dependency_links to src/Pillow.egg-info/dependency_links.txt writing top-level names to src/Pillow.egg-info/top_level.txt reading manifest file 'src/Pillow.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.c' warning: no files found matching '*.h' warning: no files found matching '*.sh' warning: no previously-included files found matching '.appveyor.yml' warning: no previously-included files found matching '.clang-format' warning: no previously-included files found matching '.coveragerc' warning: no previously-included files found matching '.editorconfig' warning: no previously-included files found matching '.readthedocs.yml' warning: no previously-included files found matching 'codecov.yml' warning: no previously-included files matching '.git*' found anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution no previously-included directories found matching '.ci' writing manifest file 'src/Pillow.egg-info/SOURCES.txt' running build_ext building 'PIL._imaging' extension creating build/temp.linux-x86_64-3.8 creating build/temp.linux-x86_64-3.8/src creating build/temp.linux-x86_64-3.8/src/libImaging building 'PIL._imagingft' extension /opt/rh/devtoolset-7/root/usr/bin/gcc -Wno-unused-result -Wsign-compare -DNDEBUG -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result-Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers 
-Werror=implicit-function-declaration -D_GNU_SOURCE -fPIC -fwrapv -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fPIC -I/usr/include/freetype2 -I/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac -I/home/abduxdcv/virtualenv/iffi-store-app/3.8/include -I/usr/include -I/opt/alt/python38/include/python3.8 -c src/_imagingmorph.c -o build/temp.linux-x86_64-3.8/src/_imagingmorph.o unable to execute '/opt/rh/devtoolset-7/root/usr/bin/gcc': No such file or directory unable to execute '/opt/rh/devtoolset-7/root/usr/bin/gcc': No such file or directory unable to execute '/opt/rh/devtoolset-7/root/usr/bin/gcc': No such file or directory error: command '/opt/rh/devtoolset-7/root/usr/bin/gcc' failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for Pillow Running setup.py clean for Pillow Failed to build Pillow Installing collected packages: Pillow Running setup.py install for Pillow ... error ERROR: Command errored out with exit status 1: command: /home/abduxdcv/virtualenv/iffi-store-app/3.8/bin/python3.8_bin -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac/setup.py'"'"'; __file__='"'"'/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac/setup.py'"'"';f = getattr(tokenize,'"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-j3nzm_7c/install-record.txt --single-version-externally-managed --compile --install-headers /home/abduxdcv/virtualenv/iffi-store-app/3.8/include/site/python3.8/Pillow cwd: /tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac/ Complete output (143 lines): /opt/alt/python38/lib64/python3.8/distutils/dist.py:274: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) /opt/alt/python38/lib64/python3.8/distutils/dist.py:274: UserWarning: Unknown distribution option: 'project_urls' warnings.warn(msg) running install running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageWin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/FtexImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/EpsImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PdfParser.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageCms.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageDraw2.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageFilter.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/CurImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/FontFile.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageDraw.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/McIdasImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ContainerIO.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/WmfImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/FitsStubImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/__main__.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/TiffTags.py -> build/lib.linux-x86_64-3.8/PIL 
copying src/PIL/__init__.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PcdImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/Hdf5StubImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/_util.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageShow.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageStat.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/SgiImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageMode.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/IcnsImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/TiffImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/FliImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PaletteFile.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/GbrImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PSDraw.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/GimpPaletteFile.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageFile.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/_binary.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageGrab.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/MspImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImagePalette.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/SpiderImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/GimpGradientFile.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageMath.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PsdImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/IcoImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PalmImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/features.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/FpxImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/GifImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ExifTags.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageEnhance.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageSequence.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PyAccess.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/TgaImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/Jpeg2KImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/BufrStubImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/XpmImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/XVThumbImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageChops.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PdfImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/DcxImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageTk.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PpmImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageQt.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/MpoImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/MicImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageTransform.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PixarImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/WebPImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageMorph.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/TarIO.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PcxImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/GdImageFile.py -> 
build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageOps.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageFont.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/JpegImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImageColor.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/Image.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/SunImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/XbmImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/MpegImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/_tkinter_finder.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/GribStubImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImagePath.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PngImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/PcfFontFile.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/DdsImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/_version.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/WalImageFile.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/BdfFontFile.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/BlpImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/JpegPresets.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/IptcImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/ImtImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL copying src/PIL/BmpImagePlugin.py -> build/lib.linux-x86_64-3.8/PIL running egg_info writing src/Pillow.egg-info/PKG-INFO writing dependency_links to src/Pillow.egg-info/dependency_links.txt writing top-level names to src/Pillow.egg-info/top_level.txt reading manifest file 'src/Pillow.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.c' warning: no files found matching '*.h' warning: no files found matching '*.sh' warning: no previously-included files found matching '.appveyor.yml' warning: no previously-included files found matching '.clang-format' warning: no previously-included files found matching '.coveragerc' warning: no previously-included files found matching '.editorconfig' warning: no previously-included files found matching '.readthedocs.yml' warning: no previously-included files found matching 'codecov.yml' warning: no previously-included files matching '.git*' found anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution no previously-included directories found matching '.ci' writing manifest file 'src/Pillow.egg-info/SOURCES.txt' running build_ext building 'PIL._imaging' extension building 'PIL._imagingft' extension creating build/temp.linux-x86_64-3.8 creating build/temp.linux-x86_64-3.8/src building 'PIL._imagingtk' extension /opt/rh/devtoolset-7/root/usr/bin/gcc -Wno-unused-result -Wsign-compare -DNDEBUG -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -D_GNU_SOURCE -fPIC -fwrapv -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fPIC -I/usr/include/freetype2 
-I/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac -I/home/abduxdcv/virtualenv/iffi-store-app/3.8/include -I/usr/include -I/opt/alt/python38/include/python3.8 -c src/_imagingft.c -o build/temp.linux-x86_64-3.8/src/_imagingft.o creating build/temp.linux-x86_64-3.8/src/libImaging building 'PIL._imagingmath' extension /opt/rh/devtoolset-7/root/usr/bin/gcc -Wno-unused-result -Wsign-compare -DNDEBUG -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -D_GNU_SOURCE -fPIC -fwrapv -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fPIC -I/usr/include/freetype2 -I/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac -I/home/abduxdcv/virtualenv/iffi-store-app/3.8/include -I/usr/include -I/opt/alt/python38/include/python3.8 -c src/_imagingmath.c -o build/temp.linux-x86_64-3.8/src/_imagingmath.o unable to execute '/opt/rh/devtoolset-7/root/usr/bin/gcc': No such file or directory /opt/rh/devtoolset-7/root/usr/bin/gcc -Wno-unused-result -Wsign-compare -DNDEBUG -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -D_GNU_SOURCE -fPIC -fwrapv -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fPIC -DHAVE_LIBJPEG -DHAVE_LIBZ -DHAVE_LIBTIFF -DHAVE_XCB -DPILLOW_VERSION="8.4.0" -I/usr/include/freetype2 -I/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac -I/home/abduxdcv/virtualenv/iffi-store-app/3.8/include -I/usr/include -I/opt/alt/python38/include/python3.8 -c src/_imaging.c -o build/temp.linux-x86_64-3.8/src/_imaging.o unable to execute '/opt/rh/devtoolset-7/root/usr/bin/gcc': No such file or directory creating build/temp.linux-x86_64-3.8/src/Tk building 'PIL._imagingmorph' extension /opt/rh/devtoolset-7/root/usr/bin/gcc -Wno-unused-result -Wsign-compare -DNDEBUG -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -D_GNU_SOURCE -fPIC -fwrapv -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fPIC -I/usr/include/freetype2 -I/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac -I/home/abduxdcv/virtualenv/iffi-store-app/3.8/include -I/usr/include -I/opt/alt/python38/include/python3.8 -c src/_imagingtk.c -o build/temp.linux-x86_64-3.8/src/_imagingtk.o /opt/rh/devtoolset-7/root/usr/bin/gcc -Wno-unused-result -Wsign-compare -DNDEBUG -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects 
-flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -D_GNU_SOURCE -fPIC -fwrapv -D_GNU_SOURCE -fPIC -fwrapv -O2 -fno-semantic-interposition -pthread -Wno-unused-result -Wsign-compare -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fPIC -I/usr/include/freetype2 -I/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac -I/home/abduxdcv/virtualenv/iffi-store-app/3.8/include -I/usr/include -I/opt/alt/python38/include/python3.8 -c src/_imagingmorph.c -o build/temp.linux-x86_64-3.8/src/_imagingmorph.o unable to execute '/opt/rh/devtoolset-7/root/usr/bin/gcc': No such file or directory unable to execute '/opt/rh/devtoolset-7/root/usr/bin/gcc': No such file or directory unable to execute '/opt/rh/devtoolset-7/root/usr/bin/gcc': No such file or directory error: command '/opt/rh/devtoolset-7/root/usr/bin/gcc' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /home/abduxdcv/virtualenv/iffi-store-app/3.8/bin/python3.8_bin -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0]= '"'"'/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac/setup.py'"'"'; __file__='"'"'/tmp/pip-install-u2_5lsia/pillow_aabaeed7df664fd985a82d84f11f5eac/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-j3nzm_7c/install-record.txt --single-version-externally-managed --compile --install-headers /home/abduxdcv/virtualenv/iffi-store-app/3.8/include/site/python3.8/Pillow Check the logs for full command output. ``` I have tried to upgrade pip, but that did not solve the issue. Does anyone know whats wrong?
2021/11/22
[ "https://Stackoverflow.com/questions/70065900", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17410285/" ]
Found a solution that solves the problem, so I'm answering my own question. I think it's more related to mediasoup (the WebRTC framework I am using), which uses Opus set to `channels=2 & rate=48000`, but opusenc in GStreamer can use channels 1 or 8. Setting a sample rate of 24000 solves the problem. Just add the following before opusenc:

```
! audioresample ! audio/x-raw, rate=24000
```
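For context, a minimal Python/GStreamer sketch (assuming PyGObject is installed) of where that resample/caps pair sits relative to `opusenc`; the source and sink elements here are placeholders, not the real mediasoup pipeline:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# audioresample plus a caps filter force 24 kHz raw audio into opusenc
pipeline = Gst.parse_launch(
    "audiotestsrc ! audioconvert ! audioresample "
    "! audio/x-raw, rate=24000 ! opusenc ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)  # a real app would also run a GLib main loop
```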
Sounds like a stereo audio interleaving problem, where every other sample is being skipped. The output sample you provided is a stereo MP3, yet both channels are identical. Try using `channels=1`, or try adjusting or removing the `demux` processing.
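To check that hypothesis, here is a small sketch (numpy is assumed, and the raw-PCM file name is hypothetical) that splits interleaved 16-bit stereo samples and compares the two channels:

```python
import numpy as np

# Split interleaved 16-bit stereo PCM into its two channels.
raw = open("capture.pcm", "rb").read()  # hypothetical capture of the raw audio
samples = np.frombuffer(raw, dtype=np.int16)
left, right = samples[0::2], samples[1::2]  # stereo frames are L,R,L,R,...
print(np.array_equal(left, right))  # True would confirm duplicated channels
```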
57,158,092
Error after updates ``` [2019-07-29 12:52:23,301] INFO Initializing writer using SQL dialect: PostgreSqlDatabaseDialect (io.confluent.connect.jdbc.sink.JdbcSinkTask:57) [2019-07-29 12:52:23,303] INFO WorkerSinkTask{id=sink-postgres-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:301) [2019-07-29 12:52:23,367] WARN [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Error while fetching metadata with correlation id 2 : {kafkadad=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient:1023) [2019-07-29 12:52:23,368] INFO Cluster ID: _gRuX5-0SUu72wzy6PV0Ag (org.apache.kafka.clients.Metadata:365) [2019-07-29 12:52:23,369] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Discovered group coordinator INTRIVMPIOT01.xpetize.local:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675) [2019-07-29 12:52:23,372] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:459) [2019-07-29 12:52:23,373] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491) [2019-07-29 12:52:23,383] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491) [2019-07-29 12:52:23,482] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Successfully joined group with generation 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:455) [2019-07-29 12:52:23,486] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Setting newly assigned partitions: kafkadad-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:290) [2019-07-29 12:52:23,501] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Resetting offset for partition kafkadad-0 to offset 0. 
(org.apache.kafka.clients.consumer.internals.Fetcher:584) [2019-07-29 12:52:35,338] ERROR WorkerSinkTask{id=sink-postgres-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177) org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104) at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:487) at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:464) at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:320) at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224) at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NullPointerException at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:701) at org.apache.kafka.connect.json.JsonConverter.access$000(JsonConverter.java:61) at org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:181) at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:745) at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:363) at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:487) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162) ... 
13 more [2019-07-29 12:52:35,347] ERROR WorkerSinkTask{id=sink-postgres-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178) [2019-07-29 12:52:35,347] INFO Stopping task (io.confluent.connect.jdbc.sink.JdbcSinkTask:105) [2019-07-29 12:52:35,349] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Member consumer-1-bdbc7035-7625-4701-9ca7-c1ffa6863456 sending LeaveGroup request to coordinator INTRIVMPIOT01.xpetize.local:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:822) ```

Producer console:

[![enter image description here](https://i.stack.imgur.com/x0wdO.png)](https://i.stack.imgur.com/x0wdO.png)

**connect-standalone.properties file**

```
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/home/kafka/confluent-5.2.1/share/java
```

**connect-post.properties file**

```
name=sink-postgres
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=2
topics=kafkada
connection.url=jdbc:postgresql://localhost:5432/kafkadb?
user=postgres&password=postgres
insert.mode=upsert
table.name.format=kafkatable
pk.mode=none
pk.fields=none
auto.create=true
auto.evolve=false
offset.storage.file.filename=/tmp/post-sink.offsets
```

The above error occurs when I run ./bin/connect-standalone.sh config/connect-standalone.properties config.postgresql.properties through Apache Kafka. I have also tried, and achieved, the flow mentioned in this link: <https://hellokoding.com/kafka-connect-sinks-data-to-postgres-example-with-avro-schema-registry-and-python> But there the data is generated from Python code using Avro, whereas in my case I already have data coming from sensors (in JSON format) in a Kafka topic which I want to send to PostgreSQL, instead of generating data through code. So, how can I achieve this flow of sending data from a Kafka topic to PostgreSQL? I have shared my properties files above; please let me know if correction is required. I am sending simple JSON data like "{"cust_id": 1313131, "month": 12, "expenses": 1313.13}" and I also tried sending this type of data, but the error still exists:

**sample json data**

```
{
  "schema": {
    "type": "struct",
    "fields": [
      {
        "type": "int32",
        "optional": false,
        "field": "customer_id"
      },
      {
        "type": "int32",
        "optional": true,
        "field": "month"
      },
      {
        "type": "string",
        "optional": true,
        "field": "amount_paid"
      }
    ],
    "optional": false,
    "name": "msgschema"
  },
  "payload": {
    "cust_id": 13,
    "month": 12,
    "expenses": 1313.13
  }
}
```

and I have a table called kafkatable which has column names (customer_id, month, amount_paid), created using "CREATE TABLE kafkatable( customer_id int8, month int4, amount_paid decimal(9,2) );"
2019/07/23
[ "https://Stackoverflow.com/questions/57158092", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10248483/" ]
I solved this error by making the following changes:

1. Set insert.mode=insert.
2. Comment out table.name.format=kafkatable, because the table will be created through auto create.
3. Remove the question mark from the end of the connection.url line.
4. pk.fields should not be kept as none here; please make sure to give a column name instead, to avoid complications.
5. int32 is not supported by PostgreSQL, so when I changed it to int8 it worked fine.
6. The fields in your schema and payload have different names; please make sure to give them the same name.
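Applied to the `connect-post.properties` from the question, those changes would look roughly like this; it is a sketch, and the `pk.fields` column name is illustrative:

```
name=sink-postgres
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=2
topics=kafkada
connection.url=jdbc:postgresql://localhost:5432/kafkadb
connection.user=postgres
connection.password=postgres
insert.mode=insert
# table.name.format=kafkatable commented out -- auto.create builds the table
pk.mode=none
pk.fields=customer_id
auto.create=true
auto.evolve=false
```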
Kafka Connect, which is part of Apache Kafka, is perfectly suited to this. You can learn more about Kafka Connect in general [here](http://rmoff.dev/ksldn19-kafka-connect). To stream data from your Kafka topic to Postgres (or any other database) use the JDBC Sink connector, which you can get from [here](https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc).
51,555,406
I'm trying to write a simple Python script to list the app services in Google Cloud Platform. In the command prompt I'm able to list them by running the command below:

```
gcloud app services list
```

When I searched Stack Overflow and other websites I couldn't find any piece of code that can list the app services. Any information related to this would be appreciated. Thanks in advance.
2018/07/27
[ "https://Stackoverflow.com/questions/51555406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1485826/" ]
One possibility is to use the [Admin API](https://cloud.google.com/appengine/docs/admin-api/), for example by making HTTP requests to its REST [`apps.services.list` method](https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1beta/apps.services/list):

> Lists all the services in the application.
>
> HTTP request
>
> ```
> GET https://appengine.googleapis.com/v1beta/{parent=apps/*}/services
> ```

Or you can always fall back to plainly invoking from your Python code the same commands that you can run manually, and processing their outputs, for example via `subprocess.Popen()`.
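A minimal sketch of that first approach; it assumes the `requests` library is installed, that `gcloud` is logged in so it can mint an access token, and `APP_ID` is a placeholder for your application ID:

```python
import subprocess
import requests

# Borrow an OAuth access token from the local gcloud login.
token = subprocess.check_output(
    ["gcloud", "auth", "print-access-token"], text=True).strip()

resp = requests.get(
    "https://appengine.googleapis.com/v1beta/apps/APP_ID/services",
    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

for service in resp.json().get("services", []):
    print(service["id"])
```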
As you mentioned that you were unable to find proper documentation for listing the services or the versions of an application in Google Cloud Platform: for requests regarding documentation changes you can provide feedback directly on the documentation page, and we will provide as much information as possible. Regarding the question, I agree with the solution posted by Dan Cornilescu; for further clarification, you can call the Admin API, which makes an HTTP request to the REST endpoint and provides the [List of services](https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1beta/apps.services/list) of an application. On top of that, you can also request the [List of versions](https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1beta/apps.services/list) under a service of an application.

[![Providing Parameters](https://i.stack.imgur.com/07DDR.png)](https://i.stack.imgur.com/07DDR.png)

There, provide the requested parameters under the parent section (in your case, your application name after 'apps/') and execute the call (at the bottom) using your preferred authentication to request the list of services.

[![Try This App](https://i.stack.imgur.com/UV0Cv.png)](https://i.stack.imgur.com/UV0Cv.png)

You can also click the icon in the top right corner to go to the following page, which provides the functions and the method calls in JavaScript. We will try our best to update the calls for Python as well. Thank you so much.

[![JavaScript API codes](https://i.stack.imgur.com/vv7Cn.png)](https://i.stack.imgur.com/vv7Cn.png)
72,113,628
I have a Django app (using the django-rest-framework) that needs to serialize and deserialize JSON payloads with the Python keyword "import" as one of the fields. To prevent conflicts in the model, I can call the field `import_flag` and use the `source` option in the ModelSerializer.

**models.py**

```py
class MyModel(models.Model):
    import_flag = models.BooleanField(verbose_name='Import')
```

But (for obvious reasons) I cannot use the `import` keyword to create a JSON field named "import" like so.

**serializers.py**

```py
class MyModelSerializer(serializers.ModelSerializer):
    import = serializers.BooleanField(source='import_flag')  # SyntaxError

    class Meta:
        model = MyModel
```

This field is present in the JSON I consume from a third-party's RESTful API, and I also need to serialize it and send it back to the same third party. Is there something like a `destination` option for the django-rest-framework ModelSerializer class? I have been unable to find anything in the documentation.
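For reference, a sketch of one common workaround (an assumption on my part, not taken from the answers below): declare the field under a safe name and rename the key in the serializer's `get_fields()` hook:

```py
class MyModelSerializer(serializers.ModelSerializer):
    class Meta:
        model = MyModel
        fields = ['import_flag']

    def get_fields(self):
        fields = super().get_fields()
        field = fields.pop('import_flag')
        field.source = 'import_flag'  # keep the mapping to the model attribute
        fields['import'] = field      # expose "import" as the JSON key
        return fields
```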
2022/05/04
[ "https://Stackoverflow.com/questions/72113628", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10686410/" ]
When you say it's "fairly simple in a relational database", what you mean is it's simple to express, not exactly simple to compute. You're pushing a lot of list intersection work to the database. As your data set grows, the response time for your query will get slower and slower. At some point the database will no longer be able to give you the answer. And while it's consuming CPU (before timing out) you're negatively impacting the load on the relational database server for other users. With DynamoDB you can't express queries that take unbounded effort to compute or that depend so much on total data set size for their performance characteristics. You have to design a query system up front that doesn't get exponentially slower as the data set grows. The DynamoDB design then depends on what you know up front. For example, do you know it's always the intersection of an apple and banana? Then during insert of a new food note if the person ate both, and mark them as such on a user metadata item. Use that marker later during the query phase. Sound like a nuisance? Well, if your data set isn't growing large and/or you don't need reliably fast query performance, then a relational database solves this problem well. Different databases for different purposes.
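A sketch of that "mark it during insert" idea with boto3; the table name, key, and attribute here are hypothetical:

```python
import boto3  # assumes boto3 is installed and credentials are configured

table = boto3.resource("dynamodb").Table("FoodLog")  # hypothetical table

# At write time, also maintain a per-person set of fruits eaten.
table.update_item(
    Key={"person": "Bob"},
    UpdateExpression="ADD fruits_eaten :f",
    ExpressionAttributeValues={":f": {"apple"}},  # a string set
)

# At query time, read one metadata item instead of intersecting lists.
item = table.get_item(Key={"person": "Bob"}).get("Item", {})
print({"apple", "banana"} <= item.get("fruits_eaten", set()))
```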
DynamoDB also supports `SCAN`, not only `QUERY`. A simple design for the table is to have the PK be the name of the person, and the attributes be the numeric counts of the fruits, which you can increase every day.

```
UPDATE "FRUIT_COUNTS" SET BANANA=BANANA + 1 WHERE Employee='Bob'
```

Then, at the end of the week, you can run a simple PartiQL query on the table:

```
SELECT * FROM "FRUIT_COUNTS" WHERE BANANA > 0 AND APPLE > 0
```
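The weekly query can be issued from Python through boto3's PartiQL support; a minimal sketch (table and attribute names follow the example above, and items come back in DynamoDB's typed JSON form):

```python
import boto3  # assumes a boto3 version with PartiQL support

client = boto3.client("dynamodb")
resp = client.execute_statement(
    Statement='SELECT * FROM "FRUIT_COUNTS" WHERE BANANA > 0 AND APPLE > 0'
)
for item in resp["Items"]:
    print(item)  # e.g. {'Employee': {'S': 'Bob'}, 'BANANA': {'N': '3'}, ...}
```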
54,510,283
Problem
=======

I was trying to use fixtures to populate my database, so I started by reading through the documentation for loaddata; by default, it looks in the fixtures directory inside each app for fixtures. My problem is that when I run `python manage.py loaddata` I get an error, but when I give it the path to teams.yaml it works fine. Do I need to set up something for it to work?

Documentation
=============

<https://docs.djangoproject.com/en/1.11/howto/initial-data/#where-django-finds-fixture-files>

Error
=====

```
usage: manage.py loaddata [-h] [--version] [-v {0,1,2,3}]
                          [--settings SETTINGS] [--pythonpath PYTHONPATH]
                          [--traceback] [--no-color] [--database DATABASE]
                          [--app APP_LABEL] [--ignorenonexistent]
                          [-e EXCLUDE]
                          fixture [fixture ...]
manage.py loaddata: error: No database fixture specified. Please provide the path of at least one fixture in the command line.
```

Team App Dir
============

```
team
β”œβ”€β”€ admin.py
β”œβ”€β”€ apps.py
β”œβ”€β”€ fixtures
β”‚   └── teams.yaml
β”œβ”€β”€ __init__.py
β”œβ”€β”€ migrations
β”‚   β”œβ”€β”€ 0001_initial.py
β”‚   β”œβ”€β”€ 0002_auto_20190204_0438.py
β”‚   β”œβ”€β”€ 0003_remove_team_team_name.py
β”‚   β”œβ”€β”€ 0004_team_team_name.py
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── __pycache__
β”‚       β”œβ”€β”€ 0001_initial.cpython-36.pyc
β”‚       β”œβ”€β”€ 0002_auto_20190204_0438.cpython-36.pyc
β”‚       β”œβ”€β”€ 0003_remove_team_team_name.cpython-36.pyc
β”‚       β”œβ”€β”€ 0004_team_team_name.cpython-36.pyc
β”‚       └── __init__.cpython-36.pyc
β”œβ”€β”€ models.py
β”œβ”€β”€ __pycache__
β”‚   β”œβ”€β”€ admin.cpython-36.pyc
β”‚   β”œβ”€β”€ apps.cpython-36.pyc
β”‚   β”œβ”€β”€ __init__.cpython-36.pyc
β”‚   └── models.cpython-36.pyc
β”œβ”€β”€ tests.py
└── views.py
```

Model
=====

```
from django.db import models


class Team(models.Model):
    team_name = models.CharField(max_length=32)
    team_description = models.TextField(max_length=512, blank=False)

    class Meta:
        permissions = (
            ('create_team', 'Can create a team'),
        )
```

Fixture (teams.yaml)
====================

```
- model: team.Team
  pk: 1
  fields:
    team_name: team_name_example
    team_description: team_description_example
```
2019/02/04
[ "https://Stackoverflow.com/questions/54510283", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9058176/" ]
You should define `FIXTURE_DIRS` in your settings file so Django can find the fixtures you put there.

**settings.py**

```
FIXTURE_DIRS = [
    os.path.join(BASE_DIR, 'fixtures')
]

# statements
```

---

[Reference](https://docs.djangoproject.com/en/2.1/ref/settings/#std:setting-FIXTURE_DIRS)
The error says: `No database fixture specified. Please provide the path of at least one fixture in the command line.` You need to provide a `fixturename` to the `loaddata` command, in your case:

```sh
python manage.py loaddata teams
```

As specified in the docs, by default Django will look in the fixtures directory inside your app for the specified `fixturename`. The command also accepts `./path/to/fixtures/`, which overrides the searching of the fixtures directory.
19,117,663
I don't think I can get away with one XPATH so it is just to illustrate the idea. I know I can write a simple python script but I'd prefer to use a tool, e.g. Oxygen (not xmlstarlet if possible!) suppose I have the following xml: ``` <?xml version="1.0" encoding="UTF-8"?> <model> <object name='obj1'> <field type='int' name='fld1'/> <field type='string' name='fld2'/> </object> </model> ``` I want names of all the `int` fields. That's easy: ``` /model/object/field[@type='int']/@name ``` Now say I want to print the object name along with the field name. How can I do it? I guess XSLT is the answer... trouble is, I hardly remember any of it and can't find in Oxygen how to play with it. EDIT: expected output obj1 fld1 obj2 fld7 (supposing I had them in the xml)
2013/10/01
[ "https://Stackoverflow.com/questions/19117663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/219335/" ]
In XPath 2.0 you could do the following:

```
for $x in /model/object/field[@type = 'int']
return concat($x/@name, ' ', $x/../@name)
```

It returns

```
fld1 obj1
```
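Since the question mentions a Python script as a fallback: XPath 2.0 is not available in lxml, but the same pairs can be produced with an XPath 1.0 selection plus a loop (the file name is assumed):

```python
from lxml import etree  # assumes lxml is installed

tree = etree.parse("model.xml")  # the XML from the question, saved to a file
for field in tree.xpath("/model/object/field[@type='int']"):
    # pair each int field's name with its parent object's name
    print(field.getparent().get("name"), field.get("name"))
# -> obj1 fld1
```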
As an alternative to the posted XPath 2.0 solution using `for .. in` you can also use `/model/object/field[@type='int']/concat(../@name, ':', @name)`.
19,117,663
I don't think I can get away with one XPATH so it is just to illustrate the idea. I know I can write a simple python script but I'd prefer to use a tool, e.g. Oxygen (not xmlstarlet if possible!) suppose I have the following xml: ``` <?xml version="1.0" encoding="UTF-8"?> <model> <object name='obj1'> <field type='int' name='fld1'/> <field type='string' name='fld2'/> </object> </model> ``` I want names of all the `int` fields. That's easy: ``` /model/object/field[@type='int']/@name ``` Now say I want to print the object name along with the field name. How can I do it? I guess XSLT is the answer... trouble is, I hardly remember any of it and can't find in Oxygen how to play with it. EDIT: expected output obj1 fld1 obj2 fld7 (supposing I had them in the xml)
2013/10/01
[ "https://Stackoverflow.com/questions/19117663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/219335/" ]
In XPath 2.0 you could do the following:

```
for $x in /model/object/field[@type = 'int']
return concat($x/@name, ' ', $x/../@name)
```

It returns

```
fld1 obj1
```
I ended up using the following XSL (perhaps it could be useful to somebody):

```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:template match="/">
        <html>
            <body>
                <h2>Integer Fields</h2>
                <table border="1">
                    <tr bgcolor="#9acd32">
                        <th>Object Name</th>
                        <th>Field Name</th>
                    </tr>
                    <xsl:for-each select="//object/field[@type='int']">
                        <tr>
                            <td>
                                <xsl:value-of select="parent::node()/@name"/>
                            </td>
                            <td>
                                <xsl:value-of select="@name"/>
                            </td>
                        </tr>
                    </xsl:for-each>
                </table>
            </body>
        </html>
    </xsl:template>
</xsl:stylesheet>
```
19,117,663
I don't think I can get away with one XPATH so it is just to illustrate the idea. I know I can write a simple python script but I'd prefer to use a tool, e.g. Oxygen (not xmlstarlet if possible!) suppose I have the following xml: ``` <?xml version="1.0" encoding="UTF-8"?> <model> <object name='obj1'> <field type='int' name='fld1'/> <field type='string' name='fld2'/> </object> </model> ``` I want names of all the `int` fields. That's easy: ``` /model/object/field[@type='int']/@name ``` Now say I want to print the object name along with the field name. How can I do it? I guess XSLT is the answer... trouble is, I hardly remember any of it and can't find in Oxygen how to play with it. EDIT: expected output obj1 fld1 obj2 fld7 (supposing I had them in the xml)
2013/10/01
[ "https://Stackoverflow.com/questions/19117663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/219335/" ]
As an alternative to the posted XPath 2.0 solution using `for .. in` you can also use `/model/object/field[@type='int']/concat(../@name, ':', @name)`.
I ended up using the following XSL (perhaps it could be useful to somebody):

```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:template match="/">
        <html>
            <body>
                <h2>Integer Fields</h2>
                <table border="1">
                    <tr bgcolor="#9acd32">
                        <th>Object Name</th>
                        <th>Field Name</th>
                    </tr>
                    <xsl:for-each select="//object/field[@type='int']">
                        <tr>
                            <td>
                                <xsl:value-of select="parent::node()/@name"/>
                            </td>
                            <td>
                                <xsl:value-of select="@name"/>
                            </td>
                        </tr>
                    </xsl:for-each>
                </table>
            </body>
        </html>
    </xsl:template>
</xsl:stylesheet>
```
41,329,491
I am trying to install PocketSphinx in Python. I am trying to follow the [Uberi speech recognition README](https://github.com/Uberi/speech_recognition#readme), which asks the following:

> PyAudio [wheel packages](https://pypi.python.org/pypi/wheel) for common 64-bit Python versions on Windows and Linux are included for convenience, under the `third-party/` [directory](https://github.com/Uberi/speech_recognition/tree/master/third-party) in the repository root. To install, simply run `pip install wheel` followed by `pip install ./third-party/WHEEL_FILENAME` (replace `pip` with `pip3` if using Python 3) in the repository [root directory](https://github.com/Uberi/speech_recognition).

I do not understand the instruction here. What `WHEEL_FILENAME` is this referring to? What is this `wheel` and how does it relate to PocketSphinx?
2016/12/26
[ "https://Stackoverflow.com/questions/41329491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5530553/" ]
When you downloaded the PocketSphinx package manually, it will have included a wheel file (the file which does the real job) with a `.whl` extension in the said directory. Open a terminal in the directory where the `.whl` file is stored, which in this case is `third-party`. Once the terminal's current directory is `third-party`, do a pip install of the wheel file. Assuming the wheel file were named `PocketSphinx.whl`, you'd write:

```
pip install PocketSphinx.whl
```

This will do the job if all requirements are met. In practice the wheel file's name won't simply be `PocketSphinx.whl`; it'll be long and informative, but it will always have the `.whl` extension, by which you can identify it.

**Edit:** I went to the link you provided, and this is the wheel file you need: `pocketsphinx-0.1.3-cp35-cp35m-win_amd64.whl` for Python 3.5, or `pocketsphinx-0.1.3-cp27-cp27m-win_amd64.whl` for Python 2.7. So your command becomes

```
pip install pocketsphinx-0.1.3-cp35-cp35m-win_amd64.whl
```

or

```
pip install pocketsphinx-0.1.3-cp27-cp27m-win_amd64.whl
```

as per your Python version.
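Once the wheel installs cleanly, a quick smoke test (assuming the module is importable as `pocketsphinx`):

```python
# Verify the install by importing the package.
import pocketsphinx
print(pocketsphinx.__name__, "imported OK")
```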
> What `WHEEL_FILENAME` is this referring to?

The filename of the binary package file in the `third-party` folder of the repository, for example [pocketsphinx-0.0.9-cp27-none-win\_amd64.whl](https://github.com/Uberi/speech_recognition/blob/master/third-party/pocketsphinx-0.0.9-cp27-none-win_amd64.whl). You are supposed to check out the repository first.

> What is this 'wheel' and how does it relate to PocketSphinx?

A wheel is Python's built-package format, used to distribute binary packages; the `wheel` package provides support for installing them. You can read more here: <https://pypi.python.org/pypi/wheel>
69,440,950
Trying to get to grips with Python, I have an activity to do, which is to create 4 triangles one on top of the other, increasing in size, to look like the image below.

```
from turtle import *

trisize = 80

for triangles in range(1, 5):
    forward(trisize)
    left(120)
    forward(trisize)
    left(120)
    forward(trisize)
    left(120)
    penup()
    forward(trisize)
    left(120)
    forward(trisize)
    left(60)
    forward(20)
    left(180)
    pendown()
    trisize = trisize + 20
```

Where am I going wrong? I'm also supposed to add a second loop to this somewhere. Thanks
2021/10/04
[ "https://Stackoverflow.com/questions/69440950", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17073797/" ]
I am still not sure about the purpose of your depth parameter. But it looks like you are looking for something like this:

```java
private Map<String,List<String>> listFilesUsingFileWalk(String rootDir, int depth) throws IOException {
    Path path = Paths.get(rootDir);
    try (Stream<Path> stream = Files.walk(path, depth)) {
        return stream
                .filter(file -> Files.isDirectory(file) && !file.equals(path))
                .collect(Collectors.toMap(
                        (p) -> p.getFileName().toString(),
                        (p) -> Arrays.stream(p.toFile().listFiles()).filter(File::isDirectory).map(File::getName).collect(Collectors.toList())
                ));
    }
}
```

or (without using the File class)

```java
private Map<String,List<String>> listFilesUsingFileWalk(String rootDir, int depth) throws IOException {
    Path path = Paths.get(rootDir);
    try (Stream<Path> stream = Files.walk(path, depth)) {
        return stream
                .filter(file -> Files.isDirectory(file) && !file.equals(path))
                .collect(Collectors.toMap(
                        (p) -> p.getFileName().toString(),
                        (p) -> {
                            try {
                                return Files.list(p).filter(Files::isDirectory).map(Path::getFileName).map(Path::toString).collect(Collectors.toList());
                            } catch (IOException e) {
                                return Collections.emptyList();
                            }
                        }
                ));
    }
}
```
Since your directory names are unique, you can follow this approach. **Also, you don't need recursion.**

```
Map<String, List<String>> collect = stream
        .filter(file -> Files.isDirectory(file) && !file.equals(path))
        .filter(file -> !file.getParent().equals(path))
        .collect(Collectors.groupingBy(p -> p.getParent().getFileName().toString(),
                Collectors.mapping(p -> p.getFileName().toString(), Collectors.toList())));
```

*Note: this will give you all the parent-child directories.*

```
private Map<String, List<String>> listFilesUsingFileWalk(String rootDir, int depth) throws IOException {
    Path path = Paths.get(rootDir);
    Stream<Path> stream = Files.walk(path, depth);
    return stream
            .filter(file -> Files.isDirectory(file) && !file.equals(path))
            .filter(file -> !file.getParent().equals(path))
            .collect(Collectors.groupingBy(p -> p.getParent().getFileName().toString(),
                    Collectors.mapping(p -> p.getFileName().toString(), Collectors.toList())));
}
```
26,902,359
I have an example list like this:

```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
```

Now, I check whether it has an empty string like this:

```
has_empty = False
for list1 in example_list:
    for val1 in list1:
        if val1 == '':
            has_empty = True
print(has_empty)
```

This works OK, as it prints True, but I'm looking for a more Pythonic method.
2014/11/13
[ "https://Stackoverflow.com/questions/26902359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3654650/" ]
You can use [`itertools.chain.from_iterable`](https://docs.python.org/3.4/library/itertools.html#itertools.chain.from_iterable):

```
>>> from itertools import chain
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
>>> '' in chain.from_iterable(example_list)
True
```

If the inner lists are bigger (more than 100 items), then using `any` with a generator will be faster than the above example, because the speed penalty of the Python for-loop is compensated by the fast `in` operation:

```
>>> any('' in x for x in example_list)
True
```

**Timing comparisons:**

```
>>> example_list = [['aaa']*1000, ['fff', 'gg']*1000, ['gg']*1000]*10000 + [['']*1000]
>>> %timeit '' in chain.from_iterable(example_list)
1 loops, best of 3: 706 ms per loop
>>> %timeit any('' in x for x in example_list)
1 loops, best of 3: 417 ms per loop

# With smaller inner lists, the for-loop makes the `any()` version a little slower
>>> example_list = [['aaa'], ['fff', 'gg'], ['gg', 'kk']]*10000 + [['']]
>>> %timeit '' in chain.from_iterable(example_list)
100 loops, best of 3: 2 ms per loop
>>> %timeit any('' in x for x in example_list)
100 loops, best of 3: 2.65 ms per loop
```
You can use a combination of [any](https://docs.python.org/2/library/functions.html#any), [map](https://docs.python.org/2/library/functions.html#map) and [chain](https://docs.python.org/2/library/itertools.html#itertools.chain):

```
In [19]: example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]

In [20]: import operator, itertools

In [21]: any(map(operator.not_, itertools.chain(*example_list)))
Out[21]: True

In [22]: example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['not empty', 'gg']]

In [23]: any(map(operator.not_, itertools.chain(*example_list)))
Out[23]: False
```
26,902,359
I have an example list like this:

```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
```

Now, I check whether it has an empty string like this:

```
has_empty = False
for list1 in example_list:
    for val1 in list1:
        if val1 == '':
            has_empty = True
print(has_empty)
```

This works OK, as it prints True, but I'm looking for a more Pythonic method.
2014/11/13
[ "https://Stackoverflow.com/questions/26902359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3654650/" ]
You can use [`itertools.chain.from_iterable`](https://docs.python.org/3.4/library/itertools.html#itertools.chain.from_iterable):

```
>>> from itertools import chain
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
>>> '' in chain.from_iterable(example_list)
True
```

If the inner lists are bigger (more than 100 items), then using `any` with a generator will be faster than the above example, because the speed penalty of the Python for-loop is compensated by the fast `in` operation:

```
>>> any('' in x for x in example_list)
True
```

**Timing comparisons:**

```
>>> example_list = [['aaa']*1000, ['fff', 'gg']*1000, ['gg']*1000]*10000 + [['']*1000]
>>> %timeit '' in chain.from_iterable(example_list)
1 loops, best of 3: 706 ms per loop
>>> %timeit any('' in x for x in example_list)
1 loops, best of 3: 417 ms per loop

# With smaller inner lists, the for-loop makes the `any()` version a little slower
>>> example_list = [['aaa'], ['fff', 'gg'], ['gg', 'kk']]*10000 + [['']]
>>> %timeit '' in chain.from_iterable(example_list)
100 loops, best of 3: 2 ms per loop
>>> %timeit any('' in x for x in example_list)
100 loops, best of 3: 2.65 ms per loop
```
Just convert the whole list to a string and check for the presence of empty strings, i.e. `''` or `""`, in it.

```
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
>>> any(empty in str(example_list) for empty in ("''", '""'))
True
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], [ 'gg', '\'']]
>>> any(empty in str(example_list) for empty in ("''", '""'))
False
```

Note that this won't work with lists which have an empty string as part of a string itself - for example `'hello "" world'`. Another approach could be to flatten the list and check for the presence of empty strings in it:

```
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['gg', 'hello "" world']]
>>> '' in [item for sublist in example_list for item in sublist]
False
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg', 'hello "" world']]
>>> '' in [item for sublist in example_list for item in sublist]
True
```
26,902,359
I have an example list like this:

```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
```

Now, I check whether it has an empty string like this:

```
has_empty = False
for list1 in example_list:
    for val1 in list1:
        if val1 == '':
            has_empty = True
print(has_empty)
```

This works OK, as it prints True, but I'm looking for a more Pythonic method.
2014/11/13
[ "https://Stackoverflow.com/questions/26902359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3654650/" ]
You can use [`itertools.chain.from_iterable`](https://docs.python.org/3.4/library/itertools.html#itertools.chain.from_iterable):

```
>>> from itertools import chain
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
>>> '' in chain.from_iterable(example_list)
True
```

If the inner lists are bigger (more than 100 items), then using `any` with a generator will be faster than the above example, because the speed penalty of the Python for-loop is compensated by the fast `in` operation:

```
>>> any('' in x for x in example_list)
True
```

**Timing comparisons:**

```
>>> example_list = [['aaa']*1000, ['fff', 'gg']*1000, ['gg']*1000]*10000 + [['']*1000]
>>> %timeit '' in chain.from_iterable(example_list)
1 loops, best of 3: 706 ms per loop
>>> %timeit any('' in x for x in example_list)
1 loops, best of 3: 417 ms per loop

# With smaller inner lists, the for-loop makes the `any()` version a little slower
>>> example_list = [['aaa'], ['fff', 'gg'], ['gg', 'kk']]*10000 + [['']]
>>> %timeit '' in chain.from_iterable(example_list)
100 loops, best of 3: 2 ms per loop
>>> %timeit any('' in x for x in example_list)
100 loops, best of 3: 2.65 ms per loop
```
```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
has_empty = False
for list in example_list:
    if '' in list:
        has_empty = True
print(has_empty)
```
26,902,359
I have an example list like this:

```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
```

Now, I check whether it has an empty string like this:

```
has_empty = False
for list1 in example_list:
    for val1 in list1:
        if val1 == '':
            has_empty = True
print(has_empty)
```

This works OK, as it prints True, but I'm looking for a more Pythonic method.
2014/11/13
[ "https://Stackoverflow.com/questions/26902359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3654650/" ]
You can use [`itertools.chain.from_iterable`](https://docs.python.org/3.4/library/itertools.html#itertools.chain.from_iterable):

```
>>> from itertools import chain
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
>>> '' in chain.from_iterable(example_list)
True
```

If the inner lists are bigger (more than 100 items), then using `any` with a generator will be faster than the above example, because the speed penalty of the Python for-loop is compensated by the fast `in` operation:

```
>>> any('' in x for x in example_list)
True
```

**Timing comparisons:**

```
>>> example_list = [['aaa']*1000, ['fff', 'gg']*1000, ['gg']*1000]*10000 + [['']*1000]
>>> %timeit '' in chain.from_iterable(example_list)
1 loops, best of 3: 706 ms per loop
>>> %timeit any('' in x for x in example_list)
1 loops, best of 3: 417 ms per loop

# With smaller inner lists, the for-loop makes the `any()` version a little slower
>>> example_list = [['aaa'], ['fff', 'gg'], ['gg', 'kk']]*10000 + [['']]
>>> %timeit '' in chain.from_iterable(example_list)
100 loops, best of 3: 2 ms per loop
>>> %timeit any('' in x for x in example_list)
100 loops, best of 3: 2.65 ms per loop
```
using `map` and `lambda`:

```
>>> def check_empty(l):
...     k=map(lambda x: '' in x and True,l)
...     for x in k:
...         if x==True:
...             return True
...     return False
...
>>> check_empty(example_list)
False
>>> example_list
[['aaa'], ['fff', 'gg'], ['ff'], ['gg', 'hello "" world']]
>>> example_list = [['aaa'], ['fff', 'gg',''], ['ff'], ['gg', 'hello "" world']]
>>> check_empty(example_list)
True
```
26,902,359
I have an example list like this:

```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
```

Now, I check whether it has an empty string like this:

```
has_empty = False
for list1 in example_list:
    for val1 in list1:
        if val1 == '':
            has_empty = True
print(has_empty)
```

This works OK, as it prints True, but I'm looking for a more Pythonic method.
2014/11/13
[ "https://Stackoverflow.com/questions/26902359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3654650/" ]
You can use a combination of [any](https://docs.python.org/2/library/functions.html#any), [map](https://docs.python.org/2/library/functions.html#map) and [chain](https://docs.python.org/2/library/itertools.html#itertools.chain):

```
In [19]: example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]

In [20]: import operator, itertools

In [21]: any(map(operator.not_, itertools.chain(*example_list)))
Out[21]: True

In [22]: example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['not empty', 'gg']]

In [23]: any(map(operator.not_, itertools.chain(*example_list)))
Out[23]: False
```
```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
has_empty = False
for list in example_list:
    if '' in list:
        has_empty = True
print(has_empty)
```
26,902,359
I have an example list like this:

```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
```

Now, I check whether it has an empty string like this:

```
has_empty = False
for list1 in example_list:
    for val1 in list1:
        if val1 == '':
            has_empty = True
print(has_empty)
```

This works OK, as it prints True, but I'm looking for a more Pythonic method.
2014/11/13
[ "https://Stackoverflow.com/questions/26902359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3654650/" ]
You can use a combination of [any](https://docs.python.org/2/library/functions.html#any), [map](https://docs.python.org/2/library/functions.html#map) and [chain](https://docs.python.org/2/library/itertools.html#itertools.chain):

```
In [19]: example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]

In [20]: import operator, itertools

In [21]: any(map(operator.not_, itertools.chain(*example_list)))
Out[21]: True

In [22]: example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['not empty', 'gg']]

In [23]: any(map(operator.not_, itertools.chain(*example_list)))
Out[23]: False
```
using `map` and `lambda`:

```
>>> def check_empty(l):
...     k=map(lambda x: '' in x and True,l)
...     for x in k:
...         if x==True:
...             return True
...     return False
...
>>> check_empty(example_list)
False
>>> example_list
[['aaa'], ['fff', 'gg'], ['ff'], ['gg', 'hello "" world']]
>>> example_list = [['aaa'], ['fff', 'gg',''], ['ff'], ['gg', 'hello "" world']]
>>> check_empty(example_list)
True
```
26,902,359
I have an example list like this:

```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
```

Now, I check whether it has an empty string like this:

```
has_empty = False
for list1 in example_list:
    for val1 in list1:
        if val1 == '':
            has_empty = True
print(has_empty)
```

This works OK, as it prints True, but I'm looking for a more Pythonic method.
2014/11/13
[ "https://Stackoverflow.com/questions/26902359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3654650/" ]
Just convert the whole list to a string and check for the presence of empty strings, i.e. `''` or `""`, in it.

```
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
>>> any(empty in str(example_list) for empty in ("''", '""'))
True
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], [ 'gg', '\'']]
>>> any(empty in str(example_list) for empty in ("''", '""'))
False
```

Note that this won't work with lists which have an empty string as part of a string itself - for example `'hello "" world'`. Another approach could be to flatten the list and check for the presence of empty strings in it:

```
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['gg', 'hello "" world']]
>>> '' in [item for sublist in example_list for item in sublist]
False
>>> example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg', 'hello "" world']]
>>> '' in [item for sublist in example_list for item in sublist]
True
```
```
example_list = [['aaa'], ['fff', 'gg'], ['ff'], ['', 'gg']]
has_empty = False
for list in example_list:
    if '' in list:
        has_empty = True
print(has_empty)
```