qid
int64
1
74.7M
question
stringlengths
1
70k
date
stringlengths
10
10
metadata
sequence
response_j
stringlengths
0
115k
response_k
stringlengths
0
60.5k
7,710,639
I am modifying a regex validator control. The regex at the moment looks like this: ``` (\d*\,?\d{2}?){1}$ ``` As I can understand it allows for a number with 2 decimal places. I need to modify it like this: * The number must range from 0 - 1.000.000. (Zero to one million). * The number may or may not have 2 decimals. * The value can not be negative. * Comma (`,`) is the decimal separator. * Should not allow any thousand separators.
2011/10/10
[ "https://Stackoverflow.com/questions/7710639", "https://Stackoverflow.com", "https://Stackoverflow.com/users/817455/" ]
Try this regex: ``` ^(((0|[1-9]\d{0,5})(\,\d{2})?)|(1000000(\,00)?))$ ``` It accepts numbers like `"4", "4,23", "123456", "1000000", "1000000,00"`, but rejects `",23", "4,7", "1000001", "4,234", "1000000,55"`. If you want to accept only numbers with exactly two decimals, use this regex: ``` ^(((0|[1-9]\d{0,5})\,\d{2})|(1000000\,00))$ ```
What about this one? ``` ^(?:\d{1,6}(?:\,\d{2})?|1000000)$ ``` See it [here on Regexr](http://regexr.com?2ut4u). It accepts between 1 and 6 digits with an optional two-digit fraction, OR the literal "1000000". Note that it allows the number to start with zeros ("001" would be accepted). `^` anchors the regex to the start of the string, `$` anchors it to the end, and `(?:...)` is a non-capturing group.
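If you want to sanity-check the pattern quickly, here is a short Python sketch (Python's `re` handles this pattern the same way as the validator's regex engine, under that assumption):

```python
import re

# The pattern from above: 1-6 digits, an optional two-digit
# comma fraction, or the literal "1000000".
pattern = re.compile(r"^(?:\d{1,6}(?:\,\d{2})?|1000000)$")

accepted = ["4", "4,23", "001", "123456", "1000000"]
rejected = [",23", "4,7", "1000001", "-5", "1000000,55"]

for s in accepted:
    assert pattern.match(s) is not None, s
for s in rejected:
    assert pattern.match(s) is None, s

print("all checks passed")
```

Running it confirms the accept/reject behavior described above, including the rejection of negative values and of one-digit fractions.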
7,710,639
I am modifying a regex validator control. The regex at the moment looks like this: ``` (\d*\,?\d{2}?){1}$ ``` As I can understand it allows for a number with 2 decimal places. I need to modify it like this: * The number must range from 0 - 1.000.000. (Zero to one million). * The number may or may not have 2 decimals. * The value can not be negative. * Comma (`,`) is the decimal separator. * Should not allow any thousand separators.
2011/10/10
[ "https://Stackoverflow.com/questions/7710639", "https://Stackoverflow.com", "https://Stackoverflow.com/users/817455/" ]
Try this regex: ``` ^(((0|[1-9]\d{0,5})(\,\d{2})?)|(1000000(\,00)?))$ ``` It accepts numbers like `"4", "4,23", "123456", "1000000", "1000000,00"`, but rejects `",23", "4,7", "1000001", "4,234", "1000000,55"`. If you want to accept only numbers with exactly two decimals, use this regex: ``` ^(((0|[1-9]\d{0,5})\,\d{2})|(1000000\,00))$ ```
``` ^(([0-9]|[1-9][0-9]{1,5})(\,[0-9]{2})?|1000000(\,00)?)$ ``` (The whole alternation must sit inside the `^...$` anchors; otherwise `^` binds only to the first alternative and `$` only to the last. The question also asks for a comma as the decimal separator and exactly two decimals, if any.)
24,444,188
can someone please tell me what is going wrong? I am trying to create a basic login page and that opens only when a correct password is written ``` <html> <head> <script> function validateForm() { var x=document.forms["myForm"]["fname"].value; if (x==null || x=="") { alert("First name must be filled out"); return false; } var x=document.forms["myForm"]["fname2"].value; if (x==null || x=="") { alert("password must be filled out"); return false; } } function isValid(myNorm){ var password = myNorm.value; if (password == "hello_me") { return true; } else {alert('Wrong Password') return false; } } </script> </head> <body> <form name="myForm" action="helloworld.html" onsubmit="return !!(validateForm()& isValid())" method="post"> Login ID: <input type="text" name="fname"> <br /> <br> Password: <input type="password" name="fname2" > <br /> <br /> <br /> <input type="submit" value="Submit"> <input type="Reset" value="clear"> </form> </body> </html> ```
2014/06/27
[ "https://Stackoverflow.com/questions/24444188", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2771301/" ]
Try this: ``` BufferedWriter writer = new BufferedWriter(new FileWriter("result.txt")); for (String element : misspelledWords) { writer.write(element); writer.newLine(); } writer.close(); ``` Appending a line separator yourself (like `"\n"`) works on most OSes, but to be on the safe side you should use **System.getProperty("line.separator")**, or simply `newLine()` as above.
Open your file in append mode by passing `true` to the `FileWriter(String fileName, boolean append)` constructor when you create the `FileWriter` object. ``` File file = new File("C:\\Users\\Izak\\Documents\\NetBeansProjects\\addNewLinetoTxtFile\\src\\addnewlinetotxtfile\\a.txt"); try (Writer newLine = new BufferedWriter(new FileWriter(file, true))) { newLine.write("New Line!"); newLine.write(System.getProperty("line.separator")); } catch (IOException e) { e.printStackTrace(); } ``` Note: > > "**line.separator**" is the sequence used by the operating system to separate lines in text files > > > source: <http://docs.oracle.com/javase/tutorial/essential/environment/sysprop.html>
47,331,969
I'm trying to merge information from two different data frames, but the problem begins with uneven dimensions and needing to match on the information in a column rather than the column index. R's `merge` function and the `dplyr` joins don't work with my data. I have two data frames (one is a subset of the other, with updated info in the last column): `df1 = data.frame(Name = LETTERS[1:9], val = seq(1:3), Case = c("NA","1","NA","NA","1","NA","1","NA","NA"))` ``` Name val Case 1 A 1 NA 2 B 2 1 3 C 3 NA 4 D 1 NA 5 E 2 1 6 F 3 NA 7 G 1 1 8 H 2 NA 9 I 3 NA ``` Some rows in the `Case` column of `df1` have to be changed using the info in `df2` below: `df2 = data.frame(Name = c("A","D","H"), val = seq(1:3), Case = "1")` ``` Name val Case 1 A 1 1 2 D 2 1 3 H 3 1 ``` There's nothing important in the `val` column; I only added it to the examples to indicate that I have more than two columns and that my real data is much bigger than these examples. Basically, I want to change specific rows by checking the information in the first column (in this case, unique letters), and in the end I still want `df1` as the final data frame. For a better explanation, I want to see something like this: ``` Name val Case 1 A 1 1 2 B 2 1 3 C 3 NA 4 D 1 1 5 E 2 1 6 F 3 NA 7 G 1 1 8 H 2 1 9 I 3 NA ``` Note the changed information for `A`, `D` and `H`. Thanks.
2017/11/16
[ "https://Stackoverflow.com/questions/47331969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8609239/" ]
`%in%` and `match` from base R come to the rescue. ``` df1 = data.frame(Name = LETTERS[1:9], val = seq(1:3), Case = c("NA","1","NA","NA","1","NA","1","NA","NA"), stringsAsFactors = F) df2 = data.frame(Name = c("A","D","H"), val = seq(1:3), Case = "1", stringsAsFactors = F) # match() lines df2's rows up with df1's rows by Name (NA where there is # no match), so each replacement value lands on the right row; subsetting # df2$Case with %in% alone would only line up here by accident. df1$Case <- ifelse(df1$Name %in% df2$Name, df2$Case[match(df1$Name, df2$Name)], df1$Case) df1 Output: > df1 Name val Case 1 A 1 1 2 B 2 1 3 C 3 NA 4 D 1 1 5 E 2 1 6 F 3 NA 7 G 1 1 8 H 2 1 9 I 3 NA ```
Here is what I would do using `dplyr`: ``` df1 %>% left_join(df2, by = c("Name")) %>% mutate(val = if_else(is.na(val.y), val.x, val.y), Case = if_else(is.na(Case.y), Case.x, Case.y)) %>% select(Name, val, Case) ```
5,669,878
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. **What is the standard practice for getting and closing cursors?** In particular, how long should my cursors last? Should I get a new cursor for each transaction? I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?
2011/04/14
[ "https://Stackoverflow.com/questions/5669878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/484488/" ]
I think you'll be better off using one cursor for all of your executions and closing it at the end of your code. It's easier to work with, and it might have efficiency benefits as well (don't quote me on that one). ``` conn = MySQLdb.connect("host", "user", "pass", "database") cursor = conn.cursor() cursor.execute("somestuff") results = cursor.fetchall() # do stuff with results cursor.execute("someotherstuff") results2 = cursor.fetchall() # do stuff with results2 cursor.close() ``` The point is that you can store the results of a cursor's execution in another variable, thereby freeing your cursor to make a second execution. You run into problems this way only if you're using `fetchone()` and need to make a second cursor execution before you've iterated through all the results of the first query. Otherwise, I'd say just close your cursors as soon as you're done getting all of the data out of them. That way you don't have to worry about tying up loose ends later in your code.
I suggest doing it the way you would with PHP and MySQL: open the connection at the beginning of your code, before printing the first data, so that if you get a connection error you can still display a `50x` error page (I don't remember which internal error code it is). Then keep the connection open for the whole session and close it when you know you won't need it anymore.
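A minimal sketch of that approach: open once up front, report a 50x-style error if connecting fails, and close only at the end of the session. The stdlib `sqlite3` module stands in for MySQLdb here so the sketch is self-contained; with MySQLdb the shape is the same.

```python
import sqlite3  # stand-in for MySQLdb so this sketch runs anywhere

# Open the connection once, before producing any page output,
# so a connect failure can still be reported as a 50x-style error.
try:
    conn = sqlite3.connect(":memory:")
except sqlite3.Error:
    print("Status: 500 Internal Server Error")
    raise

try:
    # ... the whole session's work happens on this one connection ...
    result = conn.execute("SELECT 1").fetchone()[0]
finally:
    conn.close()  # close only when you know you won't need it anymore

print(result)
```

The `try`/`finally` guarantees the connection is released even if the session's work raises an exception partway through.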
5,669,878
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. **What is the standard practice for getting and closing cursors?** In particular, how long should my cursors last? Should I get a new cursor for each transaction? I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?
2011/04/14
[ "https://Stackoverflow.com/questions/5669878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/484488/" ]
It's better to rewrite this using the `with` keyword. `with` will take care of closing the cursor automatically, which is important because the cursor is an unmanaged resource. The added benefit is that it will close the cursor in case of an exception too. ``` from contextlib import closing import MySQLdb ''' At the beginning you open a DB connection. The exact moment when you open it depends on your approach: - it can be inside the same function where you work with cursors - in the class constructor - etc ''' db = MySQLdb.connect("host", "user", "pass", "database") with closing(db.cursor()) as cur: cur.execute("somestuff") results = cur.fetchall() # do stuff with results cur.execute("insert operation") # call commit if you do INSERT, UPDATE or DELETE operations db.commit() cur.execute("someotherstuff") results2 = cur.fetchone() # do stuff with results2 # at some point, when you decide that you no longer need # the open connection, you close it db.close() ```
I suggest doing it the way you would with PHP and MySQL: open the connection at the beginning of your code, before printing the first data, so that if you get a connection error you can still display a `50x` error page (I don't remember which internal error code it is). Then keep the connection open for the whole session and close it when you know you won't need it anymore.
5,669,878
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. **What is the standard practice for getting and closing cursors?** In particular, how long should my cursors last? Should I get a new cursor for each transaction? I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?
2011/04/14
[ "https://Stackoverflow.com/questions/5669878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/484488/" ]
Instead of asking what is standard practice, since that's often unclear and subjective, you might try looking to the module itself for guidance. In general, using the `with` keyword as another user suggested is a great idea, but in this specific circumstance it may not give you quite the functionality you expect. As of version 1.2.5 of the module, `MySQLdb.Connection` implements the [context manager protocol](http://docs.python.org/2/library/stdtypes.html#context-manager-types) with the following code ([github](https://github.com/farcepest/MySQLdb1/blob/2204283605e8c450223965eda8d8f357d5fe4c90/MySQLdb/connections.py)): ``` def __enter__(self): if self.get_autocommit(): self.query("BEGIN") return self.cursor() def __exit__(self, exc, value, tb): if exc: self.rollback() else: self.commit() ``` There are several existing Q&A about `with` already, or you can read [Understanding Python's "with" statement](http://effbot.org/zone/python-with-statement.htm), but essentially what happens is that `__enter__` executes at the start of the `with` block, and `__exit__` executes upon leaving the `with` block. You can use the optional syntax `with EXPR as VAR` to bind the object returned by `__enter__` to a name if you intend to reference that object later. So, given the above implementation, here's a simple way to query your database: ``` connection = MySQLdb.connect(...) with connection as cursor: # connection.__enter__ executes at this line cursor.execute('select 1;') result = cursor.fetchall() # connection.__exit__ executes after this line print result # prints "((1L,),)" ``` The question now is, what are the states of the connection and the cursor after exiting the `with` block? The `__exit__` method shown above calls only `self.rollback()` or `self.commit()`, and neither of those methods go on to call the `close()` method. The cursor itself has no `__exit__` method defined – and wouldn't matter if it did, because `with` is only managing the connection. 
Therefore, both the connection and the cursor remain open after exiting the `with` block. This is easily confirmed by adding the following code to the above example: ``` try: cursor.execute('select 1;') print 'cursor is open;', except MySQLdb.ProgrammingError: print 'cursor is closed;', if connection.open: print 'connection is open' else: print 'connection is closed' ``` You should see the output "cursor is open; connection is open" printed to stdout. > > I believe you need to close the cursor before committing the connection. > > > Why? The [MySQL C API](https://dev.mysql.com/doc/refman/5.6/en/c-api-function-overview.html), which is the basis for `MySQLdb`, does not implement any cursor object, as implied in the module documentation: ["MySQL does not support cursors; however, cursors are easily emulated."](http://mysql-python.sourceforge.net/MySQLdb.html#connection-objects) Indeed, the `MySQLdb.cursors.BaseCursor` class inherits directly from `object` and imposes no such restriction on cursors with regard to commit/rollback. An Oracle developer [had this to say](http://forums.mysql.com/read.php?50,578823,578941#msg-578941): > > cnx.commit() before cur.close() sounds most logical to me. Maybe you > can go by the rule: "Close the cursor if you do not need it anymore." > Thus commit() before closing the cursor. In the end, for > Connector/Python, it does not make much difference, but or other > databases it might. > > > I expect that's as close as you're going to get to "standard practice" on this subject. > > Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? > > > I very much doubt it, and in trying to do so, you may introduce additional human error. Better to decide on a convention and stick with it. > > Is there a lot of overhead for getting new cursors, or is it just not a big deal? 
> > > The overhead is negligible, and doesn't touch the database server at all; it's entirely within the implementation of MySQLdb. You can [look at `BaseCursor.__init__` on github](https://github.com/farcepest/MySQLdb1/blob/master/MySQLdb/cursors.py) if you're really curious to know what's happening when you create a new cursor. Going back to earlier when we were discussing `with`, perhaps now you can understand why the `MySQLdb.Connection` class `__enter__` and `__exit__` methods give you a brand new cursor object in every `with` block and don't bother keeping track of it or closing it at the end of the block. It's fairly lightweight and exists purely for your convenience. If it's really that important to you to micromanage the cursor object, you can use [contextlib.closing](http://docs.python.org/2/library/contextlib.html#contextlib.closing) to make up for the fact that the cursor object has no defined `__exit__` method. For that matter, you can also use it to force the connection object to close itself upon exiting a `with` block. This should output "my\_curs is closed; my\_conn is closed": ``` from contextlib import closing import MySQLdb with closing(MySQLdb.connect(...)) as my_conn: with closing(my_conn.cursor()) as my_curs: my_curs.execute('select 1;') result = my_curs.fetchall() try: my_curs.execute('select 1;') print 'my_curs is open;', except MySQLdb.ProgrammingError: print 'my_curs is closed;', if my_conn.open: print 'my_conn is open' else: print 'my_conn is closed' ``` Note that `with closing(arg_obj)` will not call the argument object's `__enter__` and `__exit__` methods; it will *only* call the argument object's `close` method at the end of the `with` block. (To see this in action, simply define a class `Foo` with `__enter__`, `__exit__`, and `close` methods containing simple `print` statements, and compare what happens when you do `with Foo(): pass` to what happens when you do `with closing(Foo()): pass`.) 
This has two significant implications: First, if autocommit mode is enabled, MySQLdb will `BEGIN` an explicit transaction on the server when you use `with connection` and commit or rollback the transaction at the end of the block. These are default behaviors of MySQLdb, intended to protect you from MySQL's default behavior of immediately committing any and all DML statements. MySQLdb assumes that when you use a context manager, you want a transaction, and uses the explicit `BEGIN` to bypass the autocommit setting on the server. If you're used to using `with connection`, you might think autocommit is disabled when actually it was only being bypassed. You might get an unpleasant surprise if you add `closing` to your code and lose transactional integrity; you won't be able to rollback changes, you may start seeing concurrency bugs and it may not be immediately obvious why. Second, `with closing(MySQLdb.connect(user, pass)) as VAR` binds the *connection object* to `VAR`, in contrast to `with MySQLdb.connect(user, pass) as VAR`, which binds *a new cursor object* to `VAR`. In the latter case you would have no direct access to the connection object! Instead, you would have to use the cursor's `connection` attribute, which provides proxy access to the original connection. When the cursor is closed, its `connection` attribute is set to `None`. This results in an abandoned connection that will stick around until one of the following happens: * All references to the cursor are removed * The cursor goes out of scope * The connection times out * The connection is closed manually via server administration tools You can test this by monitoring open connections (in Workbench or by [using `SHOW PROCESSLIST`](http://dev.mysql.com/doc/refman/5.6/en/show-processlist.html)) while executing the following lines one by one: ``` with MySQLdb.connect(...) 
as my_curs: pass my_curs.close() my_curs.connection # None my_curs.connection.close() # throws AttributeError, but connection still open del my_curs # connection will close here ```
I suggest doing it the way you would with PHP and MySQL: open the connection at the beginning of your code, before printing the first data, so that if you get a connection error you can still display a `50x` error page (I don't remember which internal error code it is). Then keep the connection open for the whole session and close it when you know you won't need it anymore.
5,669,878
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. **What is the standard practice for getting and closing cursors?** In particular, how long should my cursors last? Should I get a new cursor for each transaction? I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?
2011/04/14
[ "https://Stackoverflow.com/questions/5669878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/484488/" ]
Note: this answer is for [PyMySQL](https://github.com/PyMySQL/PyMySQL), which is a drop-in replacement for MySQLdb and effectively the latest version of MySQLdb since MySQLdb stopped being maintained. I believe everything here is *also* true of the legacy MySQLdb, but haven't checked. First of all, some facts: * Python's [`with`](https://docs.python.org/library/stdtypes.html#context-manager-types) syntax calls the context manager's `__enter__` method before executing the body of the `with` block, and its `__exit__` method afterwards. * Connections have an [`__enter__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/connections.py#L831-L833) method that does nothing besides create and return a cursor, and an [`__exit__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/connections.py#L835-L840) method that either commits or rolls back (depending upon whether an exception was thrown). It *does not* close the connection. * Cursors in PyMySQL are purely an abstraction implemented in Python; there is no equivalent concept in MySQL itself.1 * Cursors have an [`__enter__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/cursors.py#L62-L63) method that doesn't do anything and an [`__exit__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/cursors.py#L65-L67) method which "closes" the cursor (which just means nulling the cursor's reference to its parent connection and throwing away any data stored on the cursor). * Cursors hold a reference to the connection that spawned them, but connections don't hold a reference to the cursors that they've created. * Connections have a [`__del__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/connections.py#L750) method which closes them * Per <https://docs.python.org/3/reference/datamodel.html>, CPython (the default Python implementation) uses reference counting and automatically deletes an object once the number of references to it hits zero. 
Putting these things together, we see that naive code like this is *in theory* problematic: ``` # Problematic code, at least in theory! import pymysql with pymysql.connect() as cursor: cursor.execute('SELECT 1') # ... happily carry on and do something unrelated ``` The problem is that nothing has closed the connection. Indeed, if you paste the code above into a Python shell and then run `SHOW FULL PROCESSLIST` at a MySQL shell, you'll be able to see the idle connection that you created. Since MySQL's default number of connections is [151](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_connections), which isn't *huge*, you could theoretically start running into problems if you had many processes keeping these connections open. However, in CPython, there is a saving grace that ensures that code like my example above *probably* won't cause you to leave around loads of open connections. That saving grace is that as soon as `cursor` goes out of scope (e.g. the function in which it was created finishes, or `cursor` gets another value assigned to it), its reference count hits zero, which causes it to be deleted, dropping the connection's reference count to zero, causing the connection's `__del__` method to be called which force-closes the connection. If you already pasted the code above into your Python shell, then you can now simulate this by running `cursor = 'arbitrary value'`; as soon as you do this, the connection you opened will vanish from the `SHOW PROCESSLIST` output. However, relying upon this is inelegant, and theoretically might fail in Python implementations other than CPython. Cleaner, in theory, would be to explicitly `.close()` the connection (to free up a connection on the database without waiting for Python to destroy the object). 
This more robust code looks like this: ``` import contextlib import pymysql with contextlib.closing(pymysql.connect()) as conn: with conn as cursor: cursor.execute('SELECT 1') ``` This is ugly, but doesn't rely upon Python destructing your objects to free up your (finite available number of) database connections. Note that closing the *cursor*, if you're already closing the connection explicitly like this, is entirely pointless. Finally, to answer the secondary questions here: > > Is there a lot of overhead for getting new cursors, or is it just not a big deal? > > > Nope, instantiating a cursor doesn't hit MySQL at all and [basically does nothing](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/cursors.py#L34-L47). > > Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? > > > This is situational and difficult to give a general answer to. As <https://dev.mysql.com/doc/refman/en/optimizing-innodb-transaction-management.html> puts it, *"an application might encounter performance issues if it commits thousands of times per second, and different performance issues if it commits only every 2-3 hours"*. You pay a performance overhead for every commit, but by leaving transactions open for longer, you increase the chance of other connections having to spend time waiting for locks, increase your risk of deadlocks, and potentially increase the cost of some lookups performed by other connections. --- 1 MySQL *does* have a construct it calls a [cursor](https://dev.mysql.com/doc/refman/en/cursors.html) but they only exist inside stored procedures; they're completely different to PyMySQL cursors and are not relevant here.
I suggest doing it the way you would with PHP and MySQL: open the connection at the beginning of your code, before printing the first data, so that if you get a connection error you can still display a `50x` error page (I don't remember which internal error code it is). Then keep the connection open for the whole session and close it when you know you won't need it anymore.
5,669,878
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. **What is the standard practice for getting and closing cursors?** In particular, how long should my cursors last? Should I get a new cursor for each transaction? I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?
2011/04/14
[ "https://Stackoverflow.com/questions/5669878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/484488/" ]
It's better to rewrite this using the `with` keyword. `with` will take care of closing the cursor automatically, which is important because the cursor is an unmanaged resource. The added benefit is that it will close the cursor in case of an exception too. ``` from contextlib import closing import MySQLdb ''' At the beginning you open a DB connection. The exact moment when you open it depends on your approach: - it can be inside the same function where you work with cursors - in the class constructor - etc ''' db = MySQLdb.connect("host", "user", "pass", "database") with closing(db.cursor()) as cur: cur.execute("somestuff") results = cur.fetchall() # do stuff with results cur.execute("insert operation") # call commit if you do INSERT, UPDATE or DELETE operations db.commit() cur.execute("someotherstuff") results2 = cur.fetchone() # do stuff with results2 # at some point, when you decide that you no longer need # the open connection, you close it db.close() ```
I think you'll be better off using one cursor for all of your executions and closing it at the end of your code. It's easier to work with, and it might have efficiency benefits as well (don't quote me on that one). ``` conn = MySQLdb.connect("host", "user", "pass", "database") cursor = conn.cursor() cursor.execute("somestuff") results = cursor.fetchall() # do stuff with results cursor.execute("someotherstuff") results2 = cursor.fetchall() # do stuff with results2 cursor.close() ``` The point is that you can store the results of a cursor's execution in another variable, thereby freeing your cursor to make a second execution. You run into problems this way only if you're using `fetchone()` and need to make a second cursor execution before you've iterated through all the results of the first query. Otherwise, I'd say just close your cursors as soon as you're done getting all of the data out of them. That way you don't have to worry about tying up loose ends later in your code.
5,669,878
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. **What is the standard practice for getting and closing cursors?** In particular, how long should my cursors last? Should I get a new cursor for each transaction? I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?
2011/04/14
[ "https://Stackoverflow.com/questions/5669878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/484488/" ]
Instead of asking what is standard practice, since that's often unclear and subjective, you might try looking to the module itself for guidance. In general, using the `with` keyword as another user suggested is a great idea, but in this specific circumstance it may not give you quite the functionality you expect. As of version 1.2.5 of the module, `MySQLdb.Connection` implements the [context manager protocol](http://docs.python.org/2/library/stdtypes.html#context-manager-types) with the following code ([github](https://github.com/farcepest/MySQLdb1/blob/2204283605e8c450223965eda8d8f357d5fe4c90/MySQLdb/connections.py)): ``` def __enter__(self): if self.get_autocommit(): self.query("BEGIN") return self.cursor() def __exit__(self, exc, value, tb): if exc: self.rollback() else: self.commit() ``` There are several existing Q&A about `with` already, or you can read [Understanding Python's "with" statement](http://effbot.org/zone/python-with-statement.htm), but essentially what happens is that `__enter__` executes at the start of the `with` block, and `__exit__` executes upon leaving the `with` block. You can use the optional syntax `with EXPR as VAR` to bind the object returned by `__enter__` to a name if you intend to reference that object later. So, given the above implementation, here's a simple way to query your database: ``` connection = MySQLdb.connect(...) with connection as cursor: # connection.__enter__ executes at this line cursor.execute('select 1;') result = cursor.fetchall() # connection.__exit__ executes after this line print result # prints "((1L,),)" ``` The question now is, what are the states of the connection and the cursor after exiting the `with` block? The `__exit__` method shown above calls only `self.rollback()` or `self.commit()`, and neither of those methods go on to call the `close()` method. The cursor itself has no `__exit__` method defined – and wouldn't matter if it did, because `with` is only managing the connection. 
Therefore, both the connection and the cursor remain open after exiting the `with` block. This is easily confirmed by adding the following code to the above example:

```
try:
    cursor.execute('select 1;')
    print 'cursor is open;',
except MySQLdb.ProgrammingError:
    print 'cursor is closed;',
if connection.open:
    print 'connection is open'
else:
    print 'connection is closed'
```

You should see the output "cursor is open; connection is open" printed to stdout.

> I believe you need to close the cursor before committing the connection.

Why? The [MySQL C API](https://dev.mysql.com/doc/refman/5.6/en/c-api-function-overview.html), which is the basis for `MySQLdb`, does not implement any cursor object, as implied in the module documentation: ["MySQL does not support cursors; however, cursors are easily emulated."](http://mysql-python.sourceforge.net/MySQLdb.html#connection-objects) Indeed, the `MySQLdb.cursors.BaseCursor` class inherits directly from `object` and imposes no such restriction on cursors with regard to commit/rollback. An Oracle developer [had this to say](http://forums.mysql.com/read.php?50,578823,578941#msg-578941):

> cnx.commit() before cur.close() sounds most logical to me. Maybe you can go by the rule: "Close the cursor if you do not need it anymore." Thus commit() before closing the cursor. In the end, for Connector/Python, it does not make much difference, but for other databases it might.

I expect that's as close as you're going to get to "standard practice" on this subject.

> Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction?

I very much doubt it, and in trying to do so, you may introduce additional human error. Better to decide on a convention and stick with it.

> Is there a lot of overhead for getting new cursors, or is it just not a big deal?

The overhead is negligible, and doesn't touch the database server at all; it's entirely within the implementation of MySQLdb. You can [look at `BaseCursor.__init__` on github](https://github.com/farcepest/MySQLdb1/blob/master/MySQLdb/cursors.py) if you're really curious to know what's happening when you create a new cursor.

Going back to earlier when we were discussing `with`, perhaps now you can understand why the `MySQLdb.Connection` class's `__enter__` and `__exit__` methods give you a brand new cursor object in every `with` block and don't bother keeping track of it or closing it at the end of the block. It's fairly lightweight and exists purely for your convenience.

If it's really that important to you to micromanage the cursor object, you can use [contextlib.closing](http://docs.python.org/2/library/contextlib.html#contextlib.closing) to make up for the fact that the cursor object has no defined `__exit__` method. For that matter, you can also use it to force the connection object to close itself upon exiting a `with` block. This should output "my\_curs is closed; my\_conn is closed":

```
from contextlib import closing
import MySQLdb

with closing(MySQLdb.connect(...)) as my_conn:
    with closing(my_conn.cursor()) as my_curs:
        my_curs.execute('select 1;')
        result = my_curs.fetchall()
try:
    my_curs.execute('select 1;')
    print 'my_curs is open;',
except MySQLdb.ProgrammingError:
    print 'my_curs is closed;',
if my_conn.open:
    print 'my_conn is open'
else:
    print 'my_conn is closed'
```

Note that `with closing(arg_obj)` will not call the argument object's `__enter__` and `__exit__` methods; it will *only* call the argument object's `close` method at the end of the `with` block. (To see this in action, simply define a class `Foo` with `__enter__`, `__exit__`, and `close` methods containing simple `print` statements, and compare what happens when you do `with Foo(): pass` to what happens when you do `with closing(Foo()): pass`.)
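Here is one way that `Foo` experiment might look (written in Python 3 syntax, unlike the Python 2 examples above; the class itself is just an illustration, not part of any library):

```python
from contextlib import closing

class Foo(object):
    def __enter__(self):
        print('__enter__')
        return self

    def __exit__(self, exc, value, tb):
        print('__exit__')

    def close(self):
        print('close')

# Plain `with` uses the context manager protocol:
with Foo():
    pass
# prints "__enter__" then "__exit__" -- close() is never called

# `closing` ignores the protocol and only calls close() at the end:
with closing(Foo()):
    pass
# prints only "close" -- __enter__/__exit__ are never called
```

This makes the difference concrete: `closing` wraps an object in its *own* context manager, whose exit handler calls `close`, rather than invoking the wrapped object's `__enter__`/`__exit__`.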
This has two significant implications:

First, if autocommit mode is enabled, MySQLdb will `BEGIN` an explicit transaction on the server when you use `with connection`, and commit or rollback the transaction at the end of the block. These are default behaviors of MySQLdb, intended to protect you from MySQL's default behavior of immediately committing any and all DML statements. MySQLdb assumes that when you use a context manager, you want a transaction, and uses the explicit `BEGIN` to bypass the autocommit setting on the server. If you're used to using `with connection`, you might think autocommit is disabled when actually it was only being bypassed. You might get an unpleasant surprise if you add `closing` to your code and lose transactional integrity; you won't be able to roll back changes, and you may start seeing concurrency bugs without it being immediately obvious why.

Second, `with closing(MySQLdb.connect(user, pass)) as VAR` binds the *connection object* to `VAR`, in contrast to `with MySQLdb.connect(user, pass) as VAR`, which binds *a new cursor object* to `VAR`. In the latter case you would have no direct access to the connection object! Instead, you would have to use the cursor's `connection` attribute, which provides proxy access to the original connection. When the cursor is closed, its `connection` attribute is set to `None`. This results in an abandoned connection that will stick around until one of the following happens:

* All references to the cursor are removed
* The cursor goes out of scope
* The connection times out
* The connection is closed manually via server administration tools

You can test this by monitoring open connections (in Workbench or by [using `SHOW PROCESSLIST`](http://dev.mysql.com/doc/refman/5.6/en/show-processlist.html)) while executing the following lines one by one:

```
with MySQLdb.connect(...) as my_curs:
    pass
my_curs.close()
my_curs.connection          # None
my_curs.connection.close()  # throws AttributeError, but connection still open
del my_curs                 # connection will close here
```
I think you'll be better off trying to use one cursor for all of your executions, and close it at the end of your code. It's easier to work with, and it might have efficiency benefits as well (don't quote me on that one).

```
conn = MySQLdb.connect("host", "user", "pass", "database")
cursor = conn.cursor()

cursor.execute("somestuff")
results = cursor.fetchall()
# ...do stuff with results

cursor.execute("someotherstuff")
results2 = cursor.fetchall()
# ...do stuff with results2

cursor.close()
```

The point is that you can store the results of a cursor's execution in another variable, thereby freeing your cursor to make a second execution. You run into problems this way only if you're using fetchone(), and need to make a second cursor execution before you've iterated through all results from the first query. Otherwise, I'd say just close your cursors as soon as you're done getting all of the data out of them. That way you don't have to worry about tying up loose ends later in your code.
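You can see the "store the results, then reuse the cursor" pattern in a runnable form using the standard library's `sqlite3` module as a stand-in for MySQLdb (both follow the same DB-API cursor shape; the table and queries here are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE t (x INTEGER)')
cur.executemany('INSERT INTO t (x) VALUES (?)', [(1,), (2,), (3,)])
conn.commit()

# First query: pull everything out with fetchall(), which frees the
# cursor for reuse because the rows now live in a plain Python list.
cur.execute('SELECT x FROM t ORDER BY x')
results = cur.fetchall()  # [(1,), (2,), (3,)]

# Same cursor, second query -- safe, since `results` is already materialized.
cur.execute('SELECT COUNT(*) FROM t')
count = cur.fetchone()[0]  # 3

cur.close()
conn.close()
```

Had the first query used `fetchone()` in a loop instead, the second `execute` would have discarded the remaining unread rows, which is exactly the pitfall described above.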
5,669,878
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. **What is the standard practice for getting and closing cursors?** In particular, how long should my cursors last? Should I get a new cursor for each transaction? I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?
2011/04/14
[ "https://Stackoverflow.com/questions/5669878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/484488/" ]
It's better to rewrite this using the `with` keyword. `with` will take care of closing the cursor automatically, which is important because the cursor is an unmanaged resource. The added benefit is that it will close the cursor in case of an exception, too:

```
from contextlib import closing
import MySQLdb

''' At the beginning you open a DB connection. The particular moment when
you open the connection depends on your approach:
- it can be inside the same function where you work with cursors
- in the class constructor
- etc.
'''
db = MySQLdb.connect("host", "user", "pass", "database")
with closing(db.cursor()) as cur:
    cur.execute("somestuff")
    results = cur.fetchall()
    # do stuff with results

    cur.execute("insert operation")
    # call commit if you do INSERT, UPDATE or DELETE operations
    db.commit()

    cur.execute("someotherstuff")
    results2 = cur.fetchone()
    # do stuff with results2

# at some point, when you decide that you do not need
# the open connection anymore, you close it
db.close()
```
Note: this answer is for [PyMySQL](https://github.com/PyMySQL/PyMySQL), which is a drop-in replacement for MySQLdb and effectively the latest version of MySQLdb, since MySQLdb stopped being maintained. I believe everything here is *also* true of the legacy MySQLdb, but haven't checked.

First of all, some facts:

* Python's [`with`](https://docs.python.org/library/stdtypes.html#context-manager-types) syntax calls the context manager's `__enter__` method before executing the body of the `with` block, and its `__exit__` method afterwards.
* Connections have an [`__enter__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/connections.py#L831-L833) method that does nothing besides create and return a cursor, and an [`__exit__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/connections.py#L835-L840) method that either commits or rolls back (depending upon whether an exception was thrown). It *does not* close the connection.
* Cursors in PyMySQL are purely an abstraction implemented in Python; there is no equivalent concept in MySQL itself.1
* Cursors have an [`__enter__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/cursors.py#L62-L63) method that doesn't do anything and an [`__exit__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/cursors.py#L65-L67) method which "closes" the cursor (which just means nulling the cursor's reference to its parent connection and throwing away any data stored on the cursor).
* Cursors hold a reference to the connection that spawned them, but connections don't hold a reference to the cursors that they've created.
* Connections have a [`__del__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/connections.py#L750) method which closes them.
* Per <https://docs.python.org/3/reference/datamodel.html>, CPython (the default Python implementation) uses reference counting and automatically deletes an object once the number of references to it hits zero.
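The second fact above, that the connection's context manager commits on success, rolls back on exception, and never closes, can be sketched with a toy class (this is an illustration of the described behavior, not PyMySQL's actual code):

```python
class ToyConnection(object):
    """Mimics, in miniature, the connection context-manager shape described above."""

    def __init__(self):
        self.log = []    # records which cleanup path ran
        self.open = True # stays True: __exit__ never closes the connection

    def cursor(self):
        return object()  # stand-in; a real cursor isn't needed for this demo

    def commit(self):
        self.log.append('commit')

    def rollback(self):
        self.log.append('rollback')

    def __enter__(self):
        return self.cursor()  # returns a cursor, not the connection

    def __exit__(self, exc, value, tb):
        if exc:
            self.rollback()
        else:
            self.commit()
        # note: no self.close() here

conn = ToyConnection()
with conn as cur:
    pass
# conn.log == ['commit'], and conn.open is still True

conn2 = ToyConnection()
try:
    with conn2 as cur:
        raise ValueError('boom')
except ValueError:
    pass
# conn2.log == ['rollback'], and conn2.open is still True
```

The same shape explains the surprise discussed below: exiting the `with` block resolves the transaction but leaves the server connection alive.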
Putting these things together, we see that naive code like this is *in theory* problematic:

```
# Problematic code, at least in theory!
import pymysql

with pymysql.connect() as cursor:
    cursor.execute('SELECT 1')

# ... happily carry on and do something unrelated
```

The problem is that nothing has closed the connection. Indeed, if you paste the code above into a Python shell and then run `SHOW FULL PROCESSLIST` at a MySQL shell, you'll be able to see the idle connection that you created. Since MySQL's default number of connections is [151](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_connections), which isn't *huge*, you could theoretically start running into problems if you had many processes keeping these connections open.

However, in CPython, there is a saving grace that ensures that code like my example above *probably* won't cause you to leave around loads of open connections. That saving grace is that as soon as `cursor` goes out of scope (e.g. the function in which it was created finishes, or `cursor` gets another value assigned to it), its reference count hits zero, which causes it to be deleted, dropping the connection's reference count to zero, causing the connection's `__del__` method to be called, which force-closes the connection. If you already pasted the code above into your Python shell, then you can now simulate this by running `cursor = 'arbitrary value'`; as soon as you do this, the connection you opened will vanish from the `SHOW PROCESSLIST` output.

Relying upon this is inelegant, though, and theoretically might fail in Python implementations other than CPython. Cleaner, in theory, would be to explicitly `.close()` the connection (to free up a connection on the database without waiting for Python to destroy the object).
This more robust code looks like this:

```
import contextlib
import pymysql

with contextlib.closing(pymysql.connect()) as conn:
    with conn as cursor:
        cursor.execute('SELECT 1')
```

This is ugly, but doesn't rely upon Python destructing your objects to free up your (finite available number of) database connections. Note that closing the *cursor*, if you're already closing the connection explicitly like this, is entirely pointless.

Finally, to answer the secondary questions here:

> Is there a lot of overhead for getting new cursors, or is it just not a big deal?

Nope, instantiating a cursor doesn't hit MySQL at all and [basically does nothing](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/cursors.py#L34-L47).

> Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction?

This is situational and difficult to give a general answer to. As <https://dev.mysql.com/doc/refman/en/optimizing-innodb-transaction-management.html> puts it, *"an application might encounter performance issues if it commits thousands of times per second, and different performance issues if it commits only every 2-3 hours"*. You pay a performance overhead for every commit, but by leaving transactions open for longer, you increase the chance of other connections having to spend time waiting for locks, increase your risk of deadlocks, and potentially increase the cost of some lookups performed by other connections.

---

1 MySQL *does* have a construct it calls a [cursor](https://dev.mysql.com/doc/refman/en/cursors.html), but they only exist inside stored procedures; they're completely different to PyMySQL cursors and are not relevant here.
5,669,878
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. **What is the standard practice for getting and closing cursors?** In particular, how long should my cursors last? Should I get a new cursor for each transaction? I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?
2011/04/14
[ "https://Stackoverflow.com/questions/5669878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/484488/" ]
Instead of asking what is standard practice, since that's often unclear and subjective, you might try looking to the module itself for guidance. In general, using the `with` keyword as another user suggested is a great idea, but in this specific circumstance it may not give you quite the functionality you expect. As of version 1.2.5 of the module, `MySQLdb.Connection` implements the [context manager protocol](http://docs.python.org/2/library/stdtypes.html#context-manager-types) with the following code ([github](https://github.com/farcepest/MySQLdb1/blob/2204283605e8c450223965eda8d8f357d5fe4c90/MySQLdb/connections.py)): ``` def __enter__(self): if self.get_autocommit(): self.query("BEGIN") return self.cursor() def __exit__(self, exc, value, tb): if exc: self.rollback() else: self.commit() ``` There are several existing Q&A about `with` already, or you can read [Understanding Python's "with" statement](http://effbot.org/zone/python-with-statement.htm), but essentially what happens is that `__enter__` executes at the start of the `with` block, and `__exit__` executes upon leaving the `with` block. You can use the optional syntax `with EXPR as VAR` to bind the object returned by `__enter__` to a name if you intend to reference that object later. So, given the above implementation, here's a simple way to query your database: ``` connection = MySQLdb.connect(...) with connection as cursor: # connection.__enter__ executes at this line cursor.execute('select 1;') result = cursor.fetchall() # connection.__exit__ executes after this line print result # prints "((1L,),)" ``` The question now is, what are the states of the connection and the cursor after exiting the `with` block? The `__exit__` method shown above calls only `self.rollback()` or `self.commit()`, and neither of those methods go on to call the `close()` method. The cursor itself has no `__exit__` method defined – and wouldn't matter if it did, because `with` is only managing the connection. 
Therefore, both the connection and the cursor remain open after exiting the `with` block. This is easily confirmed by adding the following code to the above example: ``` try: cursor.execute('select 1;') print 'cursor is open;', except MySQLdb.ProgrammingError: print 'cursor is closed;', if connection.open: print 'connection is open' else: print 'connection is closed' ``` You should see the output "cursor is open; connection is open" printed to stdout. > > I believe you need to close the cursor before committing the connection. > > > Why? The [MySQL C API](https://dev.mysql.com/doc/refman/5.6/en/c-api-function-overview.html), which is the basis for `MySQLdb`, does not implement any cursor object, as implied in the module documentation: ["MySQL does not support cursors; however, cursors are easily emulated."](http://mysql-python.sourceforge.net/MySQLdb.html#connection-objects) Indeed, the `MySQLdb.cursors.BaseCursor` class inherits directly from `object` and imposes no such restriction on cursors with regard to commit/rollback. An Oracle developer [had this to say](http://forums.mysql.com/read.php?50,578823,578941#msg-578941): > > cnx.commit() before cur.close() sounds most logical to me. Maybe you > can go by the rule: "Close the cursor if you do not need it anymore." > Thus commit() before closing the cursor. In the end, for > Connector/Python, it does not make much difference, but or other > databases it might. > > > I expect that's as close as you're going to get to "standard practice" on this subject. > > Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? > > > I very much doubt it, and in trying to do so, you may introduce additional human error. Better to decide on a convention and stick with it. > > Is there a lot of overhead for getting new cursors, or is it just not a big deal? 
> > > The overhead is negligible, and doesn't touch the database server at all; it's entirely within the implementation of MySQLdb. You can [look at `BaseCursor.__init__` on github](https://github.com/farcepest/MySQLdb1/blob/master/MySQLdb/cursors.py) if you're really curious to know what's happening when you create a new cursor. Going back to earlier when we were discussing `with`, perhaps now you can understand why the `MySQLdb.Connection` class `__enter__` and `__exit__` methods give you a brand new cursor object in every `with` block and don't bother keeping track of it or closing it at the end of the block. It's fairly lightweight and exists purely for your convenience. If it's really that important to you to micromanage the cursor object, you can use [contextlib.closing](http://docs.python.org/2/library/contextlib.html#contextlib.closing) to make up for the fact that the cursor object has no defined `__exit__` method. For that matter, you can also use it to force the connection object to close itself upon exiting a `with` block. This should output "my\_curs is closed; my\_conn is closed": ``` from contextlib import closing import MySQLdb with closing(MySQLdb.connect(...)) as my_conn: with closing(my_conn.cursor()) as my_curs: my_curs.execute('select 1;') result = my_curs.fetchall() try: my_curs.execute('select 1;') print 'my_curs is open;', except MySQLdb.ProgrammingError: print 'my_curs is closed;', if my_conn.open: print 'my_conn is open' else: print 'my_conn is closed' ``` Note that `with closing(arg_obj)` will not call the argument object's `__enter__` and `__exit__` methods; it will *only* call the argument object's `close` method at the end of the `with` block. (To see this in action, simply define a class `Foo` with `__enter__`, `__exit__`, and `close` methods containing simple `print` statements, and compare what happens when you do `with Foo(): pass` to what happens when you do `with closing(Foo()): pass`.) 
This has two significant implications: First, if autocommit mode is enabled, MySQLdb will `BEGIN` an explicit transaction on the server when you use `with connection` and commit or rollback the transaction at the end of the block. These are default behaviors of MySQLdb, intended to protect you from MySQL's default behavior of immediately committing any and all DML statements. MySQLdb assumes that when you use a context manager, you want a transaction, and uses the explicit `BEGIN` to bypass the autocommit setting on the server. If you're used to using `with connection`, you might think autocommit is disabled when actually it was only being bypassed. You might get an unpleasant surprise if you add `closing` to your code and lose transactional integrity; you won't be able to rollback changes, you may start seeing concurrency bugs and it may not be immediately obvious why. Second, `with closing(MySQLdb.connect(user, pass)) as VAR` binds the *connection object* to `VAR`, in contrast to `with MySQLdb.connect(user, pass) as VAR`, which binds *a new cursor object* to `VAR`. In the latter case you would have no direct access to the connection object! Instead, you would have to use the cursor's `connection` attribute, which provides proxy access to the original connection. When the cursor is closed, its `connection` attribute is set to `None`. This results in an abandoned connection that will stick around until one of the following happens: * All references to the cursor are removed * The cursor goes out of scope * The connection times out * The connection is closed manually via server administration tools You can test this by monitoring open connections (in Workbench or by [using `SHOW PROCESSLIST`](http://dev.mysql.com/doc/refman/5.6/en/show-processlist.html)) while executing the following lines one by one: ``` with MySQLdb.connect(...) 
as my_curs: pass my_curs.close() my_curs.connection # None my_curs.connection.close() # throws AttributeError, but connection still open del my_curs # connection will close here ```
Note: this answer is for [PyMySQL](https://github.com/PyMySQL/PyMySQL), which is a drop-in replacement for MySQLdb and effectively the latest version of MySQLdb since MySQLdb stopped being maintained. I believe everything here is *also* true of the legacy MySQLdb, but haven't checked. First of all, some facts: * Python's [`with`](https://docs.python.org/library/stdtypes.html#context-manager-types) syntax calls the context manager's `__enter__` method before executing the body of the `with` block, and its `__exit__` method afterwards. * Connections have an [`__enter__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/connections.py#L831-L833) method that does nothing besides create and return a cursor, and an [`__exit__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/connections.py#L835-L840) method that either commits or rolls back (depending upon whether an exception was thrown). It *does not* close the connection. * Cursors in PyMySQL are purely an abstraction implemented in Python; there is no equivalent concept in MySQL itself.1 * Cursors have an [`__enter__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/cursors.py#L62-L63) method that doesn't do anything and an [`__exit__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/cursors.py#L65-L67) method which "closes" the cursor (which just means nulling the cursor's reference to its parent connection and throwing away any data stored on the cursor). * Cursors hold a reference to the connection that spawned them, but connections don't hold a reference to the cursors that they've created. * Connections have a [`__del__`](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/connections.py#L750) method which closes them * Per <https://docs.python.org/3/reference/datamodel.html>, CPython (the default Python implementation) uses reference counting and automatically deletes an object once the number of references to it hits zero. 
Putting these things together, we see that naive code like this is *in theory* problematic: ``` # Problematic code, at least in theory! import pymysql with pymysql.connect() as cursor: cursor.execute('SELECT 1') # ... happily carry on and do something unrelated ``` The problem is that nothing has closed the connection. Indeed, if you paste the code above into a Python shell and then run `SHOW FULL PROCESSLIST` at a MySQL shell, you'll be able to see the idle connection that you created. Since MySQL's default number of connections is [151](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_connections), which isn't *huge*, you could theoretically start running into problems if you had many processes keeping these connections open. However, in CPython, there is a saving grace that ensures that code like my example above *probably* won't cause you to leave around loads of open connections. That saving grace is that as soon as `cursor` goes out of scope (e.g. the function in which it was created finishes, or `cursor` gets another value assigned to it), its reference count hits zero, which causes it to be deleted, dropping the connection's reference count to zero, causing the connection's `__del__` method to be called which force-closes the connection. If you already pasted the code above into your Python shell, then you can now simulate this by running `cursor = 'arbitrary value'`; as soon as you do this, the connection you opened will vanish from the `SHOW PROCESSLIST` output. However, relying upon this is inelegant, and theoretically might fail in Python implementations other than CPython. Cleaner, in theory, would be to explicitly `.close()` the connection (to free up a connection on the database without waiting for Python to destroy the object). 
This more robust code looks like this: ``` import contextlib import pymysql with contextlib.closing(pymysql.connect()) as conn: with conn as cursor: cursor.execute('SELECT 1') ``` This is ugly, but doesn't rely upon Python destructing your objects to free up your (finite available number of) database connections. Note that closing the *cursor*, if you're already closing the connection explicitly like this, is entirely pointless. Finally, to answer the secondary questions here: > > Is there a lot of overhead for getting new cursors, or is it just not a big deal? > > > Nope, instantiating a cursor doesn't hit MySQL at all and [basically does nothing](https://github.com/PyMySQL/PyMySQL/blob/0.7.10/pymysql/cursors.py#L34-L47). > > Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? > > > This is situational and difficult to give a general answer to. As <https://dev.mysql.com/doc/refman/en/optimizing-innodb-transaction-management.html> puts it, *"an application might encounter performance issues if it commits thousands of times per second, and different performance issues if it commits only every 2-3 hours"*. You pay a performance overhead for every commit, but by leaving transactions open for longer, you increase the chance of other connections having to spend time waiting for locks, increase your risk of deadlocks, and potentially increase the cost of some lookups performed by other connections. --- 1 MySQL *does* have a construct it calls a [cursor](https://dev.mysql.com/doc/refman/en/cursors.html) but they only exist inside stored procedures; they're completely different to PyMySQL cursors and are not relevant here.
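As a footnote to the "saving grace" paragraph above: the reference-counting behaviour can be demonstrated with a toy stand-in for the connection, with no database involved (the class names below are invented for illustration and are not part of PyMySQL):

```python
deleted = []

class FakeConnection:
    """Toy stand-in for pymysql's Connection; __del__ mimics its force-close."""
    def __del__(self):
        deleted.append('closed')

class FakeCursor:
    """Toy stand-in for a cursor, which holds a reference to its parent connection."""
    def __init__(self, connection):
        self.connection = connection

cursor = FakeCursor(FakeConnection())
print(deleted)  # [] -- the connection is kept alive by the cursor's reference

cursor = 'arbitrary value'  # drop the only reference to the cursor (and thus the connection)
print(deleted)  # ['closed'] -- CPython destroys both objects immediately
```

On CPython this prints `[]` then `['closed']`; on a garbage-collected implementation such as PyPy the second list may still be empty, which is exactly why an explicit `.close()` is the safer choice.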
56,148,199
I am new to the Ruby on Rails ecosystem so my question might be really trivial. I have set up Active Storage on one of my models ```rb class Sedcard < ApplicationRecord has_many_attached :photos end ``` And I simply want to seed data with `Faker` in it like so: ```rb require 'faker' Sedcard.destroy_all 20.times do |_i| sedcard = Sedcard.create!( showname: Faker::Name.female_first_name, description: Faker::Lorem.paragraph(10), phone: Faker::PhoneNumber.cell_phone, birthdate: Faker::Date.birthday(18, 40), gender: Sedcard.genders[:female], is_active: Faker::Boolean.boolean ) index = Faker::Number.unique.between(1, 99) image = open("https://randomuser.me/api/portraits/women/#{index}.jpg") sedcard.photos.attach(io: image, filename: "avatar#{index}.jpg", content_type: 'image/png') end ``` The problem is that some of these records end up with multiple photos attached to them, could be 5 or 10. Most records are seeded well, with only one photo associated, but the ones with multiple photos all follow the same pattern: they are all seeded with the exact same images.
2019/05/15
[ "https://Stackoverflow.com/questions/56148199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2828594/" ]
I found the problem myself. I was using UUID as my model's primary key which is not natively compatible with ActiveStorage. Thus, I more or less followed the instructions [here](https://www.wrburgess.com/posts/2018-02-03-1.html)
You need to purge the attachments. Try adding this snippet before destroying the `Sedcard`s: ``` Sedcard.all.each{ |s| s.photos.purge } ``` Ref: <https://edgeguides.rubyonrails.org/active_storage_overview.html#removing-files>
199,883
Say I have an expansion of terms containing functions `y[j,t]` and its derivatives, indexed by `j` with the index beginning at 0 whose independent variable are `t`, like so: `Expr = y[0,t]^2 + D[y[0,t],t]*y[0,t] + y[0,t]*y[1,t] + y[0,t]*D[y[1,t],t] + (y[1,t])^2*y[0,t] +` ... etc. Now I wish to define new functions indexed by `i`, call them `A[i]`, that collect all terms from the expression above such that the sum of the indices of the factors in each term sums to `i`. In the above case for the terms shown we would have for example `A[0] = y[0,t]^2 + D[y[0,t],t]*y[0,t]` `A[1] = y[0,t]*y[1,t] + y[0,t]*D[y[1,t],t]` `A[2] = (y[1,t])^2*y[0,t]` How can I get mathematica to assign these terms to these new functions automatically for all `i`? Note: If there is a better way to be indexing functions also feel free to suggest.
2019/06/06
[ "https://mathematica.stackexchange.com/questions/199883", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/41975/" ]
Since [`Show`](http://reference.wolfram.com/language/ref/Show) uses the [`PlotRange`](http://reference.wolfram.com/language/ref/PlotRange) setting from the first plot, you can just set your plot range when defining the first plot: ``` p1=LogLogPlot[ RO, {t,0.00001,0.05}, PlotRange -> {{10^-5, 10^-4}, All}, PlotStyle->{Purple} ]; Show[p1, p2] ``` or you can use a dummy plot with the desired plot range: ``` p0 = LogLogPlot[None, {t, 10^-5, 10^-4}]; Show[p0, p1, p2] ``` If you really want to set the [`PlotRange`](http://reference.wolfram.com/language/ref/PlotRange) using a [`Show`](http://reference.wolfram.com/language/ref/Show) option, then you need to realize that the [`Graphics`](http://reference.wolfram.com/language/ref/Graphics) objects produced by `p1` and `p2` don't know that "Log" scaling functions were used. So, you need to adjust the desired [`PlotRange`](http://reference.wolfram.com/language/ref/PlotRange) accordingly: ``` Show[p1, p2, PlotRange -> {Log @ {10^-5, 10^-4}, All}] ```
Your syntax is incorrect, it should be ``` Show[{p1,p2},PlotRange->{{x_min,x_max},{y_min,y_max}}] ``` If you want all in y, you can do: ``` Show[{p1,p2},PlotRange->{{10^(-5),10^(-4)},All}] ```
61,989,976
I am trying to perform JWT auth in spring boot and the request are getting stuck in redirect loop. **JWTAuthenticationProvider** ``` @Component public class JwtAuthenticationProvider extends AbstractUserDetailsAuthenticationProvider { @Autowired private JwtUtil jwtUtil; @Override public boolean supports(Class<?> authentication) { return (JwtAuthenticationToken.class.isAssignableFrom(authentication)); } @Override protected void additionalAuthenticationChecks(UserDetails userDetails, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException { } @Override protected UserDetails retrieveUser(String username, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException { JwtAuthenticationToken jwtAuthenticationToken = (JwtAuthenticationToken) authentication; String token = jwtAuthenticationToken.getToken(); JwtParsedUser parsedUser = jwtUtil.parseToken(token); if (parsedUser == null) { throw new JwtException("JWT token is not valid"); } UserDetails user = User.withUsername(parsedUser.getUserName()).password("temp_password").authorities(parsedUser.getRole()).build(); return user; } ``` **JwtAuthenticationFilter** ``` public class JwtAuthenticationFilter extends AbstractAuthenticationProcessingFilter { public JwtAuthenticationFilter(AuthenticationManager authenticationManager) { super("/**"); this.setAuthenticationManager(authenticationManager); } @Override protected boolean requiresAuthentication(HttpServletRequest request, HttpServletResponse response) { return true; } @Override public Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response) throws AuthenticationException { String header = request.getHeader("Authorization"); if (header == null || !header.startsWith("Bearer ")) { throw new JwtException("No JWT token found in request headers"); } String authToken = header.substring(7); JwtAuthenticationToken authRequest = new JwtAuthenticationToken(authToken); return 
getAuthenticationManager().authenticate(authRequest); } @Override protected void successfulAuthentication(HttpServletRequest request, HttpServletResponse response, FilterChain chain, Authentication authResult) throws IOException, ServletException { super.successfulAuthentication(request, response, chain, authResult); chain.doFilter(request, response); } } ``` **SecurityConfiguration** ``` @Configuration @EnableWebSecurity(debug = true) public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Autowired private JwtAuthenticationProvider jwtAuthenticationProvider; @Autowired public void configureGlobalSecurity(AuthenticationManagerBuilder auth) throws Exception { auth.authenticationProvider(jwtAuthenticationProvider); } @Override protected void configure(HttpSecurity http) throws Exception { http.csrf().disable().authorizeRequests().antMatchers("/secured-resource-1/**", "/secured-resource-2/**") .hasRole("ADMIN").antMatchers("/secured-resource-2/**").hasRole("ADMIN").and().formLogin() .successHandler(new AuthenticationSuccessHandler()).and().httpBasic().and().exceptionHandling() .accessDeniedHandler(new CustomAccessDeniedHandler()).authenticationEntryPoint(getBasicAuthEntryPoint()) .and() .addFilterBefore(new JwtAuthenticationFilter(authenticationManager()), FilterSecurityInterceptor.class) .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS); } @Bean public CustomBasicAuthenticationEntryPoint getBasicAuthEntryPoint() { return new CustomBasicAuthenticationEntryPoint(); } } ``` **MainController** ``` @RestController public class MainController { @Autowired private JwtUtil jwtUtil; @GetMapping("/secured-resource-1") public String securedResource1() { return "Secured resource1"; } } ``` When I hit the endpoint with the valid JWT token, the code goes in a loop from Filter to provider class and ends in Error: ``` Exceeded maxRedirects. Probably stuck in a redirect loop http://localhost:8000/ error. 
``` Debug logs shows the following error: ``` Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.IllegalStateException: Cannot call sendError() after the response has been committed] with root cause java.lang.IllegalStateException: Cannot call sendError() after the response has been committed ``` Any suggestions what am I missing here. Thanks in advance.
2020/05/24
[ "https://Stackoverflow.com/questions/61989976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4457734/" ]
I believe the reason for this is that you have not actually set the `AuthenticationSuccessHandler` on the `JwtAuthenticationFilter` bean. Since it is not set, the request keeps looping around `super` and the chain, and later, when the error needs to be sent, `chain.doFilter` will fail because the response was already written in `super()`; once the response is committed it cannot be written again, hence the error `Cannot call sendError() after the response has been committed`. To correct this, in your SecurityConfiguration, before setting this ``` .addFilterBefore(new JwtAuthenticationFilter(authenticationManager()), FilterSecurityInterceptor.class) ``` instantiate the filter and set its success handler like so ``` JwtAuthenticationFilter jwtAuthenticationFilter = new JwtAuthenticationFilter(authenticationManager()); jwtAuthenticationFilter.setAuthenticationSuccessHandler(new CustomAuthenticationSuccessHandler()); ``` Now use the above variable to set the filter. This is a great reference project: <https://gitlab.com/palmapps/jwt-spring-security-demo/-/tree/master/>.
I solved this problem with another approach. In the JwtAuthenticationFilter class we need to set authentication object in context and call chain.doFilter. Calling super.successfulAuthentication can be skipped as we have overridden the implementation. ``` @Override protected void successfulAuthentication(HttpServletRequest request, HttpServletResponse response, FilterChain chain, Authentication authResult) throws IOException, ServletException { //super.successfulAuthentication(request, response, chain, authResult); SecurityContextHolder.getContext().setAuthentication(authResult); chain.doFilter(request, response); } public JwtAuthenticationFilter(AuthenticationManager authenticationManager) { super("/**"); this.setAuthenticationManager(authenticationManager); //this.setAuthenticationSuccessHandler(new JwtAuthenticationSuccessHandler()); } ```
4,177,291
I have the following string: ``` Mon Sep 14 15:24:40 UTC 2009 ``` I need to format it into a string like this: ``` 14/9/2009 ``` How do I do it in Java?
2010/11/14
[ "https://Stackoverflow.com/questions/4177291", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264419/" ]
You can use the SimpleDateFormat class to convert the string you have into a date object. The date format can be given in the constructor, and the parse method converts the string into a date object. After getting the date object, you can format it in the way you want.
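A minimal sketch of that approach (the pattern strings, the `Locale`, and the UTC time-zone pinning are my own choices here, not taken from the answer above):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class DateReformat {

    // Parse "Mon Sep 14 15:24:40 UTC 2009" and re-emit it as "14/9/2009".
    static String convert(String input) {
        try {
            SimpleDateFormat parser = new SimpleDateFormat("EEE MMM d HH:mm:ss z yyyy", Locale.ENGLISH);
            Date date = parser.parse(input);
            SimpleDateFormat formatter = new SimpleDateFormat("d/M/yyyy", Locale.ENGLISH);
            // Pin the output zone so the day doesn't shift with the machine's default time zone.
            formatter.setTimeZone(TimeZone.getTimeZone("UTC"));
            return formatter.format(date);
        } catch (ParseException e) {
            throw new IllegalArgumentException("unparseable date: " + input, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(convert("Mon Sep 14 15:24:40 UTC 2009")); // 14/9/2009
    }
}
```

Note that `SimpleDateFormat` is not thread-safe, so don't share a single instance across threads.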
``` Date d = new Date("Mon Sep 14 15:24:40 UTC 2009"); SimpleDateFormat f = new SimpleDateFormat("dd/M/yyyy"); String s = f.format(d); ```
4,177,291
I have the following string: ``` Mon Sep 14 15:24:40 UTC 2009 ``` I need to format it into a string like this: ``` 14/9/2009 ``` How do I do it in Java?
2010/11/14
[ "https://Stackoverflow.com/questions/4177291", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264419/" ]
Use [`SimpleDateFormat`](http://download.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html) (click the javadoc link to see patterns) to parse the string in one pattern to a fullworthy [`Date`](http://download.oracle.com/javase/6/docs/api/java/util/Date.html) and use another one to format the parsed `Date` to a string in another pattern. ``` String string1 = "Mon Sep 14 15:24:40 UTC 2009"; Date date = new SimpleDateFormat("EEE MMM d HH:mm:ss Z yyyy").parse(string1); String string2 = new SimpleDateFormat("d/M/yyyy").format(date); System.out.println(string2); // 14/9/2009 ```
You can use the SimpleDateFormat class to convert the string you have into a date object. The date format can be given in the constructor, and the parse method converts the string into a date object. After getting the date object, you can format it in the way you want.
4,177,291
I have the following string: ``` Mon Sep 14 15:24:40 UTC 2009 ``` I need to format it into a string like this: ``` 14/9/2009 ``` How do I do it in Java?
2010/11/14
[ "https://Stackoverflow.com/questions/4177291", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264419/" ]
You can use the SimpleDateFormat class to convert the string you have into a date object. The date format can be given in the constructor, and the parse method converts the string into a date object. After getting the date object, you can format it in the way you want.
One liner in java 8 and above. ``` String localDateTime= LocalDateTime.parse("Mon Sep 14 15:24:40 UTC 2009", DateTimeFormatter.ofPattern("EE MMM dd HH:mm:ss z yyyy")).format(DateTimeFormatter.ofPattern("d/M/yyyy")); ```
4,177,291
I have the following string: ``` Mon Sep 14 15:24:40 UTC 2009 ``` I need to format it into a string like this: ``` 14/9/2009 ``` How do I do it in Java?
2010/11/14
[ "https://Stackoverflow.com/questions/4177291", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264419/" ]
Use [`SimpleDateFormat`](http://download.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html) (click the javadoc link to see patterns) to parse the string in one pattern to a fullworthy [`Date`](http://download.oracle.com/javase/6/docs/api/java/util/Date.html) and use another one to format the parsed `Date` to a string in another pattern. ``` String string1 = "Mon Sep 14 15:24:40 UTC 2009"; Date date = new SimpleDateFormat("EEE MMM d HH:mm:ss Z yyyy").parse(string1); String string2 = new SimpleDateFormat("d/M/yyyy").format(date); System.out.println(string2); // 14/9/2009 ```
``` Date d = new Date("Mon Sep 14 15:24:40 UTC 2009"); SimpleDateFormat f = new SimpleDateFormat("dd/M/yyyy"); String s = f.format(d); ```
4,177,291
I have the following string: ``` Mon Sep 14 15:24:40 UTC 2009 ``` I need to format it into a string like this: ``` 14/9/2009 ``` How do I do it in Java?
2010/11/14
[ "https://Stackoverflow.com/questions/4177291", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264419/" ]
One liner in java 8 and above. ``` String localDateTime= LocalDateTime.parse("Mon Sep 14 15:24:40 UTC 2009", DateTimeFormatter.ofPattern("EE MMM dd HH:mm:ss z yyyy")).format(DateTimeFormatter.ofPattern("d/M/yyyy")); ```
``` Date d = new Date("Mon Sep 14 15:24:40 UTC 2009"); SimpleDateFormat f = new SimpleDateFormat("dd/M/yyyy"); String s = f.format(d); ```
4,177,291
I have the following string: ``` Mon Sep 14 15:24:40 UTC 2009 ``` I need to format it into a string like this: ``` 14/9/2009 ``` How do I do it in Java?
2010/11/14
[ "https://Stackoverflow.com/questions/4177291", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264419/" ]
Use [`SimpleDateFormat`](http://download.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html) (click the javadoc link to see patterns) to parse the string in one pattern to a fullworthy [`Date`](http://download.oracle.com/javase/6/docs/api/java/util/Date.html) and use another one to format the parsed `Date` to a string in another pattern. ``` String string1 = "Mon Sep 14 15:24:40 UTC 2009"; Date date = new SimpleDateFormat("EEE MMM d HH:mm:ss Z yyyy").parse(string1); String string2 = new SimpleDateFormat("d/M/yyyy").format(date); System.out.println(string2); // 14/9/2009 ```
One liner in java 8 and above. ``` String localDateTime= LocalDateTime.parse("Mon Sep 14 15:24:40 UTC 2009", DateTimeFormatter.ofPattern("EE MMM dd HH:mm:ss z yyyy")).format(DateTimeFormatter.ofPattern("d/M/yyyy")); ```
44,169,413
[Error Message Picture](https://i.stack.imgur.com/kkbkN.png) I basically followed the instructions from the below link EXACTLY and I'm getting this damn error? I have no idea what I'm supposed to do, wtf? Do I need to create some kind of persisted method?? There were several other questions like this and after reading ALL of them they were not helpful at ALL. Please help. <https://github.com/zquestz/omniauth-google-oauth2> Omniauths Controller ``` class OmniauthCallbacksController < Devise::OmniauthCallbacksController def google_oauth2 # You need to implement the method below in your model (e.g. app/models/user.rb) @user = User.from_omniauth(request.env["omniauth.auth"]) if @user.persisted? flash[:notice] = I18n.t "devise.omniauth_callbacks.success", :kind => "Google" sign_in_and_redirect @user, :event => :authentication else session["devise.google_data"] = request.env["omniauth.auth"].except(:extra) #Removing extra as it can overflow some session stores redirect_to new_user_registration_url, alert: @user.errors.full_messages.join("\n") end end end ``` User model code snippet ``` def self.from_omniauth(access_token) data = access_token.info user = User.where(:email => data["email"]).first # Uncomment the section below if you want users to be created if they don't exist # unless user # user = User.create(name: data["name"], # email: data["email"], # password: Devise.friendly_token[0,20] # ) # end user end ```
2017/05/24
[ "https://Stackoverflow.com/questions/44169413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6484371/" ]
Changed the bottom portion to: ``` def self.from_omniauth(auth) where(provider: auth.provider, uid: auth.uid).first_or_create do |user| user.email = auth.info.email user.password = Devise.friendly_token[0,20] user.name = auth.info.name # assuming the user model has a name end end ``` ran rails g migration AddOmniauthToUsers provider:string uid:string Then it went to Successfully authenticated from Google account. So I believe it works now. I think maybe the issue was I needed to add the provider and uid to the user database model?
The `persisted?` method checks whether or not the user record exists; `from_omniauth` returns a nil value if no such record exists, and your user model is not creating new ones. So change the code from the example, uncommenting the creation block, to this: ``` def self.from_omniauth(access_token) data = access_token.info user = User.where(:email => data["email"]).first # creates a new user if user email does not exist. unless user user = User.create(name: data["name"], email: data["email"], password: Devise.friendly_token[0,20] ) end user end ``` This should solve the problem by checking whether the user exists and creating a new user if not.
29,130,635
I read [this](https://stackoverflow.com/questions/5842903/block-tridiagonal-matrix-python), but I wasn't able to create a (N^2 x N^2) - matrix **A** with (N x N) - *matrices* **I** on the lower and upper side-diagonal and **T** on the diagonal. I tried this ``` def prep_matrix(N): I_N = np.identity(N) NV = zeros(N*N - 1) # now I want NV[0]=NV[1]=...=NV[N-1]:=I_N ``` but I have no idea how to fill NV with my matrices. What can I do? I found a lot on how to create tridiagonal matrices with scalars, but not with matrix blocks.
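For what it's worth, the block layout described here (an N x N grid of N x N blocks, `T` on the diagonal and `I` on the side diagonals) can be assembled with Kronecker products; this is just one sketch of a way to do it, with made-up example values:

```python
import numpy as np

def block_tridiag(T, I):
    """Build the N^2 x N^2 matrix with T on the block diagonal and
    I on the blocks directly above and below it."""
    N = T.shape[0]
    diag = np.kron(np.eye(N), T)            # T blocks on the main block diagonal
    off = np.eye(N, k=1) + np.eye(N, k=-1)  # 1s marking the side block diagonals
    return diag + np.kron(off, I)

# Example with N = 3: T is itself tridiagonal, I is the identity
N = 3
T = -4 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
A = block_tridiag(T, np.eye(N))
```

Here `np.kron(np.eye(N), T)` places a copy of `T` in each diagonal block, and the second Kronecker product places `I` wherever the small N x N pattern `off` has a 1.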
2015/03/18
[ "https://Stackoverflow.com/questions/29130635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2668777/" ]
Dirty, hacky, inefficient solution (assumes use of forms authentication): ``` public void Global_BeginRequest(object sender, EventArgs e) { if( Context.User != null && !String.IsNullOrWhiteSpace(Context.User.Identity.Name) && Context.Session != null && Context.Session["IAMTRACKED"] == null ) { Context.Session["IAMTRACKED"] = new object(); Application.Lock(); Application["UsersLoggedIn"] = (int)(Application["UsersLoggedIn"] ?? 0) + 1; Application.UnLock(); } } ``` At a high level, this works by checking, on every request, whether the user is logged in and, if so, tagging the user as logged in and incrementing the count. This assumes users cannot log out (if they can, you can add a similar test for users who are logged out and tracked). This is a horrible way to solve your problem, but it's a working prototype which demonstrates that your problem is solvable. Note that this understates logins substantially after an application recycle; logins are much longer term than sessions.
I think the session items are client-sided. You can create a query to count the open connections (since you're working with a MySQL database). Another option is to use external software (I use the tawk.to help chat, which shows the number of users visiting a page in real time). You could maybe use that, making the support chat invisible and only putting it on pages which are accessible to logged-in users. OR execute an update query which adds/subtracts from a column in your database (using the OnStart and OnEnd hooks).
29,130,635
I read [this](https://stackoverflow.com/questions/5842903/block-tridiagonal-matrix-python), but I wasn't able to create a (N^2 x N^2) - matrix **A** with (N x N) - *matrices* **I** on the lower and upper side-diagonal and **T** on the diagonal. I tried this ``` def prep_matrix(N): I_N = np.identity(N) NV = zeros(N*N - 1) # now I want NV[0]=NV[1]=...=NV[N-1]:=I_N ``` but I have no idea how to fill NV with my matrices. What can I do? I found a lot on how to create tridiagonal matrices with scalars, but not with matrix blocks.
2015/03/18
[ "https://Stackoverflow.com/questions/29130635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2668777/" ]
Dirty, hacky, inefficient solution (assumes use of forms authentication): ``` public void Global_BeginRequest(object sender, EventArgs e) { if( Context.User != null && !String.IsNullOrWhiteSpace(Context.User.Identity.Name) && Context.Session != null && Context.Session["IAMTRACKED"] == null ) { Context.Session["IAMTRACKED"] = new object(); Application.Lock(); Application["UsersLoggedIn"] = (int)(Application["UsersLoggedIn"] ?? 0) + 1; Application.UnLock(); } } ``` At a high level, this works by checking, on every request, whether the user is logged in and, if so, tagging the user as logged in and incrementing the count. This assumes users cannot log out (if they can, you can add a similar test for users who are logged out and tracked). This is a horrible way to solve your problem, but it's a working prototype which demonstrates that your problem is solvable. Note that this understates logins substantially after an application recycle; logins are much longer term than sessions.
That is the problem: you cannot do it using *Session[]* variables. You need to use a database (or a central data source) to store the total number of active users. For example, you can see in your application that when the application starts there is no `Application["UsersOnline"]` variable; you create it at that very instant. That is why, each time the application is started, the variable is initialized with a *new value*; always *1*. You can create a separate table for your application, and inside it create a column to contain the *OnlineUsers* value, which can then be incremented each time the session start event is triggered. ``` public void Session_OnStart() { Application.Lock(); Application["UsersOnline"] = (int)Application["UsersOnline"] + 1; // At this position, execute an SQL command to update the value Application.UnLock(); } ``` Otherwise, for every user the session variable would have a new value, and you would not be able to accomplish this. Session variables were never designed for such a purpose; you can access the variables, but you cannot rely on them for such a task. You can get more guidance about SQL commands in the .NET framework from MSDN's [SqlClient namespace library](https://msdn.microsoft.com/en-us/library/system.data.sqlclient%28v=vs.110%29.aspx).
29,130,635
I read [this](https://stackoverflow.com/questions/5842903/block-tridiagonal-matrix-python), but I wasn't able to create a (N^2 x N^2) - matrix **A** with (N x N) - *matrices* **I** on the lower and upper side-diagonal and **T** on the diagonal. I tried this ``` def prep_matrix(N): I_N = np.identity(N) NV = zeros(N*N - 1) # now I want NV[0]=NV[1]=...=NV[N-1]:=I_N ``` but I have no idea how to fill NV with my matrices. What can I do? I found a lot on how to create tridiagonal matrices with scalars, but not with matrix blocks.
2015/03/18
[ "https://Stackoverflow.com/questions/29130635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2668777/" ]
Dirty, hacky, inefficient solution (assumes use of forms authentication): ``` public void Global_BeginRequest(object sender, EventArgs e) { if( Context.User != null && !String.IsNullOrWhiteSpace(Context.User.Identity.Name) && Context.Session != null && Context.Session["IAMTRACKED"] == null ) { Context.Session["IAMTRACKED"] = new object(); Application.Lock(); Application["UsersLoggedIn"] = (int)(Application["UsersLoggedIn"] ?? 0) + 1; Application.UnLock(); } } ``` At a high level, this works by checking, on every request, whether the user is logged in and, if so, tagging the user as logged in and incrementing the count. This assumes users cannot log out (if they can, you can add a similar test for users who are logged out and tracked). This is a horrible way to solve your problem, but it's a working prototype which demonstrates that your problem is solvable. Note that this understates logins substantially after an application recycle; logins are much longer term than sessions.
Perhaps I am missing something, but why not something like this: ``` public void Session_OnStart() { Application.Lock(); if (Application["UsersOnline"] == null) { Application["UsersOnline"] = 0; } Application["UsersOnline"] = (int)Application["UsersOnline"] + 1; Application.UnLock(); } ```
29,130,635
I read [this](https://stackoverflow.com/questions/5842903/block-tridiagonal-matrix-python), but I wasn't able to create a (N^2 x N^2) - matrix **A** with (N x N) - *matrices* **I** on the lower and upper side-diagonal and **T** on the diagonal. I tried this ``` def prep_matrix(N): I_N = np.identity(N) NV = zeros(N*N - 1) # now I want NV[0]=NV[1]=...=NV[N-1]:=I_N ``` but I have no idea how to fill NV with my matrices. What can I do? I found a lot on how to create tridiagonal matrices with scalars, but not with matrix blocks.
2015/03/18
[ "https://Stackoverflow.com/questions/29130635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2668777/" ]
Dirty, hacky, inefficient solution (assumes use of forms authentication): ``` public void Global_BeginRequest(object sender, EventArgs e) { if( Context.User != null && !String.IsNullOrWhiteSpace(Context.User.Identity.Name) && Context.Session != null && Context.Session["IAMTRACKED"] == null ) { Context.Session["IAMTRACKED"] = new object(); Application.Lock(); Application["UsersLoggedIn"] = (int)(Application["UsersLoggedIn"] ?? 0) + 1; Application.UnLock(); } } ``` At a high level, this works by checking, on every request, whether the user is logged in and, if so, tagging the user as logged in and incrementing the count. This assumes users cannot log out (if they can, you can add a similar test for users who are logged out and tracked). This is a horrible way to solve your problem, but it's a working prototype which demonstrates that your problem is solvable. Note that this understates logins substantially after an application recycle; logins are much longer term than sessions.
Maybe I'm missing something, but is there a reason you don't just want to use something like Google Analytics? Unless you're looking for more queryable data, in which case I'd suggest what others have; store the login count to a data store. Just keep in mind you also have to have something to decrement that counter when the user either logs out or their session times out.
29,130,635
I read [this](https://stackoverflow.com/questions/5842903/block-tridiagonal-matrix-python), but I wasn't able to create a (N^2 x N^2) - matrix **A** with (N x N) - *matrices* **I** on the lower and upper side-diagonal and **T** on the diagonal. I tried this ``` def prep_matrix(N): I_N = np.identity(N) NV = zeros(N*N - 1) # now I want NV[0]=NV[1]=...=NV[N-1]:=I_N ``` but I have no idea how to fill NV with my matrices. What can I do? I found a lot on how to create tridiagonal matrices with scalars, but not with matrix blocks.
2015/03/18
[ "https://Stackoverflow.com/questions/29130635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2668777/" ]
Dirty, hacky, inefficient solution (assumes use of forms authentication): ``` public void Global_BeginRequest(object sender, EventArgs e) { if( Context.User != null && !String.IsNullOrWhiteSpace(Context.User.Identity.Name) && Context.Session != null && Context.Session["IAMTRACKED"] == null ) { Context.Session["IAMTRACKED"] = new object(); Application.Lock(); Application["UsersLoggedIn"] = (int)(Application["UsersLoggedIn"] ?? 0) + 1; Application.UnLock(); } } ``` At a high level, this works by checking, on every request, whether the user is logged in and, if so, tagging the user as logged in and incrementing the count. This assumes users cannot log out (if they can, you can add a similar test for users who are logged out and tracked). This is a horrible way to solve your problem, but it's a working prototype which demonstrates that your problem is solvable. Note that this understates logins substantially after an application recycle; logins are much longer term than sessions.
Try this. It may help you. ``` void Application_Start(object sender, EventArgs e) { Application["cnt"] = 0; Application["onlineusers"] = 0; // Code that runs on application startup } void Session_Start(object sender, EventArgs e) { Application.Lock(); Application["cnt"] = (int)Application["cnt"] + 1; if(Session["username"] != null) { Application["onlineusers"] = (int)Application["onlineusers"] + 1; } else { Application["onlineusers"] = (int)Application["onlineusers"] - 1; } Application.UnLock(); // Code that runs when a new session is started } ``` Now you can display the total number of users (logged in or not): ``` <%=Application["cnt"].ToString()%> ``` and the number of online users: ``` <%=Application["onlineusers"].ToString()%> ```
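The block tridiagonal construction asked about above (qid 29,130,635) can be sketched with NumPy's Kronecker product. This is a hedged sketch, not code from the thread: the function name `prep_matrix` and the diagonal block `T` follow the question, while the `np.kron`-based approach is an assumption.

```python
import numpy as np

def prep_matrix(T, N):
    """Build an (N*N, N*N) block tridiagonal matrix with T on the main
    block diagonal and identity blocks on the sub/super block diagonals."""
    I_N = np.identity(N)
    # np.eye(..., k=1) / k=-1 mark the super/sub block-diagonal slots;
    # np.kron expands each marked slot into an (N x N) block.
    return (np.kron(np.eye(N), T)
            + np.kron(np.eye(N, k=1), I_N)
            + np.kron(np.eye(N, k=-1), I_N))

T = 4 * np.identity(3)   # example diagonal block
A = prep_matrix(T, 3)
print(A.shape)           # (9, 9)
```

Here `A[0:3, 0:3]` equals `T` and `A[0:3, 3:6]` is the identity block on the upper block diagonal.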
25,677,031
I want Rails to automatically translate placeholder text like it does with form labels. How can I do this? Form labels are translated automatically like this: ``` = f.text_field :first_name ``` This helper uses the locale file: ``` en: active_model: models: user: attributes: first_name: Your name ``` Which outputs this HTML ``` <label for="first_name">Your name</label> ``` How can I make it so the placeholder is translated? Do I have to type the full scope like this: ``` = f.text_field :first_name, placeholder: t('.first_name', scope: 'active_model.models.user.attributes.first_name') ``` Is there an easier way?
2014/09/05
[ "https://Stackoverflow.com/questions/25677031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2615384/" ]
If using Rails 4.2, you can set the placeholder attribute to true: ``` = f.text_field :first_name, placeholder: true ``` and specify the placeholder text in the locale file like this: ``` en: helpers: placeholder: user: first_name: "Your name" ```
You can view the source on render at <http://rubydoc.info/docs/rails/ActionView/Helpers/Tags/Label> to see how Rails does it. It probably doesn't get a lot better than you have, but you could probably swipe some of Rails' logic and stick it in a helper, if you have a lot of them to do. Alternatively, you may consider using a custom form builder to remove some of the repetition in your whole form, not just placeholders.
25,677,031
I want Rails to automatically translate placeholder text like it does with form labels. How can I do this? Form labels are translated automatically like this: ``` = f.text_field :first_name ``` This helper uses the locale file: ``` en: active_model: models: user: attributes: first_name: Your name ``` Which outputs this HTML ``` <label for="first_name">Your name</label> ``` How can I make it so the placeholder is translated? Do I have to type the full scope like this: ``` = f.text_field :first_name, placeholder: t('.first_name', scope: 'active_model.models.user.attributes.first_name') ``` Is there an easier way?
2014/09/05
[ "https://Stackoverflow.com/questions/25677031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2615384/" ]
With Rails >= 4.2, you can set the placeholder attribute to true `= f.text_field :first_name, placeholder: true` and in your locale file (e.g. en.yml): ``` en: activerecord: attributes: user: first_name: Your name ``` otherwise (Rails >= 3.0) I think you can write something like this: ``` = f.text_field :attr, placeholder: "#{I18n.t 'activerecord.attributes.user.first_name'}" ```
You can view the source on render at <http://rubydoc.info/docs/rails/ActionView/Helpers/Tags/Label> to see how Rails does it. It probably doesn't get a lot better than you have, but you could probably swipe some of Rails' logic and stick it in a helper, if you have a lot of them to do. Alternatively, you may consider using a custom form builder to remove some of the repetition in your whole form, not just placeholders.
25,677,031
I want Rails to automatically translate placeholder text like it does with form labels. How can I do this? Form labels are translated automatically like this: ``` = f.text_field :first_name ``` This helper uses the locale file: ``` en: active_model: models: user: attributes: first_name: Your name ``` Which outputs this HTML ``` <label for="first_name">Your name</label> ``` How can I make it so the placeholder is translated? Do I have to type the full scope like this: ``` = f.text_field :first_name, placeholder: t('.first_name', scope: 'active_model.models.user.attributes.first_name') ``` Is there an easier way?
2014/09/05
[ "https://Stackoverflow.com/questions/25677031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2615384/" ]
If using Rails 4.2, you can set the placeholder attribute to true: ``` = f.text_field :first_name, placeholder: true ``` and specify the placeholder text in the locale file like this: ``` en: helpers: placeholder: user: first_name: "Your name" ```
With Rails >= 4.2, you can set the placeholder attribute to true `= f.text_field :first_name, placeholder: true` and in your local file (e.g. en.yml): ``` ru: activerecord: attributes: user: first_name: Your name ``` otherwise (Rails >= 3.0) I think you can write something like this: ``` = f.text_field :attr, placeholder: "#{I18n.t 'activerecord.attributes.user.first_name'}" ```
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
You can open the background page's console if you click on the "background.html" link in the extensions list. To access the background page that corresponds to your extensions open `Settings / Extensions` or open a new tab and enter `chrome://extensions`. You will see something like this screenshot. ![Chrome extensions dialogue](https://i.stack.imgur.com/Xulbx.png) Under your extension click on the link `background page`. This opens a new window. For the **[context menu sample](https://developer.chrome.com/extensions/samples#context-menus-sample)** the window has the title: `_generated_background_page.html`.
To answer your question directly: when you call `console.log("something")` from the background, this message is logged to the background page's console. To view it, you may go to `chrome://extensions/` and click `inspect view` under your extension. When you click the popup, it's loaded into the current page, so the `console.log` should show the log message in the current page.
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
The simplest solution would be to add the following code at the top of the file. Then you can use the full [Chrome console api](https://developers.google.com/chrome-developer-tools/docs/console-api) as you would normally. ``` console = chrome.extension.getBackgroundPage().console; // for instance, console.assert(1!=1) will return assertion error // console.log("msg") ==> prints msg // etc ```
In relation to the original question I'd like to add to the accepted answer by Mohamed Mansour that there is also a way to make this work the other way around: You can access other extension pages (i.e. options page, popup page) from *within the background page/script* with the `chrome.extension.getViews()` call. As described [here](https://developer.chrome.com/extensions/background_pages). ``` // overwrite the console object with the right one. var optionsPage = ( chrome.extension.getViews() && (chrome.extension.getViews().length > 1) ) ? chrome.extension.getViews()[1] : null; // safety precaution. if (optionsPage) { var console = optionsPage.console; } ```
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
You can still use console.log(), but it gets logged into a separate console. In order to view it - right click on the extension icon and select "Inspect popup".
To view the console while debugging your Chrome extension, you should use the `chrome.extension.getBackgroundPage();` API; after that you can use `console.log()` as usual: ``` chrome.extension.getBackgroundPage().console.log('Testing'); ``` This is handy when you use it multiple times, so you can create a custom function: ``` const console = { log: (info) => chrome.extension.getBackgroundPage().console.log(info), }; console.log("foo"); ``` Then you can just use `console.log('learnin')` everywhere.
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
Currently, with Manifest V3 and service workers, you just need to go to `Extensions Page / Details` and click `Inspect Views / Service Worker`.
To view the console while debugging your Chrome extension, you should use the `chrome.extension.getBackgroundPage();` API; after that you can use `console.log()` as usual: ``` chrome.extension.getBackgroundPage().console.log('Testing'); ``` This is handy when you use it multiple times, so you can create a custom function: ``` const console = { log: (info) => chrome.extension.getBackgroundPage().console.log(info), }; console.log("foo"); ``` Then you can just use `console.log('learnin')` everywhere.
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
``` const log = chrome.extension.getBackgroundPage().console.log; log('something') ``` Open log: * Open: chrome://extensions/ * Details > Background page
Currently, with Manifest V3 and service workers, you just need to go to `Extensions Page / Details` and click `Inspect Views / Service Worker`.
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
You can open the background page's console if you click on the "background.html" link in the extensions list. To access the background page that corresponds to your extensions open `Settings / Extensions` or open a new tab and enter `chrome://extensions`. You will see something like this screenshot. ![Chrome extensions dialogue](https://i.stack.imgur.com/Xulbx.png) Under your extension click on the link `background page`. This opens a new window. For the **[context menu sample](https://developer.chrome.com/extensions/samples#context-menus-sample)** the window has the title: `_generated_background_page.html`.
You can still use console.log(), but it gets logged into a separate console. In order to view it - right click on the extension icon and select "Inspect popup".
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
Any *extension page* (except [content scripts](http://developer.chrome.com/extensions/content_scripts.html)) has direct access to the background page via [`chrome.extension.getBackgroundPage()`](http://developer.chrome.com/extensions/extension.html#method-getBackgroundPage). That means, within the [popup page](http://developer.chrome.com/extensions/browserAction.html), you can just do: ``` chrome.extension.getBackgroundPage().console.log('foo'); ``` To make it easier to use: ``` var bkg = chrome.extension.getBackgroundPage(); bkg.console.log('foo'); ``` Now if you want to do the same within [content scripts](http://developer.chrome.com/extensions/content_scripts.html) you have to use [Message Passing](http://developer.chrome.com/extensions/messaging.html) to achieve that. The reason is that they belong to different domains, which makes sense. There are many examples in the [Message Passing](http://developer.chrome.com/extensions/messaging.html) page for you to check out. Hope that clears everything up.
To get a console log from a background page, you need to write the following code snippet in your background page (background.js): ``` chrome.extension.getBackgroundPage().console.log('hello'); ``` Then load the extension and inspect its background page to see the console log. Go ahead!!
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
The simplest solution would be to add the following code at the top of the file. Then you can use the full [Chrome console api](https://developers.google.com/chrome-developer-tools/docs/console-api) as you would normally. ``` console = chrome.extension.getBackgroundPage().console; // for instance, console.assert(1!=1) will return assertion error // console.log("msg") ==> prints msg // etc ```
With `chrome.extension.getBackgroundPage()` I get `null`, and according to the [documentation](https://developer.chrome.com/docs/extensions/mv3/mv3-migration-checklist/), > > Background pages are replaced by service workers in MV3. > > > * Replace `background.page` or `background.scripts` with `background.service_worker` in manifest.json. Note that the service\_worker field takes a string, not an array of strings. > > > manifest.json ```json { "manifest_version": 3, "name": "", "version": "", "background": { "service_worker": "background.js" } } ``` Anyway, I don't know how to use `getBackgroundPage`, but I found another solution, as below. Solution -------- Use [chrome.scripting.executeScript](https://developer.chrome.com/docs/extensions/reference/scripting/#runtime-functions), so you can inject any script or file. You can directly click inspect (F12) and debug the function. For example: ```js chrome.commands.onCommand.addListener((cmdName) => { switch (cmdName) { case "show-alert": chrome.storage.sync.set({msg: cmdName}) // You can not get the context in the function, so use the Storage API to help you. // https://developer.chrome.com/docs/extensions/reference/storage/ chrome.tabs.query({active: true, currentWindow: true}).then(([tab])=>{ chrome.scripting.executeScript({ target: {tabId: tab.id}, function: () => { chrome.storage.sync.get(['msg'], ({msg})=> { console.log(`${msg}`) alert(`Command: ${msg}`) }) } }) }) break default: alert(`Unknown Command: ${cmdName}`) } }) ``` I created an [open-source example](https://github.com/CarsonSlovoka/chrome/tree/f9d37b5/tutorials/extensions/console-alert) for your reference.
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
``` const log = chrome.extension.getBackgroundPage().console.log; log('something') ``` Open log: * Open: chrome://extensions/ * Details > Background page
In relation to the original question I'd like to add to the accepted answer by Mohamed Mansour that there is also a way to make this work the other way around: You can access other extension pages (i.e. options page, popup page) from *within the background page/script* with the `chrome.extension.getViews()` call. As described [here](https://developer.chrome.com/extensions/background_pages). ``` // overwrite the console object with the right one. var optionsPage = ( chrome.extension.getViews() && (chrome.extension.getViews().length > 1) ) ? chrome.extension.getViews()[1] : null; // safety precaution. if (optionsPage) { var console = optionsPage.console; } ```
3,829,150
If I call `console.log('something');` from the popup page, or any script included off that, it works fine. However, as the background page is not directly run off the popup page, it is not included in the console. Is there a way that I can get `console.log()` calls in the background page to show up in the console for the popup page? Is there any way to call a function in the popup page from the background page?
2010/09/30
[ "https://Stackoverflow.com/questions/3829150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383759/" ]
Try this, if you want to log to the active page's console: ``` chrome.tabs.executeScript({ code: 'console.log("addd")' }); ```
To view the console while debugging your Chrome extension, you should use the `chrome.extension.getBackgroundPage();` API; after that you can use `console.log()` as usual: ``` chrome.extension.getBackgroundPage().console.log('Testing'); ``` This is handy when you use it multiple times, so you can create a custom function: ``` const console = { log: (info) => chrome.extension.getBackgroundPage().console.log(info), }; console.log("foo"); ``` Then you can just use `console.log('learnin')` everywhere.
66,233,268
I am working on a project where we frequently work with a list of usernames. We also have a function to take a username and return a dataframe with that user's data. E.g. ``` users = c("bob", "john", "michael") get_data_for_user = function(user) { data.frame(user=user, data=sample(10)) } ``` We often: 1. Iterate over each element of `users` 2. Call `get_data_for_user` to get their data 3. `rbind` the results into a single dataframe I am currently doing this in a purely imperative way: ``` ret = get_data_for_user(users[1]) for (i in 2:length(users)) { ret = rbind(ret, get_data_for_user(users[i])) } ``` This works, but my impression is that all the cool kids are now using libraries like `purrr` to do this in a single line. I am fairly new to `purrr`, and the closest I can see is using `map_df` to convert the vector of usernames to a vector of dataframes. I.e. ``` dfs = map_df(users, get_data_for_user) ``` That is, it seems like I would still be on the hook for writing a loop to do the `rbind`. I'd like to clarify whether my solution (which works) is currently considered best practice in R / amongst users of the tidyverse. Thanks.
2021/02/16
[ "https://Stackoverflow.com/questions/66233268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518602/" ]
In steps. First we do a rolling `diff` on your index, anything that is greater than 1 we code as True, we then apply a `cumsum` to create a new group per sequence. ``` 45 0 46 0 47 0 51 1 52 1 ``` --- Next, we use the `groupby` method with the new sequences to create your nested list inside a list comprehension. Setup ------ ``` df = pd.DataFrame([1,2,3,4,5],columns=['A'],index=[45,46, 47, 51, 52]) A 45 1 46 2 47 3 51 4 52 5 ``` --- ``` df['grp'] = df.assign(idx=df.index)['idx'].diff().fillna(1).ne(1).cumsum() idx = [i.index.tolist() for _,i in df.groupby('grp')] [[45, 46, 47], [51, 52]] ```
The issue is with this line ``` sequences[seq] = [index] ``` You are trying to assign to a list index that has not been created yet. Instead, do this: ``` sequences.append([index]) ```
66,233,268
I am working on a project where we frequently work with a list of usernames. We also have a function to take a username and return a dataframe with that user's data. E.g. ``` users = c("bob", "john", "michael") get_data_for_user = function(user) { data.frame(user=user, data=sample(10)) } ``` We often: 1. Iterate over each element of `users` 2. Call `get_data_for_user` to get their data 3. `rbind` the results into a single dataframe I am currently doing this in a purely imperative way: ``` ret = get_data_for_user(users[1]) for (i in 2:length(users)) { ret = rbind(ret, get_data_for_user(users[i])) } ``` This works, but my impression is that all the cool kids are now using libraries like `purrr` to do this in a single line. I am fairly new to `purrr`, and the closest I can see is using `map_df` to convert the vector of usernames to a vector of dataframes. I.e. ``` dfs = map_df(users, get_data_for_user) ``` That is, it seems like I would still be on the hook for writing a loop to do the `rbind`. I'd like to clarify whether my solution (which works) is currently considered best practice in R / amongst users of the tidyverse. Thanks.
2021/02/16
[ "https://Stackoverflow.com/questions/66233268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518602/" ]
You can use this: ``` s_index=df.index.to_series() l = s_index.groupby(s_index.diff().ne(1).cumsum()).agg(list).to_numpy() ``` Output: ``` l[0] [45, 46, 47] ``` and ``` l[1] [51, 52] ```
The issue is with this line ``` sequences[seq] = [index] ``` You are trying to assign to a list index that has not been created yet. Instead, do this: ``` sequences.append([index]) ```
66,233,268
I am working on a project where we frequently work with a list of usernames. We also have a function to take a username and return a dataframe with that user's data. E.g. ``` users = c("bob", "john", "michael") get_data_for_user = function(user) { data.frame(user=user, data=sample(10)) } ``` We often: 1. Iterate over each element of `users` 2. Call `get_data_for_user` to get their data 3. `rbind` the results into a single dataframe I am currently doing this in a purely imperative way: ``` ret = get_data_for_user(users[1]) for (i in 2:length(users)) { ret = rbind(ret, get_data_for_user(users[i])) } ``` This works, but my impression is that all the cool kids are now using libraries like `purrr` to do this in a single line. I am fairly new to `purrr`, and the closest I can see is using `map_df` to convert the vector of usernames to a vector of dataframes. I.e. ``` dfs = map_df(users, get_data_for_user) ``` That is, it seems like I would still be on the hook for writing a loop to do the `rbind`. I'd like to clarify whether my solution (which works) is currently considered best practice in R / amongst users of the tidyverse. Thanks.
2021/02/16
[ "https://Stackoverflow.com/questions/66233268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518602/" ]
The issue is with this line ``` sequences[seq] = [index] ``` You are trying to assign to a list index that has not been created yet. Instead, do this: ``` sequences.append([index]) ```
I use the diff to find where the index value changes by more than 1. I iterate the tuples and access their values by position. ``` index=[45,46,47,51,52] price=[3909.0,3908.75,3908.50,3907.75,3907.5] count=[8,8,8,8,8] df=pd.DataFrame({'index':index,'price':price,'count':count}) df['diff']=df['index'].diff().fillna(0) print(df) result_list=[[]] seq=0 for row in df.itertuples(): index=row[1] diff=row[4] if diff<=1: result_list[seq].append(index) else: seq+=1 result_list.append([index]) print(result_list) output: [[45, 46, 47], [51, 52]] ```
66,233,268
I am working on a project where we frequently work with a list of usernames. We also have a function to take a username and return a dataframe with that user's data. E.g. ``` users = c("bob", "john", "michael") get_data_for_user = function(user) { data.frame(user=user, data=sample(10)) } ``` We often: 1. Iterate over each element of `users` 2. Call `get_data_for_user` to get their data 3. `rbind` the results into a single dataframe I am currently doing this in a purely imperative way: ``` ret = get_data_for_user(users[1]) for (i in 2:length(users)) { ret = rbind(ret, get_data_for_user(users[i])) } ``` This works, but my impression is that all the cool kids are now using libraries like `purrr` to do this in a single line. I am fairly new to `purrr`, and the closest I can see is using `map_df` to convert the vector of usernames to a vector of dataframes. I.e. ``` dfs = map_df(users, get_data_for_user) ``` That is, it seems like I would still be on the hook for writing a loop to do the `rbind`. I'd like to clarify whether my solution (which works) is currently considered best practice in R / amongst users of the tidyverse. Thanks.
2021/02/16
[ "https://Stackoverflow.com/questions/66233268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518602/" ]
You can use this: ``` s_index=df.index.to_series() l = s_index.groupby(s_index.diff().ne(1).cumsum()).agg(list).to_numpy() ``` Output: ``` l[0] [45, 46, 47] ``` and ``` l[1] [51, 52] ```
In steps. First we do a rolling `diff` on your index, anything that is greater than 1 we code as True, we then apply a `cumsum` to create a new group per sequence. ``` 45 0 46 0 47 0 51 1 52 1 ``` --- Next, we use the `groupby` method with the new sequences to create your nested list inside a list comprehension. Setup ------ ``` df = pd.DataFrame([1,2,3,4,5],columns=['A'],index=[45,46, 47, 51, 52]) A 45 1 46 2 47 3 51 4 52 5 ``` --- ``` df['grp'] = df.assign(idx=df.index)['idx'].diff().fillna(1).ne(1).cumsum() idx = [i.index.tolist() for _,i in df.groupby('grp')] [[45, 46, 47], [51, 52]] ```
66,233,268
I am working on a project where we frequently work with a list of usernames. We also have a function to take a username and return a dataframe with that user's data. E.g. ``` users = c("bob", "john", "michael") get_data_for_user = function(user) { data.frame(user=user, data=sample(10)) } ``` We often: 1. Iterate over each element of `users` 2. Call `get_data_for_user` to get their data 3. `rbind` the results into a single dataframe I am currently doing this in a purely imperative way: ``` ret = get_data_for_user(users[1]) for (i in 2:length(users)) { ret = rbind(ret, get_data_for_user(users[i])) } ``` This works, but my impression is that all the cool kids are now using libraries like `purrr` to do this in a single line. I am fairly new to `purrr`, and the closest I can see is using `map_df` to convert the vector of usernames to a vector of dataframes. I.e. ``` dfs = map_df(users, get_data_for_user) ``` That is, it seems like I would still be on the hook for writing a loop to do the `rbind`. I'd like to clarify whether my solution (which works) is currently considered best practice in R / amongst users of the tidyverse. Thanks.
2021/02/16
[ "https://Stackoverflow.com/questions/66233268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518602/" ]
In steps. First we do a rolling `diff` on your index, anything that is greater than 1 we code as True, we then apply a `cumsum` to create a new group per sequence. ``` 45 0 46 0 47 0 51 1 52 1 ``` --- Next, we use the `groupby` method with the new sequences to create your nested list inside a list comprehension. Setup ------ ``` df = pd.DataFrame([1,2,3,4,5],columns=['A'],index=[45,46, 47, 51, 52]) A 45 1 46 2 47 3 51 4 52 5 ``` --- ``` df['grp'] = df.assign(idx=df.index)['idx'].diff().fillna(1).ne(1).cumsum() idx = [i.index.tolist() for _,i in df.groupby('grp')] [[45, 46, 47], [51, 52]] ```
I use the diff to find where the index value changes by more than 1. I iterate the tuples and access their values by position. ``` index=[45,46,47,51,52] price=[3909.0,3908.75,3908.50,3907.75,3907.5] count=[8,8,8,8,8] df=pd.DataFrame({'index':index,'price':price,'count':count}) df['diff']=df['index'].diff().fillna(0) print(df) result_list=[[]] seq=0 for row in df.itertuples(): index=row[1] diff=row[4] if diff<=1: result_list[seq].append(index) else: seq+=1 result_list.append([index]) print(result_list) output: [[45, 46, 47], [51, 52]] ```
66,233,268
I am working on a project where we frequently work with a list of usernames. We also have a function to take a username and return a dataframe with that user's data. E.g. ``` users = c("bob", "john", "michael") get_data_for_user = function(user) { data.frame(user=user, data=sample(10)) } ``` We often: 1. Iterate over each element of `users` 2. Call `get_data_for_user` to get their data 3. `rbind` the results into a single dataframe I am currently doing this in a purely imperative way: ``` ret = get_data_for_user(users[1]) for (i in 2:length(users)) { ret = rbind(ret, get_data_for_user(users[i])) } ``` This works, but my impression is that all the cool kids are now using libraries like `purrr` to do this in a single line. I am fairly new to `purrr`, and the closest I can see is using `map_df` to convert the vector of usernames to a vector of dataframes. I.e. ``` dfs = map_df(users, get_data_for_user) ``` That is, it seems like I would still be on the hook for writing a loop to do the `rbind`. I'd like to clarify whether my solution (which works) is currently considered best practice in R / amongst users of the tidyverse. Thanks.
2021/02/16
[ "https://Stackoverflow.com/questions/66233268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518602/" ]
You can use this: ``` s_index=df.index.to_series() l = s_index.groupby(s_index.diff().ne(1).cumsum()).agg(list).to_numpy() ``` Output: ``` l[0] [45, 46, 47] ``` and ``` l[1] [51, 52] ```
I use the diff to find where the index value changes by more than 1. I iterate the tuples and access their values by position. ``` index=[45,46,47,51,52] price=[3909.0,3908.75,3908.50,3907.75,3907.5] count=[8,8,8,8,8] df=pd.DataFrame({'index':index,'price':price,'count':count}) df['diff']=df['index'].diff().fillna(0) print(df) result_list=[[]] seq=0 for row in df.itertuples(): index=row[1] diff=row[4] if diff<=1: result_list[seq].append(index) else: seq+=1 result_list.append([index]) print(result_list) output: [[45, 46, 47], [51, 52]] ```
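The consecutive-index grouping shown in the answers above (qid 66,233,268) can be checked end to end with a small self-contained sketch. The sample index values follow the answers; the `price` column is a placeholder, since only the index matters here.

```python
import pandas as pd

# Frame whose index has two runs of consecutive values: 45-47 and 51-52.
df = pd.DataFrame({"price": [3909.0, 3908.75, 3908.5, 3907.75, 3907.5]},
                  index=[45, 46, 47, 51, 52])

s_index = df.index.to_series()
# diff().ne(1) is True at each break between consecutive index values;
# cumsum() turns those breaks into distinct group labels.
groups = s_index.groupby(s_index.diff().ne(1).cumsum()).agg(list)
print(list(groups))  # [[45, 46, 47], [51, 52]]
```

The same labels could also feed `df.groupby(...)` to split the rows themselves rather than just the index values.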
6,677,308
I'm using jQuery Treeview. Is there a way to populate a child node in a specific parent on an onclick event? Please give me some advice or a simple sample code to do this.
2011/07/13
[ "https://Stackoverflow.com/questions/6677308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/768789/" ]
You can trigger the change event yourself in the `inputvalue` function: ``` function inputvalue(){ $("#inputid").val("bla").change(); } ``` Also notice the correction of your syntax... in jQuery, `val` is a function that takes a string as a parameter. You can't assign to it as you are doing. Here is an [example fiddle](http://jsfiddle.net/sm4cD/) showing the above in action.
Your function has an error. Try this: ``` function inputvalue(){ $("#inputid").val()="bla" } ``` EDIT My function is also wrong, as pointed out in the comments. The correct way is: Using pure javascript ``` function inputvalue(){ document.getElementById("inputid").value ="bla"; } ``` Using jQuery ``` function inputvalue(){ $("#inputid").val("bla"); } ```
24,010
UPDATE Have included an image. As you can see, the LED is ON when the base is floating. This is a 2N222A transistor. ![enter image description here](https://i.stack.imgur.com/0x1RH.jpg) ![enter image description here](https://i.stack.imgur.com/C2V28.jpg) --- Playing with an NPN bipolar transistor. The Collector is connected to the positive terminal of a 9V battery through a 1k Ohm resistor, and the Emitter is connected to the ground through an LED. The Base is not connected to anything. The LED seems to be dim in the above case. When I connect the Base to the positive terminal, the LED is much brighter. That makes sense, as current through the Base amplifies the current. My question is: **should any current flow through the emitter if the base is not connected to anything? I.e. shouldn't the LED be completely off?** I have a similar question for NPN Unijunction transistors (understand that nomenclature changes from CBE to AGC)?
2011/12/21
[ "https://electronics.stackexchange.com/questions/24010", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4336/" ]
Okay, looking at the picture I think you may have the transistor the wrong way round. Try turning it round. See this picture for reference: ![enter image description here](https://i.stack.imgur.com/tDQzJ.png) As you can see the collector is on the right with the flat part facing you, so you have the collector connected to the LED in your circuit (if the 2N2222A part you are using has the same pinout) I got the picture from [here](http://www.fairchildsemi.com/ds/PN/PN2222A.pdf). **EDIT** - It's actually a 2N222A, but the above advice still goes as the pinout appears to be the same from the picture posted. As Russell mentions the more standard way is to connect the LED to the collector, but your circuit should work if set up correctly.
Please answer: * What colour LED are you using? * What is your transistor type? --- You should look at some of the 10's of thousands of diagrams available on the net before connecting a transistor to try to do this job and/or look at the transistor's data sheet. All transistors have a maximum Vbe rating and you have probably exceeded yours quite substantially. Your transistor MAY be OK but may be damaged. You MAY have been saved by your interesting emitter follower style circuit. As a starting point always drive the base through a resistor of 1k to 10k. 1K for low voltages (2-5) and 10k or so for larger voltages (5-30). None of that is ideal but it will keep your transistor alive and your LED lit in most cases. Connect a 100k from base to emitter. This passes the small CB leakage current that exists when the base is open and stops it driving the transistor on partially and dimly lighting your LED. Your circuit with the LED in the emitter has its uses, but more usual and useful is the circuit below. R1 is not needed if you are driving R2 with a source that always has a low impedance, such as a microcontroller pin in normal output mode (active high and active low drive). Transistor type is your choice. LED current is ~= (Vsupply - VLED\_on) / R4. VLED\_on from the data sheet or elsewhere. For red LEDs ~= 2V. White and blue LEDs typically 3V - 3.5V. So here with Vsupply = 5V * LED current is ~= (Vsupply - VLED\_on) / R4 ~= (5 - 3.3) / 1000 = 0.0017 A = 1.7 mA This is shown being driven by a relay (high on / low off) but any voltage that switches between low ~= 0V and 2V <= high <= ~= 12V is OK. For Vin high > 12 V increase R2. --- Suggestion: Experiment with values of R2, all else being the same, and see what happens. Never have R2 < about 500 ohms. R2 can be as large as you like but the LED will stop working when R2 is above **about** 470k to 1 megohm. 
![enter image description here](https://i.stack.imgur.com/z3bUo.jpg) --- Recommendation: The BC337-40 is my favorite leaded "jellybean" bipolar NPN transistor. If you can ever buy some of these at a good price, do. Digikey has them at 58 cents in 1's, 40c/10, 18c/100, 7c/1000, 4.5 cents/ 10k. --- This is a BAD circuit BUT if you add 100K as shown the LED should turn off. NOW connect 10k from base to V+ and see what happens. NEVER connect the base directly to V+ or to any "stiff" voltage source that may cause very high base currents to flow. What is your transistor type? ![enter image description here](https://i.stack.imgur.com/LMPaH.jpg)
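The series-resistor rule of thumb used above is easy to check numerically. Here is a small sketch of that arithmetic; the component values are illustrative assumptions taken from the discussion, not measurements:

```python
# Rough series-resistor LED arithmetic: I = (Vsupply - Vled_on) / R.
# All values below are illustrative assumptions, not measured ones.

def led_current(v_supply, v_led_on, r_ohms):
    """Approximate LED current (in amps) through a simple series resistor."""
    return (v_supply - v_led_on) / r_ohms

# Red LED (~2 V drop) from the question's 9 V battery through 1 kOhm:
print(led_current(9.0, 2.0, 1000))   # 0.007 A = 7 mA

# White/blue LED (~3.3 V drop) from a 5 V supply through 1 kOhm:
print(led_current(5.0, 3.3, 1000))   # ~0.0017 A = 1.7 mA
```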
23,238,724
I have a layout as follows for mobile .. ``` +------------------------+ | (col-md-6) Div 1 | +------------------------+ | (col-md-6) Div 2 | +------------------------+ | (col-md-6) Div 3 | +------------------------+ | (col-md-6) Div 4 | +------------------------+ | (col-md-6) Div 5 | +------------------------+ | (col-md-6) Div 6 | +------------------------+ | (col-md-6) Div 7 | +------------------------+ ``` When the screen widens or goes on tablet the layout changes as expected to ... ``` +------------------------+------------------------+ | (col-md-6) Div 1 | (col-md-6) Div 2 | +------------------------+------------------------+ | (col-md-6) Div 3 | (col-md-6) Div 4 | +------------------------+------------------------+ | (col-md-6) Div 5 | (col-md-6) Div 6 | +------------------------+------------------------+ | (col-md-6) Div 7 | +------------------------+ ``` But I would like the layout to look like .. ``` +------------------------+------------------------+ | (col-md-6) Div 1 | (col-md-6) Div 5 | +------------------------+------------------------+ | (col-md-6) Div 2 | (col-md-6) Div 6 | +------------------------+------------------------+ | (col-md-6) Div 3 | (col-md-6) Div 7 | +------------------------+------------------------+ | (col-md-6) Div 4 | +------------------------+ ``` Is this possible?
2014/04/23
[ "https://Stackoverflow.com/questions/23238724", "https://Stackoverflow.com", "https://Stackoverflow.com/users/505055/" ]
The problem is not specific to data.frame; it is simply that you cannot have objects of class numeric and objects of class character in the same vector. It is NOT possible. The person who started the project before you should not have used the string "Error" to indicate missing data. Instead, you should use NA: ``` x=c(1,2) y=c("Error","Error") c(x,y) # Here the result is coerced to character automatically by R. There is no way to avoid that. ``` Instead you should use ``` c(x,NA) # NA is accepted in a vector of numeric ``` **Note:** you should think of a data.frame as a list of vectors which are the columns of the data.frame. Hence if you have 2 columns, *each column is an independent vector*, and it is possible to have a different class per column: ``` x <- c(1,2) y <- c("Error","Error") df=data.frame(x=x,y=y,stringsAsFactors=FALSE) class(df$x) class(df$y) ``` Now if you try to transpose the data.frame, of course the new column vectors will become c(1,"Error") and c(2,"Error"), which will be coerced to character as we have seen before. ``` t(df) ```
You could do this: ``` x <- 1 y <- c("Error","Error") df <- data.frame(c(list(), x, y), stringsAsFactors = FALSE) > str(df) 'data.frame': 1 obs. of 3 variables: $ X1 : num 1 $ X.Error. : chr "Error" $ X.Error..1: chr "Error" ``` You just have to set proper column names.
74,081,278
I have a dataframe which looks like this: ``` val "ID: abc\nName: John\nLast name: Johnson\nAge: 27" "ID: igb1\nName: Mike\nLast name: Jackson\nPosition: CEO\nAge: 42" ... ``` I would like to extract Name, Position and Age from those values and turn them into separate columns to get a dataframe which looks like this: ``` Name Position Age John NaN 27 Mike CEO 42 ``` How could I do that?
2022/10/15
[ "https://Stackoverflow.com/questions/74081278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17378883/" ]
As mentioned in the comment, `True/False` values are also the instances of `int` in Python, so you can add one more condition to check if the value is not an instance of `bool`: ```py >>> lst = [True, 19, 19.5, False] >>> [x for x in lst if isinstance(x, int) and not isinstance(x, bool)] [19] ```
`bool` is a subclass of `int`, therefore `True` is an instance of `int`. * `False` is equal to `0` * `True` is equal to `1` You can use another check, or `if type(i) is int`.
74,081,278
I have a dataframe which looks like this: ``` val "ID: abc\nName: John\nLast name: Johnson\nAge: 27" "ID: igb1\nName: Mike\nLast name: Jackson\nPosition: CEO\nAge: 42" ... ``` I would like to extract Name, Position and Age from those values and turn them into separate columns to get a dataframe which looks like this: ``` Name Position Age John NaN 27 Mike CEO 42 ``` How could I do that?
2022/10/15
[ "https://Stackoverflow.com/questions/74081278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17378883/" ]
As mentioned in the comment, `True/False` values are also the instances of `int` in Python, so you can add one more condition to check if the value is not an instance of `bool`: ```py >>> lst = [True, 19, 19.5, False] >>> [x for x in lst if isinstance(x, int) and not isinstance(x, bool)] [19] ```
Use `type` comparison instead: ``` _list = [i for i in _list if type(i) is int] ``` Side note: avoid using `list` as the variable name since it's the Python builtin name for the [`list`](https://docs.python.org/3/library/stdtypes.html#list) type.
74,081,278
I have a dataframe which looks like this: ``` val "ID: abc\nName: John\nLast name: Johnson\nAge: 27" "ID: igb1\nName: Mike\nLast name: Jackson\nPosition: CEO\nAge: 42" ... ``` I would like to extract Name, Position and Age from those values and turn them into separate columns to get a dataframe which looks like this: ``` Name Position Age John NaN 27 Mike CEO 42 ``` How could I do that?
2022/10/15
[ "https://Stackoverflow.com/questions/74081278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17378883/" ]
As mentioned in the comment, `True/False` values are also the instances of `int` in Python, so you can add one more condition to check if the value is not an instance of `bool`: ```py >>> lst = [True, 19, 19.5, False] >>> [x for x in lst if isinstance(x, int) and not isinstance(x, bool)] [19] ```
You could try using a conditional statement within your code that checks for isinstance(i, int) and only appends i to the list if that condition is True.
74,081,278
I have a dataframe which looks like this: ``` val "ID: abc\nName: John\nLast name: Johnson\nAge: 27" "ID: igb1\nName: Mike\nLast name: Jackson\nPosition: CEO\nAge: 42" ... ``` I would like to extract Name, Position and Age from those values and turn them into separate columns to get a dataframe which looks like this: ``` Name Position Age John NaN 27 Mike CEO 42 ``` How could I do that?
2022/10/15
[ "https://Stackoverflow.com/questions/74081278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17378883/" ]
Use `type` comparison instead: ``` _list = [i for i in _list if type(i) is int] ``` Side note: avoid using `list` as the variable name since it's the Python builtin name for the [`list`](https://docs.python.org/3/library/stdtypes.html#list) type.
`bool` is a subclass of `int`, therefore `True` is an instance of `int`. * `False` is equal to `0` * `True` is equal to `1` You can use another check, or `if type(i) is int`.
74,081,278
I have a dataframe which looks like this: ``` val "ID: abc\nName: John\nLast name: Johnson\nAge: 27" "ID: igb1\nName: Mike\nLast name: Jackson\nPosition: CEO\nAge: 42" ... ``` I would like to extract Name, Position and Age from those values and turn them into separate columns to get a dataframe which looks like this: ``` Name Position Age John NaN 27 Mike CEO 42 ``` How could I do that?
2022/10/15
[ "https://Stackoverflow.com/questions/74081278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17378883/" ]
`bool` is a subclass of `int`, therefore `True` is an instance of `int`. * `False` is equal to `0` * `True` is equal to `1` You can use another check, or `if type(i) is int`.
You could try using a conditional statement within your code that checks for isinstance(i, int) and only appends i to the list if that condition is True.
74,081,278
I have a dataframe which looks like this: ``` val "ID: abc\nName: John\nLast name: Johnson\nAge: 27" "ID: igb1\nName: Mike\nLast name: Jackson\nPosition: CEO\nAge: 42" ... ``` I would like to extract Name, Position and Age from those values and turn them into separate columns to get a dataframe which looks like this: ``` Name Position Age John NaN 27 Mike CEO 42 ``` How could I do that?
2022/10/15
[ "https://Stackoverflow.com/questions/74081278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17378883/" ]
Use `type` comparison instead: ``` _list = [i for i in _list if type(i) is int] ``` Side note: avoid using `list` as the variable name since it's the Python builtin name for the [`list`](https://docs.python.org/3/library/stdtypes.html#list) type.
You could try using a conditional statement within your code that checks for isinstance(i, int) and only appends i to the list if that condition is True.
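A minimal sketch of that suggestion, with one hedged caveat: `isinstance(i, int)` alone would also keep `True`/`False`, since `bool` subclasses `int` in Python, so an extra `bool` check is added here (the sample values are made up for illustration):

```python
# Keep only "real" ints: isinstance alone would also keep booleans,
# because bool is a subclass of int in Python.
values = [True, 19, 19.5, False, 7]

only_ints = [i for i in values if isinstance(i, int) and not isinstance(i, bool)]
print(only_ints)  # [19, 7]
```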
25,207,910
I'm adding a new model to rails\_admin. The list page displays datetime fields correctly, even without any configuration. But the detail (show) page for a given object does not display datetimes. How do I configure rails\_admin to show datetime fields on the show page? Model file: alert\_recording.rb: ``` class AlertRecording < ActiveRecord::Base attr_accessible :user_id, :admin_id, :message, :sent_at, :acknowledged_at, :created_at, :updated_at end ``` Rails\_admin initializer file: ``` ... config.included_models = [ AlertRecording ] ... config.model AlertRecording do field :sent_at, :datetime field :acknowledged_at, :datetime field :message field :user field :admin field :created_at, :datetime field :updated_at, :datetime list do; end show do; end end ``` What's the correct way to configure the datetime fields so I see them on the show view?
2014/08/08
[ "https://Stackoverflow.com/questions/25207910", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3923011/" ]
These fields are hidden by default as you can see here: <https://github.com/sferik/rails_admin/blob/ead1775c48754d6a99c25e4d74494a60aee9b4d1/lib/rails_admin/config.rb#L277> You can overwrite this setting in your config initializer: just open the file `config/initializers/rails_admin.rb` and add this line to it: `config.default_hidden_fields = []` or something like this: `config.default_hidden_fields = [:id, :my_super_top_secret_field]` That way you don't need to add a config to every model in your app ;) **BUT!!!** This will show these fields in the *edit* action too, so it's a good idea to hide id, created\_at and updated\_at in this case. To do this you can assign a hash to this setting, like so: ``` config.default_hidden_fields = { show: [], edit: [:id, :created_at, :updated_at] } ``` And *voilà*, you have what you want. ;)
What you have in there for `sent_at` and `acknowledged_at` should work. Make sure the records you are trying to "show" have dates present for these fields. For `created_at` and `updated_at`, try [this](https://github.com/sferik/rails_admin/issues/994): ``` config.model AlertRecording do field :created_at configure :created_at do show end field :updated_at, :datetime configure :updated_at do show end end ```
18,392,750
Trying to work out where I have screwed up with trying to create a countdown timer which displays seconds and milliseconds. The idea is that the timer displays the countdown time to an NSString which updates a UILabel. The code I currently have is ``` -(void)timerRun { if (self.timerPageView.startCountdown) { NSLog(@"%i",self.timerPageView.xtime); self.timerPageView.sec = self.timerPageView.sec - 1; seconds = (self.timerPageView.sec % 60) % 60 ; milliseconds = (self.timerPageView.sec % 60) % 1000; NSString *timerOutput = [NSString stringWithFormat:@"%i:%i", seconds, milliseconds]; self.timerPageView.timerText.text = timerOutput; if (self.timerPageView.resetTimer == YES) { [self setTimer]; } } else { } } -(void)setTimer{ if (self.timerPageView.xtime == 0) { self.timerPageView.xtime = 60000; } self.timerPageView.sec = self.timerPageView.xtime; self.timerPageView.countdownTimer = [NSTimer scheduledTimerWithTimeInterval:0.01 target:self selector:@selector(timerRun) userInfo:Nil repeats:YES]; self.timerPageView.resetTimer = NO; } int seconds; int milliseconds; int minutes; } ``` Anyone got any ideas what I am doing wrong?
2013/08/23
[ "https://Stackoverflow.com/questions/18392750", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2556050/" ]
You have a timer that will execute roughly 100 times per second (interval of 0.01). You decrement a value by `1` each time. Therefore, your `self.timerPageView.sec` variable appears to be hundredths of a second. To get the number of seconds, you need to divide this value by 100. To get the number of milliseconds, you need to multiply by 10 then modulo by 1000. ``` seconds = self.timerPageView.sec / 100; milliseconds = (self.timerPageView.sec * 10) % 1000; ``` Update: Also note that your timer is highly inaccurate. The timer will not repeat EXACTLY every hundredth of a second. It may only run 80 times per second or some other inexact rate. A better approach would be to get the current time at the start. Then inside your `timerRun` method you get the current time again. Subtract the two numbers. This will give the actual elapsed time. Use this instead of decrementing a value each loop.
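The elapsed-time idea described above is language-agnostic; here is a rough illustration of it in Python rather than Objective-C (using a monotonic clock and made-up numbers, as an assumption about how one might structure it):

```python
import time

def remaining_countdown(start, total_seconds, now=None):
    """Remaining time based on actual elapsed time, not on counting ticks.

    This stays correct even if the periodic callback fires at an
    irregular rate, because it measures real elapsed time each call.
    """
    if now is None:
        now = time.monotonic()
    elapsed = now - start
    return max(0.0, total_seconds - elapsed)

# Simulate a 60 s countdown where 12.5 s have actually elapsed,
# regardless of how many timer callbacks fired in that window.
start = 100.0
print(remaining_countdown(start, 60.0, now=112.5))  # 47.5
```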
These calculations look pretty suspect: ``` seconds = (self.timerPageView.sec % 60) % 60 ; milliseconds = (self.timerPageView.sec % 60) % 1000; ``` You are using int type calculations (pretty sketchy in their implementation) on a float value for seconds. ``` NSUInteger seconds = (NSUInteger)(self.timerPageView.sec * 100); //convert to an int NSUInteger milliseconds = (NSUInteger) ((self.timerPageView.sec - seconds)* 1000.); ```
18,392,750
Trying to work out where I have screwed up with trying to create a count down timer which displays seconds and milliseconds. The idea is the timer displays the count down time to an NSString which updates a UILable. The code I currently have is ``` -(void)timerRun { if (self.timerPageView.startCountdown) { NSLog(@"%i",self.timerPageView.xtime); self.timerPageView.sec = self.timerPageView.sec - 1; seconds = (self.timerPageView.sec % 60) % 60 ; milliseconds = (self.timerPageView.sec % 60) % 1000; NSString *timerOutput = [NSString stringWithFormat:@"%i:%i", seconds, milliseconds]; self.timerPageView.timerText.text = timerOutput; if (self.timerPageView.resetTimer == YES) { [self setTimer]; } } else { } } -(void)setTimer{ if (self.timerPageView.xtime == 0) { self.timerPageView.xtime = 60000; } self.timerPageView.sec = self.timerPageView.xtime; self.timerPageView.countdownTimer = [NSTimer scheduledTimerWithTimeInterval:0.01 target:self selector:@selector(timerRun) userInfo:Nil repeats:YES]; self.timerPageView.resetTimer = NO; } int seconds; int milliseconds; int minutes; } ``` Anyone got any ideas what I am doing wrong?
2013/08/23
[ "https://Stackoverflow.com/questions/18392750", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2556050/" ]
You have a timer that will execute roughly 100 times per second (interval of 0.01). You decrement a value by `1` each time. Therefore, your `self.timerPageView.sec` variable appears to be hundredths of a second. To get the number of seconds, you need to divide this value by 100. To get the number of milliseconds, you need to multiply by 10 then modulo by 1000. ``` seconds = self.timerPageView.sec / 100; milliseconds = (self.timerPageView.sec * 10) % 1000; ``` Update: Also note that your timer is highly inaccurate. The timer will not repeat EXACTLY every hundredth of a second. It may only run 80 times per second or some other inexact rate. A better approach would be to get the current time at the start. Then inside your `timerRun` method you get the current time again. Subtract the two numbers. This will give the actual elapsed time. Use this instead of decrementing a value each loop.
You set a time interval of 0.01, which is every 10 milliseconds; 0.001 would be every millisecond. Even so, NSTimer is not that accurate, and you probably won't get it to fire every 1 ms. It is fired from the run loop, so there is latency and jitter.
4,882,465
I am quite a noob when it comes to deploying a Django project. I'd like to know what the various methods to deploy a Django project are, and which one is the most preferred.
2011/02/03
[ "https://Stackoverflow.com/questions/4882465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277848/" ]
Use Nginx/Apache/mod\_wsgi and you can't go wrong. If you prefer a simpler alternative, just use Apache. There is a very good deployment document: <http://lethain.com/entry/2009/feb/13/the-django-and-ubuntu-intrepid-almanac/>
I myself have faced a lot of problems in deploying Django projects and automating the deployment process. Apache and mod\_wsgi were like a curse for Django deployment. There are several tools like [Nginx](http://wiki.nginx.org/Main), [Gunicorn](http://gunicorn.org/), [SupervisorD](http://supervisord.org/) and Fabric which are trending for Django deployment. At first I used/configured them individually without deployment automation, which took a lot of time (I had to maintain testing as well as production servers for my client and had to update them as soon as a new feature was tested and approved), but then I stumbled upon django-fagungis, which totally automates my Django deployment from cloning my project from bitbucket to deploying on my remote server (it uses Nginx, Gunicorn, SupervisorD, Fabric and virtualenv and also installs all the dependencies on the fly), all with just three commands :) You can find more about it in my blog post [**here**](http://alirazabhayani.blogspot.com/2013/02/easy-django-deployment-tools-tutorial-fabric-gunicorn-nginx-supervisor.html). Now I don't even have to get involved in this process (which used to take a lot of my time); one of my junior developers runs those three commands of django-fagungis [mentioned here](http://alirazabhayani.blogspot.com/2013/02/easy-django-deployment-tools-tutorial-fabric-gunicorn-nginx-supervisor.html) on his local machine and we get a crisp new copy of our project deployed in minutes without any hassle :)
4,882,465
I am quite a noob when it comes to deploying a Django project. I'd like to know what the various methods to deploy a Django project are, and which one is the most preferred.
2011/02/03
[ "https://Stackoverflow.com/questions/4882465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277848/" ]
The Django documentation lists Apache/mod\_wsgi, Apache/mod\_python and FastCGI etc. **mod\_python** is deprecated now; one should use mod\_wsgi instead. Django with **mod\_wsgi** is easy to set up, but: * you can only use one python version at a time [edit: you can even only use the python version mod\_wsgi was compiled for] * [edit: seems I'm wrong about mod\_wsgi not supporting virtualenv: it does] So for multiple sites (targeting different django/python versions) on a server, mod\_wsgi is not the best solution. **FastCGI** can be used with virtualenv, also with different python versions, as you run it with ``` ./manage.py runfcgi … ``` and then configure your webserver to use this fcgi interface. The new, hot stuff for django deployment seems to be **gunicorn**. It's a webserver that implements wsgi and is typically used as a backend with a "big" webserver as proxy. Deployment with **gunicorn** feels a lot like fcgi: you run a process doing the django processing stuff with manage.py, and a webserver as frontend to the world. But gunicorn deployment has some advantages over fcgi: * speed - I didn't find the sources, but benchmarks say fcgi is not as fast as the f suggests * config files: for fcgi you must do all configuration on the commandline when executing the manage.py command. This becomes unwieldy when running multiple django instances via an init.d (unix-like OS' system service startup). It's always the same cmdline, with just different configuration files * gunicorn can drop privileges: no need to do this in your init.d script, and it's easy to switch to one user per django instance * gunicorn behaves more like a daemon: writing a pidfile and logfile, forking to the background etc. again makes using it in an init.d script easier. Thus, I would suggest using the gunicorn solution, unless you have a single site on a single server with low traffic; then you could use the wsgi solution. But I think in the long run you'll be happier with gunicorn. 
If you have a django-only webserver, I would suggest using nginx as front-end proxy, as it's the best performing (again, this is based on benchmarks I read in some blog posts - I don't have the URL anymore). Personally I use apache as front-end proxy, as I need it for other sites hosted on the server. A simple setup instruction for django deployment can be found here: <http://ericholscher.com/blog/2010/aug/16/lessons-learned-dash-easy-django-deployment/> My init.d script for gunicorn is located at github: <https://gist.github.com/753053> Unfortunately I did not yet blog about it, but an experienced sysadmin should be able to do the required setup.
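Gunicorn's configuration file is itself plain Python, which is what makes the "config files instead of command lines" point above concrete. A hypothetical sketch follows; the file name, paths, worker count and user are made-up examples, not values taken from this answer:

```python
# gunicorn.conf.py -- illustrative sketch only; paths and user are hypothetical.
# Run with: gunicorn -c gunicorn.conf.py myproject.wsgi

bind = "127.0.0.1:8000"            # the front-end proxy forwards to this address
workers = 3                        # number of worker processes
pidfile = "/tmp/example_django.pid"  # pidfile, handy for init.d scripts
errorlog = "/tmp/example_django.log" # gunicorn writes its own log
daemon = True                      # fork to the background, daemon-style
user = "django-site1"              # drop privileges: one user per site instance
```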
I myself have faced a lot of problems in deploying Django projects and automating the deployment process. Apache and mod\_wsgi were like a curse for Django deployment. There are several tools like [Nginx](http://wiki.nginx.org/Main), [Gunicorn](http://gunicorn.org/), [SupervisorD](http://supervisord.org/) and Fabric which are trending for Django deployment. At first I used/configured them individually without deployment automation, which took a lot of time (I had to maintain testing as well as production servers for my client and had to update them as soon as a new feature was tested and approved), but then I stumbled upon django-fagungis, which totally automates my Django deployment from cloning my project from bitbucket to deploying on my remote server (it uses Nginx, Gunicorn, SupervisorD, Fabric and virtualenv and also installs all the dependencies on the fly), all with just three commands :) You can find more about it in my blog post [**here**](http://alirazabhayani.blogspot.com/2013/02/easy-django-deployment-tools-tutorial-fabric-gunicorn-nginx-supervisor.html). Now I don't even have to get involved in this process (which used to take a lot of my time); one of my junior developers runs those three commands of django-fagungis [mentioned here](http://alirazabhayani.blogspot.com/2013/02/easy-django-deployment-tools-tutorial-fabric-gunicorn-nginx-supervisor.html) on his local machine and we get a crisp new copy of our project deployed in minutes without any hassle :)
23,898,432
I have this single-file source that compiles successfully in C: ``` #include <stdio.h> int a; unsigned char b = 'A'; extern int alpha; int main() { extern unsigned char b; double a = 3.4; { extern a; printf("%d %d\n", b, a+1); } return 0; } ``` After running it, the output is > > 65 1 > > > Could anybody please tell me why the extern a statement will capture the global value instead of the **double** local one, and why the **printf** statement prints the global value instead of the local one? Also, I have noticed that if I change the statement on line 3 from ``` int a; ``` to ``` int a2; ``` I will get an error from the **extern a;** statement. Why doesn't a just use the assignment **double a=3.4;**? It's not like it is bound to be int.
2014/05/27
[ "https://Stackoverflow.com/questions/23898432", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2565010/" ]
The problem is here: ``` System.out.println("Average of column 1:" + (myTable [0][0] + myTable [0][1] + myTable [0][2] + myTable [0][3] + myTable [0][4]) / 4 ); ``` You're accessing `myTable[0][4]`, when `myTable` is defined as `int[5][4]`. Fix this and your code will work. ``` System.out.println("Average of column 1:" + (myTable [0][0] + myTable [0][1] + myTable [0][2] + myTable [0][3]) / 4 ); ``` --- A bigger problem here is your design. You should use a `for` loop to set the values in your `myTable` instead of using 20 variables (!). Here's an example: ``` int[][] myTable = new int[5][4]; for (int i = 0; i < myTable.length; i++) { for (int j = 0; j < myTable[i].length; j++) { System.out.println("Type a number:"); myTable[i][j] = Integer.parseInt(mVHS.readLine()); } } ```
Essentially, `myTable` is defined for values up to `[4][3]`, which is fine, but in your output statements you get them mixed up, as if they went up to `[3][4]`. You need 5 lines, not 4, and you need to remove `+ myTable [?][4]` in each of them.
2,884,715
I am currently investigating the specific number $a^n+1$ and whether it can be a square. I know that $a^n+1$ cannot be a square if $n$ is even, because then I can write $n=2x$, and so $(a^x)^2+1$ lies strictly between the consecutive squares $(a^x)^2$ and $(a^x+1)^2$. But what about odd exponents $n$? Can they allow $a^n+1$ to become a square? Or, as a more general case, can $a^n+1$ ever be a square number?
2018/08/16
[ "https://math.stackexchange.com/questions/2884715", "https://math.stackexchange.com", "https://math.stackexchange.com/users/449771/" ]
To answer the question in the title: The next square after $n^2$ is $(n+1)^2=n^2+2n+1 > n^2+1$ if $n>0$. Therefore, $n^2+1$ is never a square, unless $n=0$.
Can $a^n+1$ ever be a square? PARTIAL ANSWER If $a^n+1=m^2$ then $a^n=m^2-1=(m+1)(m-1)$. If $a$ is odd, then both $(m+1)$ and $(m-1)$ are odd, and moreover, two consecutive odd numbers have no factors in common. Thus $(m+1)=r^n$ and $(m-1)=s^n$ with $\gcd(r,s)=1$, because the product of $(m+1)$ and $(m-1)$ must be an $n^{th}$ power. $r^n-s^n=2$ has no integer solutions for $n>1$, so $a^n+1\neq m^2$ if $a$ is odd.
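One way to fill in the final step (a sketch, using that $m\pm1$ are both odd, so $r$ and $s$ are odd with $r>s\ge 1$ and hence $r\ge s+2$): for $n\ge 2$ the binomial theorem gives

```latex
r^n - s^n \;\ge\; (s+2)^n - s^n \;\ge\; \binom{n}{1}\, s^{\,n-1}\cdot 2 \;=\; 2n\, s^{\,n-1} \;\ge\; 4 \;>\; 2,
```

so $r^n - s^n = 2$ forces $n=1$, matching the claim that there are no solutions with $n>1$.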
4,623,475
The MySQL Stored Procedure was: ``` BEGIN set @sql=_sql; PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt; set _ires=LAST_INSERT_ID(); END$$ ``` I tried to convert it to: ``` BEGIN EXECUTE _sql; SELECT INTO _ires CURRVAL('table_seq'); RETURN; END; ``` I get the error: ``` SQL error: ERROR: relation "table_seq" does not exist LINE 1: SELECT CURRVAL('table_seq') ^ QUERY: SELECT CURRVAL('table_seq') CONTEXT: PL/pgSQL function "myexecins" line 4 at SQL statement In statement: SELECT myexecins('SELECT * FROM tblbilldate WHERE billid = 2') ``` The query used is for testing purposes only. I believe this function is used to get the row id of the inserted or created row from the query. Any Suggestions?
2011/01/07
[ "https://Stackoverflow.com/questions/4623475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/563296/" ]
You can add an option 'expression' to the configuration. Normally it gets a "$user" as argument. So you can do something like: ``` array('allow', 'actions'=>array('index','update', 'create', 'delete'), 'expression'=> '$user->isAdmin', ), ``` Note that I haven't tested this but I think it will work. Take a look [here](http://www.yiiframework.com/doc/api/1.1/CAccessControlFilter) for the rest.
Well, that alone won't work, because `Yii::app()->user` is a `CWebUser` instance, while `isAdmin()` is a method on the `UserIdentity` class you wrote, so Yii would complain: 'CWebUser and its behaviors do not have a method or closure named "isAdmin"'. To use an expression like `$user->isAdmin`, you should set an `isAdmin` state with the `setState()` method, which persists it in the session; this is usually done in the authentication method, so it would be something like this:

```
class UserIdentity extends CUserIdentity
{
    public function authenticate()
    {
        //your authentication code
        //using your functions like
        $level=$this->isTeacher();
        //or $level=$this->isAdmin();
        $this->setState('isAdmin',$level);
    }
}
```

and now in the user controller's accessRules method you can use expressions:

```
public function accessRules()
{
    return array(
        array('allow',
            'actions'=>array('action1','action2',...),
            'expression'=>'$user->isAdmin', //or Yii::app()->user->getState('isAdmin'),
        ),
        //...
    );
}
```
3,988,620
This should be straightforward, but I simply can't figure it out(!) I have a UIView 'filled with' a UIScrollView. Inside the scrollView I want to have a UITableView. I have hooked up both the scrollView and the tableView with IBOutlets in IB and set the ViewController to be the delegate and datasource of the tableView. What else do I need to do? Or what shouldn't I have done?
2010/10/21
[ "https://Stackoverflow.com/questions/3988620", "https://Stackoverflow.com", "https://Stackoverflow.com/users/463808/" ]
Personally, I would define the key-states as disjoint states and write a simple state-machine, thus:

```
enum keystate { inactive, firstPress, active };

keystate keystates[256];

void Class::CalculateKeyStates()
{
    for (int i = 0; i < 256; ++i)
    {
        keystate &k = keystates[i];

        switch (k)
        {
        case inactive:   // was not held on the previous tick
            k = (isDown(i)) ? firstPress : inactive;
            break;
        case firstPress: // went down last tick
            k = (isDown(i)) ? active : inactive;
            break;
        case active:     // still being held
            k = (isDown(i)) ? active : inactive;
            break;
        }
    }
}
```

This is easier to extend, and easier to read if it gets any more complex.
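The point of the three-state machine above is that `firstPress` lasts exactly one update tick before settling into `active`. A minimal Python simulation of the same transitions (the `is_down` input sequence is made up for illustration) makes that visible:

```python
# States of the per-key machine from the answer above.
INACTIVE, FIRST_PRESS, ACTIVE = "inactive", "firstPress", "active"

def step(state, is_down):
    """Advance one key's state by one update tick."""
    if not is_down:
        return INACTIVE
    # Key is down: a fresh press is firstPress for exactly one tick,
    # then it settles into active while the key stays held.
    return FIRST_PRESS if state == INACTIVE else ACTIVE

def simulate(presses):
    """Run the machine over a sequence of is_down samples."""
    state, history = INACTIVE, []
    for is_down in presses:
        state = step(state, is_down)
        history.append(state)
    return history

# Hold the key for three ticks, then release it.
print(simulate([True, True, True, False]))
# -> ['firstPress', 'active', 'active', 'inactive']
```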
You are always setting `IsFirstPress` if the key is down, which might not be what you want.
3,988,620
This should be straightforward, but I simply can't figure it out(!) I have a UIView 'filled with' a UIScrollView. Inside the scrollView I want to have a UITableView. I have hooked up both the scrollView and the tableView with IBOutlets in IB and set the ViewController to be the delegate and datasource of the tableView. What else do I need to do? Or what shouldn't I have done?
2010/10/21
[ "https://Stackoverflow.com/questions/3988620", "https://Stackoverflow.com", "https://Stackoverflow.com/users/463808/" ]
Personally, I would define the key-states as disjoint states and write a simple state-machine, thus:

```
enum keystate { inactive, firstPress, active };

keystate keystates[256];

void Class::CalculateKeyStates()
{
    for (int i = 0; i < 256; ++i)
    {
        keystate &k = keystates[i];

        switch (k)
        {
        case inactive:   // was not held on the previous tick
            k = (isDown(i)) ? firstPress : inactive;
            break;
        case firstPress: // went down last tick
            k = (isDown(i)) ? active : inactive;
            break;
        case active:     // still being held
            k = (isDown(i)) ? active : inactive;
            break;
        }
    }
}
```

This is easier to extend, and easier to read if it gets any more complex.
I'm not sure what you want to achieve with `IsFirstPress`, as the keystate cannot remember any previous presses anyways. If you want to mark with this bit, that it's the first time you recognized the key being down, then your logic is wrong in the corresponding `if` statement. `keystates[i] & WasHeldDown` evaluates to true if you already set the bit `WasHeldDown` earlier for this keystate. In that case, what you may want to do is actually remove the `IsFirstPress` bit by xor-ing it: `keystates[i] ^= IsFirstPress`
4,084,790
AFAIK GHC is the most common compiler today, but I also see that some other compilers are available too. Is GHC really the best choice for all purposes, or may I use something else instead? For instance, I read that some compiler (I forgot the name) does better on optimizations, but doesn't implement all extensions.
2010/11/03
[ "https://Stackoverflow.com/questions/4084790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/417501/" ]
GHC is by far the most widely used Haskell compiler, and it offers the most features. There are other options, though, which sometimes have some benefits over GHC. These are some of the more popular alternatives: [Hugs](http://www.haskell.org/hugs/) - Hugs is an interpreter (I don't think it includes a compiler) which is fast and efficient. It's also known for producing more easily understood error messages than GHC. [JHC](http://repetae.net/computer/jhc/) - A whole-program compiler. JHC can produce very efficient code, but it's not feature-complete yet (this is probably what you're thinking of). Note that it's not always faster than GHC, only sometimes. I haven't used JHC much because it doesn't implement multi-parameter type classes, which I use heavily. I've heard that the source code is extremely clear and readable, making this a good compiler to hack on. JHC is also more convenient for cross-compiling and usually produces smaller binaries. [UHC](http://www.cs.uu.nl/wiki/UHC) - The Utrecht Haskell Compiler is near feature-complete (I think the only thing missing is n+k patterns) for Haskell98. It implements many of GHC's most popular extensions and some original extensions as well. According to the documentation code isn't necessarily well-optimized yet. This is also a good compiler to hack on. In short, if you want efficient code and cutting-edge features, GHC is your best bet. JHC is worth trying if you don't need MPTC's or some other features. UHC's extensions may be compelling in some cases, but I wouldn't count on it for fast code yet.
GHC is a solid compiler. Saying it is the best choice for all purposes is a very strong claim, and looking for such a tool is futile. Use it, and if you really require something else, then by that point you'll probably know what it is.
4,084,790
AFAIK GHC is the most common compiler today, but I also see that some other compilers are available too. Is GHC really the best choice for all purposes, or may I use something else instead? For instance, I read that some compiler (I forgot the name) does better on optimizations, but doesn't implement all extensions.
2010/11/03
[ "https://Stackoverflow.com/questions/4084790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/417501/" ]
GHC is by far the most widely used Haskell compiler, and it offers the most features. There are other options, though, which sometimes have some benefits over GHC. These are some of the more popular alternatives: [Hugs](http://www.haskell.org/hugs/) - Hugs is an interpreter (I don't think it includes a compiler) which is fast and efficient. It's also known for producing more easily understood error messages than GHC. [JHC](http://repetae.net/computer/jhc/) - A whole-program compiler. JHC can produce very efficient code, but it's not feature-complete yet (this is probably what you're thinking of). Note that it's not always faster than GHC, only sometimes. I haven't used JHC much because it doesn't implement multi-parameter type classes, which I use heavily. I've heard that the source code is extremely clear and readable, making this a good compiler to hack on. JHC is also more convenient for cross-compiling and usually produces smaller binaries. [UHC](http://www.cs.uu.nl/wiki/UHC) - The Utrecht Haskell Compiler is near feature-complete (I think the only thing missing is n+k patterns) for Haskell98. It implements many of GHC's most popular extensions and some original extensions as well. According to the documentation code isn't necessarily well-optimized yet. This is also a good compiler to hack on. In short, if you want efficient code and cutting-edge features, GHC is your best bet. JHC is worth trying if you don't need MPTC's or some other features. UHC's extensions may be compelling in some cases, but I wouldn't count on it for fast code yet.
I think it's also worth mentioning [nhc98](http://www.haskell.org/nhc98/). From the blurb on the homepage: > > nhc98 is a small, easy to install, > standards-compliant compiler for > Haskell 98, the lazy functional > programming language. It is very > portable, and aims to produce small > executables that run in small amounts > of memory. It produces medium-fast > code, and compilation is itself quite > fast. It also comes with extensive > tool support for automatic > compilation, foreign language > interfacing, heap and time profiling, > tracing, and debugging. > > >
4,084,790
AFAIK GHC is the most common compiler today, but I also see that some other compilers are available too. Is GHC really the best choice for all purposes, or may I use something else instead? For instance, I read that some compiler (I forgot the name) does better on optimizations, but doesn't implement all extensions.
2010/11/03
[ "https://Stackoverflow.com/questions/4084790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/417501/" ]
GHC is by far the most widely used Haskell compiler, and it offers the most features. There are other options, though, which sometimes have some benefits over GHC. These are some of the more popular alternatives: [Hugs](http://www.haskell.org/hugs/) - Hugs is an interpreter (I don't think it includes a compiler) which is fast and efficient. It's also known for producing more easily understood error messages than GHC. [JHC](http://repetae.net/computer/jhc/) - A whole-program compiler. JHC can produce very efficient code, but it's not feature-complete yet (this is probably what you're thinking of). Note that it's not always faster than GHC, only sometimes. I haven't used JHC much because it doesn't implement multi-parameter type classes, which I use heavily. I've heard that the source code is extremely clear and readable, making this a good compiler to hack on. JHC is also more convenient for cross-compiling and usually produces smaller binaries. [UHC](http://www.cs.uu.nl/wiki/UHC) - The Utrecht Haskell Compiler is near feature-complete (I think the only thing missing is n+k patterns) for Haskell98. It implements many of GHC's most popular extensions and some original extensions as well. According to the documentation code isn't necessarily well-optimized yet. This is also a good compiler to hack on. In short, if you want efficient code and cutting-edge features, GHC is your best bet. JHC is worth trying if you don't need MPTC's or some other features. UHC's extensions may be compelling in some cases, but I wouldn't count on it for fast code yet.
1. Haskell is informally defined as the language handled by GHC. 2. GHC is the compiler of the *Haskell platform*. 3. Trying to optimize your code with GHC may pay off more than switching to another compiler as you'll learn some optimization skills. 4. There are many *very* useful extensions in GHC. I just can't see how to live without them. So, for anything serious (e.g. non-academic, non-experimental, non-volatile or using many packages), the pragmatic choice is to go with GHC.
4,084,790
AFAIK GHC is the most common compiler today, but I also see that some other compilers are available too. Is GHC really the best choice for all purposes, or may I use something else instead? For instance, I read that some compiler (I forgot the name) does better on optimizations, but doesn't implement all extensions.
2010/11/03
[ "https://Stackoverflow.com/questions/4084790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/417501/" ]
GHC is by far the most widely used Haskell compiler, and it offers the most features. There are other options, though, which sometimes have some benefits over GHC. These are some of the more popular alternatives: [Hugs](http://www.haskell.org/hugs/) - Hugs is an interpreter (I don't think it includes a compiler) which is fast and efficient. It's also known for producing more easily understood error messages than GHC. [JHC](http://repetae.net/computer/jhc/) - A whole-program compiler. JHC can produce very efficient code, but it's not feature-complete yet (this is probably what you're thinking of). Note that it's not always faster than GHC, only sometimes. I haven't used JHC much because it doesn't implement multi-parameter type classes, which I use heavily. I've heard that the source code is extremely clear and readable, making this a good compiler to hack on. JHC is also more convenient for cross-compiling and usually produces smaller binaries. [UHC](http://www.cs.uu.nl/wiki/UHC) - The Utrecht Haskell Compiler is near feature-complete (I think the only thing missing is n+k patterns) for Haskell98. It implements many of GHC's most popular extensions and some original extensions as well. According to the documentation code isn't necessarily well-optimized yet. This is also a good compiler to hack on. In short, if you want efficient code and cutting-edge features, GHC is your best bet. JHC is worth trying if you don't need MPTC's or some other features. UHC's extensions may be compelling in some cases, but I wouldn't count on it for fast code yet.
As of 2011, there's really no other choice than GHC for everyday programming. The HP team strongly encourages the use of GHC by all Haskell programmers.

---

If you're a researcher, you might be using UHC; if you're on a very strange system, you might have only Hugs or nhc98 available. If you're a retro-fan like me, you still have gofer, hbc and hbi installed:

```
$ hbi
Welcome to interactive Haskell98 version 0.9999.5c Pentium 2004 Jun 29!
Loading prelude... 1 values, 4 libraries, 200 types found.
Type "help;" to get help.
> help;
HBI -- Interactive Haskell B 1.3
```

hbi is cool because a) it implements Haskell B, and b) it supports the full language at the command line:

```
> data L a = I | X a (L a) deriving Show; data L b = I | X b (L b) deriving (Show)
> X 4 (X 3 I);
X 4 (X 3 I)
```

and the compiler produces pretty good code, even 15 years later.
3,551
I have a chunk of text I need to paste into Illustrator. No matter what I do, I can't resize the text area without scaling the text. I want to resize the area and have the text wrap (font size to remain the same). How can I do this? I'm using CS5.1
2011/09/06
[ "https://graphicdesign.stackexchange.com/questions/3551", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/2357/" ]
Select your `Type` tool. Instead of clicking on your canvas, click and drag to draw a box. Put whatever copy you want inside the type box and when you resize the box it will reflow the text instead of changing the font. You can also link multiple type boxes together to flow text across multiple points on your artboard. Create type objects wherever you want text to be. Add all your copy to the first box. Assuming it overflows, there will be a ![+](https://i.stack.imgur.com/FpE3v.png) symbol in the bottom-right corner of the type box. Click this ![+](https://i.stack.imgur.com/FpE3v.png) symbol and then click the next type box where you want text to flow. Illustrator will flow text through as many type boxes as you link together.
Here's a great article, **["Make Illustrator behave!"](http://www.creativepro.com/article/make-illustrator-behave-)**, that explains it all in full. Figuring out under what circumstances Illustrator scales the many types of text object, and when it scales the bounding box and wraps the text, is a common frustration. The differences between the different types of text object in Illustrator are brilliant once you've mastered and made sense of them, but massively frustrating until you do. I really recommend taking some time out to go through that article in full.
3,551
I have a chunk of text I need to paste into Illustrator. No matter what I do, I can't resize the text area without scaling the text. I want to resize the area and have the text wrap (font size to remain the same). How can I do this? I'm using CS5.1
2011/09/06
[ "https://graphicdesign.stackexchange.com/questions/3551", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/2357/" ]
Select your `Type` tool. Instead of clicking on your canvas, click and drag to draw a box. Put whatever copy you want inside the type box and when you resize the box it will reflow the text instead of changing the font. You can also link multiple type boxes together to flow text across multiple points on your artboard. Create type objects wherever you want text to be. Add all your copy to the first box. Assuming it overflows, there will be a ![+](https://i.stack.imgur.com/FpE3v.png) symbol in the bottom-right corner of the type box. Click this ![+](https://i.stack.imgur.com/FpE3v.png) symbol and then click the next type box where you want text to flow. Illustrator will flow text through as many type boxes as you link together.
OK, here goes. If you have already typed some text using the Type tool and selected it using the `Selection Tool` (the black arrow), you will see something like this ![enter image description here](https://i.stack.imgur.com/TM0x3.png) Have you noticed the rightmost circle? ![enter image description here](https://i.stack.imgur.com/3l66t.png) Just move the cursor onto this circle and double-click it, and you'll see this ![enter image description here](https://i.stack.imgur.com/cj6Pk.png) OK, now you can adjust the border of this text box without shrinking the characters, but it takes two steps. First, ![enter image description here](https://i.stack.imgur.com/WBx1T.png) notice there is a ![enter image description here](https://i.stack.imgur.com/j5nxs.png) which means some text is hidden. Then, as the second step, increase the height of the box, and the word "here" automatically wraps to the second line. ![enter image description here](https://i.stack.imgur.com/y3Edr.png)
3,551
I have a chunk of text I need to paste into Illustrator. No matter what I do, I can't resize the text area without scaling the text. I want to resize the area and have the text wrap (font size to remain the same). How can I do this? I'm using CS5.1
2011/09/06
[ "https://graphicdesign.stackexchange.com/questions/3551", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/2357/" ]
OK, here goes. If you have already typed some text using the Type tool and selected it using the `Selection Tool` (the black arrow), you will see something like this ![enter image description here](https://i.stack.imgur.com/TM0x3.png) Have you noticed the rightmost circle? ![enter image description here](https://i.stack.imgur.com/3l66t.png) Just move the cursor onto this circle and double-click it, and you'll see this ![enter image description here](https://i.stack.imgur.com/cj6Pk.png) OK, now you can adjust the border of this text box without shrinking the characters, but it takes two steps. First, ![enter image description here](https://i.stack.imgur.com/WBx1T.png) notice there is a ![enter image description here](https://i.stack.imgur.com/j5nxs.png) which means some text is hidden. Then, as the second step, increase the height of the box, and the word "here" automatically wraps to the second line. ![enter image description here](https://i.stack.imgur.com/y3Edr.png)
Here's a great article, **["Make Illustrator behave!"](http://www.creativepro.com/article/make-illustrator-behave-)**, that explains it all in full. Figuring out under what circumstances Illustrator scales the many types of text object, and when it scales the bounding box and wraps the text, is a common frustration. The differences between the different types of text object in Illustrator are brilliant once you've mastered and made sense of them, but massively frustrating until you do. I really recommend taking some time out to go through that article in full.
21,816,154
I have a question: when an animation ends, I would like it to `gotoAndStop()` at another frame.

```
if (bird.hitTestObject(pipe1))
{
    bird.gotoAndStop(3); //frame 3 = animation
}
```

After it ends, it will need to go to the Game Over frame (frame 3), and I use the `Flash Timeline`, not `.as`. Thanks!
2014/02/16
[ "https://Stackoverflow.com/questions/21816154", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3316757/" ]
Java does not provide a convenient way to list the "files" in a "directory" when that directory is backed by a JAR file on the classpath (see [How do I list the files inside a JAR file?](https://stackoverflow.com/questions/1429172/how-do-i-list-the-files-inside-a-jar-file) for some work-arounds). I believe this is because the general case, where a "directory" exists in multiple .jar files and classpath directories, is really complicated (the system would have to present a union of entries across multiple sources and deal with overlaps). Because Java/Android do not cleanly support this, neither does Libgdx (searching the classpath is what "internal" Libgdx files map to). I think the easiest way to work around this is to build a list of levels into a text file, then open that and use it as a list of file names. So, something like:

```
// XXX More pseudo code in Java than actual code ... (this is untested)
FileHandle fileList = Gdx.files.internal("levels.txt");
String files[] = fileList.readString().split("\\n");
for (String filename : files) {
    FileHandle fh = Gdx.files.internal("levels/" + filename);
    ...
}
```

Ideally you'd set something up that builds this text file when the JAR file is built. (That's a separate SO question I don't know the answer to ...) As an aside, your work-around that uses `ApplicationType.Desktop` is leveraging the fact that, in addition to the .jar file, there is a real directory on the classpath that you can open. That directory doesn't exist on the Android device (it actually only exists on the desktop when running under your build environment).
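Not part of the original answer: a .jar is just a ZIP archive, so there are no real directories inside it, only entry names that happen to share a prefix. That is easy to see outside of Java; the sketch below (in Python, with made-up entry names) builds a tiny in-memory archive and lists the "files" under a `levels/` prefix the way any manifest-free lookup would have to:

```python
import io
import zipfile

# Build a tiny in-memory "jar" (a jar is just a zip archive).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("levels/level1.txt", "...")
    jar.writestr("levels/level2.txt", "...")
    jar.writestr("assets/logo.png", "...")

def list_dir(archive_bytes, prefix):
    """List entry names under a prefix -- the archive has no real directories."""
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as jar:
        return sorted(n for n in jar.namelist() if n.startswith(prefix))

print(list_dir(buf.getvalue(), "levels/"))
# -> ['levels/level1.txt', 'levels/level2.txt']
```

This is also why scanning a jar for entries is linear in the number of entries, which is the inefficiency the other answer warns about.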
If you are still looking for an actual answer, here is [mine](https://pastebin.com/R0jMh4ui) (it is kinda hacky, but it works). To use it you simply have to call one of the 2 options below:

```
FileHandle[] fileList = JarUtils.listFromJarIfNecessary("Path/to/your/folder");
FileHandle[] fileList = JarUtils.listFromJarIfNecessary(yourJarFileFilter);
```

A `JarFileFilter` is a simple class that overloads the default Java `FileFilter` to fit the jar filtering needs. This will work both when running from the IDE and from within a jar file. It also checks that the game is running on desktop; otherwise it uses the default libGDX way of loading the resources, so it is safe to use for a cross-platform project (Android, iOS, etc.). Feel free to update the code if necessary to fit your needs. The code is documented and is ~150 lines, so hopefully you will understand it. Warning: beware that this might not be very efficient, since it has to look through all the entries within your jar, so you do not want to call it too often.
21,816,154
I have a question: when an animation ends, I would like it to `gotoAndStop()` at another frame.

```
if (bird.hitTestObject(pipe1))
{
    bird.gotoAndStop(3); //frame 3 = animation
}
```

After it ends, it will need to go to the Game Over frame (frame 3), and I use the `Flash Timeline`, not `.as`. Thanks!
2014/02/16
[ "https://Stackoverflow.com/questions/21816154", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3316757/" ]
Java does not provide a convenient way to list the "files" in a "directory" when that directory is backed by a JAR file on the classpath (see [How do I list the files inside a JAR file?](https://stackoverflow.com/questions/1429172/how-do-i-list-the-files-inside-a-jar-file) for some work-arounds). I believe this is because the general case, where a "directory" exists in multiple .jar files and classpath directories, is really complicated (the system would have to present a union of entries across multiple sources and deal with overlaps). Because Java/Android do not cleanly support this, neither does Libgdx (searching the classpath is what "internal" Libgdx files map to). I think the easiest way to work around this is to build a list of levels into a text file, then open that and use it as a list of file names. So, something like:

```
// XXX More pseudo code in Java than actual code ... (this is untested)
FileHandle fileList = Gdx.files.internal("levels.txt");
String files[] = fileList.readString().split("\\n");
for (String filename : files) {
    FileHandle fh = Gdx.files.internal("levels/" + filename);
    ...
}
```

Ideally you'd set something up that builds this text file when the JAR file is built. (That's a separate SO question I don't know the answer to ...) As an aside, your work-around that uses `ApplicationType.Desktop` is leveraging the fact that, in addition to the .jar file, there is a real directory on the classpath that you can open. That directory doesn't exist on the Android device (it actually only exists on the desktop when running under your build environment).
What I sometimes do is loop over the "sounds" folder with a prefixed filename and a numbered suffix, like this:

```
for (String snd_name : new String[]{"flesh_", "sword_", "shield_", "death_", "skating_", "fall_", "block_"}) {
    int idx = 0;
    FileHandle fh;
    while ((fh = Gdx.files.internal("snd/" + (snd_name + idx++) + ".ogg")).exists()) {
        String name = fh.nameWithoutExtension();
        Sound s = Gdx.audio.newSound(fh);

        if (name.startsWith("flesh")) {
            sounds_flesh.addLast(s);
        } else if (name.startsWith("sword")) {
            sounds_sword.addLast(s);
        } else if (name.startsWith("shield")) {
            sounds_shield.addLast(s);
        } else if (name.startsWith("death")) {
            sounds_death.addLast(s);
        } else if (name.startsWith("skating")) {
            sounds_skating.addLast(s);
        } else if (name.startsWith("fall")) {
            sounds_fall.addLast(s);
        } else if (name.startsWith("block")) {
            sounds_block.addLast(s);
        }
    }
}
```
21,816,154
I have a question: when an animation ends, I would like it to `gotoAndStop()` at another frame.

```
if (bird.hitTestObject(pipe1))
{
    bird.gotoAndStop(3); //frame 3 = animation
}
```

After it ends, it will need to go to the Game Over frame (frame 3), and I use the `Flash Timeline`, not `.as`. Thanks!
2014/02/16
[ "https://Stackoverflow.com/questions/21816154", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3316757/" ]
If you are still looking for an actual answer, here is [mine](https://pastebin.com/R0jMh4ui) (it is kinda hacky, but it works). To use it you simply have to call one of the 2 options below:

```
FileHandle[] fileList = JarUtils.listFromJarIfNecessary("Path/to/your/folder");
FileHandle[] fileList = JarUtils.listFromJarIfNecessary(yourJarFileFilter);
```

A `JarFileFilter` is a simple class that overloads the default Java `FileFilter` to fit the jar filtering needs. This will work both when running from the IDE and from within a jar file. It also checks that the game is running on desktop; otherwise it uses the default libGDX way of loading the resources, so it is safe to use for a cross-platform project (Android, iOS, etc.). Feel free to update the code if necessary to fit your needs. The code is documented and is ~150 lines, so hopefully you will understand it. Warning: beware that this might not be very efficient, since it has to look through all the entries within your jar, so you do not want to call it too often.
What I sometimes do is loop over the "sounds" folder with a prefixed filename and a numbered suffix, like this:

```
for (String snd_name : new String[]{"flesh_", "sword_", "shield_", "death_", "skating_", "fall_", "block_"}) {
    int idx = 0;
    FileHandle fh;
    while ((fh = Gdx.files.internal("snd/" + (snd_name + idx++) + ".ogg")).exists()) {
        String name = fh.nameWithoutExtension();
        Sound s = Gdx.audio.newSound(fh);

        if (name.startsWith("flesh")) {
            sounds_flesh.addLast(s);
        } else if (name.startsWith("sword")) {
            sounds_sword.addLast(s);
        } else if (name.startsWith("shield")) {
            sounds_shield.addLast(s);
        } else if (name.startsWith("death")) {
            sounds_death.addLast(s);
        } else if (name.startsWith("skating")) {
            sounds_skating.addLast(s);
        } else if (name.startsWith("fall")) {
            sounds_fall.addLast(s);
        } else if (name.startsWith("block")) {
            sounds_block.addLast(s);
        }
    }
}
```
3,197,405
I got the following crash logs after my app crashes:

```
0   libobjc.A.dylib    0x000034f4 objc_msgSend + 20
1   UIKit              0x000a9248 -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:withIndexPath:] + 644
2   UIKit              0x000a8eac -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:] + 44
3   UIKit              0x0006f480 -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow:] + 1300
4   UIKit              0x0006ce40 -[UITableView layoutSubviews] + 200
5   UIKit              0x00014ab0 -[UIView(CALayerDelegate) _layoutSublayersOfLayer:] + 32
6   CoreFoundation     0x000285ba -[NSObject(NSObject) performSelector:withObject:] + 18
7   QuartzCore         0x0000a61c -[CALayer layoutSublayers] + 176
8   QuartzCore         0x0000a2a4 CALayerLayoutIfNeeded + 192
9   QuartzCore         0x00009bb0 CA::Context::commit_transaction(CA::Transaction*) + 256
10  QuartzCore         0x000097d8 CA::Transaction::commit() + 276
11  QuartzCore         0x000119d8 CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) + 80
12  CoreFoundation     0x00074244 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 12
13  CoreFoundation     0x00075d9e __CFRunLoopDoObservers + 494
14  CoreFoundation     0x000772f6 __CFRunLoopRun + 934
15  CoreFoundation     0x0001e0bc CFRunLoopRunSpecific + 220
16  CoreFoundation     0x0001dfca CFRunLoopRunInMode + 54
17  GraphicsServices   0x00003f88 GSEventRunModal + 188
18  UIKit              0x00007b40 -[UIApplication _run] + 564
19  UIKit              0x00005fb8 UIApplicationMain + 964
20  my_app             0x0000291e main (main.m:14)
21  my_app             0x000028c8 start + 32
```

or, at other times:

```
0   libobjc.A.dylib    0x00003508 objc_msgSend + 40
1   CoreFoundation     0x00027348 -[NSObject(NSObject) performSelector:withObject:withObject:] + 20
2   UIKit              0x00009ae4 -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:withObject:copySublayers:] + 276
3   UIKit              0x00009b04 -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:withObject:copySublayers:] + 308
4   UIKit              0x00009b04 -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:withObject:copySublayers:] + 308
5   UIKit              0x00009b04 -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:withObject:copySublayers:] + 308
6   UIKit              0x00009b04 -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:withObject:copySublayers:] + 308
7   UIKit              0x000099bc -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:] + 28
8   UIKit              0x000095d4 -[UIView(Internal) _addSubview:positioned:relativeTo:] + 448
9   UIKit              0x00009400 -[UIView(Hierarchy) addSubview:] + 28
10  UIKit              0x0009b788 +[UIViewControllerWrapperView wrapperViewForView:frame:] + 328
11  UIKit              0x0009e42c -[UITabBarController transitionFromViewController:toViewController:transition:shouldSetSelected:] + 140
12  UIKit              0x0009e38c -[UITabBarController transitionFromViewController:toViewController:] + 32
13  UIKit              0x0009d9d0 -[UITabBarController _setSelectedViewController:] + 248
14  UIKit              0x0009d8c8 -[UITabBarController setSelectedViewController:] + 12
15  UIKit              0x000b8e54 -[UITabBarController _tabBarItemClicked:] + 308
16  CoreFoundation     0x00027348 -[NSObject(NSObject) performSelector:withObject:withObject:] + 20
17  UIKit              0x0008408c -[UIApplication sendAction:to:from:forEvent:] + 128
18  UIKit              0x00083ff4 -[UIApplication sendAction:toTarget:fromSender:forEvent:] + 32
19  UIKit              0x000b8c7c -[UITabBar _sendAction:withEvent:] + 416
20  CoreFoundation     0x00027348 -[NSObject(NSObject) performSelector:withObject:withObject:] + 20
21  UIKit              0x0008408c -[UIApplication sendAction:to:from:forEvent:] + 128
22  UIKit              0x00083ff4 -[UIApplication sendAction:toTarget:fromSender:forEvent:] + 32
23  UIKit              0x00083fbc -[UIControl sendAction:to:forEvent:] + 44
24  UIKit              0x00083c0c -[UIControl(Internal) _sendActionsForEvents:withEvent:] + 528
25  UIKit              0x000b8acc -[UIControl sendActionsForControlEvents:] + 16
26  UIKit              0x000b890c -[UITabBar(Static) _buttonUp:] + 108
27  CoreFoundation     0x00027348 -[NSObject(NSObject) performSelector:withObject:withObject:] + 20
28  UIKit              0x0008408c -[UIApplication sendAction:to:from:forEvent:] + 128
29  UIKit              0x00083ff4 -[UIApplication sendAction:toTarget:fromSender:forEvent:] + 32
30  UIKit              0x00083fbc -[UIControl sendAction:to:forEvent:] + 44
31  UIKit              0x00083c0c -[UIControl(Internal) _sendActionsForEvents:withEvent:] + 528
32  UIKit              0x00084484 -[UIControl touchesEnded:withEvent:] + 452
33  UIKit              0x000824e4 -[UIWindow _sendTouchesForEvent:] + 580
34  UIKit              0x00081b18 -[UIWindow sendEvent:] + 388
35  UIKit              0x0007c034 -[UIApplication sendEvent:] + 444
36  UIKit              0x0007b7e8 _UIApplicationHandleEvent + 6704
37  GraphicsServices   0x00004edc PurpleEventCallback + 1024
38  CoreFoundation     0x000742ac __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 22
39  CoreFoundation     0x000761d6 __CFRunLoopDoSource1 + 158
40  CoreFoundation     0x0007718e __CFRunLoopRun + 574
41  CoreFoundation     0x0001e0bc CFRunLoopRunSpecific + 220
42  CoreFoundation     0x0001dfca CFRunLoopRunInMode + 54
43  GraphicsServices   0x00003f88 GSEventRunModal + 188
44  UIKit              0x00007b40 -[UIApplication _run] + 564
45  UIKit              0x00005fb8 UIApplicationMain + 964
46  my_app             0x00002ba2 main (main.m:14)
47  my_app             0x00002b4c start + 32
```

What's wrong there? I mean - my app is at the bottom of the stack trace in the main-method... I checked for memory leaks - nothing... Thanks for any help!
2010/07/07
[ "https://Stackoverflow.com/questions/3197405", "https://Stackoverflow.com", "https://Stackoverflow.com/users/27404/" ]
My best guess would be that something isn't linking correctly in one of your .xib views. Another common cause of this is an object being called upon that has already been released from memory. Try tracing this with `NSZombieEnabled`.
You're missing a connection between an xib and its controller.
22,953,075
I am using jQuery Mobile 1.4.2. I need to move the search icon from the left to the right. Also, how can I prevent the clear button from appearing? ``` <div data-role="page"> <form method="post"> <input name="search" id="search" value="" placeholder="Buscar" type="search" onfocus="this.placeholder = ''" onblur="this.placeholder = 'Buscar'"> </form> </div> ```
2014/04/09
[ "https://Stackoverflow.com/questions/22953075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3350169/" ]
The constructor is `new Rect(left,top,width,height)`. Passing fewer arguments will leave the remaining fields unset, e.g. `new Rect(1,2,3)` will return a Rect object with no height. Screenshot here: <http://puu.sh/81KSa/e225064bec.png>
The `Rect` in your jsfiddle is **NOT** the Mozilla `Rect` object. [What is Rect function in Chrome/Firefox for?](https://stackoverflow.com/questions/18814683/what-is-rect-function-for-in-javascript) The Mozilla `Rect` is provided by <https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules/Geometry.jsm> You can only use it in a Firefox extension after `Components.utils.import("resource://gre/modules/Geometry.jsm", your_scope_obj);` I don't think you can test it in jsfiddle env.
70,504,314
I have some code which captures a key. In event.code I get the real key name, and in event.key I get the key char. For example: I press SHIFT and J, so I get: *event.code: SHIFT KEYJ* *event.key: J* So in event.key I get the "interpreted key" from Shift+J. But now I need to convert the native key to a char, i.e. I am looking for a way to get from "KEYJ" to a char, meaning "j". How can this be done? I have not found anything about this.
2021/12/28
[ "https://Stackoverflow.com/questions/70504314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17779252/" ]
The solution was the H2 version. We had also upgraded H2 to 2.0, but that version is not compatible with Spring Batch 4.3.4. Downgrading to H2 version 1.4.200 solved the issue.
`MODE=LEGACY` will solve the issue on H2 2.x with Spring boot 2.6.x. (`nextval` support will be re-enabled) See the details: <https://github.com/spring-projects/spring-boot/issues/29034#issuecomment-1002641782>
11,240,180
I want to run a .exe file (or any application) from a pen drive automatically when it is inserted into a PC. I don't want to use an Autorun.inf file, as all antivirus software blocks it. I have also tried a portable application launcher, but that uses autorun as well, so once again the antivirus software blocks it. Is there any alternative option, such that a .exe file on the pen drive starts automatically on insert?
2012/06/28
[ "https://Stackoverflow.com/questions/11240180", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1487797/" ]
Anti-virus programs block autorun.inf for the sole purpose of not allowing .exe files to start automatically on pen drive insert. So, basically, what you're asking is impossible.
I haven't used Windows in a long time, but I am fairly sure there is a setting in Windows to enable/disable autorunning executables on mounted drives. Changing that setting, along with the corresponding setting in your antivirus application (or getting a new, saner one), would be my best guess. Good luck!
1,712,401
Here is what I have so far: Let $\varepsilon > 0$ be given. We want, for all $n > N$, $|1/n! - 0| < \varepsilon$. We know $1/n < \varepsilon$ and $1/n! \le 1/n < \varepsilon$. I don't know how to solve for $n$, given $1/n!$. This is where I get stuck in my proof: I cannot solve for $n$, and therefore cannot pick $N > \cdots$.
2016/03/25
[ "https://math.stackexchange.com/questions/1712401", "https://math.stackexchange.com", "https://math.stackexchange.com/users/325725/" ]
Since $0 < \frac{1}{n!} < \frac{1}{n}$, by the squeeze theorem... Without squeeze theorem: Let $\epsilon > 0$. Define $N = \epsilon^{-1}$. Then if $n > N$, $$\left|\frac{1}{n!}\right| = \frac{1}{n!} < \frac{1}{n} < \frac{1}{N} = \epsilon$$
You don't need to *solve for* $n$, just to find *some* $n$ that is large enough that $1/n!$ is smaller than $\varepsilon$. It doesn't matter if you find an $N$ that is much larger than it *needs to be*, as long as it is finite. By the hint that $1/n! \le 1/n$, therefore, it is enough to find an $N$ that is large enough that $1/n < \varepsilon$. And this is clearly the case if only $n > 1/\varepsilon$ ...
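The bound used in the answers above, $1/n! \le 1/n < \varepsilon$ whenever $n > N \ge 1/\varepsilon$, is easy to sanity-check numerically. A small Python sketch (the function name is mine, and exact rational arithmetic is used to avoid float overflow for large factorials):

```python
from fractions import Fraction
import math

def valid_N(eps):
    # Any N >= 1/eps works, since n > N then implies 1/n! <= 1/n < eps.
    return math.ceil(1 / eps)

for eps in (Fraction(1, 2), Fraction(1, 10), Fraction(1, 50)):
    N = valid_N(eps)
    for n in range(N + 1, N + 21):
        term = Fraction(1, math.factorial(n))
        # The chain of inequalities from the epsilon-N proof:
        assert term <= Fraction(1, n) < eps
```

This is only a spot check over finitely many $n$, of course; the proof itself is the inequality chain in the answers.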
1,712,401
Here is what I have so far: Let $\varepsilon > 0$ be given. We want, for all $n > N$, $|1/n! - 0| < \varepsilon$. We know $1/n < \varepsilon$ and $1/n! \le 1/n < \varepsilon$. I don't know how to solve for $n$, given $1/n!$. This is where I get stuck in my proof: I cannot solve for $n$, and therefore cannot pick $N > \cdots$.
2016/03/25
[ "https://math.stackexchange.com/questions/1712401", "https://math.stackexchange.com", "https://math.stackexchange.com/users/325725/" ]
Since $0 < \frac{1}{n!} < \frac{1}{n}$, by the squeeze theorem... Without squeeze theorem: Let $\epsilon > 0$. Define $N = \epsilon^{-1}$. Then if $n > N$, $$\left|\frac{1}{n!}\right| = \frac{1}{n!} < \frac{1}{n} < \frac{1}{N} = \epsilon$$
How about this one: $$\sum\_{n=0}^{\infty}\frac{1}{n!}=e^1=e.$$ Thus the series converges and therefore $$\lim\_{n\to \infty}\frac{1}{n!}=0.$$
5,637,808
I got my first Android phone two weeks ago and I'm starting my first real app. My phone is an LG Optimus 2X, and one of its missing features is a notification LED for missed calls, SMS, email, etc. So I'm wondering what's the best way to do this. For now I have a BroadcastReceiver for incoming SMS, and then I call a service that lights the phone buttons (don't bother about this part, it's working). But it seems this method will work only for SMS and phone calls, not emails. So now I'm thinking of using listeners instead for everything, but this means having a service running nonstop. Not sure it's the best way... I hope I'm clear, and that my English is not too bad. Thanks in advance
2011/04/12
[ "https://Stackoverflow.com/questions/5637808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/704403/" ]
``` UIBarButtonItem *addButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(myCallback:)]; self.navigationItem.rightBarButtonItem = addButton; [addButton release]; ```
Yes, that `[+]` button is a default button, provided by Apple. It is the `UIBarButtonSystemItemAdd` identifier. Here's some code to get it working: ``` // Create the Add button UIBarButtonItem *addButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(someMethod)]; // Display it self.navigationItem.rightBarButtonItem = addButton; // Release the button [addButton release]; ``` You will need to define `someMethod`, so your program has code to run when the button is tapped.
151,322
Recently, I read a lot of good articles about how to do good encapsulation. And when I say "good encapsulation", I am not talking about hiding private fields with public properties; I am talking about preventing users of your API from doing wrong things. Here are two good articles about this subject: <http://blog.ploeh.dk/2011/05/24/PokayokeDesignFromSmellToFragrance.aspx> <http://lostechies.com/derickbailey/2011/03/28/encapsulation-youre-doing-it-wrong/> At my job, the majority of our applications are not destined for other programmers but rather for the customers. About 80% of the application code is at the top of the structure (Not used by other code). For this reason, there is probably no chance ever that this code will be used by other application. An example of encapsulation that prevents users from doing wrong things with your API is returning an IEnumerable instead of IList when you don't want to give the ability to the user to add or remove items in the list. My question is: When can encapsulation be considered simply OOP purism, keeping in mind that each hour of programming is charged to the customer? I want to create code that is maintainable and easy to read and use, but when I am not building a public API (to be used by other programmers), where can we draw the line between perfect code and not so perfect code?
2012/06/02
[ "https://softwareengineering.stackexchange.com/questions/151322", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/44035/" ]
Answer: When your interface is complete, you are automatically done with encapsulation. It does not matter if the implementation or consumption part is incomplete; you are done, since the interface is accepted as final. **Proper development tools should reduce cost by more than the tools themselves cost.** You suggest that if encapsulation, or any other property, is not relevant to the market offer, and the customer does not care, then the property has no value. Correct. And the customer cares about almost no internal property of the code. So why do this and other measurable properties of code exist? Why should the developer care? I think the reason is money as well: any labor-intensive and costly work in software development will call for a cure. Encapsulation is targeted not at the customer but at the user of the library. You say you do not have external users, but for your own code you yourself are user number one. * If you introduce risk of errors into daily use, then you **increase the cost of development**. * If you spend effort on reducing the risk, you also **increase the cost of development**. Market and evolution keep forcing this choice. Choose the smaller increase. This is all well understood. But you are asking about this particular feature. It is not the hardest one to maintain, and it is definitely cost-effective. But be aware of the laws of human nature and economics. Tools have their own market. The labeled cost of some can be $0, but there is always a hidden cost in terms of time spent on adoption. And this market is flooded with methodologies and practices with negative value.
Encapsulation exists to protect your class invariants. This is the primary measure for 'how much is enough'. Any way to break invariants breaks class semantics and is bad (tm). A secondary concern is limiting visibility and as such the number of places that can/will access data and thus increase coupling and/or number of dependencies. This needs to be done with care though. As requirements change, often times that decision to limit what the class exposes leads to awkward hacks to deal with the new requirement. This though is one of those design concerns that comes with experience. In doubt, favor encapsulation. The concerns are regardless of 'public' API or not. Even for internal code, new or forgetful or sleepy programmers *will* write bad code. If your class needs to be resistant to bad code, then do so.
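The question's example of returning IEnumerable instead of IList has a direct analogue in most languages: expose a read-only view so callers cannot mutate internal state. A minimal Python sketch (the `Playlist` class and its names are mine, purely illustrative):

```python
class Playlist:
    """Keeps its track list private; callers get a read-only snapshot."""

    def __init__(self):
        self._tracks = []

    def add(self, title):
        # Mutation goes through a method, so invariants live in one place.
        self._tracks.append(title)

    @property
    def tracks(self):
        # A tuple has no append/remove, unlike the internal list.
        return tuple(self._tracks)

p = Playlist()
p.add("intro")
snapshot = p.tracks
assert snapshot == ("intro",)
# The caller cannot grow the playlist through the snapshot:
assert not hasattr(snapshot, "append")
```

The trade-off discussed above applies here too: the copy costs a little, but it protects the class invariants from accidental external mutation.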
151,322
Recently, I read a lot of good articles about how to do good encapsulation. And when I say "good encapsulation", I am not talking about hiding private fields with public properties; I am talking about preventing users of your API from doing wrong things. Here are two good articles about this subject: <http://blog.ploeh.dk/2011/05/24/PokayokeDesignFromSmellToFragrance.aspx> <http://lostechies.com/derickbailey/2011/03/28/encapsulation-youre-doing-it-wrong/> At my job, the majority of our applications are not destined for other programmers but rather for the customers. About 80% of the application code is at the top of the structure (Not used by other code). For this reason, there is probably no chance ever that this code will be used by other application. An example of encapsulation that prevents users from doing wrong things with your API is returning an IEnumerable instead of IList when you don't want to give the ability to the user to add or remove items in the list. My question is: When can encapsulation be considered simply OOP purism, keeping in mind that each hour of programming is charged to the customer? I want to create code that is maintainable and easy to read and use, but when I am not building a public API (to be used by other programmers), where can we draw the line between perfect code and not so perfect code?
2012/06/02
[ "https://softwareengineering.stackexchange.com/questions/151322", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/44035/" ]
The fact that your code is not being written as a public API is not really the point--the maintainability you mention is. Yes, application development is a cost center, and the customer does not want to pay for unnecessary work. However, a badly designed or implemented application is going to cost the customer a lot more money when they decide that it needs another feature, or (as will certainly happen) the business rules change. Good OO principles are there because they help make it safer to modify and append the code base. So, the customer may not directly care what your code looks like, but the next guy who has to modify it certainly will. If the encapsulation (as you're defining it) is not there, it's going to take him a lot longer and be much riskier for him to do what he needs to do to serve the customer's needs.
Encapsulation exists to protect your class invariants. This is the primary measure for 'how much is enough'. Any way to break invariants breaks class semantics and is bad (tm). A secondary concern is limiting visibility and as such the number of places that can/will access data and thus increase coupling and/or number of dependencies. This needs to be done with care though. As requirements change, often times that decision to limit what the class exposes leads to awkward hacks to deal with the new requirement. This though is one of those design concerns that comes with experience. In doubt, favor encapsulation. The concerns are regardless of 'public' API or not. Even for internal code, new or forgetful or sleepy programmers *will* write bad code. If your class needs to be resistant to bad code, then do so.
14,790,045
I am trying to insert a new column into a CSV file after existing data. For example, my CSV file currently contains: ``` Heading 1 Heading 2 1 1 0 2 1 0 ``` I have a list of integers in format: ``` [1,0,1,2,1,2,1,1] ``` **How can I insert this list into the CSV file under 'Heading 2'?** So far all I have been able to achieve is adding the new list of integers underneath the current data, for example: ``` Heading 1 Heading 2 1 1 0 2 1 0 1 0 1 2 1 2 1 1 ``` Using the code: ``` #Open CSV file with open('C:\Data.csv','wb') as g: #New writer gw = csv.writer(g) #Add headings gw.writerow(["Heading 1","Heading 2"]) #Write First list of data to heading 1 (orgList) gw.writerows([orgList[item]] for item in column) #For each value in new integer list, add row to CSV for val in newList: gw.writerow([val]) ```
2013/02/09
[ "https://Stackoverflow.com/questions/14790045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1987845/" ]
I guess this question is already kind of old, but I came across the same problem. It's not a "best practice" solution, but here is what I do. In `twig` you can use the `{{ path('yourRouteName') }}` helper in a perfect way. So in my `twig` file I have a structure like this: ``` ... <a href="{{ path('myRoute') }}"> //results in e.g http://localhost/app_dev.php/myRoute <div id="clickMe">Click</div> </a> ``` Now if someone clicks the div, I do the following in my `.js` file: ``` $('#clickMe').on('click', function(event) { event.preventDefault(); //prevents the default a href behaviour var url = $(this).closest('a').attr('href'); $.ajax({ url: url, method: 'POST', success: function(data) { console.log('GENIUS!'); } }); }); ``` I know this is not a solution for every situation where you want to trigger an Ajax request, but I'll just leave this here and perhaps somebody finds it useful :)
Since this jQuery ajax function is placed on twig side and the url points to your application you can insert routing path ``` $.ajax({ url: '{{ path("your_ajax_routing") }}', data: "url="+urlinput, dataType: 'html', timeout: 5000, success: function(data, status){ // DO Stuff here } }); ```
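For the side-by-side column layout asked about in the CSV question, the two lists can be written row by row with `zip`. A minimal sketch in Python 3 (the sample 'Heading 1' values and variable names are assumed from the question; `io.StringIO` stands in for the real file):

```python
import csv
import io
from itertools import zip_longest

org_list = [1, 0, 2, 1, 0]           # existing 'Heading 1' values (assumed)
new_list = [1, 0, 1, 2, 1, 2, 1, 1]  # the new 'Heading 2' values

buffer = io.StringIO()  # stand-in for open('Data.csv', 'w', newline='')
writer = csv.writer(buffer)
writer.writerow(["Heading 1", "Heading 2"])

# zip_longest keeps emitting rows even when one column is shorter,
# padding the short column with empty cells.
for left, right in zip_longest(org_list, new_list, fillvalue=""):
    writer.writerow([left, right])

print(buffer.getvalue())
```

Note the question's code opens the file in `'wb'` mode (Python 2); in Python 3 the idiomatic form is `open(path, 'w', newline='')`.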
14,790,045
I am trying to insert a new column into a CSV file after existing data. For example, my CSV file currently contains: ``` Heading 1 Heading 2 1 1 0 2 1 0 ``` I have a list of integers in format: ``` [1,0,1,2,1,2,1,1] ``` **How can I insert this list into the CSV file under 'Heading 2'?** So far all I have been able to achieve is adding the new list of integers underneath the current data, for example: ``` Heading 1 Heading 2 1 1 0 2 1 0 1 0 1 2 1 2 1 1 ``` Using the code: ``` #Open CSV file with open('C:\Data.csv','wb') as g: #New writer gw = csv.writer(g) #Add headings gw.writerow(["Heading 1","Heading 2"]) #Write First list of data to heading 1 (orgList) gw.writerows([orgList[item]] for item in column) #For each value in new integer list, add row to CSV for val in newList: gw.writerow([val]) ```
2013/02/09
[ "https://Stackoverflow.com/questions/14790045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1987845/" ]
I ended up using the jsrouting-bundle Once installed I could simply do the following: ``` $.ajax({ url: Routing.generate('urlgetter'), data: "url="+urlinput, dataType: 'html', timeout: 5000, success: function(data, status){ // DO Stuff here } }); ``` where urlgetter is a route defined in routing.yml like: ``` urlgetter: pattern: /ajax/urlgetter defaults: { _controller: MyAjaxBundle:SomeController:urlgetter } options: expose: true ``` notice the expose: true option has to be set for the route to work with jsrouting-bundle
Since this jQuery ajax function is placed on twig side and the url points to your application you can insert routing path ``` $.ajax({ url: '{{ path("your_ajax_routing") }}', data: "url="+urlinput, dataType: 'html', timeout: 5000, success: function(data, status){ // DO Stuff here } }); ```
24,923,412
I am attempting to use Tkinter for the first time on my computer, and I am getting the error in the title, "NameError: name 'Tk' is not defined", citing the line "root = Tk()". I have not been able to get Tkinter to work in any form. I am currently on a MacBook Pro using Python 2.7.5. I have tried re-downloading Python multiple times but it is still not working. Anyone have any ideas as to why it isn't working? Any more information needed from me? Thanks in advance ``` #!/usr/bin/python from Tkinter import * root = Tk() canvas = Canvas(root, width=300, height=200) canvas.pack() canvas.create_rectangle( 0, 0, 150, 150, fill="yellow") canvas.create_rectangle(100, 50, 250, 100, fill="orange", width=5) canvas.create_rectangle( 50, 100, 150, 200, fill="green", outline="red", width=3) canvas.create_rectangle(125, 25, 175, 190, fill="purple", width=0) root.mainloop() ```
2014/07/24
[ "https://Stackoverflow.com/questions/24923412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3871003/" ]
You have some other module that is taking the name "Tkinter", shadowing the one you actually want. Rename or remove it. ``` import Tkinter print Tkinter.__file__ ```
Your code is right, but the indent is wrong in the import code; instead of using one space, use two spaces. Also, try not to type this command: ``` import tkinter ``` Use this code instead: ``` from tkinter import * root = Tk() canvas = Canvas(root, width=300, height=200) canvas.pack() canvas.create_rectangle( 0, 0, 150, 150, fill="yellow") canvas.create_rectangle(100, 50, 250, 100, fill="orange", width=5) canvas.create_rectangle( 50, 100, 150, 200, fill="green", outline="red", width=3) canvas.create_rectangle(125, 25, 175, 190, fill="purple", width=0) root.mainloop() ``` The problem could also be typing "Tkinter" instead of "tkinter", as Python is case-sensitive. I think this should work; it does for me.
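The `Tkinter.__file__` check in the first answer generalizes to any suspected name clash. A small sketch (the throwaway module name `shadowdemo` is made up for illustration) showing how a local file earlier on `sys.path` shadows what `import` finds, which is exactly how an accidental `Tkinter.py` next to your script hides the standard library module:

```python
import importlib
import os
import sys
import tempfile

# Create a directory containing a module that could shadow a real one.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "shadowdemo.py"), "w") as fh:
    fh.write("marker = 'local copy wins'\n")

# Directories earlier in sys.path win the import lookup.
sys.path.insert(0, workdir)
importlib.invalidate_caches()
import shadowdemo

# __file__ reveals which copy was actually imported.
assert shadowdemo.__file__.startswith(workdir)
assert shadowdemo.marker == 'local copy wins'
```

Printing `module.__file__` right after the import is therefore the quickest way to confirm (or rule out) the shadowing diagnosis.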
24,923,412
I am attempting to use Tkinter for the first time on my computer, and I am getting the error in the title, "NameError: name 'Tk' is not defined", citing the line "root = Tk()". I have not been able to get Tkinter to work in any form. I am currently on a MacBook Pro using Python 2.7.5. I have tried re-downloading Python multiple times but it is still not working. Anyone have any ideas as to why it isn't working? Any more information needed from me? Thanks in advance ``` #!/usr/bin/python from Tkinter import * root = Tk() canvas = Canvas(root, width=300, height=200) canvas.pack() canvas.create_rectangle( 0, 0, 150, 150, fill="yellow") canvas.create_rectangle(100, 50, 250, 100, fill="orange", width=5) canvas.create_rectangle( 50, 100, 150, 200, fill="green", outline="red", width=3) canvas.create_rectangle(125, 25, 175, 190, fill="purple", width=0) root.mainloop() ```
2014/07/24
[ "https://Stackoverflow.com/questions/24923412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3871003/" ]
You have some other module that is taking the name "Tkinter", shadowing the one you actually want. Rename or remove it. ``` import Tkinter print Tkinter.__file__ ```
Please make sure your Python file name is not "tkinter.py", or it will show this error.
24,923,412
I am attempting to use Tkinter for the first time on my computer, and I am getting the error in the title, "NameError: name 'Tk' is not defined", citing the line "root = Tk()". I have not been able to get Tkinter to work in any form. I am currently on a MacBook Pro using Python 2.7.5. I have tried re-downloading Python multiple times but it is still not working. Anyone have any ideas as to why it isn't working? Any more information needed from me? Thanks in advance ``` #!/usr/bin/python from Tkinter import * root = Tk() canvas = Canvas(root, width=300, height=200) canvas.pack() canvas.create_rectangle( 0, 0, 150, 150, fill="yellow") canvas.create_rectangle(100, 50, 250, 100, fill="orange", width=5) canvas.create_rectangle( 50, 100, 150, 200, fill="green", outline="red", width=3) canvas.create_rectangle(125, 25, 175, 190, fill="purple", width=0) root.mainloop() ```
2014/07/24
[ "https://Stackoverflow.com/questions/24923412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3871003/" ]
Please make sure your Python file name is not "tkinter.py", or it will show this error.
Your code is right, but the indent is wrong in the import code; instead of using one space, use two spaces. Also, try not to type this command: ``` import tkinter ``` Use this code instead: ``` from tkinter import * root = Tk() canvas = Canvas(root, width=300, height=200) canvas.pack() canvas.create_rectangle( 0, 0, 150, 150, fill="yellow") canvas.create_rectangle(100, 50, 250, 100, fill="orange", width=5) canvas.create_rectangle( 50, 100, 150, 200, fill="green", outline="red", width=3) canvas.create_rectangle(125, 25, 175, 190, fill="purple", width=0) root.mainloop() ``` The problem could also be typing "Tkinter" instead of "tkinter", as Python is case-sensitive. I think this should work; it does for me.
280,990
I have a list: ``` \begin{enumerate}[label={[\Roman*]},ref={[\Roman*]}] \item\label{item1} The first item. \end{enumerate} ``` What I want to achieve is this: In most instances, I want to refer to the items of the list by the format specified above (that is: `\ref{item1}` should in general produce "[I]"). However, in a few particular instances, I want to refer to the items by a custom reference formatting (more specifically, without the square brackets, by "1" instead of "[1]"), but I still want the numbering to be automatic (otherwise, I could use, for example, `\hyperref[item1]{1}` to refer). Therefore, I would like to, for example, define a new command `\nobracketsref` such that `\nobracketsref{item1}` produces "1". How could this be achieved? I would appreciate any help. PS. There is one twist in my situation that may affect the solution: I have two `.tex` files, so that the list is in one file (the list document) and I am referring to its items from the other file (the main document). This is done in the usual way by including in the preamble of the main document: ``` \usepackage{xr-hyper} \usepackage{hyperref} \externaldocument{the_list_document.tex} ```
2015/12/01
[ "https://tex.stackexchange.com/questions/280990", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/93296/" ]
Define `ref=` to use a macro, instead of the explicit brackets; such a macro can be redefined to do nothing when the brackets are not wanted. ``` \documentclass{article} \usepackage{enumitem,xparse} \usepackage[colorlinks]{hyperref} \makeatletter \NewDocumentCommand{\nobracketref}{sm}{% \begingroup\let\bracketref\@firstofone \IfBooleanTF{#1}{\ref*{#2}}{\ref{#2}}% \endgroup } \makeatother \NewDocumentCommand{\bracketref}{m}{[#1]} \begin{document} \begin{enumerate}[label={[\Roman*]},ref={\bracketref{\Roman*}}] \item\label{item1} The first item. \end{enumerate} With brackets: \ref{item1} (link) With brackets: \ref*{item1} (no link) Without brackets: \nobracketref{item1} (link) Without brackets: \nobracketref*{item1} (no link) \end{document} ``` I don't think `xr-hyper` is much of a concern here. [![enter image description here](https://i.stack.imgur.com/W0rtE.png)](https://i.stack.imgur.com/W0rtE.png)
Here's a LuaLaTeX-based solution. It sets up a LaTeX macro called `\nobracketref` which, in turn, invokes a Lua function called `nobrackets` which removes the outermost brackets in the function's argument. Observe that the Lua function tests whether the cross-reference is valid. If it is not valid, i.e., if `\ref` returns `??` instead of something like `[I]`, `\nobracketref` prints an empty string. Once the cross-reference is resolved correctly, `\nobracketref` will print `I`. [![enter image description here](https://i.stack.imgur.com/zpd1b.png)](https://i.stack.imgur.com/zpd1b.png) ``` \documentclass{article} \usepackage{enumitem,luacode} %% Lua-side code \begin{luacode} function nobrackets ( s ) if string.find ( s , "%?" ) then return "" else s = string.gsub ( s , "%[(.*)%]", "%1" ) return tex.sprint ( s ) end end \end{luacode} %% TeX-side code \newcommand\nobracketref[1]{% \directlua{ nobrackets ( \luastring{\ref{#1}} ) }} \begin{document} \begin{enumerate}[label={[\Roman*]},ref={[\Roman*]}] \item\label{item:1} The first item. \end{enumerate} Using \texttt{\textbackslash ref}: item \ref{item:1}. Using \texttt{\textbackslash nobracketref}: item \nobracketref{item:1}. \end{document} ``` A caveat: This solution is (for now) not compatible with the `hyperref` package. Do let me know if `hyperref` compatibility is a requirement for you.
280,990
I have a list: ``` \begin{enumerate}[label={[\Roman*]},ref={[\Roman*]}] \item\label{item1} The first item. \end{enumerate} ``` What I want to achieve is this: In most instances, I want to refer to the items of the list by the format specified above (that is: `\ref{item1}` should in general produce "[I]"). However, in a few particular instances, I want to refer to the items by a custom reference formatting (more specifically, without the square brackets, by "1" instead of "[1]"), but I still want the numbering to be automatic (otherwise, I could use, for example, `\hyperref[item1]{1}` to refer). Therefore, I would like to, for example, define a new command `\nobracketsref` such that `\nobracketsref{item1}` produces "1". How could this be achieved? I would appreciate any help. PS. There is one twist in my situation that may affect the solution: I have two `.tex` files, so that the list is in one file (the list document) and I am referring to its items from the other file (the main document). This is done in the usual way by including in the preamble of the main document: ``` \usepackage{xr-hyper} \usepackage{hyperref} \externaldocument{the_list_document.tex} ```
2015/12/01
[ "https://tex.stackexchange.com/questions/280990", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/93296/" ]
Define `ref=` to use a macro, instead of the explicit brackets; such a macro can be redefined to do nothing when the brackets are not wanted. ``` \documentclass{article} \usepackage{enumitem,xparse} \usepackage[colorlinks]{hyperref} \makeatletter \NewDocumentCommand{\nobracketref}{sm}{% \begingroup\let\bracketref\@firstofone \IfBooleanTF{#1}{\ref*{#2}}{\ref{#2}}% \endgroup } \makeatother \NewDocumentCommand{\bracketref}{m}{[#1]} \begin{document} \begin{enumerate}[label={[\Roman*]},ref={\bracketref{\Roman*}}] \item\label{item1} The first item. \end{enumerate} With brackets: \ref{item1} (link) With brackets: \ref*{item1} (no link) Without brackets: \nobracketref{item1} (link) Without brackets: \nobracketref*{item1} (no link) \end{document} ``` I don't think `xr-hyper` is much of a concern here. [![enter image description here](https://i.stack.imgur.com/W0rtE.png)](https://i.stack.imgur.com/W0rtE.png)
You have a couple of options: 1. Remove the brackets using a delimited argument. 2. Create a new `\label` that uses `\Roman` instead of `[\Roman]`. The first option is implemented by means of your `\nobracketsref{<label>}` choice. The second option is implemented by means of a `\speciallabel{<what>}{<label>}`. `<what>` here is set to `\Roman{enumi}`, since `enumi` is the first-level counter within an `enumerate`. [![enter image description here](https://i.stack.imgur.com/vg5fI.png)](https://i.stack.imgur.com/vg5fI.png) ``` \documentclass{article} \usepackage{enumitem,hyperref,refcount} \makeatletter \newcommand*{\speciallabel}[2]{{% \protected@edef\@currentlabel{#1}% Update the current label \label{#2}% \label item }} \def\@removebrackets[#1]{#1} \newcommand*{\nobracketsref}[1]{{% \edef\@nobracketsref{\getrefnumber{#1}}% Retrieve reference \expandafter\edef\expandafter\@nobracketsref\@nobracketsref% Strip outer group \expandafter\edef\expandafter\@nobracketsref\expandafter{\expandafter\@removebrackets\@nobracketsref}% Strip [.] \hyperref[#1]{\@nobracketsref}% Reference }} \makeatother \begin{document} \begin{enumerate}[label={[\Roman*]},ref={[\Roman*]}] \item\label{item1}\speciallabel{\Roman{enumi}}{item2} The first item. \end{enumerate} \ref{item1} \quad \nobracketsref{item1} \quad \ref{item2} \end{document} ```
1,556,700
There is a $9\times 9$ page. How many different rectangles can you draw with odd area? Rectangles are different if they differ in size or place. For example, if a rectangle consists of $15$ unit squares, its area is odd; if a rectangle consists of $12$ unit squares, its area is even. I have no idea how to approach this question, so I can't give my own attempt. Thanks!
2015/12/02
[ "https://math.stackexchange.com/questions/1556700", "https://math.stackexchange.com", "https://math.stackexchange.com/users/295268/" ]
If the area has to be odd, the length and breadth both have to be odd. Hence, we count the rectangles by first choosing one horizontal and one vertical grid line ($10 \cdot 10$ ways, since a $9\times 9$ grid has $10$ lines in each direction), and then choosing a second horizontal and a second vertical line, each at an odd distance from the first ($5 \cdot 5$ ways, since exactly $5$ of the lines have the opposite parity). But we have counted each rectangle four times -- once for each order of choosing its two horizontal lines, times each order of choosing its two vertical lines -- so we divide by 4 to get our final answer: $1/4 \cdot 10 \cdot 10 \cdot 5 \cdot 5 = 625$.
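The count can be double-checked by brute force. Below is a small Python sketch (my own illustration, not part of the answer) that enumerates every rectangle on the grid by its pair of bounding grid lines in each direction and keeps those with odd area:

```python
def count_odd_area_rectangles(n=9):
    """Count axis-aligned rectangles with odd area on an n-by-n grid.

    A rectangle is determined by two of the n+1 vertical grid lines
    and two of the n+1 horizontal grid lines; its area is the product
    of the two line-to-line distances.
    """
    count = 0
    for x1 in range(n + 1):
        for x2 in range(x1 + 1, n + 1):
            for y1 in range(n + 1):
                for y2 in range(y1 + 1, n + 1):
                    if ((x2 - x1) * (y2 - y1)) % 2 == 1:
                        count += 1
    return count

print(count_odd_area_rectangles())  # 625
```

Running it confirms the $625$ obtained by the parity argument.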
The grid consists of $81$ unit squares, so a rectangle's area equals the number of squares it covers, and that number is odd exactly when both side lengths are odd. The total number of rectangles is ${10 \choose 2}\cdot{10 \choose 2}=2025$, one for each choice of two vertical and two horizontal grid lines. Now you just need to select, in each direction, two lines which are an odd distance apart from each other, and it's done. Hope it's clear.
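For comparison, both numbers mentioned here fall straight out of the line-counting argument; a quick Python check (my own illustration, not part of the answer):

```python
from math import comb

# total rectangles: any 2 of the 10 vertical lines
# times any 2 of the 10 horizontal lines
total = comb(10, 2) * comb(10, 2)

# two lines are an odd distance apart exactly when one sits at an
# even position and the other at an odd one: 5 * 5 choices per direction
odd_per_direction = 5 * 5
odd_area = odd_per_direction * odd_per_direction

print(total, odd_area)  # 2025 625
```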
31,633,191
What is the best way of removing elements from a list, either by comparison to a second list or by a list of indices? ``` val myList = List(Dog, Dog, Cat, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat, Cat) val toDropFromMylist = List(Cat, Cat, Dog) ``` The corresponding indices in myList are: ``` val indices = List(2, 7, 0) ``` The expected end result is as below: ``` newList = List(Dog, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat) ``` Any idea?
2015/07/26
[ "https://Stackoverflow.com/questions/31633191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/79147/" ]
Something like this should do the trick: ``` myList .indices .filter(!indices.contains(_)) .map(myList) ```
This works: ``` myList .zipWithIndex .collect{case (x, n) if !indices.contains(n) => x} ``` Here's a complete, self-contained REPL transcript showing it working: ``` scala> case object Dog; case object Cat; case object Monkey; case object Mouse; case object Donkey defined object Dog defined object Cat defined object Monkey defined object Mouse defined object Donkey scala> val myList = List(Dog, Dog, Cat, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat, Cat) myList: List[Product with Serializable] = List(Dog, Dog, Cat, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat, Cat) scala> val indices = List(2, 7, 0) indices: List[Int] = List(2, 7, 0) scala> myList.zipWithIndex.collect{case (x, n) if !indices.contains(n) => x} res1: List[Product with Serializable] = List(Dog, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat) ``` Note that I didn't use `toDropFromMylist`; that may mean I have misunderstood your question.
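The same zip-with-index idea translates directly to other languages. As a cross-language sketch of my own (using plain strings in place of the case objects), `enumerate()` in Python plays the role of `zipWithIndex`, and the comprehension's guard plays the role of the guarded `collect` pattern:

```python
my_list = ["Dog", "Dog", "Cat", "Donkey", "Dog", "Donkey",
           "Mouse", "Cat", "Cat", "Cat", "Cat"]
indices = [2, 7, 0]

# A set makes the membership test O(1) per element instead of
# scanning the index list each time.
drop = set(indices)
new_list = [x for i, x in enumerate(my_list) if i not in drop]

print(new_list)  # ['Dog', 'Donkey', 'Dog', 'Donkey', 'Mouse', 'Cat', 'Cat', 'Cat']
```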
31,633,191
What is the best way of removing elements from a list, either by comparison to a second list or by a list of indices? ``` val myList = List(Dog, Dog, Cat, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat, Cat) val toDropFromMylist = List(Cat, Cat, Dog) ``` The corresponding indices in myList are: ``` val indices = List(2, 7, 0) ``` The expected end result is as below: ``` newList = List(Dog, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat) ``` Any idea?
2015/07/26
[ "https://Stackoverflow.com/questions/31633191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/79147/" ]
Using `toDropFromMylist` is actually the simplest. ``` scala> myList diff toDropFromMylist res0: List[String] = List(Dog, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat) ```
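Note that Scala's `diff` is a multiset difference: each element of the argument removes at most one matching occurrence, scanning left to right. A Python equivalent (my own sketch, with strings standing in for the case objects) makes that behaviour explicit:

```python
from collections import Counter

def multiset_diff(xs, ys):
    """Remove each element of ys from xs at most as many times as it
    occurs in ys, keeping the order of the surviving elements
    (mirroring Scala's Seq.diff)."""
    to_remove = Counter(ys)
    result = []
    for x in xs:
        if to_remove[x] > 0:
            to_remove[x] -= 1  # drop this occurrence
        else:
            result.append(x)
    return result

my_list = ["Dog", "Dog", "Cat", "Donkey", "Dog", "Donkey",
           "Mouse", "Cat", "Cat", "Cat", "Cat"]
print(multiset_diff(my_list, ["Cat", "Cat", "Dog"]))
# ['Dog', 'Donkey', 'Dog', 'Donkey', 'Mouse', 'Cat', 'Cat', 'Cat']
```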
This works: ``` myList .zipWithIndex .collect{case (x, n) if !indices.contains(n) => x} ``` Here's a complete, self-contained REPL transcript showing it working: ``` scala> case object Dog; case object Cat; case object Monkey; case object Mouse; case object Donkey defined object Dog defined object Cat defined object Monkey defined object Mouse defined object Donkey scala> val myList = List(Dog, Dog, Cat, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat, Cat) myList: List[Product with Serializable] = List(Dog, Dog, Cat, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat, Cat) scala> val indices = List(2, 7, 0) indices: List[Int] = List(2, 7, 0) scala> myList.zipWithIndex.collect{case (x, n) if !indices.contains(n) => x} res1: List[Product with Serializable] = List(Dog, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat) ``` Note that I didn't use `toDropFromMylist`; that may mean I have misunderstood your question.
31,633,191
What is the best way of removing elements from a list, either by comparison to a second list or by a list of indices? ``` val myList = List(Dog, Dog, Cat, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat, Cat) val toDropFromMylist = List(Cat, Cat, Dog) ``` The corresponding indices in myList are: ``` val indices = List(2, 7, 0) ``` The expected end result is as below: ``` newList = List(Dog, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat) ``` Any idea?
2015/07/26
[ "https://Stackoverflow.com/questions/31633191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/79147/" ]
Using `toDropFromMylist` is actually the simplest. ``` scala> myList diff toDropFromMylist res0: List[String] = List(Dog, Donkey, Dog, Donkey, Mouse, Cat, Cat, Cat) ```
Something like this should do the trick: ``` myList .indices .filter(!indices.contains(_)) .map(myList) ```
233,158
I am working on a web API and I am curious about the `HTTP SEARCH` verb and how you should use it. My first approach was: well, you could surely use it for a search. But ASP.NET Web API doesn't support the `SEARCH` verb. My question is basically: should I use `SEARCH`, or is it not important? PS: I found the `SEARCH` verb in Fiddler2 from Telerik.
2014/03/21
[ "https://softwareengineering.stackexchange.com/questions/233158", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/82918/" ]
The HTTP protocol is defined by RFC documents. RFC 2616 defines HTTP/1.1, the current release. `SEARCH` is not documented as a verb in that document.
AFAIK the `SEARCH` method is only a proposal and should not be used. Use `GET` instead.
3,639,718
I recreated a Select box and its dropdown function using this: ``` $(".selectBox").click(function(e) { if (!$("#dropDown").css("display") || $("#dropDown").css("display") == "none") $("#dropDown").slideDown(); else $("#dropDown").slideUp(); e.preventDefault(); }); ``` The only problem is that if you click away from the box, the dropdown stays. I'd like it to mimic a regular dropdown and close when you click away, so I thought I could do a 'body' click: ``` $('body').click(function(){ if ($("#dropdown").css("display") || $("#dropdown").css("display") != "none") $("#dropdown").slideUp(); }); ``` But now, when you click the Select box, the dropdown slides down and right back up. Any ideas what I'm doing wrong? Thanks very much in advance...
2010/09/03
[ "https://Stackoverflow.com/questions/3639718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/324529/" ]
You also need to stop a click from *inside* the `#dropdown` from bubbling back up to `body`, like this: ``` $(".selectBox, #dropdown").click(function(e) { e.stopPropagation(); }); ``` We're using [`event.stopPropagation()`](http://api.jquery.com/event.stopPropagation/) it stops the click from bubbling up, causing the [`.slideUp()`](http://api.jquery.com/slideUp/). Also your other 2 event handlers can be simplified with [`:visible`](http://api.jquery.com/visible-selector/) or [`.slideToggle()`](http://api.jquery.com/slideToggle/), like this overall: ``` $(".selectBox").click(function(e) { $("#dropDown").slideToggle(); return false; //also prevents bubbling }); $("#dropdown").click(function(e) { e.stopPropagation(); }); $(document).click(function(){ $("#dropdown:visible").slideUp(); }); ```
Since you are setting an event handler that catches every click, you don't need another one on a child. ``` $(document).click(function(e) { if ($(e.target).closest('.selectBox').length) { $('#dropdown').slideToggle(); return false; } else { $('#dropdown:visible').slideUp(); } }); ```
28,429,768
Let's say I have an `ItemsControl` which is used to render buttons for a list of view models ``` <ItemsControl ItemsSource="{Binding PageViewModelTypes}"> <ItemsControl.ItemTemplate> <DataTemplate> <Button Content="{Binding Name}" CommandParameter="{Binding }" /> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> ``` The `PageViewModelTypes` are the view models which are available (for example `OtherViewModel`). For each of the types there is a `DataTemplate` set up with the corresponding views. ``` <dx:DXWindow.Resources> <DataTemplate DataType="{x:Type generalDataViewModel:GeneralViewModel}"> <generalDataViewModel:GeneralView /> </DataTemplate> <DataTemplate DataType="{x:Type other:OtherViewModel}"> <other:OtherView /> </DataTemplate> </dx:DXWindow.Resources> ``` Is there any way of replacing the `PageViewModelTypes` with the corresponding template types for the `ItemsControl` within the view?
2015/02/10
[ "https://Stackoverflow.com/questions/28429768", "https://Stackoverflow.com", "https://Stackoverflow.com/users/931051/" ]
Bind the button content to the item content and your templates will be resolved to the actual types: ``` <ItemsControl.ItemTemplate> <DataTemplate> <Button Content="{Binding}" CommandParameter="{Binding }" /> </DataTemplate> </ItemsControl.ItemTemplate> ```
Unfortunately, your question is not at all clear. The most common scenario that could fit the vague description you've provided is to have each item in the `ItemsControl` displayed using a `DataTemplate` that corresponds to that type. Let's call that `Option A`. But the statement: > > replacing the `PageViewModelTypes` with the corresponding template types for the `ItemsControl` within the view > > > …could be construed as meaning you want an entirely different data source for the control. I.e. you want to selectively choose a different value for the `ItemsSource` property. Let's call that `Option B`. Then later, in the comments, you were asked: > > do you want to show the template when the user clicks the relevant button? > > > …and you responded "yes"! Even though that's a completely different behavior than either of the above two. Let's call that `Option C`. Maybe we can encourage you to provide much-needed clarification. But to do that, it seems most fruitful to start with the simplest, most common scenario. 
Here is an example of code that implements `Option A`: **XAML:** ``` <Window x:Class="TestSO28429768ButtonTemplate.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:TestSO28429768ButtonTemplate" Title="MainWindow" Height="350" Width="525"> <Window.Resources> <local:ColorToBrushConverter x:Key="colorToBrushConverter1"/> <local:BaseViewModelCollection x:Key="itemsCollection"> <local:StringViewModel Text="Foo"/> <local:StringViewModel Text="Bar"/> <local:ColorViewModel Color="Yellow"/> <local:ColorViewModel Color="LightBlue"/> </local:BaseViewModelCollection> <DataTemplate DataType="{x:Type local:StringViewModel}"> <TextBlock Text="{Binding Text}"/> </DataTemplate> <DataTemplate DataType="{x:Type local:ColorViewModel}"> <Rectangle Width="50" Height="25" Fill="{Binding Path=Color, Converter={StaticResource colorToBrushConverter1}}" /> </DataTemplate> </Window.Resources> <Grid> <ItemsControl ItemsSource="{StaticResource itemsCollection}"> <ItemsControl.ItemTemplate> <DataTemplate> <Button Content="{Binding}"/> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </Grid> </Window> ``` **C#:** ``` class BaseViewModelCollection : List<BaseViewModel> { } class BaseViewModel { } class StringViewModel : BaseViewModel { public string Text { get; set; } } class ColorViewModel : BaseViewModel { public Color Color { get; set; } } class ColorToBrushConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { return new SolidColorBrush((Color)value); } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); } } ``` As you'll see, the `ItemsControl` displays the `Button` instances using its default panel, `StackPanel`. The `Content` of each `Button` is bound to the respective item in the `ItemsSource` collection, a list containing two each of the `StringViewModel` class and the `ColorViewModel` class. Through defined templates in the window's resources, the content presenter of the button uses the `DataTemplate` associated with each type of view model. Items corresponding to a `StringViewModel` get the template for that type, i.e. a `TextBlock` displaying the text of the view model. Likewise, items corresponding to a `ColorViewModel` instance get the template that displays a rectangle filled with the color from the view model. If the above does not *exactly* address your question (and it may well not), please edit your question to clarify what you are asking: * If the above is close, but not precisely what you wanted, please use the above as a reference and explain how what you want to do is different. * If the above has nothing to do with what you wanted, then ignore it. But do be *specific* about what you actually want, and use precise terminology. For example, if you really want to *replace* the `ItemsSource` with a different collection, then saying you want to replace the `PageViewModelTypes` collection makes sense. But if not, *don't* use a phrase that seems to say exactly that! Of course, if either `Option B` or `Option C` more closely match what you are trying to do, go ahead and use those as references for your clarifications. Finally, please check out the very helpful pages [How do I ask a good question?](https://stackoverflow.com/help/how-to-ask) and [How to create a Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve). They have lots of great information about how you can express yourself in a way that will allow others to *easily* understand what you mean. :)
39,070,614
Here is a program about the Fibonacci sequence. Each time the code branches off again, you are calling the fibonacci function from within itself two times. ``` def fibonacci(number) if number < 2 number else fibonacci(number - 1) + fibonacci(number - 2) end end puts fibonacci(6) ``` The only thing I understand is that it adds the number to the previous number. This program was taken from my assignment. It says, "If you take all of those ones and zeros and add them together, you'll get the same answer you get when you run the code." [![enter image description here](https://i.stack.imgur.com/JoZlf.jpg)](https://i.stack.imgur.com/JoZlf.jpg) I really tried my best to understand how this code works, but I failed. Can anyone out there be so kind as to explain, in layman's terms or in a way a dummy would understand, what's happening in this code?
2016/08/22
[ "https://Stackoverflow.com/questions/39070614", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
This is just the direct 1:1 translation (with a simple twist) of the standard mathematical definition of the Fibonacci Function: ``` Fib(0) = 0 Fib(1) = 1 Fib(n) = Fib(n-2) + Fib(n-1) ``` Translated to Ruby, this becomes: ``` def fib(n) return 0 if n.zero? return 1 if n == 1 fib(n-2) + fib(n-1) end ``` It's easy to see that the first two cases can be combined: if n is 0, the result is 0, if n is 1, the result is 1. That's the same as saying if n is 0 or 1, the result is the same as n. And "n is 0 or 1" is the same as "n is less than 2": ``` def fib(n) return n if n < 2 fib(n-2) + fib(n-1) end ``` There's nothing special about this, it's the exact translation of the recursive definition of the mathematical Fibonacci function.
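To see the assignment's "add up all the ones and zeros" claim concretely, here is a Python translation of my own (not from the answer) that records every base-case value the recursion bottoms out at and sums them:

```python
def fib(n, leaves=None):
    """Naive recursive Fibonacci; optionally collects base-case values."""
    if n < 2:
        if leaves is not None:
            leaves.append(n)  # every branch of the call tree ends in a 0 or a 1
        return n
    return fib(n - 2, leaves) + fib(n - 1, leaves)

leaves = []
result = fib(6, leaves)
print(result, sum(leaves))  # 8 8 -- the leaf values add up to the answer
```

Since every non-base call just returns the sum of its two children, the final answer is exactly the sum of the 0s and 1s at the leaves of the call tree, which is what the picture in the question illustrates.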
I don't really know which part baffles you, but let me try. In the graph, the function f() denotes your fibonacci(), and f(1) and f(0) are predefined as 1 and 0. Since f(number) comes from f(number - 1) + f(number - 2), for number = 2 you get f(2) = f(2 - 1) + f(2 - 2) = 1 + 0 = 1. Likewise, you can get f(3) = f(3 - 1) + f(3 - 2) = f(2) + f(1) = 1 + 1 = 2. Proceed like this up to number 6 in your graph with the code, and you get the answer. And don't mind people who have already forgotten what it was like to be a beginner. It's just convention here :)