46,576,645
I have a Java project, but I cannot tell whether it is Spring-based, Spring MVC-based, or Spring xxx. How can I know that? The reason I ask is that I could then refer to the Spring tutorial, the Spring MVC tutorial, or the Spring xxx tutorial. Please help. Thanks.
2017/10/05
[ "https://Stackoverflow.com/questions/46576645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8565199/" ]
Spring is a framework which helps to connect different components together. There are many modules for IoC, AOP, Web MVC, etc. The Spring Framework is an open-source application framework and inversion-of-control container for the Java platform. Spring MVC (Model–View–Controller) is one component within the whole Spring Framework that supports development of web applications. If your project is Maven-based, then the dependency below will be present in pom.xml: ``` <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc</artifactId> <version>${spring.version}</version> </dependency> ```
If there is an `@EnableWebMvc` annotation in the DefaultConfig class, then the project is a Spring MVC project.
46,576,645
I have a Java project, but I cannot tell whether it is Spring-based, Spring MVC-based, or Spring xxx. How can I know that? The reason I ask is that I could then refer to the Spring tutorial, the Spring MVC tutorial, or the Spring xxx tutorial. Please help. Thanks.
2017/10/05
[ "https://Stackoverflow.com/questions/46576645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8565199/" ]
Spring is a framework which helps to connect different components together. There are many modules for IoC, AOP, Web MVC, etc. The Spring Framework is an open-source application framework and inversion-of-control container for the Java platform. Spring MVC (Model–View–Controller) is one component within the whole Spring Framework that supports development of web applications. If your project is Maven-based, then the dependency below will be present in pom.xml: ``` <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc</artifactId> <version>${spring.version}</version> </dependency> ```
To check whether it is Spring-based or Spring Web MVC, please follow these steps: 1. Go through the Gradle properties file or the Maven pom file to look for the dependencies, if you are using Maven or Gradle as your build tool. (If not, I would suggest you do so; it makes life easy. Otherwise, go through the lib folder of your project.) 2. Even if the dependencies are there, look at which annotations are used in the classes. For example, if you find @RestController or @Controller, then it is a Spring Web MVC (org.springframework.web.\*) project. There is a chance someone has included the dependencies but has not used them :P Please get back to me if you have any doubts. Thanks, Rajesh
10,572,035
My ultimate goal is to run a [twiki](http://twiki.org/) website for my research group. I have space on RedHat server that is running Apache, etc., but upon which I do not have root access. Since I cannot install perl modules with the current permissions, I've decided to manually install a local version of perl. Got that working no problem. The following modules are required to get twiki to work: * FreezeThaw - [http://search.cpan.org/~ilyaz/FreezeThaw](http://search.cpan.org/%7Eilyaz/FreezeThaw) * CGI::Session - [http://search.cpan.org/~markstos/CGI-Session](http://search.cpan.org/%7Emarkstos/CGI-Session) * Error - [http://search.cpan.org/~shlomif/Error](http://search.cpan.org/%7Eshlomif/Error) * GD - [http://search.cpan.org/~lds/GD](http://search.cpan.org/%7Elds/GD) * HTML::Tree - [http://search.cpan.org/~petek/HTML-Tree](http://search.cpan.org/%7Epetek/HTML-Tree) * Time-modules - [http://search.cpan.org/~muir/Time-modules](http://search.cpan.org/%7Emuir/Time-modules) I have installed FreezeThaw, CGI, Error, and it fails on GD with the following error: > > **UNRECOVERABLE ERROR** Could not find gdlib-config in the search path. > > > Please install libgd 2.0.28 or higher. If you want to try to > > > compile anyway, please rerun this script with the option --ignore\_missing\_gd. > > > In searching for how to get around this newest obstacle, I found a previous SO question: [How to install GD library with Strawberry Perl](https://stackoverflow.com/questions/1627143/how-to-install-gd-library-with-strawberry-perl) asked about installing this and the top answer [suggested manually compiling](http://www.libgd.org/FAQ_C_Compile) gdlib. You'll note, however, that that link is broken. The base site: <http://www.libgd.org/> is basically down saying to go to the project's [bitbucket](https://bitbucket.org/pierrejoye/gd-libgd) page. So I got the tarball from that page and am trying to install it. The following problems occur when I follow the instructions included. 
README.TXT says: "If the sources have been fetched from CVS, run bootstrap.sh [options]." Running bootstrap.sh yields: > > configure.ac:64: warning: macro `AM\_ICONV' not found in library > > > configure.ac:10: required directory ./config does not exist cp: cannot > > > create regular file `config/config.guess': No such file or directory > > > configure.ac:11: installing `config/config.guess' configure.ac:11: > > > error while copying cp: cannot create regular file > > > `config/config.sub': No such file or directory configure.ac:11: > > > installing `config/config.sub' configure.ac:11: error while > > > copying cp: cannot create regular file `config/install-sh': No such > > > file or directory configure.ac:28: installing `config/install-sh' > > > configure.ac:28: error while copying cp: cannot create regular > > > file `config/missing': No such file or directory configure.ac:28: > > > installing `config/missing' configure.ac:28: error while copying > > > configure.ac:577: required file `config/Makefile.in' not found > > > configure.ac:577: required file `config/gdlib-config.in' not found > > > configure.ac:577: required file `test/Makefile.in' not found > > > Makefile.am:14: Libtool library used but `LIBTOOL' is undefined > > > Makefile.am:14: The usual way to define `LIBTOOL' is to add > > > `AC_PROG_LIBTOOL' Makefile.am:14: to` configure.ac' and run > > > `aclocal' and` autoconf' again. Makefile.am:14: If `AC\_PROG\_LIBTOOL' > > > is in `configure.ac', make sure Makefile.am:14: its definition is in > > > aclocal's search path. cp: cannot create regular file > > > `config/depcomp': No such file or directory Makefile.am: installing > > > `config/depcomp' Makefile.am: error while copying Failed > > > And it says I should also install the following 3rd party libraries: 1. zlib, available from <http://www.gzip.org/zlib/> Data compression library 2. libpng, available from <http://www.libpng.org/pub/png/> Portable Network Graphics library; requires zlib 3. 
FreeType 2.x, available from <http://www.freetype.org/> Free, high-quality, and portable font engine 4. JPEG library, available from <http://www.ijg.org/> Portable JPEG compression/decompression library 5. XPM, available from <http://koala.ilog.fr/lehors/xpm.html> X Pixmap library Which I am ignoring for now. Switching to the generic instructions it says follow the advice in the INSTALL file; which says: "cd to the directory containing the package's source code and type ./configure to configure the package for your system." Which flat does not work: I've cd'ed into every directory of the tarball and running that command does nothing. So, trying to install twiki required me to install perl, which required me to install the perl modules: FreezeThaw, CGI, Error, HTML, Time-modules, and GD -- which itself required me to install gdlib -- which further suggested I install zlib, libpng, FreeType 2.x, JPEG library, and XPM. And of course, I'm stuck at the installing gdlib stage. **My question is**: what other process can possibly demean humanity to such a level? I cannot fathom the depths of cruelty that lay ahead of me as I dive ever deeper into this misery onion. Should I just end it all? Can meaning be brought from this madness? Will the sun come up tomorrow, and if so, does it even matter? But seriously, any suggestions on what to do differently/better would be much appreciated -- I can't remember what a child's laughter sounds like anymore.
2012/05/13
[ "https://Stackoverflow.com/questions/10572035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/246856/" ]
Install the package `gd-devel`; it contains `/usr/bin/gdlib-config`.
On Debian/Ubuntu this should work: ``` sudo apt-get -y install libgd2-xpm-dev build-essential ``` (On Red Hat systems without apt, the equivalent is the `gd-devel` package.)
61,017,481
I have a set of points in one coordinate system and I want to rotate them to another coordinate system in Python. Based on [this answer](https://stackoverflow.com/a/34392459/3173758) I wrote the following Python function: ``` def change_of_basis(points, initial, final): ''' rotate points/vectors in a 3D coordinate system to a new coordinate system points: m x 3 array of points or vectors that have to be transformed from the initial to the final csys initial: sequence of sequences of floats representing the normalized axes of the csys that has to be transformed final: sequence of sequences of floats representing the normalized axes of the csys to which has to be transformed return: the points/vectors in the new coordinate system ''' x1, y1, z1 = initial x2, y2, z2 = final M11, M12, M13 = np.dot(x1, x2), np.dot(x1, y2), np.dot(x1, z2) M21, M22, M23 = np.dot(y1, x2), np.dot(y1, y2), np.dot(y1, z2) M31, M32, M33 = np.dot(z1, x2), np.dot(z1, y2), np.dot(z1, z2) # set up rotation matrix R = np.array([[M11, M12, M13], [M21, M22, M23], [M31, M32, M33]]) return np.linalg.inv(R).dot(points) ``` Running example: ``` initial = np.array([[ 0.98078528, 0., -0.19509032], [-0.19509032, 0., -0.98078528], [ 0., 1., 0. ]]) final = np.array([[ 0.83335824, -0.08626633, -0.54595986], [-0.55273325, -0.13005679, -0.82314712], [ 0., 0.98774564, -0.15607226]]) new_cys = change_of_basis(initial, initial, final) ``` Plotting this gives the result visualized below. The intention is to transform the red/orange coordinate system to the yellow one, but the result is the blue coordinate system. Can anyone see what mistake I am making and how to fix this? [![enter image description here](https://i.stack.imgur.com/vCFJ8.png)](https://i.stack.imgur.com/vCFJ8.png) EDIT: It worked to transform the coordinate system. I changed the function above to what I have now. It allows me to transform the red coordinate system to the yellow one. 
Now what I need is to transform a set of points in the first (red) coordinate system to a set of points in the second (yellow) coordinate system. I thought that this function would work, but it does not. Is the transformation different for a set of points?
2020/04/03
[ "https://Stackoverflow.com/questions/61017481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3173758/" ]
I'm no expert in linear algebra, but I think your mistake is in not inverting the initial coordinate system. If A and B are your basis matrices, you are computing A \* B, but what you need to compute is A^{-1} \* B. Which makes sense - you multiply by A^{-1} to convert from A to the standard basis, then multiply by B to convert from the standard basis to B. Here's another SO answer that talks about implementing this: [Change of basis in numpy](https://stackoverflow.com/questions/55082928/change-of-basis-in-numpy) EDIT: Peculiar that this version worked for the coordinate system. It's not R you need to invert. You are computing R = A \* B, so by inverting R you get B^{-1} \* A^{-1}. You need to invert A first, *then* multiply.
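A minimal NumPy sketch of that ordering (illustrative only, not the asker's exact code; it assumes both bases are orthonormal with one axis per row, so the inverse of a basis matrix is just its transpose):

```python
import numpy as np

def change_of_basis(points, initial, final):
    """Re-express `points` (m x 3, coordinates in the `initial` basis)
    as coordinates in the `final` basis. Both bases are orthonormal,
    given with one axis per row."""
    E = np.asarray(initial)  # rows: initial basis vectors
    F = np.asarray(final)    # rows: final basis vectors
    # Invert the initial basis first (inv(E) == E.T for orthonormal E),
    # then apply the final basis: M = F @ inv(E).
    M = F @ E.T
    return points @ M.T  # apply M to every row of `points`

# sanity check: take the standard basis as `initial` and a 90-degree
# rotation about z as `final`
E = np.eye(3)
F = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
p = np.array([[1.0, 0.0, 0.0]])
# the world x-axis has coordinates (0, -1, 0) in the final basis
print(change_of_basis(p, E, F))
```

The dot products in the original function build exactly such a matrix of mutual projections; the fix is only about which factor gets inverted and in what order the product is taken.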
Perhaps try using the transpose of the matrix to rotate your point cloud. See [Rotation Matrix](https://www.continuummechanics.org/rotationmatrix.html#:%7E:text=A%20transformation%20matrix%20describes%20the,the%20transpose%20of%20the%20other.), which suggests that the matrix used to rotate the coordinate system and the matrix used to rotate an object are transposes of each other. The link also has useful information about defining a general transform matrix between two coordinate systems: [picture of general transform matrix](https://i.stack.imgur.com/3EgGq.png) and the rotation matrix (which is the transpose): [picture of general rotation matrix](https://i.stack.imgur.com/KVlOl.png) "where (x′,x) represents the angle between the x′ and x axes, (x′,y) is the angle between the x′ and y axes, etc." [How to get the rotation matrix to transform between two 3d cartesian coordinate systems?](https://gamedev.stackexchange.com/a/26085) talks about using the identity transform as a reference between two coordinate systems when neither coordinate system is the coordinate system defined by the orthonormal vectors: (1,0,0); (0,1,0); (0,0,1)
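As a quick NumPy check of that transpose relationship (a sketch under the assumption of a simple rotation about the z axis):

```python
import numpy as np

theta = np.pi / 6
c, s = np.cos(theta), np.sin(theta)
# matrix that rotates an *object* by theta about the z axis
R_obj = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
# rotating the *coordinate system* by the same angle is the inverse
# operation, and for a rotation matrix the inverse is the transpose
R_csys = R_obj.T

p = np.array([1.0, 2.0, 3.0])
# rotating the object and then the axes by the same angle is a no-op
print(np.allclose(R_csys @ (R_obj @ p), p))  # True
```

So if a matrix built from two bases rotates the axes when what you wanted was to rotate the point cloud (or vice versa), applying its transpose instead is often the one-line fix.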
32,392,228
I want to concatenate two dates with their times, like below, in SQL Server 2008. Something like this: ``` 2015-09-09 08:30 - 2015-09-09 09:30 ``` I tried this method, but it didn't work, and I used casting as well. ``` CONVERT(DATETIME, CONVERT(CHAR(8), S.StartTime, 112)+ '-' + CONVERT(CHAR(8), S.endtime, 108)) AS 'OccupiedTime' ``` It is showing the result like this: ``` 2015-09-09 09:30:00:000 ```
2015/09/04
[ "https://Stackoverflow.com/questions/32392228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4626423/" ]
``` CONVERT(CHAR(16), s.StartTime, 120) + '-' + CONVERT(CHAR(16), s.EndTime, 120) AS OccupiedTime ```
You need two parts of the date: the date only, plus the time. You can build the two strings and concatenate them: ``` SELECT REPLACE(CONVERT(VARCHAR(50),s.StartTime,103),'/','-') + ' ' + CONVERT(VARCHAR(5),s.StartTime,114) + ' - ' + REPLACE(CONVERT(VARCHAR(50),s.EndTime,103),'/','-') + ' ' + CONVERT(VARCHAR(5),s.EndTime,114) AS OccupiedDateTime ``` You can do a quick check of how it looks using: ``` SELECT REPLACE(CONVERT(VARCHAR(50),GETDATE(),103),'/','-') + ' ' + CONVERT(VARCHAR(5),GETDATE(),114) + ' - ' + REPLACE(CONVERT(VARCHAR(50),GETDATE(),103),'/','-') + ' ' + CONVERT(VARCHAR(5),GETDATE(),114) AS OccupiedDateTime ```
32,392,228
I want to concatenate two dates with their times, like below, in SQL Server 2008. Something like this: ``` 2015-09-09 08:30 - 2015-09-09 09:30 ``` I tried this method, but it didn't work, and I used casting as well. ``` CONVERT(DATETIME, CONVERT(CHAR(8), S.StartTime, 112)+ '-' + CONVERT(CHAR(8), S.endtime, 108)) AS 'OccupiedTime' ``` It is showing the result like this: ``` 2015-09-09 09:30:00:000 ```
2015/09/04
[ "https://Stackoverflow.com/questions/32392228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4626423/" ]
``` CONVERT(CHAR(16), s.StartTime, 120) + '-' + CONVERT(CHAR(16), s.EndTime, 120) AS OccupiedTime ```
You can use the SQL CONVERT function and the conversion style parameter together, as follows: ``` declare @StartTime datetime = getdate() declare @EndTime datetime = getdate() select convert(varchar(16), @StartTime, 120) + ' - ' + convert(varchar(16), @EndTime, 120) ``` If you check the code, I used varchar(16), which removes the unwanted millisecond information after conversion. For more on the [SQL Server Convert datetime function](http://www.kodyaz.com/articles/sql-format-date-format-datetime-t-sql-convert-function.aspx) in action, please refer to the given tutorial.
4,117
@Caleb after having reviewed your comments on: [Did Jesus mean that heaven and earth would actually pass away in Matthew 24:25?](https://christianity.stackexchange.com/questions/34160/what-does-it-mean-for-heaven-to-pass-away) I am confused as I was under the impression that this site was to answer questions concerning Christianity. I was saved in a Southern Baptist church building and have attended mostly Southern Baptist churches, but have attended many other protestant churches and I have found that all that I have attended, are based on the Holy Bible. That having been said those churches do read certain Scriptures differently. Unless a question asks about a particular Denomination's interpretation, I feel that giving the Scripture references is proper. When a question asks for a general answer, why is not giving the Scripture reference sufficient and proper? When a question asks about what a particular Denomination thinks of any particular subject I do not answer it since I do not feel qualified to even speak for the Southern Baptist Convention, as I do not know all of the many decisions and specifically their reasoning for that assumption. For that reason although not abandoning the Baptist churches I felt the need to not only read the Bible thoroughly, but to study the Bible so that I could better understand how God intended man should worship him, and also how Jesus affected that precept, since that is the crux of Christianity. The comments section as I understand it is for clarification and to understand what either the questioner wants to understand or what the answerer is portraying. However it appears that most posters do not use it for that purpose and routinely vote to close a question that they do not fully understand, or post some innocuous comment to an answer. 
Having said all of that it would seem to me that the best answer to any question not specifying a Denomination would be to give the Scriptures and allow the Holy Spirit to do as Jesus proclaimed in John 14:26. And it is also my belief that the Holy Spirit will give the correct scriptures for that answer as Jesus said in Luke 12:12.
2014/10/31
[ "https://christianity.meta.stackexchange.com/questions/4117", "https://christianity.meta.stackexchange.com", "https://christianity.meta.stackexchange.com/users/5867/" ]
> > Having said all of that it would seem to me that the best answer to > any question not specifying a Denomination would be to give the > Scriptures and allow the Holy Spirit to do as Jesus proclaimed in John > 14:26. And it is also my belief that the Holy Spirit will give the > correct scriptures for that answer as Jesus said in Luke 12:12. > > > That is wrong. If a question is not scoped, as discussed in dozens of Meta posts, the appropriate action is to either edit it into shape, or vote to close and ***do not answer***. Personal exegesis is not allowed. Period. It doesn't matter how well you know your Bible, and how much study you have put into it; it is inappropriate to answer based on your own understanding of Scripture. We focus on the various teachings within Christianity; we do not offer up our own personal understanding. See the following if you are confused on this issue: * [What Christianity.StackExchange is (and more importantly, what it isn't)](https://christianity.meta.stackexchange.com/questions/1379/what-christianity-stackexchange-is-and-more-importantly-what-it-isnt) * [How we are different than other sites?](https://christianity.meta.stackexchange.com/questions/1808/how-we-are-different-than-other-sites) * [We can't handle the truth](https://christianity.meta.stackexchange.com/questions/3527/we-cant-handle-the-truth) The main reason for this is that other people can post answers based on their own personal interpretation, which turns this into a popularity contest where the most popular doctrine "wins". This is something we have fought against on this site for a long time, for very good reasons. See: * [Christianity.SE vs. 
Survivor](https://christianity.meta.stackexchange.com/questions/132/christianity-se-vs-survivor) * [Another reason this is not a Christian site](https://christianity.meta.stackexchange.com/questions/1457/another-reason-this-is-not-a-christian-site) What's wrong with answering general questions Biblically is that your answer is no more or less valid ***on this site*** than someone with another answer, with a different conclusion, also based on Scripture. This isn't a site for personal exegesis. We explain what various groups teach; we do not teach our own opinions and personal interpretations, and we don't argue over whose teaching is right. In addition to all that, there's the fact that experienced members are supposed to teach newcomers what types of questions are allowed. By answering questions that should be closed, you are perpetuating the problem. Simply by answering, you are sending the message that the question is allowable on the site. DON'T DO THAT! When you do that, you confuse them. Then they come back later wondering why their questions are always closed even though some experienced members are providing answers. [I seem to remember you once asking why nobody bothered to explain to you what's on topic and what's off](https://christianity.meta.stackexchange.com/questions/3269/why-not-have-new-members-go-through-a-general-introduction-page). You weren't very happy about the fact that nobody bothered to correct your misunderstandings. Instead of doing the same thing that caused you frustration, please do your duty and help them understand how to participate within the guidelines.
The holy spirit doesn't exist, and certainly cannot be relied on to elucidate anyone's understanding of anything. Nor, of course, does God, in any shape or form whatsoever. This is a Q&A site on the subject of a rather strange social phenomenon, known as religion, in one of its branches, known as Christianity. --- Many other participants here might disagree with that, but the fact that we can all ask and answer questions on the subject nonetheless does go some way toward demonstrating what sort of site this is.
4,117
@Caleb after having reviewed your comments on: [Did Jesus mean that heaven and earth would actually pass away in Matthew 24:25?](https://christianity.stackexchange.com/questions/34160/what-does-it-mean-for-heaven-to-pass-away) I am confused as I was under the impression that this site was to answer questions concerning Christianity. I was saved in a Southern Baptist church building and have attended mostly Southern Baptist churches, but have attended many other protestant churches and I have found that all that I have attended, are based on the Holy Bible. That having been said those churches do read certain Scriptures differently. Unless a question asks about a particular Denomination's interpretation, I feel that giving the Scripture references is proper. When a question asks for a general answer, why is not giving the Scripture reference sufficient and proper? When a question asks about what a particular Denomination thinks of any particular subject I do not answer it since I do not feel qualified to even speak for the Southern Baptist Convention, as I do not know all of the many decisions and specifically their reasoning for that assumption. For that reason although not abandoning the Baptist churches I felt the need to not only read the Bible thoroughly, but to study the Bible so that I could better understand how God intended man should worship him, and also how Jesus affected that precept, since that is the crux of Christianity. The comments section as I understand it is for clarification and to understand what either the questioner wants to understand or what the answerer is portraying. However it appears that most posters do not use it for that purpose and routinely vote to close a question that they do not fully understand, or post some innocuous comment to an answer. 
Having said all of that it would seem to me that the best answer to any question not specifying a Denomination would be to give the Scriptures and allow the Holy Spirit to do as Jesus proclaimed in John 14:26. And it is also my belief that the Holy Spirit will give the correct scriptures for that answer as Jesus said in Luke 12:12.
2014/10/31
[ "https://christianity.meta.stackexchange.com/questions/4117", "https://christianity.meta.stackexchange.com", "https://christianity.meta.stackexchange.com/users/5867/" ]
Your answer seems innocuous at first because it's so generic. It represents a run-of-the-mill semi-dispensational Baptist view that would be acceptable to many Protestant traditions—especially assorted western Baptist groups and the most common non-denominational strains. It's likely to get a lot of upvotes because the demographics of this site are heavily weighted towards just that quarter. But the problem is, the question didn't ask for that view. After your answer, the post picked up a little bit stronger dispensational answer, then an LDS one. Do you see the lid coming off Pandora's box? LDS uses the same words but means very different things by them (esp. in this context of "heaven"), so there is a crazy apples-to-oranges thing brewing between answers. But that's only the beginning. **What would stop a Jehovah's Witness or other annihilationist sect from positing their exegesis of that passage as an answer?** By your reasoning: nothing. Left open, that question would have contradictory answers, some saying one thing and some another. The difference would be the different doctrinal frameworks represented, but that wouldn't be specified by all the answers, and even if it was, **the voting patterns between them would be reflective of the popularity of the respective theologies, not the quality of the answer**. If we're to be consistent about this site not endorsing any particular branch of Christianity as "absolute truth" and not excluding minority "heretical" groups from having a valid shot at answering questions directed at them, this simply isn't an acceptable outcome. Your suggested pattern would drag all of the baggage of Christianity and conflict between traditions into every question on the site. We realized early in the life of the site that was not going to work, and we have put a lot of effort into keeping it from happening. This is simply not the right venue to present answers that are judged on their "truth" value. 
If your hope is to provide spiritual direction to people looking for answers to life's persistent questions, this is the wrong place to do it. You've actually answered a lot of questions with problems along the same vein as this example. For the sake of your own time, please stop. If a question isn't obviously scoped in a way that's going to be durable and sit well with our guidelines, please don't waste your time answering it. I feel bad every time I see you (or others) answering questions without scope, because I know you put effort into your posts and I hate to see that wasted because they are a mismatch for the venue. You seem like a nice fellow and you actually write some pretty quality stuff. I even agree with a lot (if not all) of your theology. But agreeing with you does not give me a reason to excuse the scope problems such generic answers represent. We are always getting on the case of nut-job posters that do things like post anti-Catholic rants on Catholic questions. The only way we have to keep that kind of contentious stuff cleaned up is the insistence on scoping rules for questions. It's not a perfect system, but it serves a purpose. If we let your answers slide just because they happen to be agreeable, we lose the ability to fulfill the purpose of this site. There are plenty of better venues for sharing Scripture and helping people solve their spiritual problems by directing them to God's word. That ambition is an admirable use of your time and I encourage you to keep doing it, especially in the context of your local church. The Internet even has lots of opportunity for that sort of outreach, but this site isn't meant to be a venue for that.
The holy spirit doesn't exist, and certainly cannot be relied on to elucidate anyone's understanding of anything. Nor, of course, does God, in any shape or form whatsoever. This is a Q&A site on the subject of a rather strange social phenomenon, known as religion, in one of its branches, known as Christianity. --- Many other participants here might disagree with that, but the fact that we can all ask and answer questions on the subject nonetheless does go some way toward demonstrating what sort of site this is.
4,117
@Caleb after having reviewed your comments on: [Did Jesus mean that heaven and earth would actually pass away in Matthew 24:25?](https://christianity.stackexchange.com/questions/34160/what-does-it-mean-for-heaven-to-pass-away) I am confused as I was under the impression that this site was to answer questions concerning Christianity. I was saved in a Southern Baptist church building and have attended mostly Southern Baptist churches, but have attended many other protestant churches and I have found that all that I have attended, are based on the Holy Bible. That having been said those churches do read certain Scriptures differently. Unless a question asks about a particular Denomination's interpretation, I feel that giving the Scripture references is proper. When a question asks for a general answer, why is not giving the Scripture reference sufficient and proper? When a question asks about what a particular Denomination thinks of any particular subject I do not answer it since I do not feel qualified to even speak for the Southern Baptist Convention, as I do not know all of the many decisions and specifically their reasoning for that assumption. For that reason although not abandoning the Baptist churches I felt the need to not only read the Bible thoroughly, but to study the Bible so that I could better understand how God intended man should worship him, and also how Jesus affected that precept, since that is the crux of Christianity. The comments section as I understand it is for clarification and to understand what either the questioner wants to understand or what the answerer is portraying. However it appears that most posters do not use it for that purpose and routinely vote to close a question that they do not fully understand, or post some innocuous comment to an answer. 
Having said all of that it would seem to me that the best answer to any question not specifying a Denomination would be to give the Scriptures and allow the Holy Spirit to do as Jesus proclaimed in John 14:26. And it is also my belief that the Holy Spirit will give the correct scriptures for that answer as Jesus said in Luke 12:12.
2014/10/31
[ "https://christianity.meta.stackexchange.com/questions/4117", "https://christianity.meta.stackexchange.com", "https://christianity.meta.stackexchange.com/users/5867/" ]
> > Having said all of that it would seem to me that the best answer to > any question not specifying a Denomination would be to give the > Scriptures and allow the Holy Spirit to do as Jesus proclaimed in John > 14:26. And it is also my belief that the Holy Spirit will give the > correct scriptures for that answer as Jesus said in Luke 12:12. > > > That is wrong. If a question is not scoped, as discussed in dozens of Meta posts, the appropriate action is to either edit it into shape, or vote to close and ***do not answer***. Personal exegesis is not allowed. Period. It doesn't matter how well you know your Bible, and how much study you have put into it; it is inappropriate to answer based on your own understanding of Scripture. We focus on the various teachings within Christianity; we do not offer up our own personal understanding. See the following if you are confused on this issue: * [What Christianity.StackExchange is (and more importantly, what it isn't)](https://christianity.meta.stackexchange.com/questions/1379/what-christianity-stackexchange-is-and-more-importantly-what-it-isnt) * [How we are different than other sites?](https://christianity.meta.stackexchange.com/questions/1808/how-we-are-different-than-other-sites) * [We can't handle the truth](https://christianity.meta.stackexchange.com/questions/3527/we-cant-handle-the-truth) The main reason for this is that other people can post answers based on their own personal interpretation, which turns this into a popularity contest where the most popular doctrine "wins". This is something we have fought against on this site for a long time, for very good reasons. See: * [Christianity.SE vs. 
Survivor](https://christianity.meta.stackexchange.com/questions/132/christianity-se-vs-survivor) * [Another reason this is not a Christian site](https://christianity.meta.stackexchange.com/questions/1457/another-reason-this-is-not-a-christian-site) What's wrong with answering general questions Biblically is that your answer is no more or less valid ***on this site*** than someone with another answer, with a different conclusion, also based on Scripture. This isn't a site for personal exegesis. We explain what various groups teach; we do not teach our own opinions and personal interpretations, and we don't argue over whose teaching is right. In addition to all that, there's the fact that experienced members are supposed to teach newcomers what types of questions are allowed. By answering questions that should be closed, you are perpetuating the problem. Simply by answering, you are sending the message that the question is allowable on the site. DON'T DO THAT! When you do that, you confuse them. Then they come back later wondering why their questions are always closed even though some experienced members are providing answers. [I seem to remember you once asking why nobody bothered to explain to you what's on topic and what's off](https://christianity.meta.stackexchange.com/questions/3269/why-not-have-new-members-go-through-a-general-introduction-page). You weren't very happy about the fact that nobody bothered to correct your misunderstandings. Instead of doing the same thing that caused you frustration, please do your duty and help them understand how to participate within the guidelines.
Exegesis questions don't normally need denominational scoping. Considering the question is about an idiom, not eschatology, I'm not sure why Caleb thought it needed closing. I think I'll vote to reopen it. I edited it and then @Bye has since edited it again. I think it should be on-topic, but I'm not sure whether either of us has got the right wording.
4,117
@Caleb, after having reviewed your comments on [Did Jesus mean that heaven and earth would actually pass away in Matthew 24:25?](https://christianity.stackexchange.com/questions/34160/what-does-it-mean-for-heaven-to-pass-away), I am confused, as I was under the impression that this site was to answer questions concerning Christianity. I was saved in a Southern Baptist church building and have attended mostly Southern Baptist churches, but have attended many other Protestant churches, and I have found that all that I have attended are based on the Holy Bible. That having been said, those churches do read certain Scriptures differently. Unless a question asks about a particular Denomination's interpretation, I feel that giving the Scripture references is proper. When a question asks for a general answer, why isn't giving the Scripture reference sufficient and proper? When a question asks about what a particular Denomination thinks of any particular subject, I do not answer it, since I do not feel qualified to even speak for the Southern Baptist Convention, as I do not know all of the many decisions and specifically their reasoning for that assumption. For that reason, although not abandoning the Baptist churches, I felt the need to not only read the Bible thoroughly, but to study the Bible so that I could better understand how God intended man should worship him, and also how Jesus affected that precept, since that is the crux of Christianity. The comments section, as I understand it, is for clarification and to understand what either the questioner wants to understand or what the answerer is portraying. However, it appears that most posters do not use it for that purpose and routinely vote to close a question that they do not fully understand, or post some innocuous comment to an answer. 
Having said all of that, it would seem to me that the best answer to any question not specifying a Denomination would be to give the Scriptures and allow the Holy Spirit to do as Jesus proclaimed in John 14:26. And it is also my belief that the Holy Spirit will give the correct scriptures for that answer, as Jesus said in Luke 12:12.
2014/10/31
[ "https://christianity.meta.stackexchange.com/questions/4117", "https://christianity.meta.stackexchange.com", "https://christianity.meta.stackexchange.com/users/5867/" ]
Your answer seems innocuous at first because it's so generic. It represents a run-of-the-mill semi-dispensational Baptist view that would be acceptable to many Protestant traditions—especially assorted western Baptist groups and the most common non-denominational strains. It's likely to get a lot of upvotes because the demographics of this site are heavily weighted towards just that quarter. But the problem is, the question didn't ask for that view. After your answer, the post picked up a little bit stronger dispensational answer, then an LDS one. Do you see the lid coming off Pandora's box? LDS uses the same words but means very different things by them (esp. in this context of "heaven"), so there is a crazy apples-to-oranges thing brewing between answers. But that's only the beginning. **What would stop a Jehovah's Witness or other annihilationist sect from positing their exegesis of that passage as an answer?** By your reasoning: nothing. Left open, that question would have contradictory answers, some saying one thing and some another. The difference would be the different doctrinal frameworks represented, but that wouldn't be specified by all the answers, and even if it was **the voting patterns between them would be reflective of the popularity of the respective theologies, not the quality of the answer**. If we're to be consistent about this site not endorsing any particular branch of Christianity as "absolute truth" and not excluding minority "heretical" groups from having a valid shot at answering questions directed at them, this simply isn't an acceptable outcome. Your suggested pattern would drag all of the baggage of Christianity and conflict between traditions into every question on the site. We realized early in the life of the site that was not going to work and we have put a lot of effort into keeping it from happening. This is simply not the right venue to present answers that are judged on their "truth" value. 
If your hope is to provide spiritual direction to people looking for answers to life's persistent questions, this is the wrong place to do it. You've actually answered a lot of questions with problems along the same vein as this example. For the sake of your own time please stop. If a question isn't obviously scoped in a way that's going to be durable and sit well with our guidelines please don't waste your time answering it. I feel bad every time I see you (or others) answering questions without scope because I know you put effort into your posts and I hate to see that wasted because they are a mismatch for the venue. You seem like a nice fellow and you actually write some pretty quality stuff. I even agree with a lot (if not all) of your theology. But agreeing with you does not give me a reason to excuse the scope problems such generic answers represent. We are always getting on the case of nut-job posters that do things like post anti-Catholic rants on Catholic questions. The only way we have to keep that kind of contentious stuff cleaned up is the insistence on scoping rules for questions. It's not a perfect system, but it serves a purpose. If we let your answers slide just because they happen to be agreeable we lose the ability to fulfill the purpose of this site. There are plenty of better venues for sharing Scripture and helping people solve their spiritual problems by directing them to God's word. That ambition is an admirable use of your time and I encourage you to keep doing it, especially in the context of your local church. The Internet even has lots of opportunity for that sort of outreach, but this site isn't meant to be a venue for that.
Exegesis questions don't normally need denominational scoping. Considering the question is about an idiom, not eschatology, I'm not sure why Caleb thought it needed closing. I think I'll vote to reopen it. I edited it and then @Bye has since edited it again. I think it should be on-topic, but I'm not sure whether either of us has got the right wording.
13,175,328
I am using **Tweepy**, a Python wrapper for Twitter. I am writing a small GUI application in Python which updates my Twitter account. Currently, I am just testing whether I can connect to Twitter, hence the test() call. I am behind a Squid proxy server. What changes should I make to this snippet to get it working? Setting **http\_proxy** in the bash shell did not help me. ``` def printTweet(self): #extract tweet string tweet_str = str(self.ui.tweet_txt.toPlainText()) ; #tweet string extracted. self.ui.tweet_txt.clear() ; self.tweet_on_twitter(tweet_str); def tweet_on_twitter(self,my_tweet) : auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET); auth.set_access_token(ACCESS_KEY, ACCESS_SECRET) ; api = tweepy.API(auth) ; if api.test() : print 'Test successful' ; else : print 'Test unsuccessful'; ```
2012/11/01
[ "https://Stackoverflow.com/questions/13175328", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1505819/" ]
I guess you should set 'https\_proxy' instead. On my Linux machine, I use this: ``` > export HTTPS_PROXY="http://xxxx:8888" ``` before running my Tweepy script. Tweepy uses the 'requests' package for sending requests; read <http://docs.python-requests.org/en/master/user/advanced/#proxies> for more.
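If you'd rather not rely on the calling shell, the same variables can also be set from inside the script itself before any request goes out. A minimal sketch (the proxy host and port below are made-up placeholders; substitute your own Squid endpoint):

```python
import os

# Hypothetical Squid endpoint -- replace with your own proxy host and port.
PROXY = "http://proxy.example.com:3128"

# Set these before any HTTP request is made; libraries built on
# `requests` (as newer tweepy releases are) read them at request time.
os.environ["HTTP_PROXY"] = PROXY
os.environ["HTTPS_PROXY"] = PROXY

print(os.environ["HTTPS_PROXY"])
```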
**EDIT: Turns out this isn't a viable answer, but I'm leaving it here for reference** --- Since a quick glance at the code shows that tweepy is using urllib2.urlopen & co., the easiest is possibly to just override the default opener... ``` # 'x.x.x.x' = IP of squid server your_squid_server = urllib2.ProxyHandler({'http': 'x.x.x.x', 'https': 'x.x.x.x'}) new_opener = urllib2.build_opener(your_squid_server) urllib2.install_opener(new_opener) ``` I don't have an environment to check that at the moment though... **Do the above before importing tweepy to make sure the new opener is in effect**
13,175,328
I am using **Tweepy**, a Python wrapper for Twitter. I am writing a small GUI application in Python which updates my Twitter account. Currently, I am just testing whether I can connect to Twitter, hence the test() call. I am behind a Squid proxy server. What changes should I make to this snippet to get it working? Setting **http\_proxy** in the bash shell did not help me. ``` def printTweet(self): #extract tweet string tweet_str = str(self.ui.tweet_txt.toPlainText()) ; #tweet string extracted. self.ui.tweet_txt.clear() ; self.tweet_on_twitter(tweet_str); def tweet_on_twitter(self,my_tweet) : auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET); auth.set_access_token(ACCESS_KEY, ACCESS_SECRET) ; api = tweepy.API(auth) ; if api.test() : print 'Test successful' ; else : print 'Test unsuccessful'; ```
2012/11/01
[ "https://Stackoverflow.com/questions/13175328", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1505819/" ]
I guess you should set 'https\_proxy' instead. On my Linux machine, I use this: ``` > export HTTPS_PROXY="http://xxxx:8888" ``` before running my Tweepy script. Tweepy uses the 'requests' package for sending requests; read <http://docs.python-requests.org/en/master/user/advanced/#proxies> for more.
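If you'd rather not rely on the calling shell, the same variables can also be set from inside the script itself before any request goes out. A minimal sketch (the proxy host and port below are made-up placeholders; substitute your own Squid endpoint):

```python
import os

# Hypothetical Squid endpoint -- replace with your own proxy host and port.
PROXY = "http://proxy.example.com:3128"

# Set these before any HTTP request is made; libraries built on
# `requests` (as newer tweepy releases are) read them at request time.
os.environ["HTTP_PROXY"] = PROXY
os.environ["HTTPS_PROXY"] = PROXY

print(os.environ["HTTPS_PROXY"])
```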
The proxy support in tweepy is severely lacking; there is a [patch available](https://github.com/tweepy/tweepy/pull/152) that aims to fix that problem. The patch switches Tweepy from using `httplib` directly to using `urllib2` instead, which means it'd honour the `http_proxy` environment variable.
13,175,328
I am using **Tweepy**, a Python wrapper for Twitter. I am writing a small GUI application in Python which updates my Twitter account. Currently, I am just testing whether I can connect to Twitter, hence the test() call. I am behind a Squid proxy server. What changes should I make to this snippet to get it working? Setting **http\_proxy** in the bash shell did not help me. ``` def printTweet(self): #extract tweet string tweet_str = str(self.ui.tweet_txt.toPlainText()) ; #tweet string extracted. self.ui.tweet_txt.clear() ; self.tweet_on_twitter(tweet_str); def tweet_on_twitter(self,my_tweet) : auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET); auth.set_access_token(ACCESS_KEY, ACCESS_SECRET) ; api = tweepy.API(auth) ; if api.test() : print 'Test successful' ; else : print 'Test unsuccessful'; ```
2012/11/01
[ "https://Stackoverflow.com/questions/13175328", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1505819/" ]
I guess you should set 'https\_proxy' instead. On my Linux machine, I use this: ``` > export HTTPS_PROXY="http://xxxx:8888" ``` before running my Tweepy script. Tweepy uses the 'requests' package for sending requests; read <http://docs.python-requests.org/en/master/user/advanced/#proxies> for more.
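If you'd rather not rely on the calling shell, the same variables can also be set from inside the script itself before any request goes out. A minimal sketch (the proxy host and port below are made-up placeholders; substitute your own Squid endpoint):

```python
import os

# Hypothetical Squid endpoint -- replace with your own proxy host and port.
PROXY = "http://proxy.example.com:3128"

# Set these before any HTTP request is made; libraries built on
# `requests` (as newer tweepy releases are) read them at request time.
os.environ["HTTP_PROXY"] = PROXY
os.environ["HTTPS_PROXY"] = PROXY

print(os.environ["HTTPS_PROXY"])
```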
This is an old question, but hopefully this helps. <https://bitbucket.org/sakito/tweepy> provides tweepy with urllib merged into it; the proxy settings work well. There is a little problem with the stream (in my case, at least), but it is usable with a little tweak.
686,742
I used bash for a long time; now I want to use zsh. Only one problem: my .bashrc conflicts with zsh. How it is supposed to look: ``` archcoolC# ``` How it looks with a .bashrc (with colors here, from echo $PS1): ``` \[[1m\]\[[38;5;1m\][\[[38;5;3m\]\u\[[38;5;2m\]@\[[38;5;4m\]\h \[[38;5;5m\]\W\[[38;5;1m\]]\[[38;5;7m\]\$ \[[m(B\] ``` Anyone know the fix? FYI: this is on Arch and on Ubuntu
2022/01/17
[ "https://unix.stackexchange.com/questions/686742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/510390/" ]
Zsh comes with colored prompts out of the box. Try ``` autoload -U promptinit && promptinit ``` then `prompt -l` lists available prompts, `-p fire` previews the "fire" prompt, `-s fire` sets it. When you are ready, add the prompt line below after the autoload line above: ``` prompt fade red ```
Don't export the PS1 shell variable; it's not meant to be exported, and the few programs that read PS1 are other Unix shells, which have their own prompt escapes for displaying things.
13,465,939
EDIT: I have successfully been able to calculate the value I was trying to get, but instead of calculating that value for each row, it is just calculating it once and posting that value everywhere. How do I make it recalculate for each row using the code I have? Picture: <http://img515.imageshack.us/img515/9064/example2w.png> New Code: ``` <html> <head> <title>PHP-MySQL Project 4</title> <div align="center"> <p> PHP-MySQL Project 4 <br/> By: Ryan Strouse </p> </div> </head> <body bgcolor="#99FFFF"> <?php $DBName = "surveys"; $DBConnect = @mysqli_connect("localhost", "students", "password") Or die("<p>Unable to connect to the database server.</p>" . "<p>Error code " . mysqli_connect_errno() . ": " . mysqli_connect_error()) . "</p>"; if (!$DBConnect) { echo "<p> The database server is not available.</p>"; } else { echo "<p> Successfully connected to the database $DBName</p>"; } mysqli_select_db($DBConnect, $DBName); echo "<p>Database -'$DBName'- found</p>"; $SQLstring = "SELECT * FROM surveys WHERE surveyCode = 'GEI001'"; $QueryResult = @mysqli_query($DBConnect, $SQLstring); echo $SQLstring; $row = mysqli_fetch_assoc($QueryResult); $count_surveys = $row['surveyResponses']; echo "<p>Total Responses: $count_surveys</p>"; $SQLstring2 = "SELECT * FROM results WHERE surveyCode = 'GEI001'"; $QueryResult2 = @mysqli_query($DBConnect, $SQLstring2); echo $SQLstring2; echo "<br/>"; $Row = mysqli_fetch_assoc($QueryResult2); $SQLstring3 = "SELECT * FROM surveys, results"; $QueryResult3 = @mysqli_query($DBConnect, $SQLstring3); $fetchrow = mysqli_fetch_assoc($QueryResult3); $result_amount = (($fetchrow['resultResponses'] / $fetchrow['surveyResponses']) * 100); echo "<table>"; echo "<tr><th>Commercial</th> <th>Views</th> <th>Percentage</th></tr>"; do { echo "<tr><td>{$Row['resultDescription']}</td>"; echo "<td>{$Row['resultResponses']}</td>"; echo "<td>$result_amount</td></tr>"; $Row = mysqli_fetch_assoc($QueryResult3); } while ($Row); echo "</table>"; ?> <center> <h3><a 
href="Survey1.html">Return To Main Page</a></h3> <h3><a href="../Menu.html">Return to Menu</a></h3> </center> </body> <footer> <div align="center"> &copy; Copyright Ryan Strouse &copy; </div> </footer> </html> ``` I have two database tables and I am successfully pulling in column data into a table. The third cell of the table I would like to calculate a percentage out of some of the columns from the database. I'm not sure how to code this... I've tried to come up with something in the SELECT statement from another thread I found with no luck. Here is a picture of the query I'm trying to get to work: <http://img696.imageshack.us/img696/3862/examplegw.png> ``` <html> <head> <title>PHP-MySQL Project 4</title> </head> <body bgcolor="#99FFFF"> <?php $DBName = "surveys"; $DBConnect = @mysqli_connect("localhost", "students", "password") Or die("<p>Unable to connect to the database server.</p>" . "<p>Error code " . mysqli_connect_errno() . ": " . mysqli_connect_error()) . "</p>"; if (!$DBConnect) { echo "<p> The database server is not available.</p>"; } else { echo "<p> Successfully connected to the database $DBName</p>"; } mysqli_select_db($DBConnect, $DBName); echo "<p>Database -'$DBName'- found</p>"; $SQLstring = "SELECT * FROM surveys WHERE surveyCode = 'GEI001'"; $QueryResult = @mysqli_query($DBConnect, $SQLstring); echo $SQLstring; $row = mysqli_fetch_assoc($QueryResult); $count_surveys = $row['surveyResponses']; echo "<p>Total Responses: $count_surveys</p>"; $SQLstring2 = "SELECT * FROM results WHERE surveyCode = 'GEI001'"; $QueryResult2 = @mysqli_query($DBConnect, $SQLstring2); echo $SQLstring2; echo "<br/>"; $Row = mysqli_fetch_assoc($QueryResult2); //this is where I am trying to calculate the value and then below it display in table //cell # 3 $SQLstring3 = "SELECT *,((resultResponses/surveyResponses)*100) AS AMOUNT FROM surveys, results"; $QueryResult3 = @mysqli_query($DBConnect, $SQLstring3); do { echo "<table>"; echo "<tr><th>Commercial</th> <th>Views</th> 
<th>Percentage</th></tr>"; echo "<tr><td>{$Row['resultDescription']}</td>"; echo "<td>{$Row['resultResponses']}</td>"; echo "<td>$QueryResult3</td></tr>"; $Row = mysqli_fetch_assoc($QueryResult); } while ($Row); echo "</table>"; ?> <center> <h3><a href="Survey1.html">Return To Main Page</a></h3> <h3><a href="../Menu.html">Return to Menu</a></h3> </center> </body> <footer> </footer> </html> ```
2012/11/20
[ "https://Stackoverflow.com/questions/13465939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1821676/" ]
I think that it may lag because the program downloads the image each time you draw it with your `Graphics` object. You should cache the image, or download it once at startup and reuse it for the rest of the program's execution.
My guess (and it's only a guess, because you don't tell us): could you possibly be trying to read in the Image from the URL from within a Swing or AWT `paint(...)` or `paintComponent(...)` method? If so, don't do this. Read the image in once, and then *use* it in the `paintComponent(...)` method. If this doesn't help, please do tell us the details we'll need to know to be able to help you.
13,465,939
EDIT: I have successfully been able to calculate the value I was trying to get, but instead of calculating that value for each row, it is just calculating it once and posting that value everywhere. How do I make it recalculate for each row using the code I have? Picture: <http://img515.imageshack.us/img515/9064/example2w.png> New Code: ``` <html> <head> <title>PHP-MySQL Project 4</title> <div align="center"> <p> PHP-MySQL Project 4 <br/> By: Ryan Strouse </p> </div> </head> <body bgcolor="#99FFFF"> <?php $DBName = "surveys"; $DBConnect = @mysqli_connect("localhost", "students", "password") Or die("<p>Unable to connect to the database server.</p>" . "<p>Error code " . mysqli_connect_errno() . ": " . mysqli_connect_error()) . "</p>"; if (!$DBConnect) { echo "<p> The database server is not available.</p>"; } else { echo "<p> Successfully connected to the database $DBName</p>"; } mysqli_select_db($DBConnect, $DBName); echo "<p>Database -'$DBName'- found</p>"; $SQLstring = "SELECT * FROM surveys WHERE surveyCode = 'GEI001'"; $QueryResult = @mysqli_query($DBConnect, $SQLstring); echo $SQLstring; $row = mysqli_fetch_assoc($QueryResult); $count_surveys = $row['surveyResponses']; echo "<p>Total Responses: $count_surveys</p>"; $SQLstring2 = "SELECT * FROM results WHERE surveyCode = 'GEI001'"; $QueryResult2 = @mysqli_query($DBConnect, $SQLstring2); echo $SQLstring2; echo "<br/>"; $Row = mysqli_fetch_assoc($QueryResult2); $SQLstring3 = "SELECT * FROM surveys, results"; $QueryResult3 = @mysqli_query($DBConnect, $SQLstring3); $fetchrow = mysqli_fetch_assoc($QueryResult3); $result_amount = (($fetchrow['resultResponses'] / $fetchrow['surveyResponses']) * 100); echo "<table>"; echo "<tr><th>Commercial</th> <th>Views</th> <th>Percentage</th></tr>"; do { echo "<tr><td>{$Row['resultDescription']}</td>"; echo "<td>{$Row['resultResponses']}</td>"; echo "<td>$result_amount</td></tr>"; $Row = mysqli_fetch_assoc($QueryResult3); } while ($Row); echo "</table>"; ?> <center> <h3><a 
href="Survey1.html">Return To Main Page</a></h3> <h3><a href="../Menu.html">Return to Menu</a></h3> </center> </body> <footer> <div align="center"> &copy; Copyright Ryan Strouse &copy; </div> </footer> </html> ``` I have two database tables and I am successfully pulling in column data into a table. The third cell of the table I would like to calculate a percentage out of some of the columns from the database. I'm not sure how to code this... I've tried to come up with something in the SELECT statement from another thread I found with no luck. Here is a picture of the query I'm trying to get to work: <http://img696.imageshack.us/img696/3862/examplegw.png> ``` <html> <head> <title>PHP-MySQL Project 4</title> </head> <body bgcolor="#99FFFF"> <?php $DBName = "surveys"; $DBConnect = @mysqli_connect("localhost", "students", "password") Or die("<p>Unable to connect to the database server.</p>" . "<p>Error code " . mysqli_connect_errno() . ": " . mysqli_connect_error()) . "</p>"; if (!$DBConnect) { echo "<p> The database server is not available.</p>"; } else { echo "<p> Successfully connected to the database $DBName</p>"; } mysqli_select_db($DBConnect, $DBName); echo "<p>Database -'$DBName'- found</p>"; $SQLstring = "SELECT * FROM surveys WHERE surveyCode = 'GEI001'"; $QueryResult = @mysqli_query($DBConnect, $SQLstring); echo $SQLstring; $row = mysqli_fetch_assoc($QueryResult); $count_surveys = $row['surveyResponses']; echo "<p>Total Responses: $count_surveys</p>"; $SQLstring2 = "SELECT * FROM results WHERE surveyCode = 'GEI001'"; $QueryResult2 = @mysqli_query($DBConnect, $SQLstring2); echo $SQLstring2; echo "<br/>"; $Row = mysqli_fetch_assoc($QueryResult2); //this is where I am trying to calculate the value and then below it display in table //cell # 3 $SQLstring3 = "SELECT *,((resultResponses/surveyResponses)*100) AS AMOUNT FROM surveys, results"; $QueryResult3 = @mysqli_query($DBConnect, $SQLstring3); do { echo "<table>"; echo "<tr><th>Commercial</th> <th>Views</th> 
<th>Percentage</th></tr>"; echo "<tr><td>{$Row['resultDescription']}</td>"; echo "<td>{$Row['resultResponses']}</td>"; echo "<td>$QueryResult3</td></tr>"; $Row = mysqli_fetch_assoc($QueryResult); } while ($Row); echo "</table>"; ?> <center> <h3><a href="Survey1.html">Return To Main Page</a></h3> <h3><a href="../Menu.html">Return to Menu</a></h3> </center> </body> <footer> </footer> </html> ```
2012/11/20
[ "https://Stackoverflow.com/questions/13465939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1821676/" ]
`https://www.dropbox.com/s/xpc49t8xpqt8dir/pacman%20down.jpg` returns HTML text, not image data. This is a hack, but try `https://www.dropbox.com/s/xpc49t8xpqt8dir/pacman%20down.jpg?dl=1` instead. Be warned though, it's possible that Dropbox could change this query in the future. ![enter image description here](https://i.stack.imgur.com/xypAM.png) ``` public class TestURL02 { public static void main(String[] args) { new TestURL02(); } public TestURL02() { EventQueue.invokeLater(new Runnable() { @Override public void run() { try { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); } catch (ClassNotFoundException | InstantiationException | IllegalAccessException | UnsupportedLookAndFeelException ex) { } JFrame frame = new JFrame(); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setLayout(new BorderLayout()); frame.add(new PacPane()); frame.pack(); frame.setLocationRelativeTo(null); frame.setVisible(true); } }); } public class PacPane extends JPanel { private BufferedImage image; public PacPane() { InputStream is = null; try { URL url = new URL("https://www.dropbox.com/s/xpc49t8xpqt8dir/pacman%20down.jpg?dl=1"); // StringBuilder sb = new StringBuilder(1024); // byte[] buffer = new byte[1024 * 1024]; // is = url.openStream(); // int in = -1; // while ((in = is.read(buffer)) != -1) { // sb.append(new String(buffer)); // } // System.out.println(sb.toString()); image = ImageIO.read(url); } catch (IOException exp) { exp.printStackTrace(); } finally { try { is.close(); } catch (Exception e) { } } } @Override public Dimension getPreferredSize() { return image == null ? super.getPreferredSize() : new Dimension(image.getWidth(), image.getHeight()); } @Override protected void paintComponent(Graphics g) { super.paintComponent(g); if (image != null) { int x = (getWidth() - image.getWidth()) / 2; int y = (getHeight() - image.getHeight()) / 2; g.drawImage(image, x, y, this); } } } } ```
My guess (and it's only a guess, because you don't tell us): could you possibly be trying to read in the Image from the URL from within a Swing or AWT `paint(...)` or `paintComponent(...)` method? If so, don't do this. Read the image in once, and then *use* it in the `paintComponent(...)` method. If this doesn't help, please do tell us the details we'll need to know to be able to help you.
6,331,288
I am developing an issue logger for my project and am running into an issue when analyzing the logged data. The problem is that this table grows very fast and that the filters used to search the data can vary in almost every way, since we're not always interested in the same fields. So indexes aren't really an option. The table is currently on a MySQL database, with the following structure: ``` CREATE TABLE `log_issues` ( `id` int(11) unsigned NOT NULL AUTO_INCREMENT, `id_user` int(11) DEFAULT NULL, `type` varchar(50) NOT NULL, `title` varchar(100) NOT NULL DEFAULT '', `message` mediumtext NOT NULL, `debug` mediumtext, `duration` float DEFAULT NULL, `date` datetime NOT NULL, PRIMARY KEY (`id`), KEY `date` (`date`,`title`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ``` Now my question is, how can I run queries on this table when it has millions of entries without having to wait forever for a result? For example, just filtering on the id of a user takes forever. I know I can place an index on the id\_user column, but I might want to combine it with other fields, or due to the way the query is generated by the tool that views these logs it might not utilize the indexes properly. I think I might be better off using MongoDB or a different NoSQL database, but I don't have any experience with them. Do document-based databases have an easier time filtering a large dataset without indexes, or will I always be stuck with this problem no matter the database? **To summarize:** I have a table with a large amount of data, indexes can't be used (at least not if they need to be ordered) and I need to get results without waiting for over 10 seconds. What technologies can I use? Any suggestions would be much appreciated.
2011/06/13
[ "https://Stackoverflow.com/questions/6331288", "https://Stackoverflow.com", "https://Stackoverflow.com/users/490321/" ]
Generally no. In principle, actors live outside the system boundary while Use Cases (and the system(s) that realise them) live inside. However, it is more useful to ask why you have this scenario. Perhaps you can explain further?
Maybe. For example, a cron job which performs a nightly summarisation function can be shown as an actor. As with all UML diagrams, if the diagram is useful to the people that are using it, it's OK.
6,331,288
I am developing an issue logger for my project and am running into an issue when analyzing the logged data. The problem is that this table grows very fast and that the filters used to search the data can vary in almost every way, since we're not always interested in the same fields. So indexes aren't really an option. The table is currently on a MySQL database, with the following structure: ``` CREATE TABLE `log_issues` ( `id` int(11) unsigned NOT NULL AUTO_INCREMENT, `id_user` int(11) DEFAULT NULL, `type` varchar(50) NOT NULL, `title` varchar(100) NOT NULL DEFAULT '', `message` mediumtext NOT NULL, `debug` mediumtext, `duration` float DEFAULT NULL, `date` datetime NOT NULL, PRIMARY KEY (`id`), KEY `date` (`date`,`title`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ``` Now my question is, how can I run queries on this table when it has millions of entries without having to wait forever for a result? For example, just filtering on the id of a user takes forever. I know I can place an index on the id\_user column, but I might want to combine it with other fields, or due to the way the query is generated by the tool that views these logs it might not utilize the indexes properly. I think I might be better off using MongoDB or a different NoSQL database, but I don't have any experience with them. Do document-based databases have an easier time filtering a large dataset without indexes, or will I always be stuck with this problem no matter the database? **To summarize:** I have a table with a large amount of data, indexes can't be used (at least not if they need to be ordered) and I need to get results without waiting for over 10 seconds. What technologies can I use? Any suggestions would be much appreciated.
2011/06/13
[ "https://Stackoverflow.com/questions/6331288", "https://Stackoverflow.com", "https://Stackoverflow.com/users/490321/" ]
Generally no. In principle, actors live outside the system boundary while Use Cases (and the system(s) that realise them) live inside. However, it is more useful to ask why you have this scenario. Perhaps you can explain further?
UML is all about communicating a design decision to those that need to know, in a clear and concise manner. If using an actor to represent a sub-part of the design aids the clarity of this aim, then do it. So long as you make it very clear that this is what you are communicating and that both are part of the same model. As an example: I have experience in designing and coding embedded systems where blocks of code might be running on different processors/controllers or even in a different enclosure. But they are all part of the same application and therefore the same design model. Another way of looking at it, on say a Windows machine, is an application that relies on a Windows service for its normal operation. The service might be an actor to the GUI application and the application might be an actor to the service. @sfinnie is right in principle, but it is sometimes useful to communicate things outside of this rule. After all, you'll never have to compile your UML, just explain it ;) .
2,134,204
I understand how Enums work in C#, and I get what the Flags attribute brings to the table. I saw this question, [here](https://stackoverflow.com/questions/1792437/c-enums-with-flags-attribute), which recommends the first flavor but doesn't provide any reason/justification for it. Is there a difference in the way in which these two are defined? Is one better than the other? What are the advantages of using the first syntax instead of the second? I've always used the second flavor when defining Flags-type Enums... have I been doing it wrong all this time? ``` [Serializable] [Flags] public enum SiteRoles { User = 1 << 0, Admin = 1 << 1, Helpdesk = 1 << 2 } ``` Is that not the same as ``` [Serializable] [Flags] public enum SiteRoles { User = 1, Admin = 2, Helpdesk = 4 } ```
2010/01/25
[ "https://Stackoverflow.com/questions/2134204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/86860/" ]
The main advantage of the first one is that you don't need to calculate the correct values for each flag, since the compiler will do it for you. Apart from that, they are the same.
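The bit-shift idiom isn't specific to C#; as an illustration of the same pattern (Python's `enum.IntFlag` shown here, not C#), `1 << n` makes the bit position explicit and the combined values fall out of the arithmetic:

```python
from enum import IntFlag

class SiteRoles(IntFlag):
    USER     = 1 << 0   # bit 0 -> 1
    ADMIN    = 1 << 1   # bit 1 -> 2
    HELPDESK = 1 << 2   # bit 2 -> 4

# Combine flags with bitwise OR, test membership with `in`.
roles = SiteRoles.USER | SiteRoles.HELPDESK
print(int(roles))                 # 5
print(SiteRoles.ADMIN in roles)   # False
```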
AFAIK it's a readability debate. Some would say the first is more readable because you have the actual index of the flag on the right-hand side of the '<<'.
2,134,204
I understand how Enums work in C#, and I get what the Flags attribute brings to the table. I saw this question, [here](https://stackoverflow.com/questions/1792437/c-enums-with-flags-attribute), which recommends the first flavor but doesn't provide any reason/justification for it. Is there a difference in the way in which these two are defined? Is one better than the other? What are the advantages of using the first syntax instead of the second? I've always used the second flavor when defining Flags-type Enums... have I been doing it wrong all this time? ``` [Serializable] [Flags] public enum SiteRoles { User = 1 << 0, Admin = 1 << 1, Helpdesk = 1 << 2 } ``` Is that not the same as ``` [Serializable] [Flags] public enum SiteRoles { User = 1, Admin = 2, Helpdesk = 4 } ```
2010/01/25
[ "https://Stackoverflow.com/questions/2134204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/86860/" ]
Consider more complex samples: ``` [Flags] public enum SiteRoles { User = 1 << 12, Admin = 1 << 13, Helpdesk = 1 << 14, AdvancedUser = User | Admin, //or (1<<12)|(1<<13) } [Flags] public enum SiteRoles { User = 4096, //not so obvious! Admin = 8192, Helpdesk = 16384, AdvancedUser = 12288, //! } [Flags] public enum SiteRoles { User = 0x1000, //we can use hexadecimal digits Admin = 0x2000, Helpdesk = 0x4000, AdvancedUser = 0x3000, //it is much simpler to calculate the binary OR with hexadecimals } ``` These samples show that in this case the first version is MUCH MORE readable. Decimal literals are not the best way to represent flag constants. And for more information about bitwise operations (which can also be used to represent flag constants) see <http://en.wikipedia.org/wiki/Bitwise_operation>
AFAIK it's a readability debate. Some would say the first is more readable because you have the actual index of the flag on the right hand side of the '<<'.
2,134,204
I understand how Enums work in C#, and I get what the Flags attribute brings to the table. I saw this question, [here](https://stackoverflow.com/questions/1792437/c-enums-with-flags-attribute), which recommends the first flavor but doesn't provide any reason/justification for it. Is there a difference in the way in which these two are defined? Is one better than the other? What are the advantages to using the first syntax instead of the second? I've always used the second flavor when defining Flags type Enums... have I been doing it wrong all this time? ``` [Serializable] [Flags] public enum SiteRoles { User = 1 << 0, Admin = 1 << 1, Helpdesk = 1 << 2 } ``` Is that not the same as ``` [Serializable] [Flags] public enum SiteRoles { User = 1, Admin = 2, Helpdesk = 4 } ```
2010/01/25
[ "https://Stackoverflow.com/questions/2134204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/86860/" ]
The main advantage with the first one is that you don't need to calculate the correct values for each flag since the compiler will do it for you. Apart from that they are the same.
Consider more complex samples: ``` [Flags] public enum SiteRoles { User = 1 << 12, Admin = 1 << 13, Helpdesk = 1 << 14, AdvancedUser = User | Admin, //or (1<<12)|(1<<13) } [Flags] public enum SiteRoles { User = 4096, //not so obvious! Admin = 8192, Helpdesk = 16384, AdvancedUser = 12288, //! } [Flags] public enum SiteRoles { User = 0x1000, //we can use hexadecimal digits Admin = 0x2000, Helpdesk = 0x4000, AdvancedUser = 0x3000, //it is much simpler to calculate the binary OR with hexadecimals } ``` These samples show that in this case the first version is MUCH MORE readable. Decimal literals are not the best way to represent flag constants. And for more information about bitwise operations (which can also be used to represent flag constants) see <http://en.wikipedia.org/wiki/Bitwise_operation>
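The equivalence between the shifted, decimal, and hexadecimal spellings discussed in these answers can be checked mechanically. Here is a small language-neutral sketch in Python (the constant names mirror the C# enum members and are illustrative only):

```python
# Flag values defined with shifts, mirroring the samples above.
USER = 1 << 12      # 4096,  i.e. 0x1000
ADMIN = 1 << 13     # 8192,  i.e. 0x2000
HELPDESK = 1 << 14  # 16384, i.e. 0x4000
ADVANCED_USER = USER | ADMIN  # 12288, i.e. 0x3000

# The shifted form and the decimal/hex literals denote the same values,
# and membership tests reduce to a bitwise AND.
print(ADVANCED_USER & USER != 0)  # True: a combined flag contains its parts
print(ADVANCED_USER & HELPDESK)   # 0: HELPDESK is not part of ADVANCED_USER
```

Whichever spelling is chosen, the compiled constants are identical; the debate is purely about which form a reader can verify at a glance.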
17,131,554
The data structure design I've chosen is proving very awkward to execute, so rather than ask for your expert opinion on how to execute it, I'm hoping you can suggest a more natural data structure for what I'm trying to do, which is as follows. I'm reading in rows of data. Each column is a single variable (Animal, Color, Crop, ... - there are 45 of them). Each row of data has a value for the variable of that column - you don't know the values or the number of rows in advance. ``` Animal Color Crop ... ------------------------------------- cat red oat cat blue hay dog blue oat bat blue corn cat red corn dog gray corn ... ... ... ``` When I'm done reading, it should capture each Variable, each value that variable took, and how many times that variable took that value, like so: ``` Animal [cat, 3][dog,2][bat, 1]... Color [blue, 3][red,2][gray,1]... Crop [corn,3][oat, 2][hay,1]... ... ``` I've tried several approaches; the closest I've gotten is with a Guava multimap of hash maps, like so: ``` Map<String, Integer> eqCnts = new HashMap<String, Integer>(); Multimap<String, Map> ed3Dcnt = HashMultimap.create(); for (int i = 0; i + 1 < header.length; i++) { System.out.format("Got a variable of %s\n", tmpStrKey = header[i]); ed3Dcnt.put(tmpStrKey, new HashMap<String, Integer>()); } ``` It seems I've created exactly what I want just fine, but it's extremely awkward and tedious to work with, and also it behaves in mysterious ways (for one thing, even though the "ed3Dcnt.put()" inserted a HashMap, the corresponding ".get()" does not return a HashMap, but rather a Collection, which creates a whole new set of problems.) Note that I'd like to sort the result on the values, from highest to lowest, but I think I can do that easily enough. So if you please, a suggestion on a better choice of data structure design? If there isn't a clearly better design choice, how do I use the Collection that the .get() returns, when all I want is the single HashMap that I put in that slot? 
Thanks very much - Ed
2013/06/16
[ "https://Stackoverflow.com/questions/17131554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2229973/" ]
Seems to me that the best fit is: ``` HashMap<String, HashMap<String, Integer>> map = new HashMap<String, HashMap<String, Integer>>(); ``` Now, to add *header* inner maps: ``` for (int i = 0; i + 1 < header.length; i++) { System.out.format("Got a variable of %s\n", tmpStrKey = header[i]); map.put(tmpStrKey, new HashMap<String, Integer>()); } ``` And to increment a value in the inner map: ``` //we are in some for loop for ( ... ) { String columnKey = "animal"; //let's say we are here in the for loop for ( ... ) { String columnValue = "cat"; //assume we are here HashMap<String, Integer> innerMap = map.get(columnKey); //increment occurrence Integer count = innerMap.get(columnValue); if (count == null) { count = 0; } innerMap.put(columnValue, ++count); } } ```
1) The map inside your multimap is commonly referred to as a cardinality map. For creating a cardinality map from a collection of values, I usually use [CollectionUtils.getCardinalityMap](http://commons.apache.org/proper/commons-collections/javadocs/api-release/org/apache/commons/collections/CollectionUtils.html#getCardinalityMap%28java.util.Collection%29) from Apache Commons Collections, although that isn't generified so you'll need one unsafe (but known to be safe) cast. If you want to build the map using Guava I think you should first put the values for a variable in a `Set<String>` (to get the set of unique values) and then use [Iterables.frequency()](http://google-collections.googlecode.com/svn/trunk/javadoc/com/google/common/collect/Iterables.html#frequency%28java.lang.Iterable,%20java.lang.Object%29) for each value to get the count. (EDIT: or even easier: use `ImmutableMultiset.copyOf(collection)` to get the cardinality map as a `Multiset`) Anyway, the resulting cardinality map is a `Map<String, Integer>` such as you're already using. 2) I don't see why you need a Multimap. After all you want to map each variable to a cardinality map, so I'd use `Map<String, Map<String, Integer>>`. EDIT: or use `Map<String, Multiset<String>>` if you decide to use a Multiset as your cardinality map.
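The "cardinality map" this answer describes (each distinct value mapped to how many times it occurred) is a very common idiom. A minimal, language-neutral sketch in Python, using the sample Animal column from the question (`Counter` here plays the role of the Guava/Commons utilities mentioned above):

```python
from collections import Counter

# One column of the sample data: the Animal variable.
animals = ["cat", "cat", "dog", "bat", "cat", "dog"]

# Counter builds exactly the cardinality map described above:
# each distinct value mapped to its occurrence count.
cardinality = Counter(animals)

# most_common() sorts highest-to-lowest, which is the ordering
# the asker wanted for the final report.
print(cardinality.most_common())  # [('cat', 3), ('dog', 2), ('bat', 1)]
```

The full solution is then just one such map per variable, keyed by the column header.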
17,131,554
The datastructure design I've chosen is proving very awkward to execute, so rather than ask for your expert opinion on how to execute it, I'm hoping you can suggest a more natural data structure for what I'm trying to do, which is as follows. I'm reading in rows of data. Each column is a single variable (Animal, Color, Crop, ... - there are 45 of them). Each row of data has a value for the variable of that column - you don't know the values or the number of rows in advance. ``` Animal Color Crop ... ------------------------------------- cat red oat cat blue hay dog blue oat bat blue corn cat red corn dog gray corn ... ... ... ``` When I'm done reading, it should capture each Variable, each value that variable took, and how many times that variable took that value, like so: ``` Animal [cat, 3][dog,2][bat, 1]... Color [blue, 3][red,2][gray,1]... Crop [corn,3][oat, 2][hay,1]... ... ``` I've tried several approaches, the closest I've gotten is with a GUAVA multi map of hash maps, like so: ``` Map<String, Integer> eqCnts = new HashMap<String, Integer>(); Multimap<String, Map> ed3Dcnt = HashMultimap.create(); for (int i = 0; i + 1 < header.length; i++) { System.out.format("Got a variable of %s\n", tmpStrKey = header[i]); ed3Dcnt.put(tmpStrKey, new HashMap<String, Integer>()); } ``` It seems I've created exactly what I want just fine, but it's extremely awkward and tedious to work with, and also it behaves in mysterious ways (for one thing, even though the "ed3Dcnt.put()" inserted a HashMap, the corresponding ".get()" does not return a HashMap, but rather a Collection, which creates a whole new set of problems.) Note that I'd like to sort the result on the values, from highest to lowest, but I think I can do that easily enough. So if you please, a suggestion on a better choice of data structure design? If there isn't a clearly better design choice, how do I use the Collection that the .get() returns, when all I want is the single HashMap that I put in that slot? 
Thanks very much - Ed
2013/06/16
[ "https://Stackoverflow.com/questions/17131554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2229973/" ]
You can remove some of the oddity by replacing your `Map<String, Integer>` by a [Multiset](http://code.google.com/p/guava-libraries/wiki/NewCollectionTypesExplained#Multiset). [A multiset (or a bag)](http://en.wikipedia.org/wiki/Multiset) is a set that allows duplicate elements - and counts them. You throw in an apple, a pear, and an apple again. It remembers that it has two apples and a pear. Basically, it's what you imagine under a `Map<String, Integer>` which you just used. ``` Multiset<String> eqCounts = HashMultiset.create(); ``` --- > > the corresponding ".get()" does not return a HashMap, but rather a > Collection > > > This is because you used a generic 'Multimap' interface. The docs say: > > You rarely use the Multimap interface directly, however; more often > you'll use [`ListMultimap`](http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/ListMultimap.html) or [`SetMultimap`](http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/SetMultimap.html), which map keys to a List or a > Set respectively. > > > --- So, to stick to your original design: * Each column will be a `Multiset<String>` which will store and count your values. * You'll have a `Map<String, Multiset<String>>` (key is a header, value is the column) where you'll put the columns like this: ``` Map<String, Multiset<String>> columns = Maps.newHashMap(); for (int i = 0; i < headers.length; i++) { System.out.format("Got a variable of %s\n", headers[i]); columns.put(headers[i], HashMultiset.<String>create()); } ``` Read a line and put the values where they belong: ``` String[] values = line.split(" "); for (int i = 0; i < headers.length; i++) { columns.get(headers[i]).add(values[i]); } ``` --- All that said, you can see that the outer `HashMap` is kind of redundant and the whole thing still could be improved (though it's good enough, I think). To improve it more, you can try one of these: 1. Use an array of `Multiset` instead of a `HashMap`. 
After all, you know the number of columns beforehand. 2. If you're uncomfortable with creating generic arrays, use a `List`. 3. And probably the best: Create a class `Column` like this: ``` private static class Column { private final String header; private final Multiset<String> values; private Column(String header) { this.header = header; this.values = HashMultiset.create(); } } ``` And instead of using `String[]` for headers and a `Map<String, Multiset<String>>` for their values, use a `Column[]`. You can create this array in place of creating the `headers` array.
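The `Column` class suggested at the end of this answer pairs each header with its own multiset of values. A hedged sketch of the same design, in Python for brevity, with `Counter` standing in for Guava's `HashMultiset` (all names are illustrative):

```python
from collections import Counter

class Column:
    """A header plus a multiset of the values seen in that column."""
    def __init__(self, header):
        self.header = header
        self.values = Counter()  # stands in for Guava's HashMultiset

# Build one Column per header, then feed each row's cells into
# the matching column's multiset.
columns = [Column(h) for h in ["Animal", "Color", "Crop"]]
rows = [["cat", "red", "oat"], ["cat", "blue", "hay"], ["dog", "blue", "oat"]]
for row in rows:
    for col, value in zip(columns, row):
        col.values[value] += 1

print(columns[0].values["cat"])  # 2 cats seen in the Animal column
```

The appeal of this shape is that the header and its counts travel together, so there is no outer map whose keys can drift out of sync with the header array.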
1) The map inside your multimap is commonly referred to as a cardinality map. For creating a cardinality map from a collection of values, I usually use [CollectionUtils.getCardinalityMap](http://commons.apache.org/proper/commons-collections/javadocs/api-release/org/apache/commons/collections/CollectionUtils.html#getCardinalityMap%28java.util.Collection%29) from Apache Commons Collections, although that isn't generified so you'll need one unsafe (but known to be safe) cast. If you want to build the map using Guava I think you should first put the values for a variable in a `Set<String>` (to get the set of unique values) and then use [Iterables.frequency()](http://google-collections.googlecode.com/svn/trunk/javadoc/com/google/common/collect/Iterables.html#frequency%28java.lang.Iterable,%20java.lang.Object%29) for each value to get the count. (EDIT: or even easier: use `ImmutableMultiset.copyOf(collection)` to get the cardinality map as a `Multiset`) Anyway, the resulting cardinality map is a `Map<String, Integer` such as you're already using. 2) I don't see why you need a Multimap. After all you want to map each variable to a cardinality map, so I'd use `Map<String, Map<String, Integer>>`. EDIT: or use `Map<String, Multiset<String>>` if you decide to use a Multiset as your cardinality map.
48,993,680
Using android studio for the first time. While 'run "app"' on a device connected by USB, it's giving me an error in Build. Why am i getting this? and how to resolve the error? ``` 2 actionable tasks: 2 executed Executing tasks: [:app:assembleDebug] :app:buildInfoDebugLoader [Fatal Error] :1:1: Premature end of file. FAILED :app:buildInfoGeneratorDebug FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':app:buildInfoDebugLoader'. > Exception while loading build-info.xml : org.xml.sax.SAXParseException;lineNumber: 1; columnNumber: 1; Premature end of file. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339) at com.android.utils.XmlUtils.parseDocument(XmlUtils.java:509) at com.android.utils.XmlUtils.parseUtfXmlFile(XmlUtils.java:524) at com.android.build.gradle.internal.incremental.InstantRunBuildContext.loadFromXmlFile(InstantRunBuildContext.java:763) at com.android.build.gradle.internal.incremental.BuildInfoLoaderTask.executeAction(BuildInfoLoaderTask.java:58) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.doExecute(DefaultTaskClassInfoStore.java:141) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:134) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:121) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:731) at 
org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:705) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:122) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:111) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:92) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70) at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:63) at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:88) at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54) at 
org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:124) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:80) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:105) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:99) at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:625) at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:580) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:99) at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55) at java.lang.Thread.run(Thread.java:745) ``` Also, I'm getting too many errors after installation, which are getting resolved by downloading some files, but is it common to get errors even after a complete installation from the official website?
2018/02/26
[ "https://Stackoverflow.com/questions/48993680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8817816/" ]
Try cleaning your project and rebuilding it.
I found the solution! In Android Studio, go to: Build >> Rebuild Project, and then run the project again. This works for me; I hope it works for others who have the same issue.
34,374,856
I'm trying to use android studio on my pc with Linux Mint 64 bit. I've installed java-8-oracle following this: [Error to run Android Studio](https://stackoverflow.com/questions/16601334/error-to-run-android-studio) Now, when I start a new project, it returns a message to me with: "Error:Could not determine Java version using executable /usr/lib/jvm/java-1.5.0-gcj-4.8-amd64/bin/java." Now, I'm following this: [Java version determination error](https://stackoverflow.com/questions/34072267/java-version-determination-error) but at the moment I can't fix it... Can you help me? Many thanks!
2015/12/19
[ "https://Stackoverflow.com/questions/34374856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5437037/" ]
Faced this problem. Modified the path to /usr/lib/jvm/java-8-oracle through File -> Project Structure -> JDK location. The error message disappeared.
My environment file didn't allow me to change the JAVA\_HOME location. For this reason I went to File -> Project Structure and changed my JDK directory, and it worked. By the way, I am using Xubuntu 14.04.
7,200,682
A while back, my android map app stopped getting Google Satellite tiles. Now that it's moved up to being the most important issue, I've traced the code and found that it creates requests like this one: > > <http://khm3.google.com/kh/v=65&x=30147&y=19664&z=15&s=> > > > Following the link showed it was broken. The guy who wrote the code was the only one to work on the app before me, left before I was employed, and documented nothing. I have no idea what this link is supposed to do, as I can't find it in the Google Map Api, even the deprecated versions. Does anyone have any idea what this link used to connect to, why it no longer works, and how to go about fixing it?
2011/08/26
[ "https://Stackoverflow.com/questions/7200682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/637087/" ]
khm3.google.com/kh/ is a link to the Google satellite tiles. Each tile is 256 pixels by 256 pixels. I'm guessing the v parameter is a version number. The higher the version number, the more recent the satellite images. The highest valid version number as I'm typing this is 104. Google is only going to keep so many versions of these tiles. The x and y parameters are the x and y location of the tile on the earth. 0, 0 starts at approximately 80 degrees of latitude north, at the international date line west. x increments to the east, and y increments to the south in a [Mercator projection](http://en.wikipedia.org/wiki/Mercator_projection). The z parameter is a level parameter that ranges from 10 to 15. * Level 10 has a set of 1024 x 1024 tiles. * Level 11 has a set of 2048 x 2048 tiles. * Level 12 has a set of 4096 x 4096 tiles. * Level 13 has a set of 8192 x 8192 tiles. * Level 14 has a set of 16384 x 16384 tiles. * Level 15 has a set of 32768 x 32768 tiles. To see the scales of these levels, you can look at this [Open Street Map text file](http://trac.openstreetmap.org/browser/applications/rendering/mapnik/zoom-to-scale.txt). For example, level 15 is 17,061 meters per pixel. It appears that x and y are normalized for a given level. If you specify an x or a y greater than 1024 at level 10, you get the tile that's x % 1024 (remainder) or y % 1024. This [Slippery Map Tiles link](http://wiki.openstreetmap.org/wiki/Slippy_map_tilenames) gives you the formulas to convert from latitude / longitude to tile number, and tile number to latitude / longitude. This link is undocumented and unsupported by Google. It could change at any time.
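The latitude/longitude to tile-number conversion the answer points to (the slippy-map formulas) can be sketched directly. Note this uses the standard web-Mercator zoom numbering, not the z=10..15 scheme the tile URL happens to use; the arithmetic is the same:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Standard slippy-map formula: lat/lon in degrees -> (x, y) tile indices.

    x increments eastward from the antimeridian; y increments southward
    from ~85.05 degrees north, exactly as described above.
    """
    lat = math.radians(lat_deg)
    n = 2 ** zoom  # tiles per axis at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

# The tile containing the equator / prime meridian at zoom 15
# sits exactly in the middle of the 32768 x 32768 grid.
print(latlon_to_tile(0.0, 0.0, 15))  # (16384, 16384)
```

This is the same math given on the OpenStreetMap "Slippy map tilenames" page linked in the answer; inverting it recovers the tile's north-west corner.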
I've discovered the answer on my own. The `v` parameter is, I guess, a version number or something. I increased it to 90 and it worked again. I still can't find documentation on this thing, though, so I'm concerned that the app will have to be manually updated and recompiled whenever that number changes.
219,703
I have a TH9 village that is linked to my Note 8.0. I want to leave this village intact and create a new village on my new Galaxy S6. I have a new Gmail and Google+ account to link the new village to. When I open Clash of Clans for the first time on the new device, it asks me to log in to my TH9 village. **What do I do to make my new device's village separate from my Android tablet's village?**
2015/05/17
[ "https://gaming.stackexchange.com/questions/219703", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/112923/" ]
If the question comes up like this: **Load village?** Do you want to load Chief \_\_\_\_'s village with Town Hall level 9? Warning: progress in the current game will be lost. Then simply tap the "Cancel" button when you open the game. If that is not the prompt that you are given, please specify more clearly (possibly provide screenshots) what exactly happens when you open Clash of Clans.
When you download Clash of Clans on your new device, log out of your Apple ID before you try to open it. Then open it (when logged out of your Apple ID) and start a new village. After you have made some small progress on your village, try to log back in to your Apple ID. Then when you open up Clash of Clans again on your device, it *should* have separated from your other account.
219,703
I have a TH9 village that is linked to my Note 8.0. I want to leave this village intact and create a new village on my new Galaxy S6. I have a new Gmail and Google+ account to link the new village to. When I open Clash of Clans for the first time on the new device, it asks me to log in to my TH9 village. **What do I do to make my new device's village separate from my Android tablet's village?**
2015/05/17
[ "https://gaming.stackexchange.com/questions/219703", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/112923/" ]
When it asks to load your TH9 village, say No. Then go into settings and Log In to Google+ with your new email.
When you download Clash of Clans on your new device, log out of your Apple ID before you try to open it. Then open it (when logged out of your Apple ID) and start a new village. After you have made some small progress on your village, try to log back in to your Apple ID. Then when you open up Clash of Clans again on your device, it *should* have separated from your other account.
1,687,801
I am starting a project to create an "**object versioning**" feature for our software (.NET 3.5 / SQL Server 2008); basically it needs to do this: * a user is looking at a **customer**: + last name is "**Smith-Johnson**" + **2 addresses** (saved in different table) + **1 product** purchased + **1 employee contact** (saved in different table) + 200 mails (saved in different table) * user clicks on "**View Past State**" button and chooses "**September 25, 2009 15:00**" which brings up a separate view which shows the same customer: + last name is "**Smith**" (name has changed since then) + **1 address** (since then an address was added) + **1 product** purchased (but it is **different** than the above, since he bought this, returned it later, and bought a new one) + **2 employee contacts** (since one had been deleted since then) + 10 mails In thinking about the problem in general a number of issues come up: * at what level should changes be logged, e.g. at the **database** **level** (log every change in every property of every table) or the **object level** (serialize and store every object and its dependencies after each change) * how will **changes in the structure** **of the tables** be handled, e.g. if column "LastName" changes to "Surname", how to track the data in the columns as all belonging to the same column (so the versioning service doesn't report "this customer didn't have a Surname on September 25th" but instead knows to look in Lastname) * what **supportive technologies** in the .NET / SQL Server 2008 area exist that might support this, e.g. I am looking into the [Change Tracking](http://msdn.microsoft.com/en-us/library/cc280462.aspx) feature of SQL Server 2008 * what kind of patterns exist for versioning, e.g. I'm thinking of the **Command Pattern** which can be used to create an Undo feature in an application. * This version service does **not** need to perform rollbacks. It just needs to be able to **show** the state of objects and their dependencies. 
**What has been your experience implementing versioning features into software? What advice do you have?**
2009/11/06
[ "https://Stackoverflow.com/questions/1687801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4639/" ]
I've worked on software with similar features to this. Instead of data being updated in the database, every change was inserted as a new record. Each row in the database has a start and an end date. The latest record is the one with no end date. From there, you can query the state of the data at any given date just by searching for the records that were active at that time. The obvious downsides are storage, and the fact that you have to abstract certain aspects of your data layer to make the historical tracking transparent to whoever is calling it.
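The insert-only, start/end-date scheme this answer describes (the live row is the one with no end date) can be sketched in a few lines; the field names and dates below are illustrative only:

```python
from datetime import datetime

# Insert-only history for one customer: each change closes the old
# row with an end date and opens a new one. The current row has end=None.
rows = [
    {"last_name": "Smith",         "start": datetime(2009, 1, 1),  "end": datetime(2009, 10, 1)},
    {"last_name": "Smith-Johnson", "start": datetime(2009, 10, 1), "end": None},  # current row
]

def state_at(rows, when):
    """Return the row active at `when`: started on/before it, not yet ended."""
    for r in rows:
        if r["start"] <= when and (r["end"] is None or when < r["end"]):
            return r

# The "View Past State" query from the question becomes a simple filter.
print(state_at(rows, datetime(2009, 9, 25, 15))["last_name"])  # Smith
```

In SQL the same lookup is a `WHERE start <= @when AND (end IS NULL OR @when < end)` predicate, which is why this design keeps the read path so simple.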
A couple years ago I designed a system to version objects so that they could be copied between databases in different environments (Dev, Staging, QA, Production). I used a hand-rolled AOP approach at the time. Spring.NET or some other IoC framework that supports AOP would be better today. Each modification to an object's property was stored as a Change in a centralized database. Each Change had a version number. We used the Change(s) to record actions taken against objects in one database and replay them in a target, effectively copying the changes. If you're using an ORM, you should be able to change column names in your database without worrying about property names if you go with this kind of solution. If you do change a property name, Change records can be updated with the new name. One approach among many, but it worked well for us and I think it could be a good start for your case.
1,687,801
I am starting a project to create an "**object versioning**" feature for our software (.NET 3.5 / SQL Server 2008); basically it needs to do this: * a user is looking at a **customer**: + last name is "**Smith-Johnson**" + **2 addresses** (saved in different table) + **1 product** purchased + **1 employee contact** (saved in different table) + 200 mails (saved in different table) * user clicks on "**View Past State**" button and chooses "**September 25, 2009 15:00**" which brings up a separate view which shows the same customer: + last name is "**Smith**" (name has changed since then) + **1 address** (since then an address was added) + **1 product** purchased (but it is **different** than the above, since he bought this, returned it later, and bought a new one) + **2 employee contacts** (since one had been deleted since then) + 10 mails In thinking about the problem in general a number of issues come up: * at what level should changes be logged, e.g. at the **database** **level** (log every change in every property of every table) or the **object level** (serialize and store every object and its dependencies after each change) * how will **changes in the structure** **of the tables** be handled, e.g. if column "LastName" changes to "Surname", how to track the data in the columns as all belonging to the same column (so the versioning service doesn't report "this customer didn't have a Surname on September 25th" but instead knows to look in Lastname) * what **supportive technologies** in the .NET / SQL Server 2008 area exist that might support this, e.g. I am looking into the [Change Tracking](http://msdn.microsoft.com/en-us/library/cc280462.aspx) feature of SQL Server 2008 * what kind of patterns exist for versioning, e.g. I'm thinking of the **Command Pattern** which can be used to create an Undo feature in an application. * This version service does **not** need to perform rollbacks. It just needs to be able to **show** the state of objects and their dependencies. 
**What has been your experience implementing versioning features into software? What advice do you have?**
2009/11/06
[ "https://Stackoverflow.com/questions/1687801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4639/" ]
Martin Fowler documented some interesting patterns as he is developing the next version of Patterns of Enterprise Application Architecture that you might find interesting: * [Audit Log](http://martinfowler.com/eaaDev/AuditLog.html) * [Snapshot](http://martinfowler.com/eaaDev/Snapshot.html) * [Temporal Object](http://martinfowler.com/eaaDev/TemporalObject.html) * [Effectivity](http://martinfowler.com/eaaDev/Effectivity.html) * [Temporal Property](http://martinfowler.com/eaaDev/TemporalProperty.html) * [Time Point](http://martinfowler.com/eaaDev/TimePoint.html)
A couple years ago I designed a system to version objects so that they could be copied between databases in different environments (Dev, Staging, QA, Production). I used a hand-rolled AOP approach at the time; Spring.NET or some other IoC framework that supports AOP would be better today. Each modification to an object's property was stored as a Change in a centralized database. Each Change had a version number. We used the Changes to record actions taken against objects in one database and replay them in a target, effectively copying the changes. If you're using an ORM, you should be able to change column names in your database without worrying about property names if you go with this kind of solution. If you do change a property name, Change records can be updated with the new name. One approach among many, but it worked well for us and I think it could be a good start for your case.
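A minimal sketch of the change-record idea described above, in Python. The names (`Change`, `replay`) and the structure are assumptions for illustration, not the original implementation: each property modification is stored as a versioned Change, and replaying the log reproduces an object's state at any version, decoupled from database column names.

```python
from dataclasses import dataclass

@dataclass
class Change:
    version: int
    object_id: str
    prop: str       # logical property name, decoupled from any column name
    value: object

def replay(changes, object_id, up_to_version):
    """Rebuild an object's state by replaying its changes in version order."""
    state = {}
    for c in sorted(changes, key=lambda c: c.version):
        if c.object_id == object_id and c.version <= up_to_version:
            state[c.prop] = c.value
    return state

log = [
    Change(1, "cust-1", "LastName", "Smith"),
    Change(2, "cust-1", "LastName", "Smith-Johnson"),
]
print(replay(log, "cust-1", 1))  # {'LastName': 'Smith'}
print(replay(log, "cust-1", 2))  # {'LastName': 'Smith-Johnson'}
```

Replaying the same log against a target database is what makes copying between environments possible in this scheme.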
49,244,675
I am working on an exercise in the Udemy Advanced Webdeveloper Bootcamp. The exercise asked to come up with a page of 32 boxes that randomly change colour (every x seconds). My solution is not exactly that. I change the color of all 32 boxes at the same time. It almost works. I get random 32 boxes initially, but does not change the color later. My console tells me I am doing something wrong with the setState. But I cannot figure out what. I think my changeColor is a pure function: ``` import React, { Component } from 'react'; import './App.css'; class Box extends Component { render() { var divStyle = { backgroundColor: this.props.color } return( <div className="box" style={divStyle}></div> ); } } class BoxRow extends Component { render() { const numOfBoxesInRow = 8; const boxes = []; for(var i=0; i < numOfBoxesInRow; i++) { boxes.push(<Box color={this.props.colors[i]} key={i+1}/>); } return( <div className="boxesWrapper"> {boxes} </div> ); } } class BoxTable extends Component { constructor(props) { super(props); this.getRandom = this.getRandom.bind(this); this.changeColors = this.changeColors.bind(this); this.state = { randomColors: this.getRandom(this.props.allColors, 32) // hardcoding }; this.changeColors(); } changeColors() { setInterval( this.setState({randomColors: this.getRandom(this.props.allColors, 32)}), 5000); } getRandom(arr, n) { var result = new Array(n), len = arr.length, taken = new Array(len); if (n > len) throw new RangeError("getRandom: more elements taken than available"); while (n--) { var x = Math.floor(Math.random() * len); result[n] = arr[x in taken ? taken[x] : x]; taken[x] = --len in taken ? 
taken[len] : len; } return result; } render () { const numOfRows = 4; const rows = []; for(let i=0; i < numOfRows; i++) { rows.push( <BoxRow colors={this.state.randomColors.slice(8*i,8*(1+i))} key={i+1}/> ) } return ( <div className="rowsWrapper"> {rows} </div> ); } } BoxTable.defaultProps = { allColors: ["AliceBlue","AntiqueWhite","Aqua","Aquamarine","Azure","Beige", "Bisque","Black","BlanchedAlmond","Blue","BlueViolet","Brown","BurlyWood", "CadetBlue","Chartreuse","Chocolate","Coral","CornflowerBlue","Cornsilk", "Crimson","Cyan","DarkBlue","DarkCyan","DarkGoldenRod","DarkGray","DarkGrey", "DarkGreen","DarkKhaki","DarkMagenta","DarkOliveGreen","Darkorange", "DarkOrchid","DarkRed","DarkSalmon","DarkSeaGreen","DarkSlateBlue", "DarkSlateGray","DarkSlateGrey","DarkTurquoise","DarkViolet","DeepPink", "DeepSkyBlue","DimGray","DimGrey","DodgerBlue","FireBrick","FloralWhite", "ForestGreen","Fuchsia","Gainsboro","GhostWhite","Gold","GoldenRod","Gray", "Grey","Green","GreenYellow","HoneyDew","HotPink","IndianRed","Indigo", "Ivory","Khaki","Lavender","LavenderBlush","LawnGreen","LemonChiffon", "LightBlue","LightCoral","LightCyan","LightGoldenRodYellow","LightGray", "LightGrey","LightGreen","LightPink","LightSalmon","LightSeaGreen", "LightSkyBlue","LightSlateGray","LightSlateGrey","LightSteelBlue", "LightYellow","Lime","LimeGreen","Linen","Magenta","Maroon", "MediumAquaMarine","MediumBlue","MediumOrchid","MediumPurple", "MediumSeaGreen","MediumSlateBlue","MediumSpringGreen","MediumTurquoise", "MediumVioletRed","MidnightBlue","MintCream","MistyRose","Moccasin", "NavajoWhite","Navy","OldLace","Olive","OliveDrab","Orange","OrangeRed", "Orchid","PaleGoldenRod","PaleGreen","PaleTurquoise","PaleVioletRed", "PapayaWhip","PeachPuff","Peru","Pink","Plum","PowderBlue","Purple", "Red","RosyBrown","RoyalBlue","SaddleBrown","Salmon","SandyBrown", "SeaGreen","SeaShell","Sienna","Silver","SkyBlue","SlateBlue","SlateGray", "SlateGrey","Snow","SpringGreen","SteelBlue","Tan","Teal","Thistle", 
"Tomato","Turquoise","Violet","Wheat","White","WhiteSmoke","Yellow","YellowGreen"] } export default BoxTable ```
2018/03/12
[ "https://Stackoverflow.com/questions/49244675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2812237/" ]
The following PR provides a solution for the above question. <https://github.com/saaskit/saaskit/pull/96> ~~The PR has been merged with the "master" branch now.~~ It hadn't been merged yet as of November 2018.
I have found a good way to get per-tenant options for any type of ASP.NET Core options, including cookie or OpenID Connect. I have wrapped this up into a framework called Finbuckle.MultiTenant. It basically boils down to a setup that looks like this: ``` services.AddMultiTenant(). WithInMemoryStore(). WithRouteStrategy(). WithPerTenantOptionsConfig<CookieAuthenticationOptions>((o, tenantContext) => o.Cookie.Name += tenantContext.Id); ``` See here for more information if you are curious: <https://www.finbuckle.com/MultiTenant>
69,595,197
I am new to Java and learning through a task. I have tried to create a program, but when I input my name it raises an `InputMismatchException`. Here is my code: **Name.java** ``` package LearnJava; public class Name { String firstName, surName; public Name(String fName, String sName) { super(); this.firstName = fName; this.surName = sName; } public String getName() { return firstName; } public void setName(String fName) { this.firstName = fName; } public String getSname() { return surName; } public void setSname(String sName) { this.surName = sName; } } ```
2021/10/16
[ "https://Stackoverflow.com/questions/69595197", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9824695/" ]
Thanks to [Jim](https://stackoverflow.com/users/17011740/jim) I used the console.log() to check what was going on. And indeed the `data` from function inside `new_paste()` wasn't being returned to `fe`. (I had messed up the return scopes basically) Here is the final code after fixes & scope resolutions ```js const { SlashCommandBuilder } = require('@discordjs/builders'); const { REST } = require('@discordjs/rest'); const { Routes } = require('discord-api-types/v9'); const { token, pasteUser, pastePass, pasteKey } = require('../config.json'); const paste = require('better-pastebin'); const rest = new REST({ version: '9' }).setToken(token); const date = new Date(); paste.setDevKey(pasteKey); paste.login(pasteUser, pastePass); module.exports = { data: new SlashCommandBuilder() .setName('export-ban-list') .setDescription('Exports ban list of current server'), async execute(interaction) { const bans = await rest.get( Routes.guildBans(interaction.guildId), ); await interaction.deferReply(`Found ${bans.length} bans. Exporting...`); console.log(`Found ${bans.length} bans. Exporting...`); let results = []; bans.forEach((v) => { results.push(v.user.id); }); results = JSON.stringify(results); console.log(results); const outputFile = `${interaction.guild.name}-${date}.txt`; paste.create({ contents: results, name: outputFile, expires: '1D', anonymous: 'true', }, function(success, data) { if (success) { return interaction.editReply(data); } else { return interaction.editReply('There was some unexpected error.'); } }); }, }; ``` And finally I get the proper pastebin url as output. Code hosted [here](https://github.com/MRDGH2821/Discord-Ban-Utils-Bot/blob/main/commands/export-ban-list.js)
I think the npm package better-pastebin has a bug. I am not familiar with that package, so I can't pinpoint the error for you, but I think if you switch to a different npm package, the error will not appear.
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
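The reshape described here can be sketched with NumPy, assuming the array's axes are ordered (x, y, month, year); the dimension sizes below are illustrative. The year axis is moved in front of the month axis first, so that flattening the two produces months 1-36 in chronological order:

```python
import numpy as np

# Hypothetical dimensions: 5 x 4 spatial grid, 12 months, 3 years.
nx, ny, nmonths, nyears = 5, 4, 12, 3
data = np.arange(nx * ny * nmonths * nyears, dtype=float).reshape(nx, ny, nmonths, nyears)

# Move the year axis before the month axis, then merge the two:
# the combined axis index is year * 12 + month, i.e. months run 0..35
# in chronological order.
data3d = data.transpose(0, 1, 3, 2).reshape(nx, ny, nyears * nmonths)

print(data3d.shape)  # (5, 4, 36)
```

Without the `transpose`, a plain `reshape` would interleave years within each month rather than concatenating whole years.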
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
In the Beginning, was the For Loop: ``` $n = sizeof($name); for ($i = 0; $i < $n; $i++) { echo $name[$i]; echo $mat[$i]; } ```
This is one of the ways to do this: ``` foreach ($specs as $k => $name) { assert(isset($material[$k])); $mat = $material[$k]; } ``` If you have `['foo', 'bar']` and `[2 => 'mat1', 3 => 'mat2']` then this approach won't work, but you can use `array_values` to discard keys first. Another approach would be (which is very close to what you wanted, in fact): ``` while ((list($name) = each($specs)) && (list($mat) = each($material))) { } ``` This will terminate when one of them ends and will work if they are not indexed the same. (Note that `each()` is deprecated as of PHP 7.2.) However, if they are supposed to be indexed the same then perhaps the solution above is better. Hard to say in general.
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
You can use the [`MultipleIterator`](http://php.net/multipleiterator) of SPL. It's a bit verbose for this simple use case, but works well with all edge cases: ``` $iterator = new MultipleIterator(); $iterator->attachIterator(new ArrayIterator($specs)); $iterator->attachIterator(new ArrayIterator($material)); foreach ($iterator as $current) { $name = $current[0]; $mat = $current[1]; } ``` The default settings of the iterator are that it stops as soon as one of the arrays has no more elements and that you can access the current elements with a numeric key, in the order that the iterators have been attached (`$current[0]` and `$current[1]`). Examples for the different settings can be found in the [constructor documentation](http://php.net/manual/multipleiterator.construct.php).
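For comparison, the same stop-at-the-shortest parallel iteration can be sketched in Python, where built-in `zip` plays the role the SPL MultipleIterator plays in the answer above (array contents here are illustrative):

```python
specs = ["first", "second", "third"]
material = ["112", "332"]

# zip stops as soon as the shorter sequence is exhausted,
# matching MultipleIterator's default setting.
for name, mat in zip(specs, material):
    print(name, mat)
# first 112
# second 332
```

To keep iterating past the shorter sequence instead, `itertools.zip_longest` fills the missing slots with a chosen default.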
Simply use a `for` loop. And inside that loop, extract the values of your arrays: ``` for ($i = 0; $i < 100; $i++) { echo $array1[$i]; echo $array2[$i]; } ```
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
You can not have two arrays in a `foreach` loop like that, but you can use `array_combine` to combine the arrays and later just print them out: ``` $arraye = array_combine($name, $material); foreach ($arraye as $k => $a) { echo $k . ' ' . $a; } ``` [**Output**](http://codepad.viper-7.com/6XWUdW): ``` first 112 second 332 ``` But if any of the names don't have material then you must have an empty/null value in it, otherwise there is no way that you can be sure which material belongs to which name. So I think you should have an array like: ``` $name = array('amy','john','morris','rahul'); $material = array('1w','4fr',null,'ff'); ``` Now you can just ``` if (count($name) == count($material)) { for ($i=0; $i < count($name); $i++) { echo $name[$i]; echo $material[$i]; } } ``` --- Just FYI: If you want to have multiple arrays in `foreach`, you can use `list`: ``` foreach ($array as list($arr1, $arr2)) {...} ``` Though first you need to do this: `$array = array($specs,$material)` ``` <?php $abc = array('first','second'); $add = array('112','332'); $array = array($abc,$add); foreach ($array as list($arr1, $arr2)) { echo $arr1; echo $arr2; } ``` The output will be: ``` first second 112 332 ``` And still I don't think it will serve your exact purpose, because it goes through the first array and then the second array.
Simply use a `for` loop. And inside that loop, extract values of your array: ``` For (I=0 to 100) { Echo array1[i]; Echo array2[i] } ```
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
In the Beginning, was the For Loop: ``` $n = sizeof($name); for ($i = 0; $i < $n; $i++) { echo $name[$i]; echo $mat[$i]; } ```
You can use the [`MultipleIterator`](http://php.net/multipleiterator) of SPL. It's a bit verbose for this simple use case, but works well with all edge cases: ``` $iterator = new MultipleIterator(); $iterator->attachIterator(new ArrayIterator($specs)); $iterator->attachIterator(new ArrayIterator($material)); foreach ($iterator as $current) { $name = $current[0]; $mat = $current[1]; } ``` The default settings of the iterator are that it stops as soon as one of the arrays has no more elements and that you can access the current elements with a numeric key, in the order that the iterators have been attached (`$current[0]` and `$current[1]`). Examples for the different settings can be found in the [constructor documentation](http://php.net/manual/multipleiterator.construct.php).
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
In the Beginning, was the For Loop: ``` $n = sizeof($name); for ($i = 0; $i < $n; $i++) { echo $name[$i]; echo $mat[$i]; } ```
You can not have two arrays in a `foreach` loop like that, but you can use `array_combine` to combine the arrays and later just print them out: ``` $arraye = array_combine($name, $material); foreach ($arraye as $k => $a) { echo $k . ' ' . $a; } ``` [**Output**](http://codepad.viper-7.com/6XWUdW): ``` first 112 second 332 ``` But if any of the names don't have material then you must have an empty/null value in it, otherwise there is no way that you can be sure which material belongs to which name. So I think you should have an array like: ``` $name = array('amy','john','morris','rahul'); $material = array('1w','4fr',null,'ff'); ``` Now you can just ``` if (count($name) == count($material)) { for ($i=0; $i < count($name); $i++) { echo $name[$i]; echo $material[$i]; } } ``` --- Just FYI: If you want to have multiple arrays in `foreach`, you can use `list`: ``` foreach ($array as list($arr1, $arr2)) {...} ``` Though first you need to do this: `$array = array($specs,$material)` ``` <?php $abc = array('first','second'); $add = array('112','332'); $array = array($abc,$add); foreach ($array as list($arr1, $arr2)) { echo $arr1; echo $arr2; } ``` The output will be: ``` first second 112 332 ``` And still I don't think it will serve your exact purpose, because it goes through the first array and then the second array.
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
You can use the [`MultipleIterator`](http://php.net/multipleiterator) of SPL. It's a bit verbose for this simple use case, but works well with all edge cases: ``` $iterator = new MultipleIterator(); $iterator->attachIterator(new ArrayIterator($specs)); $iterator->attachIterator(new ArrayIterator($material)); foreach ($iterator as $current) { $name = $current[0]; $mat = $current[1]; } ``` The default settings of the iterator are that it stops as soon as one of the arrays has no more elements and that you can access the current elements with a numeric key, in the order that the iterators have been attached (`$current[0]` and `$current[1]`). Examples for the different settings can be found in the [constructor documentation](http://php.net/manual/multipleiterator.construct.php).
This is one of the ways to do this: ``` foreach ($specs as $k => $name) { assert(isset($material[$k])); $mat = $material[$k]; } ``` If you have `['foo', 'bar']` and `[2 => 'mat1', 3 => 'mat2']` then this approach won't work, but you can use `array_values` to discard keys first. Another approach would be (which is very close to what you wanted, in fact): ``` while ((list($name) = each($specs)) && (list($mat) = each($material))) { } ``` This will terminate when one of them ends and will work if they are not indexed the same. (Note that `each()` is deprecated as of PHP 7.2.) However, if they are supposed to be indexed the same then perhaps the solution above is better. Hard to say in general.
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
In the Beginning, was the For Loop: ``` $n = sizeof($name); for ($i = 0; $i < $n; $i++) { echo $name[$i]; echo $mat[$i]; } ```
Simply use a `for` loop. And inside that loop, extract the values of your arrays: ``` for ($i = 0; $i < 100; $i++) { echo $array1[$i]; echo $array2[$i]; } ```
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
In the Beginning, was the For Loop: ``` $n = sizeof($name); for ($i = 0; $i < $n; $i++) { echo $name[$i]; echo $mat[$i]; } ```
Do it using a `for` loop... Check it below: ``` <?php $specs = array('a', 'b', 'c', 'd'); $material = array('x', 'y', 'z'); $count = count($specs) > count($material) ? count($specs) : count($material); for ($i=0;$i<$count;$i++ ) { if (isset($specs[$i])) echo $specs[$i]; if (isset($material[$i])) echo $material[$i]; } ?> ``` **OUTPUT** ``` axbyczd ```
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
You can not have two arrays in a `foreach` loop like that, but you can use `array_combine` to combine the arrays and later just print them out: ``` $arraye = array_combine($name, $material); foreach ($arraye as $k => $a) { echo $k . ' ' . $a; } ``` [**Output**](http://codepad.viper-7.com/6XWUdW): ``` first 112 second 332 ``` But if any of the names don't have material then you must have an empty/null value in it, otherwise there is no way that you can be sure which material belongs to which name. So I think you should have an array like: ``` $name = array('amy','john','morris','rahul'); $material = array('1w','4fr',null,'ff'); ``` Now you can just ``` if (count($name) == count($material)) { for ($i=0; $i < count($name); $i++) { echo $name[$i]; echo $material[$i]; } } ``` --- Just FYI: If you want to have multiple arrays in `foreach`, you can use `list`: ``` foreach ($array as list($arr1, $arr2)) {...} ``` Though first you need to do this: `$array = array($specs,$material)` ``` <?php $abc = array('first','second'); $add = array('112','332'); $array = array($abc,$add); foreach ($array as list($arr1, $arr2)) { echo $arr1; echo $arr2; } ``` The output will be: ``` first second 112 332 ``` And still I don't think it will serve your exact purpose, because it goes through the first array and then the second array.
Do it using a `for` loop... Check it below: ``` <?php $specs = array('a', 'b', 'c', 'd'); $material = array('x', 'y', 'z'); $count = count($specs) > count($material) ? count($specs) : count($material); for ($i=0;$i<$count;$i++ ) { if (isset($specs[$i])) echo $specs[$i]; if (isset($material[$i])) echo $material[$i]; } ?> ``` **OUTPUT** ``` axbyczd ```
32,941,862
I have a 4D array that has two spatial directions, a month column and a year column. It gives a scalar value at each spatial point and for each month. I want to reshape this array to be 3D so that instead of the value being defined as x, y, month, year, it is just defined as x, y, month, where now the month column runs from 1-36 say with no year column instead of 1-12 with a year column of 1-3. How would I do this in Python? Thanks!
2015/10/05
[ "https://Stackoverflow.com/questions/32941862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5408553/" ]
You can use the [`MultipleIterator`](http://php.net/multipleiterator) of SPL. It's a bit verbose for this simple use case, but works well with all edge cases: ``` $iterator = new MultipleIterator(); $iterator->attachIterator(new ArrayIterator($specs)); $iterator->attachIterator(new ArrayIterator($material)); foreach ($iterator as $current) { $name = $current[0]; $mat = $current[1]; } ``` The default settings of the iterator are that it stops as soon as one of the arrays has no more elements and that you can access the current elements with a numeric key, in the order that the iterators have been attached (`$current[0]` and `$current[1]`). Examples for the different settings can be found in the [constructor documentation](http://php.net/manual/multipleiterator.construct.php).
Do it using a `for` loop... Check it below: ``` <?php $specs = array('a', 'b', 'c', 'd'); $material = array('x', 'y', 'z'); $count = count($specs) > count($material) ? count($specs) : count($material); for ($i=0;$i<$count;$i++ ) { if (isset($specs[$i])) echo $specs[$i]; if (isset($material[$i])) echo $material[$i]; } ?> ``` **OUTPUT** ``` axbyczd ```
54,281,129
I have a file that contains several Phone Number. Now I want to convert any line of this file to VCF file. So,first i defined e template model for VCF file that have a String "THISNUMBER" And i want to open file (thats have phone numbers) and replace thats lines to Template model (THISNUMBER) i write this Python code : ``` template = """BEGIN:VCARD VERSION:3.0 N:THISNUMBER;;; FN:THISNUMBER TEL;TYPE=CELL:THISNUM END:VCARD""" inputfile=open('D:/xxx/lst.txt','r') counter=1 for thisnumber in inputfile: thisnumber=thisnumber.rstrip() output=template.replace('THISNUMBER',thisnumber) outputFile=('D:/xxx/vcfs/%05i.vcf' % counter,'w') outputFile.write(output) output.close print ("writing file %i") % counter counter +=1 inputfile.close() ``` But I Give This ERROR : ``` Traceback (most recent call last): File "D:\xxx\a.py", line 16, in <module> outputFile.write(output) AttributeError: 'tuple' object has no attribute 'write' ```
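For reference, the traceback points at `outputFile=('D:/xxx/vcfs/%05i.vcf' % counter,'w')`: without a call to `open()` this expression builds a tuple, which has no `write` method; likewise `output.close` tries to close the string rather than the file. (The template's `THISNUM` also looks like a typo for `THISNUMBER`.) A corrected sketch of the loop, using a temporary directory so it is self-contained; substitute the `D:/xxx/...` paths from the question:

```python
import os
import tempfile

template = """BEGIN:VCARD
VERSION:3.0
N:THISNUMBER;;;
FN:THISNUMBER
TEL;TYPE=CELL:THISNUMBER
END:VCARD"""

# Temporary stand-ins for D:/xxx/lst.txt and D:/xxx/vcfs/.
workdir = tempfile.mkdtemp()
listfile = os.path.join(workdir, "lst.txt")
with open(listfile, "w") as f:
    f.write("5551111\n5552222\n")

with open(listfile, "r") as inputfile:
    for counter, thisnumber in enumerate(inputfile, start=1):
        output = template.replace("THISNUMBER", thisnumber.rstrip())
        # The original built a tuple here; open() was missing.
        with open(os.path.join(workdir, "%05i.vcf" % counter), "w") as outputfile:
            outputfile.write(output)
        print("writing file %i" % counter)
```

The `with` blocks also close each file automatically, so no explicit `close()` calls are needed.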
2019/01/20
[ "https://Stackoverflow.com/questions/54281129", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10941975/" ]
Given ``` x <- '10021502' ``` we can use `formatC` to get desired output ``` formatC( x = as.integer(x), width = 12, # total width flag = "0", # pads zeros at beginning big.mark = "_", # mark between every big.interval before the decimal point big.interval = 4 # see above ) # [1] "0000_1002_1502" ```
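For comparison, an equivalent sketch in Python: pad the string to 12 digits with `zfill`, then join 4-digit groups with underscores.

```python
x = "10021502"

padded = x.zfill(12)                               # pad with leading zeros to width 12
groups = [padded[i:i + 4] for i in range(0, len(padded), 4)]
result = "_".join(groups)
print(result)  # 0000_1002_1502
```

This mirrors `formatC`'s `width`, `flag = "0"`, `big.mark`, and `big.interval` arguments, though `zfill` works on the string directly rather than on an integer.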
Given: ``` x <- '10021502' ``` The following will do the trick ``` gsub("([0-9]{4})([0-9]{4})", "0000_\\1_\\2", x) ```
51,006,969
i'm getting the following error: > > Column not found: 1054 Unknown column 'image\_likes.gallery\_image\_id' > in 'where clause' (SQL: select \* from `image_likes` where > `image_likes`.`gallery_image_id` in (1) and `deleted_at` is null and > `user_id` = 1 and `image_likes`.`deleted_at` is null) > > > whenever i add the `function ($query)`. the line of code below gets the data, but i need the data to get the likes that corresponds with it. `$images = GalleryImage::with('user')->get()` this is what i have so far. **ImageLike.php** ``` <?php namespace App; use App\GalleryImage; use App\User; use Illuminate\Database\Eloquent\Model; use Illuminate\Database\Eloquent\SoftDeletes; class ImageLike extends Model { use SoftDeletes; protected $fillable = [ 'user_id', 'image_id' ]; } ``` **ImageController.php** ``` public function getImages() { $images = GalleryImage::with('user') ->with(['likes' => function ($query) { $query->whereNull('deleted_at'); $query->where('user_id', auth()->user()->id); }])->get(); return response()->json($images); } ``` **GalleryImage.php** ``` <?php namespace App; use App\User; use App\GalleryImage; use Illuminate\Database\Eloquent\Model; use Illuminate\Foundation\Auth\User as Authenticatable; class GalleryImage extends Authenticatable { protected $fillable = [ 'image_title', 'user_id', 'file_name', 'created_at' ]; protected $table = 'images'; public function user() { return $this->belongsTo(User::class); } public function likes() { return $this->hasMany(ImageLike::class); } public function likedByMe() { foreach($this->likes as $like) { if ($like->user_id == auth()->id()){ return true; } } return false; } } ```
2018/06/24
[ "https://Stackoverflow.com/questions/51006969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Change your relationship in the GalleryImage model for `likes`: ``` public function likes() { return $this->hasMany(ImageLike::class, 'image_id'); // image_id is the foreign key in the likes table referencing the images table } ``` By default, Laravel assumes your model name plus `_id` as the foreign key, so it was looking for `gallery_image_id` (derived from your GalleryImage model) in the `ImageLike` model, but you have `image_id`. So if your key differs from the default, specify it in the relationship: ``` return $this->hasMany('App\Comment', 'foreign_key', 'local_key'); ``` Check the details here: <https://laravel.com/docs/5.6/eloquent-relationships#one-to-many>
``` public function likedByMe() { return $this->likes()->whereUserId(auth()->id())->count() > 0; } ``` This is not related to your problem, but you can re-write your `likedByMe()` like this in a more efficient way.
990,018
I have an ASP.NET app with a three-layer architecture: * Presentation layer: ASP.NET * Business layer: C# library. * Data access layer: C# library with ADO.NET Entity Framework objects. Some methods in the business layer would return ADO.NET entity objects, but since the data access layer is not visible to the presentation layer, I can't do that. My question is: from a design point of view, is it correct to expose entity objects in the presentation layer? I think I would only have to link the data layer library with the ASP.NET app. Thank you!
2009/06/13
[ "https://Stackoverflow.com/questions/990018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/68571/" ]
It's absolutely desirable to have your entity objects available for use and consumption in your presentation tier. That's what all the work is for. * Binding collection of objects to a grid/listview/dropdown * Splashing a single object (i.e. customer) onto a form for read/update/delete This makes your life easier by far. Otherwise you'd have to pass string after int after double after string between your presentation and business layers. These may be Entity objects or even your own POCO objects that were hydrated from the Entity objects. I would even go so far as to say that your Entites should be in their own assembly separate from the DAL.
I think no, it is not. The best way is to separate data classes from behavior, and reference only the data classes at the presentation level. A good approach, I think, is to use WCF; see this [link](http://msdn.microsoft.com/en-us/magazine/cc700340.aspx).
990,018
I have an ASP.NET app with a three-layer architecture: * Presentation layer: ASP.NET * Business layer: C# library. * Data access layer: C# library with ADO.NET Entity Framework objects. Some methods in the business layer would return ADO.NET entity objects, but since the data access layer is not visible to the presentation layer, I can't do that. My question is: from a design point of view, is it correct to expose entity objects in the presentation layer? I think I would only have to link the data layer library with the ASP.NET app. Thank you!
2009/06/13
[ "https://Stackoverflow.com/questions/990018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/68571/" ]
It's absolutely desirable to have your entity objects available for use and consumption in your presentation tier. That's what all the work is for. * Binding collection of objects to a grid/listview/dropdown * Splashing a single object (i.e. customer) onto a form for read/update/delete This makes your life easier by far. Otherwise you'd have to pass string after int after double after string between your presentation and business layers. These may be Entity objects or even your own POCO objects that were hydrated from the Entity objects. I would even go so far as to say that your Entites should be in their own assembly separate from the DAL.
See [Supervising Controller](http://www.martinfowler.com/eaaDev/SupervisingPresenter.html) and [Passive View](http://www.martinfowler.com/eaaDev/PassiveScreen.html) If you pass the Entity, you are essentially Supervising controller. Otherwise you are Passive View. Supervising controller is less work, but less testable. Supervising Controller also says databinding is OK. Passive view is testable but a LOT more work. No databinding. Lots of properties. Typically I stick with Supervising Controller. You typically don't need that level of testability and it isn't worth the extra trouble.
990,018
I have an ASP.NET app with a three-layer architecture:

* Presentation layer: ASP.NET
* Business layer: C# library
* Data access layer: C# library with ADO.NET Entity Framework objects

Some methods in the business layer would return ADO.NET entity objects, but since the data access layer is not visible to the presentation layer, I can't do that. My question is: from a design point of view, is it correct to expose entity objects in the presentation layer? I think I only have to link the data layer library with the ASP.NET app. Thank you!
2009/06/13
[ "https://Stackoverflow.com/questions/990018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/68571/" ]
I think not; the best way to do that is to separate data classes from behavior, and reference only the data classes at the presentation level. A good approach, I think, is to use WCF; see this [link](http://msdn.microsoft.com/en-us/magazine/cc700340.aspx).
See [Supervising Controller](http://www.martinfowler.com/eaaDev/SupervisingPresenter.html) and [Passive View](http://www.martinfowler.com/eaaDev/PassiveScreen.html) If you pass the Entity, you are essentially Supervising controller. Otherwise you are Passive View. Supervising controller is less work, but less testable. Supervising Controller also says databinding is OK. Passive view is testable but a LOT more work. No databinding. Lots of properties. Typically I stick with Supervising Controller. You typically don't need that level of testability and it isn't worth the extra trouble.
990,018
I have an ASP.NET app with a three-layer architecture:

* Presentation layer: ASP.NET
* Business layer: C# library
* Data access layer: C# library with ADO.NET Entity Framework objects

Some methods in the business layer would return ADO.NET entity objects, but since the data access layer is not visible to the presentation layer, I can't do that. My question is: from a design point of view, is it correct to expose entity objects in the presentation layer? I think I only have to link the data layer library with the ASP.NET app. Thank you!
2009/06/13
[ "https://Stackoverflow.com/questions/990018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/68571/" ]
I suggest that you look into the concept of view objects, or Data Transfer Objects (DTOs). You might consider using a tool like AutoMapper or similar, which will create a view-specific domain object out of your entities. In general you may have screens that need an entity present to perform their work, but more often than not you will need to pass several different entities. In this case you are better off creating one DTO that contains all of these entities. By doing this you are adding a layer of separation between your presentation layer and your business layer. Oftentimes your entities have more power than you might want to expose to your presentation layer, and vice versa. Frequently you may need to get some UI messages out to the presentation layer based on validation flagged in your business layer. Rather than make your UI more complex than it needs to be (by passing in your full entities), you can pass in only what the UI needs in the form of the DTO. Also, there is never a need for your business objects to care about anything specific to the presentation layer. I suggest that you not databind directly to anything as far back as the data access layer. Technically your presentation layer should know as little as possible about your business layer. In the case of MVP or MVC this is very easy to achieve by disconnecting the front end and the back end by way of this additional separation!
I think not; the best way to do that is to separate data classes from behavior, and reference only the data classes at the presentation level. A good approach, I think, is to use WCF; see this [link](http://msdn.microsoft.com/en-us/magazine/cc700340.aspx).
990,018
I have an ASP.NET app with a three-layer architecture:

* Presentation layer: ASP.NET
* Business layer: C# library
* Data access layer: C# library with ADO.NET Entity Framework objects

Some methods in the business layer would return ADO.NET entity objects, but since the data access layer is not visible to the presentation layer, I can't do that. My question is: from a design point of view, is it correct to expose entity objects in the presentation layer? I think I only have to link the data layer library with the ASP.NET app. Thank you!
2009/06/13
[ "https://Stackoverflow.com/questions/990018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/68571/" ]
I suggest that you look into the concept of view objects, or Data Transfer Objects (DTOs). You might consider using a tool like AutoMapper or similar, which will create a view-specific domain object out of your entities. In general you may have screens that need an entity present to perform their work, but more often than not you will need to pass several different entities. In this case you are better off creating one DTO that contains all of these entities. By doing this you are adding a layer of separation between your presentation layer and your business layer. Oftentimes your entities have more power than you might want to expose to your presentation layer, and vice versa. Frequently you may need to get some UI messages out to the presentation layer based on validation flagged in your business layer. Rather than make your UI more complex than it needs to be (by passing in your full entities), you can pass in only what the UI needs in the form of the DTO. Also, there is never a need for your business objects to care about anything specific to the presentation layer. I suggest that you not databind directly to anything as far back as the data access layer. Technically your presentation layer should know as little as possible about your business layer. In the case of MVP or MVC this is very easy to achieve by disconnecting the front end and the back end by way of this additional separation!
See [Supervising Controller](http://www.martinfowler.com/eaaDev/SupervisingPresenter.html) and [Passive View](http://www.martinfowler.com/eaaDev/PassiveScreen.html) If you pass the Entity, you are essentially Supervising controller. Otherwise you are Passive View. Supervising controller is less work, but less testable. Supervising Controller also says databinding is OK. Passive view is testable but a LOT more work. No databinding. Lots of properties. Typically I stick with Supervising Controller. You typically don't need that level of testability and it isn't worth the extra trouble.
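The answers above are about .NET, but the DTO idea itself is small enough to sketch in a few lines. Here is a hedged illustration in JavaScript (the entity shape, field names, and `toCustomerDto` function are all invented for this example, not taken from the question):

```js
// Map a rich entity to a flat DTO that exposes only what the view needs.
// Internal fields (audit columns, navigation properties, ...) are
// deliberately left out of the DTO.
function toCustomerDto(entity) {
  return {
    id: entity.id,
    displayName: `${entity.firstName} ${entity.lastName}`,
  };
}

const entity = { id: 7, firstName: 'Ada', lastName: 'Lovelace', rowVersion: 3 };
console.log(toCustomerDto(entity)); // { id: 7, displayName: 'Ada Lovelace' }
```

The point is the one made in the DTO answer: the presentation layer binds to the flat shape, and the entity's internal details never cross the boundary.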
46,899,860
Im using C# in VS2017. I have a json from where I get data, and this 3 classes: ``` public class Azar { public int id_juego { get; set; } public string timestamp { get; set; } public int id_tipojuego { get; set; } public string fecha { get; set; } public int? sorteo { get; set; } public object resultados { get; set; } public string tipo_juego { get; set; } public int tipo { get; set; } } public class ResultadoQuiniela { public string letras { get; set; } public int[] numeros { get; set; } } public class ResultadoTelekino { public int sorteo { get; set; } public int b1 { get; set; } public int b2 { get; set; } public int b3 { get; set; } public int b4 { get; set; } public int b5 { get; set; } public int b6 { get; set; } public int b7 { get; set; } public int b8 { get; set; } public int b9 { get; set; } public int b10 { get; set; } public int b11 { get; set; } public int b12 { get; set; } public int b13 { get; set; } public int b14 { get; set; } public int b15 { get; set; } public int cat1 { get; set; } public int cat2 { get; set; } public int cat3 { get; set; } public int cat4 { get; set; } public int cat5 { get; set; } public int cat6 { get; set; } public int cat7 { get; set; } public string prm1 { get; set; } public string prm2 { get; set; } public string prm3 { get; set; } public string prm4 { get; set; } public string prm5 { get; set; } public string prm6 { get; set; } public string prm7 { get; set; } public string pozo { get; set; } public string ext { get; set; } } ``` The Azar->resultados object can be any of the 2 classes, or ResultadoQuiniela or ResultadoTelekino. I don't know which of them is until I parse the json and see the id\_tipojuego. Once I know that I do: ``` if(curAzar.id_tipojuego == 25) { ResultadoQuiniela resultados = (ResultadoQuiniela)curAzar.resultados; } ``` But I get null results. I see that curAzar (is the result parsed from the json) was every attribute set BUT the resultados object. 
How can I store that information in a "neutral" way and then cast it to the right object? Declaring it as object and then casting it doesn't store the value at all. **EDIT**: For parsing the JSON I'm using Newtonsoft.Json
2017/10/23
[ "https://Stackoverflow.com/questions/46899860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1720455/" ]
First change this: `<=s1.length()` to `<s1.length()`, otherwise you will get a `StringIndexOutOfBoundsException`. You can use `indexOf()`.
1. By your logic, it would make more sense to write

```
s3 = s2.contains(s1.substring(i, i + n));
```

since you are iterating through your first string, trying to figure out whether any substring of length n in s1 is contained in s2.

2. Your boolean is not necessary; it does not add any value to the code.

```
public boolean hasCheated(String s1, String s2, int n) {
    for (int i = 0; i + n <= s1.length(); i++) {
        if (s2.contains(s1.substring(i, i + n))) {
            return true;
        }
    }
    return false;
}
```

Without contains or substring, you could use indexOf() as Pritam mentioned, but you would still need to build each candidate substring. To do so, you could convert your string to a char array and work with indexes. It does increase the complexity.

```
public boolean hasCheated(String s1, String s2, int n) {
    char[] charString = s1.toCharArray();
    for (int i = 0; i + n <= s1.length(); i++) {
        char[] tempCharArray = new char[n];
        for (int j = 0; j < n; j++) {
            tempCharArray[j] = charString[i + j];
        }
        String comparedString = new String(tempCharArray);
        if (s2.indexOf(comparedString) >= 0) { // >= 0: a match at index 0 counts
            return true;
        }
    }
    return false;
}
```

I haven't looked closely at whether better methods exist, but this would work.
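The sliding-window idea discussed in the answers above is language-agnostic. Here is a minimal JavaScript sketch of the same check (the function and variable names are my own, not from the original answers):

```js
// Returns true if any substring of s1 of length n also occurs in s2.
// Mirrors the answers' loop: slide a window of width n across s1 and
// look each window up in s2 via indexOf.
function hasCommonSubstring(s1, s2, n) {
  for (let i = 0; i + n <= s1.length; i++) {
    const window = s1.slice(i, i + n);
    // indexOf returns -1 when absent; a match at index 0 still counts
    if (s2.indexOf(window) >= 0) {
      return true;
    }
  }
  return false;
}

console.log(hasCommonSubstring('home', 'homework', 4)); // true
console.log(hasCommonSubstring('abcd', 'efgh', 2));     // false
```

The loop bound `i + n <= s1.length` is the same guard the Java fixes above need: it keeps the window from running past the end of the string.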
46,899,860
Im using C# in VS2017. I have a json from where I get data, and this 3 classes: ``` public class Azar { public int id_juego { get; set; } public string timestamp { get; set; } public int id_tipojuego { get; set; } public string fecha { get; set; } public int? sorteo { get; set; } public object resultados { get; set; } public string tipo_juego { get; set; } public int tipo { get; set; } } public class ResultadoQuiniela { public string letras { get; set; } public int[] numeros { get; set; } } public class ResultadoTelekino { public int sorteo { get; set; } public int b1 { get; set; } public int b2 { get; set; } public int b3 { get; set; } public int b4 { get; set; } public int b5 { get; set; } public int b6 { get; set; } public int b7 { get; set; } public int b8 { get; set; } public int b9 { get; set; } public int b10 { get; set; } public int b11 { get; set; } public int b12 { get; set; } public int b13 { get; set; } public int b14 { get; set; } public int b15 { get; set; } public int cat1 { get; set; } public int cat2 { get; set; } public int cat3 { get; set; } public int cat4 { get; set; } public int cat5 { get; set; } public int cat6 { get; set; } public int cat7 { get; set; } public string prm1 { get; set; } public string prm2 { get; set; } public string prm3 { get; set; } public string prm4 { get; set; } public string prm5 { get; set; } public string prm6 { get; set; } public string prm7 { get; set; } public string pozo { get; set; } public string ext { get; set; } } ``` The Azar->resultados object can be any of the 2 classes, or ResultadoQuiniela or ResultadoTelekino. I don't know which of them is until I parse the json and see the id\_tipojuego. Once I know that I do: ``` if(curAzar.id_tipojuego == 25) { ResultadoQuiniela resultados = (ResultadoQuiniela)curAzar.resultados; } ``` But I get null results. I see that curAzar (is the result parsed from the json) was every attribute set BUT the resultados object. 
How can I store that information in a "neutral" way and then cast it to the right object? Declaring it as object and then casting it doesn't store the value at all. **EDIT**: For parsing the JSON I'm using Newtonsoft.Json
2017/10/23
[ "https://Stackoverflow.com/questions/46899860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1720455/" ]
So basically this would be my approach. @Pritam is right if you want to do it that way, but assuming you don't have the possibility of using String.contains() and String.substring(), this is how I would do it:

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Scanner;

public class Exercise {
    public static void main(String[] args) {
        String s1 = "home";
        String s2 = "homework";
        int n = 4;
        Exercise p = new Exercise();
        if (p.hasCheated(s1, s2, n)) {
            System.out.println("Student Cheated");
            return;
        }
        System.out.println("Not Cheated");
    }

    public boolean hasCheated(String s1, String s2, int n) {
        ArrayList<String> al = new ArrayList<String>();
        ArrayList<String> bl = new ArrayList<String>();
        al.addAll(getInfo(s1, n));
        bl.addAll(getInfo(s2, n));
        al.retainAll(bl);
        return !al.isEmpty();
    }

    public List<String> getInfo(String s, int n) {
        ArrayList<String> inf = new ArrayList<String>();
        // split into chunks of length n (the original hardcoded 4 here)
        String myStr = Arrays.toString(s.split("(?<=\\G.{" + n + "})"));
        Scanner sc = new Scanner(myStr).useDelimiter(",");
        while (sc.hasNext()) {
            String myString = sc.next().replaceAll("\\W", "");
            inf.add(myString);
        }
        return inf;
    }
}
```
First change this: `<=s1.length()` to `<s1.length()`, otherwise you will get a `StringIndexOutOfBoundsException`. You can use `indexOf()`.
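The retainAll answer above is essentially a set intersection of fixed-length chunks. A hedged JavaScript sketch of that idea follows (names are my own; note that the original splits the string into non-overlapping chunks of n, while this sketch uses overlapping windows, which catches more shared runs and may or may not be the intended behavior):

```js
// Collect every substring of length n (overlapping) into a Set.
function ngrams(s, n) {
  const out = new Set();
  for (let i = 0; i + n <= s.length; i++) {
    out.add(s.slice(i, i + n));
  }
  return out;
}

// The Java answer intersects two lists with retainAll; here a Set
// membership test plays the same role.
function sharesNgram(s1, s2, n) {
  const a = ngrams(s1, n);
  for (const g of ngrams(s2, n)) {
    if (a.has(g)) return true; // non-empty intersection
  }
  return false;
}

console.log(sharesNgram('home', 'homework', 4)); // true
```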
46,899,860
Im using C# in VS2017. I have a json from where I get data, and this 3 classes: ``` public class Azar { public int id_juego { get; set; } public string timestamp { get; set; } public int id_tipojuego { get; set; } public string fecha { get; set; } public int? sorteo { get; set; } public object resultados { get; set; } public string tipo_juego { get; set; } public int tipo { get; set; } } public class ResultadoQuiniela { public string letras { get; set; } public int[] numeros { get; set; } } public class ResultadoTelekino { public int sorteo { get; set; } public int b1 { get; set; } public int b2 { get; set; } public int b3 { get; set; } public int b4 { get; set; } public int b5 { get; set; } public int b6 { get; set; } public int b7 { get; set; } public int b8 { get; set; } public int b9 { get; set; } public int b10 { get; set; } public int b11 { get; set; } public int b12 { get; set; } public int b13 { get; set; } public int b14 { get; set; } public int b15 { get; set; } public int cat1 { get; set; } public int cat2 { get; set; } public int cat3 { get; set; } public int cat4 { get; set; } public int cat5 { get; set; } public int cat6 { get; set; } public int cat7 { get; set; } public string prm1 { get; set; } public string prm2 { get; set; } public string prm3 { get; set; } public string prm4 { get; set; } public string prm5 { get; set; } public string prm6 { get; set; } public string prm7 { get; set; } public string pozo { get; set; } public string ext { get; set; } } ``` The Azar->resultados object can be any of the 2 classes, or ResultadoQuiniela or ResultadoTelekino. I don't know which of them is until I parse the json and see the id\_tipojuego. Once I know that I do: ``` if(curAzar.id_tipojuego == 25) { ResultadoQuiniela resultados = (ResultadoQuiniela)curAzar.resultados; } ``` But I get null results. I see that curAzar (is the result parsed from the json) was every attribute set BUT the resultados object. 
How can I store that information in a "neutral" way and then cast it to the right object? Declaring it as object and then casting it doesn't store the value at all. **EDIT**: For parsing the JSON I'm using Newtonsoft.Json
2017/10/23
[ "https://Stackoverflow.com/questions/46899860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1720455/" ]
So basically this would be my approach. @Pritam is right if you want to do it that way, but assuming you don't have the possibility of using String.contains() and String.substring(), this is how I would do it:

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Scanner;

public class Exercise {
    public static void main(String[] args) {
        String s1 = "home";
        String s2 = "homework";
        int n = 4;
        Exercise p = new Exercise();
        if (p.hasCheated(s1, s2, n)) {
            System.out.println("Student Cheated");
            return;
        }
        System.out.println("Not Cheated");
    }

    public boolean hasCheated(String s1, String s2, int n) {
        ArrayList<String> al = new ArrayList<String>();
        ArrayList<String> bl = new ArrayList<String>();
        al.addAll(getInfo(s1, n));
        bl.addAll(getInfo(s2, n));
        al.retainAll(bl);
        return !al.isEmpty();
    }

    public List<String> getInfo(String s, int n) {
        ArrayList<String> inf = new ArrayList<String>();
        // split into chunks of length n (the original hardcoded 4 here)
        String myStr = Arrays.toString(s.split("(?<=\\G.{" + n + "})"));
        Scanner sc = new Scanner(myStr).useDelimiter(",");
        while (sc.hasNext()) {
            String myString = sc.next().replaceAll("\\W", "");
            inf.add(myString);
        }
        return inf;
    }
}
```
1. By your logic, it would make more sense to write

```
s3 = s2.contains(s1.substring(i, i + n));
```

since you are iterating through your first string, trying to figure out whether any substring of length n in s1 is contained in s2.

2. Your boolean is not necessary; it does not add any value to the code.

```
public boolean hasCheated(String s1, String s2, int n) {
    for (int i = 0; i + n <= s1.length(); i++) {
        if (s2.contains(s1.substring(i, i + n))) {
            return true;
        }
    }
    return false;
}
```

Without contains or substring, you could use indexOf() as Pritam mentioned, but you would still need to build each candidate substring. To do so, you could convert your string to a char array and work with indexes. It does increase the complexity.

```
public boolean hasCheated(String s1, String s2, int n) {
    char[] charString = s1.toCharArray();
    for (int i = 0; i + n <= s1.length(); i++) {
        char[] tempCharArray = new char[n];
        for (int j = 0; j < n; j++) {
            tempCharArray[j] = charString[i + j];
        }
        String comparedString = new String(tempCharArray);
        if (s2.indexOf(comparedString) >= 0) { // >= 0: a match at index 0 counts
            return true;
        }
    }
    return false;
}
```

I haven't looked closely at whether better methods exist, but this would work.
59,721,035
Problem ======= `useState` always triggers an update even when the data's values haven't changed. Here's a working demo of the problem: [demo](https://codepen.io/agconti/pen/RwNJLep?editors=0011) Background ---------- I'm using the `useState` hook to update an object and I'm trying to get it to only update when the values in that object change. Because React uses the [Object.is](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is#Description) comparison algorithm to determine when it should update; objects with equivalent values still cause the component to re-render because they're different objects. Ex. This component will always re-render even though the value of the payload stays as `{ foo: 'bar' }` ```js const UseStateWithNewObject = () => { const [payload, setPayload] = useState({}); useEffect( () => { setInterval(() => { setPayload({ foo: 'bar' }); }, 500); }, [setPayload] ); renderCountNewObject += 1; return <h3>A new object, even with the same values, will always cause a render: {renderCountNewObject}</h3>; }; ``` Question ======== Is there away that I can implement something like `shouldComponentUpdate` with hooks to tell react to only re-render my component when the data changes?
2020/01/13
[ "https://Stackoverflow.com/questions/59721035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2259303/" ]
I think we would need to see a better real-life example of what you are trying to do, but from what you have shared I think the logic would need to move upstream, to a point before the state gets set. For example, you could manually compare the incoming values in a `useEffect` before you update state, because this is basically what you are asking React to do for you. There is a library, `use-deep-compare-effect` (<https://github.com/kentcdodds/use-deep-compare-effect>), that may be of use to you in this case, taking care of a lot of the manual effort involved, but even then this solution assumes the developer is going to decide manually (based on incoming props, etc.) whether the state should be updated. So for example:

```
const obj = { foo: 'bar' }
const [state, setState] = useState(obj)

useEffect(() => {
  // manually compare here before updating state
  if (obj.foo === state.foo) return
  setState(obj)
}, [obj])
```

EDIT: Example using `useRef` if you don't use the value directly and don't need the component to update based on it:

```
const obj = { foo: 'bar' }
const [state, setState] = useState(obj)
const payloadRef = useRef(obj)

useEffect(() => {
  // always update the ref with the current value - updating a ref won't cause a render
  payloadRef.current = obj
  // now compare and only update the state if you actually want a re-render
  if (obj.foo === state.foo) return
  setState(obj)
}, [obj])
```
You can use memoized components, they will re-render only on prop changes. ``` const comparatorFunc = (prev, next) => { return prev.foo === next.foo } const MemoizedComponent = React.memo(({payload}) => { return (<div>{JSON.stringify(payload)}</div>) }, comparatorFunc); ```
59,721,035
Problem ======= `useState` always triggers an update even when the data's values haven't changed. Here's a working demo of the problem: [demo](https://codepen.io/agconti/pen/RwNJLep?editors=0011) Background ---------- I'm using the `useState` hook to update an object and I'm trying to get it to only update when the values in that object change. Because React uses the [Object.is](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is#Description) comparison algorithm to determine when it should update; objects with equivalent values still cause the component to re-render because they're different objects. Ex. This component will always re-render even though the value of the payload stays as `{ foo: 'bar' }` ```js const UseStateWithNewObject = () => { const [payload, setPayload] = useState({}); useEffect( () => { setInterval(() => { setPayload({ foo: 'bar' }); }, 500); }, [setPayload] ); renderCountNewObject += 1; return <h3>A new object, even with the same values, will always cause a render: {renderCountNewObject}</h3>; }; ``` Question ======== Is there away that I can implement something like `shouldComponentUpdate` with hooks to tell react to only re-render my component when the data changes?
2020/01/13
[ "https://Stackoverflow.com/questions/59721035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2259303/" ]
> Is there a way that I can implement something like shouldComponentUpdate with hooks to tell React to only re-render my component when the data changes?

Commonly, for a state change you compare with the previous value before rendering, either with a functional `useState` update or with a reference via `useRef`:

```
// functional useState
useEffect(() => {
  setInterval(() => {
    const curr = { foo: 'bar' };
    setPayload(prev => (isEqual(prev, curr) ? prev : curr));
  }, 500);
}, [setPayload]);
```

```
// with ref
const prev = useRef();

useEffect(() => {
  setInterval(() => {
    const curr = { foo: 'bar' };
    if (!isEqual(prev.current, curr)) {
      setPayload(curr);
    }
  }, 500);
}, [setPayload]);

useEffect(() => {
  prev.current = payload;
}, [payload]);
```

---

For completeness, "re-render my component when the data changes" may refer to props too; in that case, you should use [`React.memo`](https://reactjs.org/docs/react-api.html#reactmemo).

> If your function component renders the same result given the same props, you can wrap it in a call to React.memo for a performance boost in some cases by memoizing the result. This means that React will skip rendering the component, and reuse the last rendered result.

[![Edit affectionate-hellman-ujtbv](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/affectionate-hellman-ujtbv?fontsize=14&hidenavigation=1&theme=dark)
You can use memoized components, they will re-render only on prop changes. ``` const comparatorFunc = (prev, next) => { return prev.foo === next.foo } const MemoizedComponent = React.memo(({payload}) => { return (<div>{JSON.stringify(payload)}</div>) }, comparatorFunc); ```
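Several answers in this thread lean on lodash's `isEqual` for the value comparison without showing what it does. A minimal stand-in could look like the sketch below (plain-object/array structural equality only; lodash handles many more cases such as Dates, Maps, and cycles):

```js
// Recursive structural equality for JSON-like values (objects, arrays,
// primitives). Not a lodash replacement - just enough to illustrate the
// comparison the answers rely on.
function deepEqual(a, b) {
  if (Object.is(a, b)) return true;
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) => deepEqual(a[k], b[k]));
}

console.log(deepEqual({ foo: 'bar' }, { foo: 'bar' })); // true
console.log(deepEqual({ foo: 'bar' }, { foo: 'baz' })); // false
```

This is exactly the comparison that makes two distinct `{ foo: 'bar' }` objects count as "the same value", which `Object.is` (what `useState` uses internally) does not.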
59,721,035
Problem ======= `useState` always triggers an update even when the data's values haven't changed. Here's a working demo of the problem: [demo](https://codepen.io/agconti/pen/RwNJLep?editors=0011) Background ---------- I'm using the `useState` hook to update an object and I'm trying to get it to only update when the values in that object change. Because React uses the [Object.is](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is#Description) comparison algorithm to determine when it should update; objects with equivalent values still cause the component to re-render because they're different objects. Ex. This component will always re-render even though the value of the payload stays as `{ foo: 'bar' }` ```js const UseStateWithNewObject = () => { const [payload, setPayload] = useState({}); useEffect( () => { setInterval(() => { setPayload({ foo: 'bar' }); }, 500); }, [setPayload] ); renderCountNewObject += 1; return <h3>A new object, even with the same values, will always cause a render: {renderCountNewObject}</h3>; }; ``` Question ======== Is there away that I can implement something like `shouldComponentUpdate` with hooks to tell react to only re-render my component when the data changes?
2020/01/13
[ "https://Stackoverflow.com/questions/59721035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2259303/" ]
The generic solution to this that does not involve adding logic to your effects is to split your components into:

* an uncontrolled container with state that renders...
* a dumb controlled stateless component that has been memoized with `React.memo`

Your dumb component can be pure (as if it had `shouldComponentUpdate` implemented), and your smart state-handling component can be "dumb" and not worry about updating state to the same value. Example:

Before
------

```
export default function Foo() {
  const [state, setState] = useState({ foo: "1" })
  const handler = useCallback(newValue => setState({ foo: newValue }), [])

  return (
    <div>
      <SomeWidget onEvent={handler} />
      Value: {state.foo}
    </div>
  )
}
```

After
-----

```
const FooChild = React.memo(({ foo, handler }) => {
  return (
    <div>
      <SomeWidget onEvent={handler} />
      Value: {foo}
    </div>
  )
})

export default function Foo() {
  const [state, setState] = useState({ foo: "1" })
  const handler = useCallback(newValue => setState({ foo: newValue }), [])

  return <FooChild handler={handler} foo={state.foo} />
}
```

This gives you the separation of logic you are looking for.
You can use memoized components, they will re-render only on prop changes. ``` const comparatorFunc = (prev, next) => { return prev.foo === next.foo } const MemoizedComponent = React.memo(({payload}) => { return (<div>{JSON.stringify(payload)}</div>) }, comparatorFunc); ```
59,721,035
Problem ======= `useState` always triggers an update even when the data's values haven't changed. Here's a working demo of the problem: [demo](https://codepen.io/agconti/pen/RwNJLep?editors=0011) Background ---------- I'm using the `useState` hook to update an object and I'm trying to get it to only update when the values in that object change. Because React uses the [Object.is](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is#Description) comparison algorithm to determine when it should update; objects with equivalent values still cause the component to re-render because they're different objects. Ex. This component will always re-render even though the value of the payload stays as `{ foo: 'bar' }` ```js const UseStateWithNewObject = () => { const [payload, setPayload] = useState({}); useEffect( () => { setInterval(() => { setPayload({ foo: 'bar' }); }, 500); }, [setPayload] ); renderCountNewObject += 1; return <h3>A new object, even with the same values, will always cause a render: {renderCountNewObject}</h3>; }; ``` Question ======== Is there away that I can implement something like `shouldComponentUpdate` with hooks to tell react to only re-render my component when the data changes?
2020/01/13
[ "https://Stackoverflow.com/questions/59721035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2259303/" ]
If I understand well, you are trying to only call `setState` whenever the new value for the state has changed, thus preventing unnecessary rerenders when it has NOT changed. If that is the case you can take advantage of the callback form of [useState](https://reactjs.org/docs/hooks-reference.html#usestate) ``` const [state, setState] = useState({}); setState(prevState => { // here check for equality and return prevState if the same // If the same return prevState; // -> NO RERENDER ! // If different return {...prevState, ...updatedValues}; // Rerender }); ``` Here is a custom hook (in TypeScript) that does that for you automatically. It uses `isEqual` from lodash. But feel free to replace it with whatever equality function you see fit. ``` import { isEqual } from 'lodash'; import { useState } from 'react'; const useMemoizedState = <T>(initialValue: T): [T, (val: T) => void] => { const [state, _setState] = useState<T>(initialValue); const setState = (newState: T) => { _setState((prev) => { if (!isEqual(newState, prev)) { return newState; } else { return prev; } }); }; return [state, setState]; }; export default useMemoizedState; ``` Usage: ``` const [value, setValue] = useMemoizedState({ [...] }); ```
You can use memoized components, they will re-render only on prop changes. ``` const comparatorFunc = (prev, next) => { return prev.foo === next.foo } const MemoizedComponent = React.memo(({payload}) => { return (<div>{JSON.stringify(payload)}</div>) }, comparatorFunc); ```
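Outside of React, the "only publish when the value really changed" pattern in the `useMemoizedState` hook above can be sketched as a tiny store in plain JavaScript (the names and API here are my own, for illustration only):

```js
// A value holder that invokes subscribers only when the new value is
// different from the current one, per the supplied comparator.
function createMemoizedStore(initial, isEqual) {
  let value = initial;
  const listeners = [];
  return {
    get: () => value,
    subscribe: (fn) => listeners.push(fn),
    set: (next) => {
      if (isEqual(value, next)) return; // skip no-op updates
      value = next;
      listeners.forEach((fn) => fn(value));
    },
  };
}

// Usage: with a JSON-based comparison, repeated equivalent payloads
// trigger only one notification.
const store = createMemoizedStore({}, (a, b) => JSON.stringify(a) === JSON.stringify(b));
let notifications = 0;
store.subscribe(() => { notifications += 1; });
store.set({ foo: 'bar' });
store.set({ foo: 'bar' }); // structurally equal -> ignored
console.log(notifications); // 1
```

The hook does the same thing through `useState`'s functional update form: returning the previous state object from the updater tells React nothing changed, so no re-render occurs.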
59,721,035
Problem ======= `useState` always triggers an update even when the data's values haven't changed. Here's a working demo of the problem: [demo](https://codepen.io/agconti/pen/RwNJLep?editors=0011) Background ---------- I'm using the `useState` hook to update an object and I'm trying to get it to only update when the values in that object change. Because React uses the [Object.is](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is#Description) comparison algorithm to determine when it should update; objects with equivalent values still cause the component to re-render because they're different objects. Ex. This component will always re-render even though the value of the payload stays as `{ foo: 'bar' }` ```js const UseStateWithNewObject = () => { const [payload, setPayload] = useState({}); useEffect( () => { setInterval(() => { setPayload({ foo: 'bar' }); }, 500); }, [setPayload] ); renderCountNewObject += 1; return <h3>A new object, even with the same values, will always cause a render: {renderCountNewObject}</h3>; }; ``` Question ======== Is there away that I can implement something like `shouldComponentUpdate` with hooks to tell react to only re-render my component when the data changes?
2020/01/13
[ "https://Stackoverflow.com/questions/59721035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2259303/" ]
I think we would need to see a better real-life example of what you are trying to do, but from what you have shared I think the logic would need to move upstream, to a point before the state gets set.

For example, you could manually compare the incoming values in a `useEffect` before you update state, because this is basically what you are asking if React can do for you.

There is a library, `use-deep-compare-effect` <https://github.com/kentcdodds/use-deep-compare-effect>, that may be of use to you in this case, taking care of a lot of the manual effort involved, but even then, this solution assumes the developer is going to manually decide (based on incoming props, etc.) if the state should be updated.

So for example:

```
const obj = {foo: 'bar'}
const [state, setState] = useState(obj)

useEffect(() => {
  // manually deep compare here before updating state
  if (obj.foo === state.foo) return
  setState(obj)
}, [obj])
```

EDIT: Example using `useRef` if you don't use the value directly and don't need the component to update based on it (note: keep the ref object itself rather than destructuring `current` into a `const`, otherwise the assignment below would have no effect):

```
const obj = {foo: 'bar'}
const [state, setState] = useState(obj)
const payloadRef = useRef(obj)

useEffect(() => {
  // always update the ref with the current value - won't affect renders
  payloadRef.current = obj
  // Now manually deep compare here and only update the state if
  // needed / you want a re-render
  if (obj.foo === state.foo) return
  setState(obj)
}, [obj])
```
> 
> Is there a way that I can implement something like shouldComponentUpdate with hooks to tell React to only re-render my component when the data changes?
> 
> 

Commonly, for a state change you compare with the previous value before setting it, either with the functional form of `useState` or with a reference via `useRef` (`isEqual` here is lodash's deep comparison):

```
// functional useState
useEffect(() => {
  setInterval(() => {
    const curr = { foo: 'bar' };
    setPayload(prev => (isEqual(prev, curr) ? prev : curr));
  }, 500);
}, [setPayload]);
```

```
// with ref
const prev = useRef();

useEffect(() => {
  setInterval(() => {
    const curr = { foo: 'bar' };
    if (!isEqual(prev.current, curr)) {
      setPayload(curr);
    }
  }, 500);
}, [setPayload]);

useEffect(() => {
  prev.current = payload;
}, [payload]);
```

---

For completeness, "re-render my component when the data changes?" may refer to props too; in that case, you should use [`React.memo`](https://reactjs.org/docs/react-api.html#reactmemo).

> 
> If your function component renders the same result given the same props, you can wrap it in a call to React.memo for a performance boost in some cases by memoizing the result. This means that React will skip rendering the component, and reuse the last rendered result.
> 
> 

[![Edit affectionate-hellman-ujtbv](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/affectionate-hellman-ujtbv?fontsize=14&hidenavigation=1&theme=dark)
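The guard above can be illustrated without React at all. Here is a plain-JavaScript sketch of why a value-based comparison is needed — `shallowEqual` and `dedupe` are hypothetical helper names, not React APIs (React itself compares the state reference with `Object.is`):

```javascript
// Object.is — the comparison React uses on state — is reference-based,
// so two structurally identical objects count as "different":
const prev = { foo: 'bar' };
const next = { foo: 'bar' };
console.log(Object.is(prev, next)); // false — a new object every tick

// A value-based comparator lets a functional update return the
// previous reference instead, which is what makes React bail out.
function shallowEqual(a, b) {
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  if (ka.length !== kb.length) return false;
  return ka.every(k => Object.is(a[k], b[k]));
}

// Mirrors `setPayload(prev => (isEqual(prev, curr) ? prev : curr))`:
function dedupe(prevState, nextState) {
  return shallowEqual(prevState, nextState) ? prevState : nextState;
}

console.log(dedupe(prev, next) === prev); // true — old reference kept
```

Returning the previous reference is the key: React compares the value returned from the updater with the current state using `Object.is` and skips the re-render when they match.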
59,721,035
Problem
=======

`useState` always triggers an update even when the data's values haven't changed. Here's a working demo of the problem: [demo](https://codepen.io/agconti/pen/RwNJLep?editors=0011)

Background
----------

I'm using the `useState` hook to update an object and I'm trying to get it to only update when the values in that object change. Because React uses the [Object.is](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is#Description) comparison algorithm to determine when it should update, objects with equivalent values still cause the component to re-render because they're different objects.

Ex. This component will always re-render even though the value of the payload stays as `{ foo: 'bar' }`:

```js
const UseStateWithNewObject = () => {
  const [payload, setPayload] = useState({});

  useEffect(
    () => {
      setInterval(() => {
        setPayload({ foo: 'bar' });
      }, 500);
    },
    [setPayload]
  );

  renderCountNewObject += 1;
  return <h3>A new object, even with the same values, will always cause a render: {renderCountNewObject}</h3>;
};
```

Question
========

Is there a way that I can implement something like `shouldComponentUpdate` with hooks to tell React to only re-render my component when the data changes?
2020/01/13
[ "https://Stackoverflow.com/questions/59721035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2259303/" ]
I think we would need to see a better real-life example of what you are trying to do, but from what you have shared I think the logic would need to move upstream, to a point before the state gets set.

For example, you could manually compare the incoming values in a `useEffect` before you update state, because this is basically what you are asking if React can do for you.

There is a library, `use-deep-compare-effect` <https://github.com/kentcdodds/use-deep-compare-effect>, that may be of use to you in this case, taking care of a lot of the manual effort involved, but even then, this solution assumes the developer is going to manually decide (based on incoming props, etc.) if the state should be updated.

So for example:

```
const obj = {foo: 'bar'}
const [state, setState] = useState(obj)

useEffect(() => {
  // manually deep compare here before updating state
  if (obj.foo === state.foo) return
  setState(obj)
}, [obj])
```

EDIT: Example using `useRef` if you don't use the value directly and don't need the component to update based on it (note: keep the ref object itself rather than destructuring `current` into a `const`, otherwise the assignment below would have no effect):

```
const obj = {foo: 'bar'}
const [state, setState] = useState(obj)
const payloadRef = useRef(obj)

useEffect(() => {
  // always update the ref with the current value - won't affect renders
  payloadRef.current = obj
  // Now manually deep compare here and only update the state if
  // needed / you want a re-render
  if (obj.foo === state.foo) return
  setState(obj)
}, [obj])
```
The generic solution to this that does not involve adding logic to your effects is to split your components into:

* an uncontrolled container with state that renders...
* a dumb, controlled, stateless component that has been memoized with `React.memo`

Your dumb component can be pure (as if it had `shouldComponentUpdate` implemented), and your smart state-handling component can be "dumb" and not worry about updating state to the same value.

Example:

Before
------

```
export default function Foo() {
  const [state, setState] = useState({ foo: "1" })
  const handler = useCallback(newValue => setState({ foo: newValue }), [])
  return (
    <div>
      <SomeWidget onEvent={handler} />
      Value: {state.foo}
    </div>
  )
}
```

After
-----

```
const FooChild = React.memo(({foo, handler}) => {
  return (
    <div>
      <SomeWidget onEvent={handler} />
      Value: {foo}
    </div>
  )
})

export default function Foo() {
  const [state, setState] = useState({ foo: "1" })
  const handler = useCallback(newValue => setState({ foo: newValue }), [])
  return <FooChild handler={handler} foo={state.foo} />
}
```

This gives you the separation of logic you are looking for.
59,721,035
Problem
=======

`useState` always triggers an update even when the data's values haven't changed. Here's a working demo of the problem: [demo](https://codepen.io/agconti/pen/RwNJLep?editors=0011)

Background
----------

I'm using the `useState` hook to update an object and I'm trying to get it to only update when the values in that object change. Because React uses the [Object.is](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is#Description) comparison algorithm to determine when it should update, objects with equivalent values still cause the component to re-render because they're different objects.

Ex. This component will always re-render even though the value of the payload stays as `{ foo: 'bar' }`:

```js
const UseStateWithNewObject = () => {
  const [payload, setPayload] = useState({});

  useEffect(
    () => {
      setInterval(() => {
        setPayload({ foo: 'bar' });
      }, 500);
    },
    [setPayload]
  );

  renderCountNewObject += 1;
  return <h3>A new object, even with the same values, will always cause a render: {renderCountNewObject}</h3>;
};
```

Question
========

Is there a way that I can implement something like `shouldComponentUpdate` with hooks to tell React to only re-render my component when the data changes?
2020/01/13
[ "https://Stackoverflow.com/questions/59721035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2259303/" ]
> 
> Is there a way that I can implement something like shouldComponentUpdate with hooks to tell React to only re-render my component when the data changes?
> 
> 

Commonly, for a state change you compare with the previous value before setting it, either with the functional form of `useState` or with a reference via `useRef` (`isEqual` here is lodash's deep comparison):

```
// functional useState
useEffect(() => {
  setInterval(() => {
    const curr = { foo: 'bar' };
    setPayload(prev => (isEqual(prev, curr) ? prev : curr));
  }, 500);
}, [setPayload]);
```

```
// with ref
const prev = useRef();

useEffect(() => {
  setInterval(() => {
    const curr = { foo: 'bar' };
    if (!isEqual(prev.current, curr)) {
      setPayload(curr);
    }
  }, 500);
}, [setPayload]);

useEffect(() => {
  prev.current = payload;
}, [payload]);
```

---

For completeness, "re-render my component when the data changes?" may refer to props too; in that case, you should use [`React.memo`](https://reactjs.org/docs/react-api.html#reactmemo).

> 
> If your function component renders the same result given the same props, you can wrap it in a call to React.memo for a performance boost in some cases by memoizing the result. This means that React will skip rendering the component, and reuse the last rendered result.
> 
> 

[![Edit affectionate-hellman-ujtbv](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/affectionate-hellman-ujtbv?fontsize=14&hidenavigation=1&theme=dark)
The generic solution to this that does not involve adding logic to your effects is to split your components into:

* an uncontrolled container with state that renders...
* a dumb, controlled, stateless component that has been memoized with `React.memo`

Your dumb component can be pure (as if it had `shouldComponentUpdate` implemented), and your smart state-handling component can be "dumb" and not worry about updating state to the same value.

Example:

Before
------

```
export default function Foo() {
  const [state, setState] = useState({ foo: "1" })
  const handler = useCallback(newValue => setState({ foo: newValue }), [])
  return (
    <div>
      <SomeWidget onEvent={handler} />
      Value: {state.foo}
    </div>
  )
}
```

After
-----

```
const FooChild = React.memo(({foo, handler}) => {
  return (
    <div>
      <SomeWidget onEvent={handler} />
      Value: {foo}
    </div>
  )
})

export default function Foo() {
  const [state, setState] = useState({ foo: "1" })
  const handler = useCallback(newValue => setState({ foo: newValue }), [])
  return <FooChild handler={handler} foo={state.foo} />
}
```

This gives you the separation of logic you are looking for.
59,721,035
Problem
=======

`useState` always triggers an update even when the data's values haven't changed. Here's a working demo of the problem: [demo](https://codepen.io/agconti/pen/RwNJLep?editors=0011)

Background
----------

I'm using the `useState` hook to update an object and I'm trying to get it to only update when the values in that object change. Because React uses the [Object.is](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is#Description) comparison algorithm to determine when it should update, objects with equivalent values still cause the component to re-render because they're different objects.

Ex. This component will always re-render even though the value of the payload stays as `{ foo: 'bar' }`:

```js
const UseStateWithNewObject = () => {
  const [payload, setPayload] = useState({});

  useEffect(
    () => {
      setInterval(() => {
        setPayload({ foo: 'bar' });
      }, 500);
    },
    [setPayload]
  );

  renderCountNewObject += 1;
  return <h3>A new object, even with the same values, will always cause a render: {renderCountNewObject}</h3>;
};
```

Question
========

Is there a way that I can implement something like `shouldComponentUpdate` with hooks to tell React to only re-render my component when the data changes?
2020/01/13
[ "https://Stackoverflow.com/questions/59721035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2259303/" ]
If I understand well, you are trying to only call `setState` whenever the new value for the state has changed, thus preventing unnecessary rerenders when it has NOT changed. If that is the case you can take advantage of the callback form of [useState](https://reactjs.org/docs/hooks-reference.html#usestate) ``` const [state, setState] = useState({}); setState(prevState => { // here check for equality and return prevState if the same // If the same return prevState; // -> NO RERENDER ! // If different return {...prevState, ...updatedValues}; // Rerender }); ``` Here is a custom hook (in TypeScript) that does that for you automatically. It uses `isEqual` from lodash. But feel free to replace it with whatever equality function you see fit. ``` import { isEqual } from 'lodash'; import { useState } from 'react'; const useMemoizedState = <T>(initialValue: T): [T, (val: T) => void] => { const [state, _setState] = useState<T>(initialValue); const setState = (newState: T) => { _setState((prev) => { if (!isEqual(newState, prev)) { return newState; } else { return prev; } }); }; return [state, setState]; }; export default useMemoizedState; ``` Usage: ``` const [value, setValue] = useMemoizedState({ [...] }); ```
> 
> Is there a way that I can implement something like shouldComponentUpdate with hooks to tell React to only re-render my component when the data changes?
> 
> 

Commonly, for a state change you compare with the previous value before setting it, either with the functional form of `useState` or with a reference via `useRef` (`isEqual` here is lodash's deep comparison):

```
// functional useState
useEffect(() => {
  setInterval(() => {
    const curr = { foo: 'bar' };
    setPayload(prev => (isEqual(prev, curr) ? prev : curr));
  }, 500);
}, [setPayload]);
```

```
// with ref
const prev = useRef();

useEffect(() => {
  setInterval(() => {
    const curr = { foo: 'bar' };
    if (!isEqual(prev.current, curr)) {
      setPayload(curr);
    }
  }, 500);
}, [setPayload]);

useEffect(() => {
  prev.current = payload;
}, [payload]);
```

---

For completeness, "re-render my component when the data changes?" may refer to props too; in that case, you should use [`React.memo`](https://reactjs.org/docs/react-api.html#reactmemo).

> 
> If your function component renders the same result given the same props, you can wrap it in a call to React.memo for a performance boost in some cases by memoizing the result. This means that React will skip rendering the component, and reuse the last rendered result.
> 
> 

[![Edit affectionate-hellman-ujtbv](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/affectionate-hellman-ujtbv?fontsize=14&hidenavigation=1&theme=dark)
59,721,035
Problem
=======

`useState` always triggers an update even when the data's values haven't changed. Here's a working demo of the problem: [demo](https://codepen.io/agconti/pen/RwNJLep?editors=0011)

Background
----------

I'm using the `useState` hook to update an object and I'm trying to get it to only update when the values in that object change. Because React uses the [Object.is](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is#Description) comparison algorithm to determine when it should update, objects with equivalent values still cause the component to re-render because they're different objects.

Ex. This component will always re-render even though the value of the payload stays as `{ foo: 'bar' }`:

```js
const UseStateWithNewObject = () => {
  const [payload, setPayload] = useState({});

  useEffect(
    () => {
      setInterval(() => {
        setPayload({ foo: 'bar' });
      }, 500);
    },
    [setPayload]
  );

  renderCountNewObject += 1;
  return <h3>A new object, even with the same values, will always cause a render: {renderCountNewObject}</h3>;
};
```

Question
========

Is there a way that I can implement something like `shouldComponentUpdate` with hooks to tell React to only re-render my component when the data changes?
2020/01/13
[ "https://Stackoverflow.com/questions/59721035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2259303/" ]
If I understand well, you are trying to only call `setState` whenever the new value for the state has changed, thus preventing unnecessary rerenders when it has NOT changed. If that is the case you can take advantage of the callback form of [useState](https://reactjs.org/docs/hooks-reference.html#usestate) ``` const [state, setState] = useState({}); setState(prevState => { // here check for equality and return prevState if the same // If the same return prevState; // -> NO RERENDER ! // If different return {...prevState, ...updatedValues}; // Rerender }); ``` Here is a custom hook (in TypeScript) that does that for you automatically. It uses `isEqual` from lodash. But feel free to replace it with whatever equality function you see fit. ``` import { isEqual } from 'lodash'; import { useState } from 'react'; const useMemoizedState = <T>(initialValue: T): [T, (val: T) => void] => { const [state, _setState] = useState<T>(initialValue); const setState = (newState: T) => { _setState((prev) => { if (!isEqual(newState, prev)) { return newState; } else { return prev; } }); }; return [state, setState]; }; export default useMemoizedState; ``` Usage: ``` const [value, setValue] = useMemoizedState({ [...] }); ```
The generic solution to this that does not involve adding logic to your effects is to split your components into:

* an uncontrolled container with state that renders...
* a dumb, controlled, stateless component that has been memoized with `React.memo`

Your dumb component can be pure (as if it had `shouldComponentUpdate` implemented), and your smart state-handling component can be "dumb" and not worry about updating state to the same value.

Example:

Before
------

```
export default function Foo() {
  const [state, setState] = useState({ foo: "1" })
  const handler = useCallback(newValue => setState({ foo: newValue }), [])
  return (
    <div>
      <SomeWidget onEvent={handler} />
      Value: {state.foo}
    </div>
  )
}
```

After
-----

```
const FooChild = React.memo(({foo, handler}) => {
  return (
    <div>
      <SomeWidget onEvent={handler} />
      Value: {foo}
    </div>
  )
})

export default function Foo() {
  const [state, setState] = useState({ foo: "1" })
  const handler = useCallback(newValue => setState({ foo: newValue }), [])
  return <FooChild handler={handler} foo={state.foo} />
}
```

This gives you the separation of logic you are looking for.
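The effect of `React.memo` on the dumb child can be sketched in plain JavaScript. The `memoize` helper below is a hypothetical stand-in for what memo does conceptually — cache the last props and last output, and reuse the output when a comparator says the props are equal — not React's actual implementation:

```javascript
// Conceptual sketch of React.memo: skip "rendering" when the
// comparator reports the new props equal to the last ones.
function memoize(render, areEqual) {
  let lastProps = null;
  let lastResult = null;
  return (props) => {
    if (lastProps !== null && areEqual(lastProps, props)) {
      return lastResult; // comparator says equal — reuse cached output
    }
    lastProps = props;
    lastResult = render(props);
    return lastResult;
  };
}

let renders = 0;
const view = memoize(
  ({ foo }) => { renders += 1; return `Value: ${foo}`; },
  (a, b) => a.foo === b.foo // like React.memo's areEqual argument
);

view({ foo: '1' }); // renders
view({ foo: '1' }); // same prop value — cached, no render
view({ foo: '2' }); // changed — renders again
console.log(renders); // 2
```

The container can therefore set state as often as it likes; the memoized child only pays the render cost when its props actually differ by the comparator's definition.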
9,422,404
The javadoc for `Runtime.availableProcessors()` in Java 1.6 is delightfully unspecific. Is it looking just at the hardware configuration, or also at the load? Is it smart enough to avoid being fooled by hyperthreading? Does it respect a limited set of processors via the linux `taskset` command? I can add one datapoint of my own: on a computer here with 12 cores and hyperthreading, Runtime.availableProcessors() indeed returns 24, which is not a good number to use in deciding how many threads to try to run. The machine was clearly not dead-idle, so it also can't have been looking at load in any effective way.
2012/02/23
[ "https://Stackoverflow.com/questions/9422404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/131433/" ]
On Windows, [GetSystemInfo](http://msdn.microsoft.com/en-us/library/windows/desktop/ms724381%28v=vs.85%29.aspx) is used, and `dwNumberOfProcessors` from the returned `SYSTEM_INFO` structure. This can be seen from `void os::win32::initialize_system_info()` and `int os::active_processor_count()` in `os_windows.cpp` of the OpenJDK source code.

The MSDN documentation for `dwNumberOfProcessors` says that it reports 'The number of logical processors in the current group', which means that hyperthreading will increase the number of CPUs reported.

On Linux, `os::active_processor_count()` uses [sysconf](http://linux.die.net/man/3/sysconf):

```
int os::active_processor_count() {
  // Linux doesn't yet have a (official) notion of processor sets,
  // so just return the number of online processors.
  int online_cpus = ::sysconf(_SC_NPROCESSORS_ONLN);
  assert(online_cpus > 0 && online_cpus <= processor_count(), "sanity check");
  return online_cpus;
}
```

The documentation for `_SC_NPROCESSORS_ONLN` says 'The number of processors currently online (available).' This value is not affected by the affinity of the process, and it too counts hyperthreads.
AFAIK, it always gives you the total number of CPUs, even those not available for scheduling.

I have a library which uses this fact to find reserved CPUs. It reads `/proc/cpuinfo` and the default thread affinity of the process to work out what is available.
9,422,404
The javadoc for `Runtime.availableProcessors()` in Java 1.6 is delightfully unspecific. Is it looking just at the hardware configuration, or also at the load? Is it smart enough to avoid being fooled by hyperthreading? Does it respect a limited set of processors via the linux `taskset` command? I can add one datapoint of my own: on a computer here with 12 cores and hyperthreading, Runtime.availableProcessors() indeed returns 24, which is not a good number to use in deciding how many threads to try to run. The machine was clearly not dead-idle, so it also can't have been looking at load in any effective way.
2012/02/23
[ "https://Stackoverflow.com/questions/9422404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/131433/" ]
On Windows, [GetSystemInfo](http://msdn.microsoft.com/en-us/library/windows/desktop/ms724381%28v=vs.85%29.aspx) is used, and `dwNumberOfProcessors` from the returned `SYSTEM_INFO` structure. This can be seen from `void os::win32::initialize_system_info()` and `int os::active_processor_count()` in `os_windows.cpp` of the OpenJDK source code.

The MSDN documentation for `dwNumberOfProcessors` says that it reports 'The number of logical processors in the current group', which means that hyperthreading will increase the number of CPUs reported.

On Linux, `os::active_processor_count()` uses [sysconf](http://linux.die.net/man/3/sysconf):

```
int os::active_processor_count() {
  // Linux doesn't yet have a (official) notion of processor sets,
  // so just return the number of online processors.
  int online_cpus = ::sysconf(_SC_NPROCESSORS_ONLN);
  assert(online_cpus > 0 && online_cpus <= processor_count(), "sanity check");
  return online_cpus;
}
```

The documentation for `_SC_NPROCESSORS_ONLN` says 'The number of processors currently online (available).' This value is not affected by the affinity of the process, and it too counts hyperthreads.
According to [Sun Bug 6673124](http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6673124): > > The code for `active_processor_count`, used by `Runtime.availableProcessors()` is as follows: > > > > ``` > int os::active_processor_count() { > int online_cpus = sysconf(_SC_NPROCESSORS_ONLN); > pid_t pid = getpid(); > psetid_t pset = PS_NONE; > // Are we running in a processor set? > if (pset_bind(PS_QUERY, P_PID, pid, &pset) == 0) { > if (pset != PS_NONE) { > uint_t pset_cpus; > // Query number of cpus in processor set > if (pset_info(pset, NULL, &pset_cpus, NULL) == 0) { > assert(pset_cpus > 0 && pset_cpus <= online_cpus, "sanity check"); > _processors_online = pset_cpus; > return pset_cpus; > } > } > } > // Otherwise return number of online cpus > return online_cpus; > } > > ``` > > This particular code may be Solaris-specific. But I would imagine that the behavior would be at least somewhat similar on other platforms.
38,932,921
What I'm trying to achieve is to get a controller instance from the main method, so I can call methods of the controller from another class and update the FXML. Anyway, here's my code:

Main class:

```
public class Main extends Application {
    Controller controller;

    @Override
    public void start(Stage primaryStage) throws Exception{
        FXMLLoader fxmlLoader = new FXMLLoader();
        Parent root = fxmlLoader.load(getClass().getResource("sample.fxml"));
        controller = fxmlLoader.getController();
        primaryStage.setTitle("uPick Smart Service");
        primaryStage.setScene(new Scene(root, 1600, 600));
        primaryStage.show();

        ConnectionHandling connectionHandling = new ConnectionHandling();
        Thread X = new Thread(connectionHandling);
        X.start();
    }

    public static void main(String args[]){
        launch(args);
    }

    public Controller getController(){
        return controller;
    }
}
```

My controller class:

```
public class Controller {
    public HBox billbox;
    public int childnr;

    public void createBill() {
        System.out.println("Creating");
        TableView<Item> bill = new TableView<>();

        DropShadow dropShadow = new DropShadow();
        dropShadow.setRadius(5.0);
        dropShadow.setOffsetX(3.0);
        dropShadow.setOffsetY(3.0);
        dropShadow.setColor(Color.color(0.4, 0.5, 0.5));

        VBox fullbill = new VBox();
        fullbill.setPadding(new Insets(1, 1, 1, 1));
        fullbill.getStyleClass().add("fullbill");

        TableColumn<Item, String> nameColumn = new TableColumn<>("Emri");
        nameColumn.setMinWidth(200);
        nameColumn.setCellValueFactory(new PropertyValueFactory<>("name"));

        TableColumn<Item, String> quantsColumn = new TableColumn<>("Sasia");
        quantsColumn.setMinWidth(50);
        quantsColumn.setCellValueFactory(new PropertyValueFactory<>("quants"));

        double tablewidth = nameColumn.getWidth() + quantsColumn.getWidth();

        Label tablenrlabel = new Label("Table 5");
        tablenrlabel.getStyleClass().add("tablenr-label");
        tablenrlabel.setMinWidth(tablewidth);

        Button closebutton = new Button("Mbyll");
        closebutton.setMinWidth(tablewidth);
        closebutton.getStyleClass().add("red-tint");

        bill.setItems(getItem());
        bill.setMinWidth(256);
        bill.setColumnResizePolicy(TableView.CONSTRAINED_RESIZE_POLICY);
        bill.getColumns().addAll(nameColumn, quantsColumn);

        fullbill.setEffect(dropShadow);
        fullbill.getChildren().addAll(tablenrlabel, bill, closebutton);
        billbox.getChildren().addAll(fullbill);
        childnr += 1;

        // Loops over every button in every vbox and gives it a separate listener
        // (the index of the button is hardcoded so it can cause problems if you add more items)
        for (int i = 0; i < childnr; i++) {
            VBox box = (VBox) billbox.getChildren().get(i);
            Button btn = (Button) box.getChildren().get(2); // if sudden issues change this
            btn.setId(Integer.valueOf(i).toString());
            btn.setOnAction(new EventHandler<ActionEvent>() {
                @Override
                public void handle(ActionEvent event) {
                    int index = billbox.getChildren().indexOf(btn.getParent());
                    billbox.getChildren().remove(index);
                    childnr -= 1;
                    System.out.println(btn.getId());
                }
            });
        }
        System.out.println("done");
    }
}
```

And the class trying to call the controller method:

```
public class TakeOrder implements Runnable {
    Socket SOCK;
    Controller controller;
    // OrderIndexes: "Order","Waiter","Payment"
    private int NonConnectedDatas = 2;

    public TakeOrder(Socket X){
        this.SOCK = X;
    }

    public void CheckConnection() throws IOException{
        System.out.println("Checking connection");
        if(!SOCK.isConnected()){
            System.out.println("Dissconectiong");
            for(int i = 0; i < ConnectionHandling.ConnectionArray.size(); i++){
                if(ConnectionHandling.ConnectionArray.get(i) == SOCK){
                    ConnectionHandling.ConnectionArray.remove(i);
                }
            }
        }
    }

    public void run(){
        try{
            try{
                CheckConnection();
                ObjectInputStream ob = new ObjectInputStream(SOCK.getInputStream());
                String[] structuredArray = (String[])ob.readObject();
                String tablenr = structuredArray[0];
                String index = structuredArray[1];
                ArrayList<String> names = new ArrayList<>();
                ArrayList<String> quants = new ArrayList<>();
                int a = 0;
                int b = 0;

                switch (index) {
                    case "Order":
                        for (int i = NonConnectedDatas; i < structuredArray.length; i++) {
                            if (i % 2 == 0) {
                                names.add(a, structuredArray[i]);
                                System.out.println(names.get(a));
                                a++;
                            } else {
                                quants.add(b, structuredArray[i]);
                                System.out.println(quants.get(b));
                                b++;
                            }
                        }
                        break;
                }

                Platform.runLater(new Runnable() {
                    @Override
                    public void run() {

                    }
                });
            }finally{
                SOCK.close();
            }
        }catch(Exception X){
            System.out.print(X);
        }
    }
}
```

And here's my error message:

```
Exception in thread "JavaFX Application Thread" java.lang.NullPointerException
    at sample.TakeOrder$1.run(TakeOrder.java:80)
    at com.sun.javafx.application.PlatformImpl.lambda$null$173(PlatformImpl.java:295)
    at java.security.AccessController.doPrivileged(Native Method)
    at com.sun.javafx.application.PlatformImpl.lambda$runLater$174(PlatformImpl.java:294)
    at com.sun.glass.ui.InvokeLaterDispatcher$Future.run(InvokeLaterDispatcher.java:95)
    at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
    at com.sun.glass.ui.win.WinApplication.lambda$null$148(WinApplication.java:191)
    at java.lang.Thread.run(Thread.java:745)
```

The ConnectionHandling class:

```
public class ConnectionHandling implements Runnable{
    public static ArrayList<Socket> ConnectionArray = new ArrayList<Socket>();

    public void run(){
        System.out.println("Starting");
        try{
            final int PORT = 60123;
            ServerSocket SERVER = new ServerSocket(PORT);
            System.out.println("Waiting for clients");

            while(true){
                Socket SOCK = SERVER.accept();
                ConnectionArray.add(SOCK);

                TakeOrder ORDER = new TakeOrder(SOCK);
                Thread X = new Thread(ORDER);
                X.setDaemon(true);
                X.start();
            }
        }catch(Exception x){
            System.out.print(x);
        }
    }
}
```
2016/08/13
[ "https://Stackoverflow.com/questions/38932921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6535530/" ]
First, you are using the **`static`** [`FXMLLoader.load(URL)`](http://docs.oracle.com/javase/8/javafx/api/javafx/fxml/FXMLLoader.html#load-java.net.URL-) method. Because it's `static`, the controller property of the `FXMLLoader` instance you created isn't initialized by calling this `load` method, so `controller` in `Main` will be `null`.

Instead, set the location and use the instance method [`load()`](http://docs.oracle.com/javase/8/javafx/api/javafx/fxml/FXMLLoader.html#load--):

```
@Override
public void start(Stage primaryStage) throws Exception{
    FXMLLoader fxmlLoader = new FXMLLoader();
    fxmlLoader.setLocation(getClass().getResource("sample.fxml"));
    Parent root = fxmlLoader.load();
    controller = fxmlLoader.getController();
    primaryStage.setTitle("uPick Smart Service");
    primaryStage.setScene(new Scene(root, 1600, 600));
    primaryStage.show();
}
```

This doesn't entirely solve your problem, though. When you launch the application, JavaFX will create an instance of `Main` and call `start()` on that instance. With the changes above, the `controller` field of *that instance* will be properly initialized. However, if in `TakeOrder.run()` you create another instance of `Main`:

```
Main main = new Main();
```

the `controller` field for that instance won't be initialized (and even if you do initialize it, it's not the same as the instance you want). So you really need to arrange for `TakeOrder` to access the *controller instance created in the start method*. Here is the most straightforward fix to your code to make that work:

```
public class ConnectionHandling implements Runnable{

    private final Controller controller ;

    public ConnectionHandling(Controller controller) {
        this.controller = controller ;
    }

    // ...

    public void run(){

        // existing code ...

        TakeOrder ORDER = new TakeOrder(SOCK, controller);

        // ...
    }
}
```

and

```
public class TakeOrder implements Runnable {

    Socket SOCK;
    Controller controller;

    //OrderIndexes: "Order","Waiter","Payment"
    private int NonConnectedDatas = 2;

    public TakeOrder(Socket X, Controller controller){
        this.SOCK = X;
        this.controller = controller ;
    }

    // ...

    public void run() {

        // ...

        Platform.runLater(controller::createBill);

        // ...
    }
}
```

and finally

```
public class Main extends Application {

    @Override
    public void start(Stage primaryStage) throws Exception{
        FXMLLoader fxmlLoader = new FXMLLoader();
        fxmlLoader.setLocation(getClass().getResource("sample.fxml"));
        Parent root = fxmlLoader.load();
        Controller controller = fxmlLoader.getController();
        primaryStage.setTitle("uPick Smart Service");
        primaryStage.setScene(new Scene(root, 1600, 600));
        primaryStage.show();

        ConnectionHandling connectionHandling = new ConnectionHandling(controller);
        Thread X = new Thread(connectionHandling);
        X.start();
    }

    public static void main(String args[]){
        launch(args);
    }
}
```

In general for applications such as this, you probably want to think about using an MVC or MVP approach (i.e. you need a *model* class, which would hold the services, such as your `ConnectionHandling`). You might also be interested in [this article](http://www.oracle.com/technetwork/articles/java/javafxinteg-2062777.html) on integrating services in JavaFX.
If you manually instantiate your `Application`, which in this case is the `Main` class, then the `start()` function won't be called. Try launching your application with `Application.launch` instead of `Main main = new Main()`. Also, the `launch` method does not return until the application has exited. Thus, you may want to reorder your call to `getController()` somewhat. There's also another way to manage the instance of your controller, and that is to use `setController(Object controller)` from your FXMLLoader. --- Docs: <https://docs.oracle.com/javase/8/javafx/api/javafx/application/Application.html#launch-java.lang.Class-java.lang.String...-> <https://docs.oracle.com/javase/8/javafx/api/javafx/fxml/FXMLLoader.html#setController-java.lang.Object->
70,855,520
I'm learning Dart & Flutter, but I'm struggling with some basic programming issues like the use of getters: ``` GoogleSignInAccount get user => _user!; ``` What's the equivalent of the "get" method? What does the `!` at the end of a variable mean? Thanks in advance!
2022/01/25
[ "https://Stackoverflow.com/questions/70855520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17934894/" ]
That is a getter; in Java, that code might look like: ``` public GoogleSignInAccount getGoogleUser(){ return this.user; } ``` Dart likes that code written more succinctly. In Dart, private class members are denoted via a \_ in front of the variable/function name. So the variable that the getter is returning is a private variable, hence `_user`. The ! at the end of the variable name has to do with Dart null safety. Putting a ! at the end of the variable name is functionally the equivalent of saying `assert _user != null;`. What it actually does is cast a nullable data type to its non-nullable equivalent. In Dart, every type comes in two forms: a nullable type, denoted by a ? at the end of the data type declaration, and a non-nullable type. The \_user variable, it can be assumed from this code, is of the type GoogleSignInAccount?, meaning that it can be null. However, the getter wants to return a GoogleSignInAccount. Because there is no question mark on the end, the type the getter returns must NOT be null. So we put ! on the end of the \_user variable to denote that we want to cast the nullable type to its non-null form. Note also that the name of this function is user and that in Dart you can have two functions of the same name if one is a getter and the other is a setter. To denote getter versus setter in a function declaration, you use the get and set keywords as you see above. If you want to get a good idea of how getters and setters look in Dart, make a class with some variables. Make sure all of the variable names start with \_ so that they are private, then right-click in your IDE and tell it to generate some getters and setters for you. I believe both Android Studio and VSCode have the Generate option in their right-click menu.
The exclamation mark at the end of the private variable tells the compiler that the variable is not null and the user data returned by the getter must not be null.
70,855,520
I'm learning Dart & Flutter, but I'm struggling with some basic programming issues like the use of getters: ``` GoogleSignInAccount get user => _user!; ``` What's the equivalent of the "get" method? What does the `!` at the end of a variable mean? Thanks in advance!
2022/01/25
[ "https://Stackoverflow.com/questions/70855520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17934894/" ]
There are two things at play here: **1.** ``` GoogleSignInAccount get user => _user!; ``` does the same as ``` GoogleSignInAccount user() { return _user!; } ``` The first one is called a getter. As a getter you can access it like a property. For example ``` myUser = userService.user; ``` With the function notation you access it like ``` myUser = userService.user(); ``` Since you do not calculate anything but only expose that private member, the getter notation is more succinct. **2.** Regarding the `!`: Dart is null-safe, which means types without a `?` can't be null. In your case the `_user` is a `GoogleSignInAccount?`, which means it can be null. For example on startup, when the user isn't signed in yet. The getter `GoogleSignInAccount get user` has the type `GoogleSignInAccount`, which means the compiler gives you an error when you try to assign null to it. So **user can not be null. \_user could be null.** With the `!` you promise to the compiler that you know that the `_user` is in no case null when the getter user is called. *Example:* This could be the case if the user is loaded right on startup while a progress indicator is shown. When the user is loaded you start the whole app and now you can access the user with `user!`. You are sure that the user is loaded. If you somehow access the user before it's loaded (while it's still null), you get a runtime error. Null safety just helps you to think about whether a variable can be null and to avoid NullPointerExceptions.
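The getter-plus-`!` pattern above is not unique to Dart. As a rough cross-language comparison only (Python, with a hypothetical `UserService` class; none of these names come from the question), the same "expose a private, possibly-null field through a non-null property" idea looks like this:

```python
from typing import Optional

class UserService:
    def __init__(self) -> None:
        # Like the Dart `_user`, this starts out nullable (None).
        self._user: Optional[str] = None

    @property
    def user(self) -> str:
        # Rough analogue of Dart's `GoogleSignInAccount get user => _user!;`
        # The `!` asserts non-null and casts; here we check and raise instead.
        if self._user is None:
            raise ValueError("user accessed before sign-in")
        return self._user

service = UserService()
service._user = "alice"   # simulate a completed sign-in
print(service.user)       # accessed like a property, not a method call
```

As in Dart, accessing `service.user` before the field is set fails at runtime, which is exactly the trade-off the `!` operator makes.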
The exclamation mark at the end of the private variable tells the compiler that the variable is not null and the user data returned by the getter must not be null.
111,817
I work on a large C# application (approximately 450,000 lines of code); we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user controls which are determined programmatically. This is an internal company app so our release cycles are very short (typically a 3-week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort?
2008/09/21
[ "https://Stackoverflow.com/questions/111817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19290/" ]
Do you use a lot of user controls for the pieces? WPF can host WinForms controls, so you could bring parts into the main form piecewise.
There is a very interesting white paper on migrating a .NET 2.0 WinForms application to WPF; see [Evolving toward a .NET 3.5 application](http://www.ythos.net/pdfs/EvolvingDotNet35Apps.pdf) Paper abstract: *In this paper, I’m going to outline some of the thought processes, decisions and issues we had to face when evolving a Microsoft .NET application from 1.x/2.x to 3.x. I’ll look at how we helped our client to adopt the new technology, and yet still maintained a release schedule acceptable to the business.*
111,817
I work on a large C# application (approximately 450,000 lines of code); we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user controls which are determined programmatically. This is an internal company app so our release cycles are very short (typically a 3-week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort?
2008/09/21
[ "https://Stackoverflow.com/questions/111817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19290/" ]
You can start by creating a WPF host. Then you can use the <WindowsFormsHost/> control to host your current application. Then, I suggest creating a library of your new controls in WPF. One at a time, you can create the controls (I suggest making them custom controls, not UserControls). Within the style for each control, you can start by using a <WindowsFormsHost/> to include the "old" Windows Forms control. Then you can take your time to refactor and recreate each control as complete WPF. I think it will still take an initial effort to create your control wrappers and design a WPF host for the application. I am not sure of the size of the application or the complexity of the user controls, so I'm not sure how much effort that would be for you. Relatively speaking, it is significantly less effort and much faster to get your application up and running in WPF this way. I wouldn't just do that and forget about it though, as you may run into issues with controls overlaying each other (Windows Forms does not play well with WPF, especially with transparencies and other visuals). Please update us on the status of this project, or provide more technical information if you would like more specific guidance. Thanks :)
Do you use a lot of User controls for the pieces? WPF can host winform controls, so you could piecewise bring in parts into the main form.
111,817
I work on a large C# application (approximately 450,000 lines of code); we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user controls which are determined programmatically. This is an internal company app so our release cycles are very short (typically a 3-week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort?
2008/09/21
[ "https://Stackoverflow.com/questions/111817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19290/" ]
WPF allows you to embed Windows Forms user controls into a WPF application, which may help you make the transition in smaller steps. Take a look at the [WindowsFormsHost](http://msdn.microsoft.com/en-us/library/system.windows.forms.integration.windowsformshost.aspx) class in the WPF documentation.
There is a very interesting white paper on migrating a .NET 2.0 WinForms application to WPF; see [Evolving toward a .NET 3.5 application](http://www.ythos.net/pdfs/EvolvingDotNet35Apps.pdf) Paper abstract: *In this paper, I’m going to outline some of the thought processes, decisions and issues we had to face when evolving a Microsoft .NET application from 1.x/2.x to 3.x. I’ll look at how we helped our client to adopt the new technology, and yet still maintained a release schedule acceptable to the business.*
111,817
I work on a large C# application (approximately 450,000 lines of code); we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user controls which are determined programmatically. This is an internal company app so our release cycles are very short (typically a 3-week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort?
2008/09/21
[ "https://Stackoverflow.com/questions/111817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19290/" ]
You can start by creating a WPF host. Then you can use the <WindowsFormsHost/> control to host your current application. Then, I suggest creating a library of your new controls in WPF. One at a time, you can create the controls (I suggest making them custom controls, not UserControls). Within the style for each control, you can start by using a <WindowsFormsHost/> to include the "old" Windows Forms control. Then you can take your time to refactor and recreate each control as complete WPF. I think it will still take an initial effort to create your control wrappers and design a WPF host for the application. I am not sure of the size of the application or the complexity of the user controls, so I'm not sure how much effort that would be for you. Relatively speaking, it is significantly less effort and much faster to get your application up and running in WPF this way. I wouldn't just do that and forget about it though, as you may run into issues with controls overlaying each other (Windows Forms does not play well with WPF, especially with transparencies and other visuals). Please update us on the status of this project, or provide more technical information if you would like more specific guidance. Thanks :)
WPF allows you to embed Windows Forms user controls into a WPF application, which may help you make the transition in smaller steps. Take a look at the [WindowsFormsHost](http://msdn.microsoft.com/en-us/library/system.windows.forms.integration.windowsformshost.aspx) class in the WPF documentation.
111,817
I work on a large C# application (approximately 450,000 lines of code); we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user controls which are determined programmatically. This is an internal company app so our release cycles are very short (typically a 3-week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort?
2008/09/21
[ "https://Stackoverflow.com/questions/111817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19290/" ]
You can start by creating a WPF host. Then you can use the <WindowsFormsHost/> control to host your current application. Then, I suggest creating a library of your new controls in WPF. One at a time, you can create the controls (I suggest making them custom controls, not UserControls). Within the style for each control, you can start by using a <WindowsFormsHost/> to include the "old" Windows Forms control. Then you can take your time to refactor and recreate each control as complete WPF. I think it will still take an initial effort to create your control wrappers and design a WPF host for the application. I am not sure of the size of the application or the complexity of the user controls, so I'm not sure how much effort that would be for you. Relatively speaking, it is significantly less effort and much faster to get your application up and running in WPF this way. I wouldn't just do that and forget about it though, as you may run into issues with controls overlaying each other (Windows Forms does not play well with WPF, especially with transparencies and other visuals). Please update us on the status of this project, or provide more technical information if you would like more specific guidance. Thanks :)
There is a very interesting white paper on migrating a .NET 2.0 WinForms application to WPF; see [Evolving toward a .NET 3.5 application](http://www.ythos.net/pdfs/EvolvingDotNet35Apps.pdf) Paper abstract: *In this paper, I’m going to outline some of the thought processes, decisions and issues we had to face when evolving a Microsoft .NET application from 1.x/2.x to 3.x. I’ll look at how we helped our client to adopt the new technology, and yet still maintained a release schedule acceptable to the business.*
111,817
I work on a large C# application (approximately 450,000 lines of code); we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user controls which are determined programmatically. This is an internal company app so our release cycles are very short (typically a 3-week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort?
2008/09/21
[ "https://Stackoverflow.com/questions/111817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19290/" ]
I assume that you are not just looking for an ElementHost to put your vast WinForms app in. That is not a real port to WPF anyway. Consider the answers on this thread: [What are the bigger hurdles to overcome migrating from Winforms to WPF?](https://stackoverflow.com/questions/109620/what-are-the-bigger-hurdles-to-overcome-migrating-from-winforms-to-wpf). They will be very helpful.
There is a very interesting white paper on migrating a .NET 2.0 WinForms application to WPF; see [Evolving toward a .NET 3.5 application](http://www.ythos.net/pdfs/EvolvingDotNet35Apps.pdf) Paper abstract: *In this paper, I’m going to outline some of the thought processes, decisions and issues we had to face when evolving a Microsoft .NET application from 1.x/2.x to 3.x. I’ll look at how we helped our client to adopt the new technology, and yet still maintained a release schedule acceptable to the business.*
111,817
I work on a large C# application (approximately 450,000 lines of code); we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user controls which are determined programmatically. This is an internal company app so our release cycles are very short (typically a 3-week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort?
2008/09/21
[ "https://Stackoverflow.com/questions/111817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19290/" ]
You can start by creating a WPF host. Then you can use the <WindowsFormsHost/> control to host your current application. Then, I suggest creating a library of your new controls in WPF. One at a time, you can create the controls (I suggest making them custom controls, not UserControls). Within the style for each control, you can start by using a <WindowsFormsHost/> to include the "old" Windows Forms control. Then you can take your time to refactor and recreate each control as complete WPF. I think it will still take an initial effort to create your control wrappers and design a WPF host for the application. I am not sure of the size of the application or the complexity of the user controls, so I'm not sure how much effort that would be for you. Relatively speaking, it is significantly less effort and much faster to get your application up and running in WPF this way. I wouldn't just do that and forget about it though, as you may run into issues with controls overlaying each other (Windows Forms does not play well with WPF, especially with transparencies and other visuals). Please update us on the status of this project, or provide more technical information if you would like more specific guidance. Thanks :)
I assume that you are not just looking for an ElementHost to put your vast WinForms app in. That is not a real port to WPF anyway. Consider the answers on this thread: [What are the bigger hurdles to overcome migrating from Winforms to WPF?](https://stackoverflow.com/questions/109620/what-are-the-bigger-hurdles-to-overcome-migrating-from-winforms-to-wpf). They will be very helpful.
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function would exist in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other less common hidden characters that are not taken into account by me, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine, to show this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that once was the SQL Enterprise Manager. Right-click a table in SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
``` select myfield, CAST(myfield as varbinary(max)) ... ```
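The varbinary cast shows each character of the field as a pair of hex digits, so spotting hidden characters is a matter of mapping those pairs back to character codes. As an illustration of that decoding step (Python here, purely to show the byte-to-character mapping; the hex value is a made-up example, not from the question):

```python
# Hypothetical value as CAST(... AS varbinary(max)) might show it: 0x61620D0A09
raw = bytes.fromhex("61620D0A09")

for b in raw:
    # printable ASCII shows as itself, anything else as its CHAR() code
    label = chr(b) if 32 < b < 127 else f"CHAR({b})"
    print(f"0x{b:02X} -> {label}")
```

Here 0x0D, 0x0A and 0x09 decode to CR, LF and TAB, the usual suspects.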
I have faced the same problem with a character that I never managed to match with a where query - `CHARINDEX, LIKE, REPLACE`, etc. did not work. Then I used a brute-force solution which is awful and heavy but works: **Step 1**: make a copy of the complete data set - keep track of the original names with a source\_id referencing the pk of the source table (and keep this source id in all the subsequent tables). **Step 2**: `LTRIM RTRIM` the data, and replace all double spaces, tabs, etc. (basically all of CHAR(1) to CHAR(32)) with one space. Lowercase the whole set as well. **Step 3**: replace all the special characters that you know (get the list of all the quotes, double quotes, etc.) with something from a-z (I suggest z). Basically replace everything that is not a standard English character with a z (using nested REPLACE of REPLACE in a loop). **Step 4**: split by word into a second copy, where each word is in a separate row - the split is a `SUBSTRING` based on the position of the space characters - at this point, we will still miss the ones where there's a hidden space that we did not catch earlier. **Step 5**: split each word into a third copy, where each letter is in a separate row (I know it makes a very large table) - keep track of the charindex of each letter in a separate column. **Step 6**: select everything in the above table which is not LIKE [a-z]. This is the list of the unidentified characters we want to exclude. From the output of step 6 we have enough data to make a series of substrings of the source to select everything but the unknown characters we want to exclude. **Note 1**: there are smart ways to optimize this, depending on the size of the original expression (steps 4, 5 and 6 can be made in one go). **Note 2**: this is not very fast, but it is the fastest way to get this done for a large data set, because the split of lines into words and words into letters is made by substring, which slices the whole table into one-character slices. 
However, this is quite heavy to build. With a smaller set, it may be enough to parse each record one by one and search for characters that are not in a list of all English characters plus all special characters.
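For a sense of what steps 2 through 6 boil down to, here is the same hunt compressed into a few lines of Python (illustrative only; the function name and the "known characters" list are choices made for this sketch, and the SQL version exists because the data lives in the database):

```python
import re

def find_odd_chars(text: str):
    """Return (position, codepoint) for every character that is not a plain
    ASCII letter/digit/space or a known punctuation mark."""
    known = set(" .,;:'\"!?()-")
    # collapse runs of control characters into single spaces first,
    # mirroring step 2 of the procedure above
    cleaned = re.sub(r"[\x00-\x1f]+", " ", text.lower())
    hits = []
    for pos, ch in enumerate(cleaned):
        if not (ch.isascii() and (ch.isalnum() or ch in known)):
            hits.append((pos, ord(ch)))
    return hits

# flags the accented letter and the zero-width space, but not the tab
print(find_odd_chars("caf\u00e9 menu\u200b"))
```

Each hit gives the position and code point of a character the normalization did not account for, which is the same information step 6 extracts.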
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function would exist in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other less common hidden characters that are not taken into account by me, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine, to show this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that once was the SQL Enterprise Manager. Right-click a table in SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
Create a function that addresses all the whitespace possibilities and enable only those that seem appropriate: ``` SELECT dbo.ShowWhiteSpace(myfield) from mytable ``` Uncomment only those whitespace cases you want to test. For example: ``` CREATE FUNCTION dbo.ShowWhiteSpace (@str varchar(8000)) RETURNS varchar(8000) AS BEGIN DECLARE @ShowWhiteSpace varchar(8000); SET @ShowWhiteSpace = @str SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(32), '[?]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(13), '[CR]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(10), '[LF]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(9), '[TAB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(1), '[SOH]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(2), '[STX]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(3), '[ETX]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(4), '[EOT]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(5), '[ENQ]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(6), '[ACK]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(7), '[BEL]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(8), '[BS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(11), '[VT]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(12), '[FF]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(14), '[SO]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(15), '[SI]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(16), '[DLE]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(17), '[DC1]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(18), '[DC2]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(19), '[DC3]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(20), '[DC4]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(21), '[NAK]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(22), '[SYN]') -- SET @ShowWhiteSpace = 
REPLACE( @ShowWhiteSpace, CHAR(23), '[ETB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(24), '[CAN]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(25), '[EM]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(26), '[SUB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(27), '[ESC]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(28), '[FS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(29), '[GS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(30), '[RS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(31), '[US]') RETURN(@ShowWhiteSpace) END ```
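The same make-the-invisible-visible trick is useful outside the database too, for example when eyeballing exported values in a script. A small Python counterpart of `ShowWhiteSpace` (the function name and chosen markers are this sketch's own, mirroring the T-SQL above):

```python
# Visible stand-ins for the most common invisible characters,
# mirroring the markers used by the T-SQL function above.
MARKS = {0x20: "[?]", 0x0D: "[CR]", 0x0A: "[LF]", 0x09: "[TAB]"}

def show_white_space(s: str) -> str:
    out = []
    for ch in s:
        code = ord(ch)
        if code in MARKS:
            out.append(MARKS[code])
        elif code < 32:
            out.append(f"[{code}]")  # any other control character, by code
        else:
            out.append(ch)
    return "".join(out)

print(show_white_space("a b\r\n\tc"))  # a[?]b[CR][LF][TAB]c
```

The fallback `[code]` branch covers the "less common hidden characters" case from the question (vertical tab and friends) without naming each one.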
I have faced the same problem with a character that I never managed to match with a where query - `CHARINDEX, LIKE, REPLACE`, etc. did not work. Then I used a brute-force solution which is awful and heavy but works: **Step 1**: make a copy of the complete data set - keep track of the original names with a source\_id referencing the pk of the source table (and keep this source id in all the subsequent tables). **Step 2**: `LTRIM RTRIM` the data, and replace all double spaces, tabs, etc. (basically all of CHAR(1) to CHAR(32)) with one space. Lowercase the whole set as well. **Step 3**: replace all the special characters that you know (get the list of all the quotes, double quotes, etc.) with something from a-z (I suggest z). Basically replace everything that is not a standard English character with a z (using nested REPLACE of REPLACE in a loop). **Step 4**: split by word into a second copy, where each word is in a separate row - the split is a `SUBSTRING` based on the position of the space characters - at this point, we will still miss the ones where there's a hidden space that we did not catch earlier. **Step 5**: split each word into a third copy, where each letter is in a separate row (I know it makes a very large table) - keep track of the charindex of each letter in a separate column. **Step 6**: select everything in the above table which is not LIKE [a-z]. This is the list of the unidentified characters we want to exclude. From the output of step 6 we have enough data to make a series of substrings of the source to select everything but the unknown characters we want to exclude. **Note 1**: there are smart ways to optimize this, depending on the size of the original expression (steps 4, 5 and 6 can be made in one go). **Note 2**: this is not very fast, but it is the fastest way to get this done for a large data set, because the split of lines into words and words into letters is made by substring, which slices the whole table into one-character slices. 
However, this is quite heavy to build. With a smaller set, it may be enough to parse each record one by one and search for characters that are not in a list of all English characters plus all special characters.
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function would exist in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other less common hidden characters that are not taken into account by me, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine, to show this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that once was the SQL Enterprise Manager. Right-click a table in SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
The way I did it was by selecting all of the data (`select * from myTable`), then right-clicking on the result set and choosing "Save results as..." to a CSV file. Opening the CSV file in Notepad++, I saw the LF characters that were not visible in the SQL Server result set.
You can always use the DATALENGTH function to determine if you have extra white space characters in text fields. This won't make the text visible but will show you where there are extra white space characters. ``` SELECT DATALENGTH('MyTextData ') AS BinaryLength, LEN('MyTextData ') AS TextLength ``` This will produce 11 for BinaryLength and 10 for TextLength. In a table your SQL would look like this: ``` SELECT * FROM tblA WHERE DATALENGTH(MyTextField) > LEN(MyTextField) ``` This function is usable in all versions of SQL Server beginning with 2005.
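The comparison works because `LEN` ignores trailing spaces while `DATALENGTH` counts stored bytes (for `nvarchar` data each character takes two bytes, so the raw `DATALENGTH > LEN` test would need adjusting). The varchar logic is easy to sanity-check outside SQL; a rough Python analogy (function name invented for the sketch):

```python
def has_trailing_space(value: str) -> bool:
    # varchar analogy: DATALENGTH ~ stored bytes (1 byte per char),
    # LEN ~ character count with trailing spaces stripped
    datalength = len(value.encode("latin-1"))
    len_ = len(value.rstrip(" "))
    return datalength > len_

print(has_trailing_space("MyTextData "))  # True: 11 bytes vs LEN 10
print(has_trailing_space("MyTextData"))   # False
```

Note the caveat this makes visible: the test only flags *trailing* whitespace, because `LEN` (like `rstrip`) leaves embedded and leading spaces alone.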
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function would exist in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other less common hidden characters that are not taken into account by me, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine, to show this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that once was the SQL Enterprise Manager. Right-click a table in SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
The way I did it was by selecting all of the data (`select * from myTable`), then right-clicking on the result set and choosing "Save results as..." to a csv file. Opening the csv file in Notepad++, I saw the LF characters that were not visible in the SQL Server result set.
To find them, you can use this ``` ;WITH cte AS ( SELECT 0 AS CharCode UNION ALL SELECT CharCode + 1 FROM cte WHERE CharCode <31 ) SELECT * FROM mytable T cross join cte WHERE EXISTS (SELECT * FROM mytable Tx WHERE Tx.PKCol = T.PKCol AND Tx.MyField LIKE '%' + CHAR(cte.CharCode) + '%' ) ``` Replacing the EXISTS with a JOIN will allow you to REPLACE them, but you'll get multiple rows... I can't think of a way around that...
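For readers who want to sanity-check the idea behind this CTE outside of SQL, a minimal JavaScript sketch of the same scan - reporting which control-character codes 0-31 occur in a value - might look like this (the sample input is an assumption):

```javascript
// Mirror of the CTE's check: which control codes (0-31) appear in a value?
function findControlChars(text) {
  const found = new Set();
  for (const ch of text) {
    const code = ch.codePointAt(0);
    if (code <= 31) found.add(code);
  }
  return [...found].sort((a, b) => a - b);
}

console.log(findControlChars("line1\r\nline2\tend")); // [9, 10, 13]
```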
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function existed in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other, less common hidden characters that I have not taken into account, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine for showing this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that was once the SQL Enterprise Manager. Right-click a table in the SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result, white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
Create a function that addresses all the whitespace possibilities and enable only those that seem appropriate: ``` SELECT dbo.ShowWhiteSpace(myfield) from mytable ``` Uncomment only those whitespace cases you want to test. For example: ``` CREATE FUNCTION dbo.ShowWhiteSpace (@str varchar(8000)) RETURNS varchar(8000) AS BEGIN DECLARE @ShowWhiteSpace varchar(8000); SET @ShowWhiteSpace = @str SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(32), '[?]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(13), '[CR]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(10), '[LF]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(9), '[TAB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(1), '[SOH]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(2), '[STX]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(3), '[ETX]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(4), '[EOT]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(5), '[ENQ]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(6), '[ACK]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(7), '[BEL]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(8), '[BS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(11), '[VT]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(12), '[FF]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(14), '[SO]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(15), '[SI]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(16), '[DLE]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(17), '[DC1]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(18), '[DC2]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(19), '[DC3]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(20), '[DC4]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(21), '[NAK]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(22), '[SYN]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(23), '[ETB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(24), '[CAN]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(25), '[EM]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(26), '[SUB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(27), '[ESC]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(28), '[FS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(29), '[GS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(30), '[RS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(31), '[US]') RETURN(@ShowWhiteSpace) END ```
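As a rough cross-check of the function's behavior, here is a small JavaScript port of the same idea with the four enabled cases; extend the marker map if you uncomment more codes (the marker strings are copied from the SQL function above):

```javascript
// Visualize the whitespace cases handled by dbo.ShowWhiteSpace:
// space -> [?], CR -> [CR], LF -> [LF], TAB -> [TAB].
const MARKERS = { " ": "[?]", "\r": "[CR]", "\n": "[LF]", "\t": "[TAB]" };

function showWhiteSpace(str) {
  return str.replace(/[ \r\n\t]/g, (ch) => MARKERS[ch]);
}

console.log(showWhiteSpace("a b\r\nc\t")); // "a[?]b[CR][LF]c[TAB]"
```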
To find them, you can use this ``` ;WITH cte AS ( SELECT 0 AS CharCode UNION ALL SELECT CharCode + 1 FROM cte WHERE CharCode <31 ) SELECT * FROM mytable T cross join cte WHERE EXISTS (SELECT * FROM mytable Tx WHERE Tx.PKCol = T.PKCol AND Tx.MyField LIKE '%' + CHAR(cte.CharCode) + '%' ) ``` Replacing the EXISTS with a JOIN will allow you to REPLACE them, but you'll get multiple rows... I can't think of a way around that...
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function existed in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other, less common hidden characters that I have not taken into account, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine for showing this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that was once the SQL Enterprise Manager. Right-click a table in the SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result, white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
To find them, you can use this ``` ;WITH cte AS ( SELECT 0 AS CharCode UNION ALL SELECT CharCode + 1 FROM cte WHERE CharCode <31 ) SELECT * FROM mytable T cross join cte WHERE EXISTS (SELECT * FROM mytable Tx WHERE Tx.PKCol = T.PKCol AND Tx.MyField LIKE '%' + CHAR(cte.CharCode) + '%' ) ``` Replacing the EXISTS with a JOIN will allow you to REPLACE them, but you'll get multiple rows... I can't think of a way around that...
I have faced the same problem with a character that I never managed to match with a where query - `CHARINDEX, LIKE, REPLACE`, etc. did not work. Then I used a brute force solution which is awful and heavy, but works: **Step 1**: make a copy of the complete data set - keep track of the original names with a source\_id referencing the pk of the source table (and keep this source id in all the subsequent tables). **Step 2**: `LTRIM RTRIM` the data, and replace all double spaces, tabs, etc. (basically all of CHAR(1) to CHAR(32)) by one space. Lowercase the whole set as well. **Step 3**: replace all the special characters that you know (get the list of all the quotes, double quotes, etc.) by something from a-z (I suggest z). Basically, replace everything that is not a standard English character by a z (using nested REPLACE of REPLACE in a loop). **Step 4**: split by word into a second copy, where each word is in a separate row - the split is a `SUBSTRING` based on the position of the space characters - at this point, we may still miss the ones where there's a hidden space that we did not catch earlier. **Step 5**: split each word into a third copy, where each letter is in a separate row (I know it makes a very large table) - keep track of the charindex of each letter in a separate column. **Step 6**: select everything in the above table which is not LIKE [a-z]. This is the list of the unidentified characters we want to exclude. From the output of step 6 we have enough data to make a series of substrings of the source to select everything but the unknown characters we want to exclude. **Note 1**: there are smart ways to optimize this, depending on the size of the original expression (steps 4, 5 and 6 can be made in one go). **Note 2**: this is not very fast, but it is the fastest way to get this done for a large data set, because the split of lines into words and words into letters is done by SUBSTRING, which slices the whole table into one-character slices. However, this is quite heavy to build. With a smaller set, it may be enough to parse each record one by one and search for characters which are not in a list of all English characters plus all special characters.
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function existed in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other, less common hidden characters that I have not taken into account, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine for showing this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that was once the SQL Enterprise Manager. Right-click a table in the SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result, white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
The way I did it was by selecting all of the data (`select * from myTable`), then right-clicking on the result set and choosing "Save results as..." to a csv file. Opening the csv file in Notepad++, I saw the LF characters that were not visible in the SQL Server result set.
I have faced the same problem with a character that I never managed to match with a where query - `CHARINDEX, LIKE, REPLACE`, etc. did not work. Then I used a brute force solution which is awful and heavy, but works: **Step 1**: make a copy of the complete data set - keep track of the original names with a source\_id referencing the pk of the source table (and keep this source id in all the subsequent tables). **Step 2**: `LTRIM RTRIM` the data, and replace all double spaces, tabs, etc. (basically all of CHAR(1) to CHAR(32)) by one space. Lowercase the whole set as well. **Step 3**: replace all the special characters that you know (get the list of all the quotes, double quotes, etc.) by something from a-z (I suggest z). Basically, replace everything that is not a standard English character by a z (using nested REPLACE of REPLACE in a loop). **Step 4**: split by word into a second copy, where each word is in a separate row - the split is a `SUBSTRING` based on the position of the space characters - at this point, we may still miss the ones where there's a hidden space that we did not catch earlier. **Step 5**: split each word into a third copy, where each letter is in a separate row (I know it makes a very large table) - keep track of the charindex of each letter in a separate column. **Step 6**: select everything in the above table which is not LIKE [a-z]. This is the list of the unidentified characters we want to exclude. From the output of step 6 we have enough data to make a series of substrings of the source to select everything but the unknown characters we want to exclude. **Note 1**: there are smart ways to optimize this, depending on the size of the original expression (steps 4, 5 and 6 can be made in one go). **Note 2**: this is not very fast, but it is the fastest way to get this done for a large data set, because the split of lines into words and words into letters is done by SUBSTRING, which slices the whole table into one-character slices. However, this is quite heavy to build. With a smaller set, it may be enough to parse each record one by one and search for characters which are not in a list of all English characters plus all special characters.
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function existed in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other, less common hidden characters that I have not taken into account, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine for showing this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that was once the SQL Enterprise Manager. Right-click a table in the SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result, white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
Create a function that addresses all the whitespace possibilities and enable only those that seem appropriate: ``` SELECT dbo.ShowWhiteSpace(myfield) from mytable ``` Uncomment only those whitespace cases you want to test. For example: ``` CREATE FUNCTION dbo.ShowWhiteSpace (@str varchar(8000)) RETURNS varchar(8000) AS BEGIN DECLARE @ShowWhiteSpace varchar(8000); SET @ShowWhiteSpace = @str SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(32), '[?]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(13), '[CR]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(10), '[LF]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(9), '[TAB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(1), '[SOH]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(2), '[STX]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(3), '[ETX]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(4), '[EOT]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(5), '[ENQ]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(6), '[ACK]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(7), '[BEL]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(8), '[BS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(11), '[VT]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(12), '[FF]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(14), '[SO]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(15), '[SI]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(16), '[DLE]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(17), '[DC1]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(18), '[DC2]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(19), '[DC3]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(20), '[DC4]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(21), '[NAK]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(22), '[SYN]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(23), '[ETB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(24), '[CAN]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(25), '[EM]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(26), '[SUB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(27), '[ESC]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(28), '[FS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(29), '[GS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(30), '[RS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(31), '[US]') RETURN(@ShowWhiteSpace) END ```
You can always use the DATALENGTH function to determine if you have extra white space characters in text fields. This won't make the text visible, but it will show you where the extra white space characters are. ``` SELECT DATALENGTH('MyTextData ') AS BinaryLength, LEN('MyTextData ') AS TextLength ``` This will produce 11 for BinaryLength and 10 for TextLength (LEN ignores trailing spaces). In a table your SQL would look like this: ``` SELECT * FROM tblA WHERE DATALENGTH(MyTextField) > LEN(MyTextField) ``` This function is usable in all versions of SQL Server beginning with 2005.
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function existed in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other, less common hidden characters that I have not taken into account, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine for showing this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that was once the SQL Enterprise Manager. Right-click a table in the SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result, white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
The way I did it was by selecting all of the data (`select * from myTable`), then right-clicking on the result set and choosing "Save results as..." to a csv file. Opening the csv file in Notepad++, I saw the LF characters that were not visible in the SQL Server result set.
``` select myfield, CAST(myfield as varbinary(max)) ... ```
8,655,909
When trying to identify erroneous data (often needing manual review and removal), I'd like an easy way of seeing hidden characters, such as TAB, Space, Carriage return and Line feed. Is there a built-in way for this? In a similar question here on stackoverflow, regarding Oracle, a DUMP(fieldname) function was suggested, but I don't know if that would make things easier even if a corresponding function existed in SQL Server, since I need to see the characters in their context. The best idea I could come up with was replacing the expected hidden characters with visible ones, like this: `SELECT REPLACE(REPLACE(REPLACE(REPLACE(myfield, ' ', '˙'), CHAR(13), '[CR]'), CHAR(10), '[LF]'), CHAR(9), '[TAB]') FROM mytable` Is there a better way? I don't like this way since there might be other, less common hidden characters that I have not taken into account, such as vertical TAB etc... Turning on "show hidden characters", as you can do in almost any text editor, would be such a nice feature in SQL Server Query Analyzer, so I almost expect that it can be done somehow in SQL Server as well... or at least that someone has an even better idea than mine for showing this kind of white space info. *I just noticed that there is a built-in way to see "white space", not in SQL Query Analyzer, but in the part of the interface that was once the SQL Enterprise Manager. Right-click a table in the SQL Management Studio Object Explorer tree, and select "Edit top 200 rows". In the result, white space (at least CR LF) is visible as empty squares.*
2011/12/28
[ "https://Stackoverflow.com/questions/8655909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047366/" ]
Create a function that addresses all the whitespace possibilities and enable only those that seem appropriate: ``` SELECT dbo.ShowWhiteSpace(myfield) from mytable ``` Uncomment only those whitespace cases you want to test. For example: ``` CREATE FUNCTION dbo.ShowWhiteSpace (@str varchar(8000)) RETURNS varchar(8000) AS BEGIN DECLARE @ShowWhiteSpace varchar(8000); SET @ShowWhiteSpace = @str SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(32), '[?]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(13), '[CR]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(10), '[LF]') SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(9), '[TAB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(1), '[SOH]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(2), '[STX]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(3), '[ETX]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(4), '[EOT]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(5), '[ENQ]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(6), '[ACK]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(7), '[BEL]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(8), '[BS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(11), '[VT]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(12), '[FF]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(14), '[SO]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(15), '[SI]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(16), '[DLE]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(17), '[DC1]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(18), '[DC2]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(19), '[DC3]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(20), '[DC4]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(21), '[NAK]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(22), '[SYN]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(23), '[ETB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(24), '[CAN]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(25), '[EM]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(26), '[SUB]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(27), '[ESC]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(28), '[FS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(29), '[GS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(30), '[RS]') -- SET @ShowWhiteSpace = REPLACE( @ShowWhiteSpace, CHAR(31), '[US]') RETURN(@ShowWhiteSpace) END ```
``` select myfield, CAST(myfield as varbinary(max)) ... ```
30,355,368
What is the difference between static and dynamic SQL? I have created a database connection for a jVector Map. The code is working and sets an alert box, showing the country name canada in every alert. This is my static SQL: ``` $sql = "SELECT countryId,country, pdogcoregion,ccl,category FROM countrydetails WHERE Country='canda'"; ``` How do I change it to dynamic - any example?
2015/05/20
[ "https://Stackoverflow.com/questions/30355368", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4896446/" ]
**Static SQL** consists of SQL statements in an application that do not change at runtime and can therefore be hard-coded into the application. **Dynamic SQL** consists of SQL statements that are constructed at runtime. In this case your query is static, so to change it into a dynamic one you would construct the query using variables - for example, by providing a form where the user chooses the country.
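To make the contrast concrete, here is a small JavaScript sketch (table and column names are taken from the question; the `?` placeholder style is an assumption - use whatever your database driver supports): the static statement is a fixed string, while the dynamic one is assembled at runtime from a user-supplied value.

```javascript
// Static SQL: the full statement is known before the program runs.
const staticSql =
  "SELECT countryId, country FROM countrydetails WHERE Country = 'canada'";

// Dynamic SQL: the statement's parameters are supplied at runtime.
// Driver placeholders (instead of string concatenation) also protect
// against SQL injection.
function buildCountryQuery(country) {
  return {
    text: "SELECT countryId, country FROM countrydetails WHERE Country = ?",
    params: [country],
  };
}

const q = buildCountryQuery("canada");
console.log(staticSql);
console.log(q.text, q.params);
```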
In static SQL, the structure of the statement will remain the same, but with dynamic SQL, it may change. In your example, you can use a parameter for the country variable to have dynamic SQL.
11,394,809
How to get this output by jQuery or Javascript: HTML: `This is a <span class="getClass" >test</span> for javascript and jquery Then suppose <span class="getClass" >it</span>. how can i change it in multiple line <span class="getClass" >look</span> like this` Output: `This is a test for javascript and jquery` I have tried some code, but it does not work, like: ``` <script> $('.getClass').unwrap(); </script> ``` but it deletes the parent element, not the element itself. I only have class-selector access to the element and want to delete the whole element. I cannot read the text inside the element. Thank you.
2012/07/09
[ "https://Stackoverflow.com/questions/11394809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/983777/" ]
Try this, **[Live Demo](http://jsfiddle.net/agrKZ/1/)** ``` $('.getClass').replaceWith($('.getClass').text()); ``` If you do not want to change the DOM but only need to get text without span tags then you can use ``` $('.getClass').text(); ```
Basically, you want only the text. Using `.text` to get returns the text with all tags stripped automatically. By using `.text` to set the text as well you effectively remove them: <http://jsfiddle.net/s4DJP/>. ``` $("...").text(function(i, text) { return text; }); ```
11,394,809
How to get this output by jQuery or Javascript: HTML: `This is a <span class="getClass" >test</span> for javascript and jquery Then suppose <span class="getClass" >it</span>. how can i change it in multiple line <span class="getClass" >look</span> like this` Output: `This is a test for javascript and jquery` I have tried some code, but it does not work, like: ``` <script> $('.getClass').unwrap(); </script> ``` but it deletes the parent element, not the element itself. I only have class-selector access to the element and want to delete the whole element. I cannot read the text inside the element. Thank you.
2012/07/09
[ "https://Stackoverflow.com/questions/11394809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/983777/" ]
Try this, **[Live Demo](http://jsfiddle.net/agrKZ/1/)** ``` $('.getClass').replaceWith($('.getClass').text()); ``` If you do not want to change the DOM but only need to get text without span tags then you can use ``` $('.getClass').text(); ```
You should be able to set it to the text content (see [`text()`](http://api.jquery.com/text)) ``` $(containerSelector).text(function (_, text) { return text; }); ```
11,394,809
How to get this output by jQuery or Javascript: HTML: `This is a <span class="getClass" >test</span> for javascript and jquery Then suppose <span class="getClass" >it</span>. how can i change it in multiple line <span class="getClass" >look</span> like this` Output: `This is a test for javascript and jquery` I have tried some code, but it does not work, like: ``` <script> $('.getClass').unwrap(); </script> ``` but it deletes the parent element, not the element itself. I only have class-selector access to the element and want to delete the whole element. I cannot read the text inside the element. Thank you.
2012/07/09
[ "https://Stackoverflow.com/questions/11394809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/983777/" ]
Try this, **[Live Demo](http://jsfiddle.net/agrKZ/1/)** ``` $('.getClass').replaceWith($('.getClass').text()); ``` If you do not want to change the DOM but only need to get text without span tags then you can use ``` $('.getClass').text(); ```
More generic, use a regular expression: ``` $().ready(function(){ var htmlstr = 'This is a <span class="getClass" >test</span> for javascript and jquery'; var regex = /(<([^>]+)>)/ig; var plainstr = htmlstr.replace(regex, ""); alert(plainstr); }); ``` <http://jsfiddle.net/Qjvxf/4/>
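The same regex works without jQuery; a plain-JavaScript version of this answer (fine for simple, trusted snippets - for arbitrary HTML a DOM-based approach such as `element.textContent` is safer) could be:

```javascript
// Strip every tag from a markup string, keeping only the text.
function stripTags(html) {
  return html.replace(/<\/?[^>]+>/g, "");
}

const input =
  'This is a <span class="getClass">test</span> for javascript and jquery';
console.log(stripTags(input)); // "This is a test for javascript and jquery"
```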
11,394,809
How to get this output by jQuery or Javascript: HTML: `This is a <span class="getClass" >test</span> for javascript and jquery Then suppose <span class="getClass" >it</span>. how can i change it in multiple line <span class="getClass" >look</span> like this` Output: `This is a test for javascript and jquery` I have tried some code, but it does not work, like: ``` <script> $('.getClass').unwrap(); </script> ``` but it deletes the parent element, not the element itself. I only have class-selector access to the element and want to delete the whole element. I cannot read the text inside the element. Thank you.
2012/07/09
[ "https://Stackoverflow.com/questions/11394809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/983777/" ]
You should be able to set it to the text content (see [`text()`](http://api.jquery.com/text)) ``` $(containerSelector).text(function (_, text) { return text; }); ```
Basically, you want only the text. Using `.text` to get returns the text with all tags stripped automatically. By using `.text` to set the text as well you effectively remove them: <http://jsfiddle.net/s4DJP/>. ``` $("...").text(function(i, text) { return text; }); ```
11,394,809
How to get this output by jQuery or Javascript: HTML: `This is a <span class="getClass" >test</span> for javascript and jquery Then suppose <span class="getClass" >it</span>. how can i change it in multiple line <span class="getClass" >look</span> like this` Output: `This is a test for javascript and jquery` I have tried some code, but it does not work, like: ``` <script> $('.getClass').unwrap(); </script> ``` but it deletes the parent element, not the element itself. I only have class-selector access to the element and want to delete the whole element. I cannot read the text inside the element. Thank you.
2012/07/09
[ "https://Stackoverflow.com/questions/11394809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/983777/" ]
You should be able to set it to the text content (see [`text()`](http://api.jquery.com/text)) ``` $(containerSelector).text(function (_, text) { return text; }); ```
More generically, use a regular expression: ``` $().ready(function(){ var htmlstr = 'This is a <span class="getClass" >test</span> for javascript and jquery'; var regex = /(<([^>]+)>)/ig; var plainstr = htmlstr.replace(regex, ""); alert(plainstr); }); ``` <http://jsfiddle.net/Qjvxf/4/>
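The regex approach above translates directly to other languages; a minimal Python sketch of the same idea (illustrative only — a regex is fine for simple markup like this, but a real HTML parser is safer for arbitrary documents):

```python
import re

def strip_tags(html):
    # Remove anything that looks like an HTML tag, keeping only the text.
    return re.sub(r"<[^>]+>", "", html)

sample = 'This is a <span class="getClass" >test</span> for javascript and jquery'
print(strip_tags(sample))  # This is a test for javascript and jquery
```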
44,476
I'm trying to convert a multipage color tiff file to a CompressionCCITT3 tiff in C#. I realize that I need to make sure that all pixels are 1 bit. I have not found a useful example of this online.
2008/09/04
[ "https://Stackoverflow.com/questions/44476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/178/" ]
You need this conversion as CCITT3 and CCITT4 don't support color (if I remember right).
I saw the above code, and it looked like it was converting every pixel with manual logic. Would this work for you? ``` Imports System.Drawing.Imaging 'get the color tif file Dim bmpColorTIF As New Bitmap("C:\color.tif") 'select an area of the tif (will grab all frames) Dim rectColorTIF As New Rectangle(0, 0, bmpColorTIF.Width, bmpColorTIF.Height) 'clone the rectangle as a 1-bit tif Dim bmpBlackWhiteTIF As Bitmap = bmpColorTIF.Clone(rectColorTIF, PixelFormat.Format1bppIndexed) 'do what you want with the new bitmap (save, etc) ... ``` Note: there are a ton of pixel formats to choose from.
44,476
I'm trying to convert a multipage color tiff file to a CompressionCCITT3 tiff in C#. I realize that I need to make sure that all pixels are 1 bit. I have not found a useful example of this online.
2008/09/04
[ "https://Stackoverflow.com/questions/44476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/178/" ]
You need this conversion as CCITT3 and CCITT4 don't support color (if I remember right).
Pimping disclaimer: I work for [Atalasoft](http://www.atalasoft.com), a company that makes .NET imaging software. Using [dotImage](http://www.atalasoft.com/products/dotimage/documentimaging/default.aspx), this task becomes something like this: ``` FileSystemImageSource source = new FileSystemImageSource("path-to-your-file.tif", true); // true = loop over all frames // tiff encoder will auto-select an appropriate compression - CCITT4 for 1 bit. TiffEncoder encoder = new TiffEncoder(); encoder.Append = true; // DynamicThresholdCommand is very good for documents. For pictures, use DitherCommand DynamicThresholdCommand threshold = new DynamicThresholdCommand(); using (FileStream outstm = new FileStream("path-to-output.tif", FileMode.Create)) { while (source.HasMoreImages()) { AtalaImage image = source.AcquireNext(); AtalaImage finalImage = image; // convert when needed. if (image.PixelFormat != PixelFormat.Pixel1bppIndexed) { finalImage = threshold.Apply().Image; } encoder.Save(outstm, finalImage, null); if (finalImage != image) { finalImage.Dispose(); } source.Release(image); } } ``` The Bob Powell example is good, as far as it goes, but it has a number of problems, not the least of which is that it's using a simple threshold, which is terrific if you want speed and don't actually care what your output looks like or your input domain is such that really is pretty much black and white already - just represented in color. Binarization is a tricky problem. When your task is to reduce available information by 1/24th, how to keep the right information and throw away the rest is a challenge. DotImage has six different tools (IIRC) for binarization. SimpleThreshold is bottom of the barrel, from my point of view.
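For contrast, the "simple threshold" binarization that this answer calls bottom-of-the-barrel can be sketched in a few lines of plain Python (an illustrative sketch only — this is not the Atalasoft API, and real code would operate on image buffers rather than nested lists):

```python
def simple_threshold(gray_rows, threshold=128):
    # Naive fixed-threshold binarization: pixels at or above the
    # threshold become white (1), the rest black (0). Works for clean
    # black-on-white scans; fails on photos or unevenly lit pages,
    # which is why adaptive/dynamic thresholding exists.
    return [[1 if p >= threshold else 0 for p in row] for row in gray_rows]

page = [
    [250, 252, 30, 251],  # light row with one dark (ink) pixel
    [40, 35, 245, 38],    # mostly dark text pixels
]
print(simple_threshold(page))  # [[1, 1, 0, 1], [0, 0, 1, 0]]
```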
44,476
I'm trying to convert a multipage color tiff file to a CompressionCCITT3 tiff in C#. I realize that I need to make sure that all pixels are 1 bit. I have not found a useful example of this online.
2008/09/04
[ "https://Stackoverflow.com/questions/44476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/178/" ]
You need this conversion as CCITT3 and CCITT4 don't support color (if I remember right).
I suggest experimenting with the desired results first using tiff and image utilities before diving into the coding. I found [VIPS](http://www.vips.ecs.soton.ac.uk/index.php?title=VIPS) to be a handy tool. The next option is to look into what LibTIFF can do. I've had good results with the free [LibTiff.NET](http://bitmiracle.com/libtiff/) using C# (see also [stackoverflow](https://stackoverflow.com/questions/2041783/using-libtiff-from-c-to-access-tiled-tiff-images)). I was very disappointed by the GDI tiff functionality, although your mileage may vary (I need the missing 16-bit grayscale). You can also use the [LibTiff](http://www.libtiff.org/) utilities (e.g. see <http://www.libtiff.org/man/tiffcp.1.html>)
44,476
I'm trying to convert a multipage color tiff file to a CompressionCCITT3 tiff in C#. I realize that I need to make sure that all pixels are 1 bit. I have not found a useful example of this online.
2008/09/04
[ "https://Stackoverflow.com/questions/44476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/178/" ]
Pimping disclaimer: I work for [Atalasoft](http://www.atalasoft.com), a company that makes .NET imaging software. Using [dotImage](http://www.atalasoft.com/products/dotimage/documentimaging/default.aspx), this task becomes something like this: ``` FileSystemImageSource source = new FileSystemImageSource("path-to-your-file.tif", true); // true = loop over all frames // tiff encoder will auto-select an appropriate compression - CCITT4 for 1 bit. TiffEncoder encoder = new TiffEncoder(); encoder.Append = true; // DynamicThresholdCommand is very good for documents. For pictures, use DitherCommand DynamicThresholdCommand threshold = new DynamicThresholdCommand(); using (FileStream outstm = new FileStream("path-to-output.tif", FileMode.Create)) { while (source.HasMoreImages()) { AtalaImage image = source.AcquireNext(); AtalaImage finalImage = image; // convert when needed. if (image.PixelFormat != PixelFormat.Pixel1bppIndexed) { finalImage = threshold.Apply().Image; } encoder.Save(outstm, finalImage, null); if (finalImage != image) { finalImage.Dispose(); } source.Release(image); } } ``` The Bob Powell example is good, as far as it goes, but it has a number of problems, not the least of which is that it's using a simple threshold, which is terrific if you want speed and don't actually care what your output looks like or your input domain is such that really is pretty much black and white already - just represented in color. Binarization is a tricky problem. When your task is to reduce available information by 1/24th, how to keep the right information and throw away the rest is a challenge. DotImage has six different tools (IIRC) for binarization. SimpleThreshold is bottom of the barrel, from my point of view.
I saw the above code, and it looked like it was converting every pixel with manual logic. Would this work for you? ``` Imports System.Drawing.Imaging 'get the color tif file Dim bmpColorTIF As New Bitmap("C:\color.tif") 'select an area of the tif (will grab all frames) Dim rectColorTIF As New Rectangle(0, 0, bmpColorTIF.Width, bmpColorTIF.Height) 'clone the rectangle as a 1-bit tif Dim bmpBlackWhiteTIF As Bitmap = bmpColorTIF.Clone(rectColorTIF, PixelFormat.Format1bppIndexed) 'do what you want with the new bitmap (save, etc) ... ``` Note: there are a ton of pixel formats to choose from.
44,476
I'm trying to convert a multipage color tiff file to a CompressionCCITT3 tiff in C#. I realize that I need to make sure that all pixels are 1 bit. I have not found a useful example of this online.
2008/09/04
[ "https://Stackoverflow.com/questions/44476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/178/" ]
I suggest experimenting with the desired results first using tiff and image utilities before diving into the coding. I found [VIPS](http://www.vips.ecs.soton.ac.uk/index.php?title=VIPS) to be a handy tool. The next option is to look into what LibTIFF can do. I've had good results with the free [LibTiff.NET](http://bitmiracle.com/libtiff/) using C# (see also [stackoverflow](https://stackoverflow.com/questions/2041783/using-libtiff-from-c-to-access-tiled-tiff-images)). I was very disappointed by the GDI tiff functionality, although your mileage may vary (I need the missing 16-bit grayscale). You can also use the [LibTiff](http://www.libtiff.org/) utilities (e.g. see <http://www.libtiff.org/man/tiffcp.1.html>)
I saw the above code, and it looked like it was converting every pixel with manual logic. Would this work for you? ``` Imports System.Drawing.Imaging 'get the color tif file Dim bmpColorTIF As New Bitmap("C:\color.tif") 'select an area of the tif (will grab all frames) Dim rectColorTIF As New Rectangle(0, 0, bmpColorTIF.Width, bmpColorTIF.Height) 'clone the rectangle as a 1-bit tif Dim bmpBlackWhiteTIF As Bitmap = bmpColorTIF.Clone(rectColorTIF, PixelFormat.Format1bppIndexed) 'do what you want with the new bitmap (save, etc) ... ``` Note: there are a ton of pixel formats to choose from.
33,397,540
My CSS is validated, but still breaks when adding `<!DOCTYPE html>`. What am I doing wrong? I have searched the forums and the common response seems to be "add height: 100% to the body, html tags". Did that, but no luck. Without DOCTYPE: <http://www.babeweiser.com/rockhistory/> With DOCTYPE: <http://www.babeweiser.com/rockhistory/test.php> **CSS** ``` html, body { height: 100%; width: 100%; background: #333333; } div.Container { margin: auto; width: 90%; background: #5e6d3d; padding: 10px; } p { font-family: sans-serif; } .Table { display: Table; } .Title { display: table-caption; text-align: center; font-weight: bold; font-size: larger; background: #c6d4a8; } .Heading { display: table-row; font-weight: bold; text-align: center; } .Row { display: table-row; height: 100%; width: 100%; } div.row:nth-child(odd) { background: #daedb2; } div.row:nth-child(even) { background: #c6d4a8; } .Cell { display: table-cell; padding: 15px; } select:required:invalid { color: #999; } option { color: #000; } ``` **test.php:** ``` <!DOCTYPE html> <html> <head> <title>Today in Rock History</title> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"> </script> <link rel="stylesheet" type="text/css" href="rockhistory2.css"> <script> $(function() { $.ajax({ type: "POST", url: "pull_history2.php", data: "" + status, success: function(data){ document.getElementById("demo").innerHTML = data; } }); }); </script> <script> $(document).ready(function() { $(document).on('submit', '#reg-form', function() { var data = $(this).serialize(); $.ajax({ type : 'POST', url : 'pull_history2.php', data : data, success : function(data) { document.getElementById("demo").innerHTML = data; } }); return false; }); }); </script> </head> <body> <div class="Container"> <form id="reg-form" name="reg-form" method="post"> <select name="month" required id="month" size="1"> <option value="" disabled selected>Month</option> <?php $mo = 1; while($mo <= 12) { echo '<option value= "' . $mo . '">' . 
date("F", mktime(0, 0, 0, $mo+1, 0, 0)) . '</option>'; echo "\n"; $mo++; } ?> </select> <select name="day" required id="day" size="1"> <option value="" disabled selected>Day</option> <?php $da = 1; while($da <= 31) { echo '<option value= "' . $da . '">' . date("j", mktime(0, 0, 0, 0, $da, 0)) . '</option>'; echo "\n"; $da++; } ?> </select> <button type="submit" >Go</button> </form> <p id="demo"></p> </div> </body> </html> ``` **pull\_history2.php:** ``` <?php if($_POST) { $month= $_POST['month']; $day= $_POST['day']; } else { $month = date('n'); $day = date('j'); } $tdate=date("F j", mktime(0, 0, 0, $month, $day, 0)); ?> <div class="Table"> <div class="Title"> <p><? echo "This Day in Rock History for $tdate" ?></p> </div> <div class-"heading"> </div> <?php $db = mysql_connect("localhost","xxx", "xxx"); mysql_select_db("babewe5_wlup",$db); $result = mysql_query("SELECT * FROM RockHistory081512 WHERE month=$month AND day=$day ORDER BY year",$db); if (!$result) { echo("ERROR: " . mysql_error() . "\n$SQL\n"); } while ($row = mysql_fetch_array ($result)) { ?> <div class="Row"> <div class="Cell"> <p><? echo $row["year"] ?></p> </div> <div class="Cell"> <p><? echo $row["history"] ?></p> </div> </div> <? } mysql_free_result ($result); ?> <div class="Row"> <div class="Cell"> </div> <div class="Cell"> <p><small>Copyright &copy; <? echo date("Y"); ?> Tim Spencer</p> </div> </div> ```
2015/10/28
[ "https://Stackoverflow.com/questions/33397540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5463684/" ]
You are using `.Row` and `.row` in your CSS. One is initial caps and the other is not. Change `div.row:nth-child` to `div.Row:nth-child`. Applying the `doctype` enforces a set of rules on your document. Without the `doctype` the two are treated the same.
It is the case sensitivity: either change the class name to "row" (lowercase) or change the relevant styles in the stylesheet to div.Row:nth-child(even) and div.Row:nth-child(odd) (uppercase R). You should also check the browser compatibility of the nth-child selectors; they are supported in IE 9 and higher. <https://developer.mozilla.org/en-US/docs/Web/CSS/:nth-child>
50,639,973
I think I have some issues with either Python and/or pip on my Mac. I have Python 2.7 installed globally, and I normally set up virtualenvs and install Python 3.6.4, but in the last day or so I've been getting problems with packages such as Fabric and SSH2, where I have either not been able to install them (with various errors) or, with Fabric, it throws when I try to import the package. I'm now trying to remove Fabric and install Fabric3, and it's throwing errors like this: ``` Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/Users/david/Documents/projects/uptimeapp/env/lib/python3.6/site-packages/Fabric3-1.14.post1.dist-info' Consider using the `--user` option or check the permissions. (env) Davids-MacBook-Air:uptimeapp david$ pip install fabric3 --user Can not perform a '--user' install. User site-packages are not visible in this virtualenv. ``` If I do `sudo pip install fabric` then it installs, but with this warning: ``` The directory '/Users/david/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/david/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. ``` But I thought it was not advised to pip install with sudo? These are the errors I get when I try to `pip install ssh2-python`: ``` ssh2/agent.c:569:10: fatal error: 'libssh2.h' file not found #include "libssh2.h" ^~~~~~~~~~~ 1 error generated. 
error: command 'clang' failed with exit status 1 ---------------------------------------- Command "/Users/david/Documents/projects/uptimeapp/env/bin/python3.6 -u -c "import setuptools, tokenize;__file__='/private/var/folders/bl/97vt48j97zd2sj05zmt4xst00000gn/T /pip-install-mpyq41q4/ssh2-python/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /private/var/folders/bl/97vt48j97zd2sj05zmt4xst00000gn/T/pip-record-qul_k3kq/install-record.txt --single-version-externally-managed --compile - -install-headers /Users/david/Documents/projects/uptimeapp/env/bin/../include/site/python3.6 /ssh2-python" failed with error code 1 in /private/var/folders/bl/97vt48j97zd2sj05zmt4xst00000gn/T/pip-install-mpyq41q4/ssh2-python/ ``` I have managed to remove Fabric and install Fabric3 with the sudo command but I would rather not do that. I should add that Ive not had any other problems with installing other packages either globally in Python2.7 or in envs.
2018/06/01
[ "https://Stackoverflow.com/questions/50639973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7266376/" ]
The `permission denied` error is raised because you've already borked your virtual environment by installing with `sudo`. Run ``` $ sudo chown -R david:staff /Users/david/Documents/projects/uptimeapp/env ``` to fix the permissions. Maybe it's even wise to fix the permissions for the whole home dir, should you have other permission issues: ``` $ sudo chown -R david:staff /Users/david/ ``` Now reinstalling packages should work again: ``` $ source /Users/david/Documents/projects/uptimeapp/env/bin/activate $ (env) pip uninstall -y fabric $ (env) pip install fabric ``` > > `'libssh2.h' file not found` > > > means that before installing `ssh-python`, you need to install the according lib first: ``` $ brew install libssh2 ```
You can make pip install the package into the virtualenv's library location: ``` sudo -H venv/bin/pip install fabric ```
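A quick way to check which kind of environment the interpreter is running in (the "--user install" error above fires precisely because pip detects a virtualenv) is to compare the interpreter prefixes. A small Python sketch, assuming CPython 3.3+ semantics for `base_prefix`:

```python
import sys

def in_virtualenv():
    # Inside a venv/virtualenv, sys.prefix points at the environment,
    # while the base interpreter is exposed as real_prefix (older
    # virtualenv releases) or base_prefix (Python 3.3+).
    base = getattr(sys, "real_prefix", None) or getattr(sys, "base_prefix", sys.prefix)
    return sys.prefix != base

print(in_virtualenv())
```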
9,037,842
I'm trying to get a list of 'contacts' for a specified user id. Let's say my user id is 1; I need to get the list of ids of my contacts from *chat-contactlist*, then get all the info for each id. **All users' id, name and contact information** Table usr: uid, rname, phonenumber **Online status and other stuff** Table chat-usr: uid, nickname, online\_status **Containing the user id and the user id of each contact this user has:** Table chat-contactlist: uid, cid (cid = the id of the person who is in the "uid" user's list) So I need the name, the nickname and the online\_status for all the 'cid' for a specified 'uid'. I read a tutorial about left joins, but it seems complex to merge multiple tables; can anyone help? Any recommendation? Thank you **EDIT** Changed name to rname because name is a reserved word in SQL.
2012/01/27
[ "https://Stackoverflow.com/questions/9037842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/473908/" ]
You can control this from within emacs by writing a function that (temporarily) sets `default-directory` and calls `compile`. ``` (defun compile-in-parent-directory () (interactive) (let ((default-directory (if (string= (file-name-extension buffer-file-name) "ml") (concat default-directory "..") default-directory))) (call-interactively #'compile))) ``` When using `compile-in-parent-directory` all `ml` files will be compiled in the parent directory of where they are. Of course if they are nested deeper you can change the logic to reflect that. In fact there is a [version on the EmacsWiki](http://www.emacswiki.org/emacs/UsingMakefileFromParentDirectory) which searches parent directories until it finds a makefile. I found this after I wrote this answer, otherwise I would have just pointed you there. *sigh*. The good thing about my method is that it's not specific to `make`, so you can use the same "trick" for other commands. You can also change the call to compile to be non-interactive if you know exactly what you want the command to be. This would work particularly well if it's bound to a key in the appropriate mode hook.
I use a script like this, which allows me to run make from any sub-directory (assuming you are in a POSIX-like environment). Just put this script in your PATH as something like "sub\_make.sh" and invoke it the same way you would invoke make: ``` #!/bin/bash # search for project base INIT_DIR=`pwd` while [ "$PWD" != "/" ] ; do if [ -e "makefile" ] ; then break fi cd .. done if [ ! -e "makefile" ] ; then echo "Couldn't find 'makefile'!" exit 1 fi # indicate where we are now echo "cd "`pwd` echo make "$@" # now run make for real exec make "$@" ```
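The upward search in the shell script above is easy to mirror in Python if you prefer scripting it there; a hedged sketch (returns the directory containing the makefile, or None once the filesystem root is reached without a hit):

```python
import os

def find_makefile(start_dir):
    """Walk upward from start_dir until a file named 'makefile' is found."""
    d = os.path.abspath(start_dir)
    while True:
        if os.path.exists(os.path.join(d, "makefile")):
            return d  # directory that holds the makefile
        parent = os.path.dirname(d)
        if parent == d:  # reached the filesystem root
            return None
        d = parent
```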
9,037,842
I'm trying to get a list of 'contacts' for a specified user id. Let's say my user id is 1; I need to get the list of ids of my contacts from *chat-contactlist*, then get all the info for each id. **All users' id, name and contact information** Table usr: uid, rname, phonenumber **Online status and other stuff** Table chat-usr: uid, nickname, online\_status **Containing the user id and the user id of each contact this user has:** Table chat-contactlist: uid, cid (cid = the id of the person who is in the "uid" user's list) So I need the name, the nickname and the online\_status for all the 'cid' for a specified 'uid'. I read a tutorial about left joins, but it seems complex to merge multiple tables; can anyone help? Any recommendation? Thank you **EDIT** Changed name to rname because name is a reserved word in SQL.
2012/01/27
[ "https://Stackoverflow.com/questions/9037842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/473908/" ]
You can call `make` with the right arguments: ``` make -C .. -k ``` where `..` is the path to your `Makefile`
You can control this from within emacs by writing a function that (temporarily) sets `default-directory` and calls `compile`. ``` (defun compile-in-parent-directory () (interactive) (let ((default-directory (if (string= (file-name-extension buffer-file-name) "ml") (concat default-directory "..") default-directory))) (call-interactively #'compile))) ``` When using `compile-in-parent-directory` all `ml` files will be compiled in the parent directory of where they are. Of course if they are nested deeper you can change the logic to reflect that. In fact there is a [version on the EmacsWiki](http://www.emacswiki.org/emacs/UsingMakefileFromParentDirectory) which searches parent directories until it finds a makefile. I found this after I wrote this answer, otherwise I would have just pointed you there. *sigh*. The good thing about my method is that it's not specific to `make`, so you can use the same "trick" for other commands. You can also change the call to compile to be non-interactive if you know exactly what you want the command to be. This would work particularly well if it's bound to a key in the appropriate mode hook.
9,037,842
I'm trying to get a list of 'contacts' for a specified user id. Let's say my user id is 1; I need to get the list of ids of my contacts from *chat-contactlist*, then get all the info for each id. **All users' id, name and contact information** Table usr: uid, rname, phonenumber **Online status and other stuff** Table chat-usr: uid, nickname, online\_status **Containing the user id and the user id of each contact this user has:** Table chat-contactlist: uid, cid (cid = the id of the person who is in the "uid" user's list) So I need the name, the nickname and the online\_status for all the 'cid' for a specified 'uid'. I read a tutorial about left joins, but it seems complex to merge multiple tables; can anyone help? Any recommendation? Thank you **EDIT** Changed name to rname because name is a reserved word in SQL.
2012/01/27
[ "https://Stackoverflow.com/questions/9037842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/473908/" ]
You can call `make` with the right arguments: ``` make -C .. -k ``` where `..` is the path to your `Makefile`
Not a completely general solution w.r.t makefile location, but adding this here for posterity because it solved my particular use-case. If you use `projectile` and your makefile is always in the root of your project directory, then you can use `projectile-compile-project`. (In my case, I wanted to lint my project, so calling `(compile "flake8")` would only flake from the current buffer's directory downwards, whereas what I really wanted was linting of the entire project. `projectile-compile-project` achieves this.)
9,037,842
I'm trying to get a list of 'contacts' for a specified user id. Let's say my user id is 1; I need to get the list of ids of my contacts from *chat-contactlist*, then get all the info for each id. **All users' id, name and contact information** Table usr: uid, rname, phonenumber **Online status and other stuff** Table chat-usr: uid, nickname, online\_status **Containing the user id and the user id of each contact this user has:** Table chat-contactlist: uid, cid (cid = the id of the person who is in the "uid" user's list) So I need the name, the nickname and the online\_status for all the 'cid' for a specified 'uid'. I read a tutorial about left joins, but it seems complex to merge multiple tables; can anyone help? Any recommendation? Thank you **EDIT** Changed name to rname because name is a reserved word in SQL.
2012/01/27
[ "https://Stackoverflow.com/questions/9037842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/473908/" ]
You can control this from within emacs by writing a function that (temporarily) sets `default-directory` and calls `compile`. ``` (defun compile-in-parent-directory () (interactive) (let ((default-directory (if (string= (file-name-extension buffer-file-name) "ml") (concat default-directory "..") default-directory))) (call-interactively #'compile))) ``` When using `compile-in-parent-directory` all `ml` files will be compiled in the parent directory of where they are. Of course if they are nested deeper you can change the logic to reflect that. In fact there is a [version on the EmacsWiki](http://www.emacswiki.org/emacs/UsingMakefileFromParentDirectory) which searches parent directories until it finds a makefile. I found this after I wrote this answer, otherwise I would have just pointed you there. *sigh*. The good thing about my method is that it's not specific to `make`, so you can use the same "trick" for other commands. You can also change the call to compile to be non-interactive if you know exactly what you want the command to be. This would work particularly well if it's bound to a key in the appropriate mode hook.
Not a completely general solution w.r.t makefile location, but adding this here for posterity because it solved my particular use-case. If you use `projectile` and your makefile is always in the root of your project directory, then you can use `projectile-compile-project`. (In my case, I wanted to lint my project, so calling `(compile "flake8")` would only flake from the current buffer's directory downwards, whereas what I really wanted was linting of the entire project. `projectile-compile-project` achieves this.)