is it better to submit a paper to a journal "normal" issue, or wait 4 months and send it to a more focused "special section" of the same journal?
there's another alternative. you can submit to the special issue early. usually journals don't have "easier" submission to a special issue or section. most journals try to have the same review and editing criteria for such contributions as they do for other articles. (consider that having different criteria makes it harder for them to process and track such articles.) i can attest to this - i had a manuscript that was asked for significant revision, and it didn't make the "special issue" by the time we'd revised it. but you can submit the article and ask for inclusion in the special issue and explain that you have the manuscript done now. they will probably send it for review, etc., but it won't be published until the remainder of the special section is finished as well. i've done this. it gets the paper off your desk and lets you move on to other things. what are the pros and cons? pro: usually a special section or special issue gets added attention and publicity. i haven't seen analysis, but one would hope that articles in this section or issue would have more readers and potentially more citations than in a normal issue. con: you will have to wait for the special issue to finalize, while if you submit to the "regular" journal, publication will probably happen sooner. on the other hand, if the journal publishes accepted articles online ahead of the issue, or you can put the manuscript on a pre-print server, there's little downside. personally, i'd submit early, indicate in the submission letter (and online forms) that the article is for the special section, and be done with it.
comparison
is university teaching better than high school teaching experience in applying for teaching-funded phds?
if there is an option, university teaching is definitely more valuable, as it is at the same level as the teaching you will be doing while working toward your degree. however, you should consider that a phd is a research-based degree and the teaching obligation is not the primary concern. so you might get the position without having any teaching experience and finish your phd a year or so younger.
comparison
is it better to go to class, or watch the recorded lectures?
just go to the class. in class you can ask questions about things you do not understand, or answer the instructor's questions as well. moreover, you also have the chance to interact with and get to know your fellow students and coordinate assignments (for group assignments) or reading sessions. if this were a concert of your favorite artist, would you consider watching the concert on youtube the same as actually being there? especially if you already paid for the ticket (since you are officially enrolled in the class and therefore paid the corresponding fees - if any). live interaction during a course is vastly superior to watching a lecture in your pajamas, which is still a valid alternative for people who cannot be there (sickness, online learning, free courses on coursera). do not miss this opportunity you are given to actually be there, and use video lectures as supplementary material, as the excellent answer from @patriciashanahan already states.
comparison
what advantages and disadvantages should i consider in deciding whether to publish my academic book traditionally or self-publish?
should i publish it traditionally or self-publish? why do you want to publish at all? you answered: "i write the book to store down my research results and to spread my new knowledge. to make money is not the main aim, but it would be nice." given that: the answer is that you should certainly not self-publish your work. you can store your results and spread knowledge by having the material freely available on the internet, as i believe is already the case. the arxiv is one nice place to put work, but it is not the only one: you could put it on github or any number of other repositories. you can just put it on your own website and make sure that google indexes it. that means that billions of people can access it at any time. let me be clear with you: you are not going to make money self-publishing works of mathematics that you have not been able to publish traditionally. it is exceedingly rare for any mathematical text beyond the undergraduate level to make a profit that is worth the time taken to write it. (maybe a few of serge lang's books qualify; probably not.) if you go the self-publishing route rather than traditional publishing, you will lose money, and what you're paying for is the vanity of being a published author. the bar for interest by the mathematical community is much lower than the bar for the type of public interest needed to generate any real sales. the thought "my ideas are too bold for the mathematical community, so i need to take matters into my own hands; they don't know the value of my work as well as i do" is not only crankish but actually specifically damaging to you: it makes you ideal prey for predators of various kinds. you told us in a previous question that you literally fell prey to a diploma mill and thereby lost money. the same mindset that you have now is going to cost you more money in the future. i'm sorry to tell you this, but this has been going on for several years now, so i feel i should be plain: no one in the world has found your work to be of significant mathematical value. this means, with probability slightly less than one, that your work does not have significant mathematical value. but in the unlikely event that your work does have value, you're not doing what is necessary in order to show it. mathematical research is not about simply writing down structures that generalize other structures and proving results about them. you have to solve old problems or pose new ones that are of interest to the community. bold statements of superiority would be a positive thing if they were specific and factual: for your work to be "superior", it should solve at least one problem that others have posed. if you've done that, please explain yourself properly and then your work can be published in the mathematical mainstream. if you haven't: please start to be honest with yourself about the value of your work. your livelihood is at stake.
comparison
the book suggested for a class has bad reviews, should i use it or another?
first, note that a course on oop is probably not the same as a course just on c++. the c++ language was designed by stroustrup to support the developer in multiple styles of programming, including oo. c++ today has become a huge and complex language, and one for which professional programming style has evolved significantly since the '98 standard. for example, its support for and take-up of functional programming has grown significantly. i'm assuming you want to do well on the course, as opposed to simply learning skills. now, it's a long while since i've had a look at deitel and deitel, and i can't say i've good memories of it. however, hopefully the book is recommended because it takes an approach supporting the oop of the course and the subset of c++ you are expected to learn on the course. so go ahead and get the book. having said that, if you really want to learn c++ itself well enough to be able to use it as a professional, be prepared to buy a few more books and put in a lot more work after you've done this introductory course.
comparison
1-year professional masters at excellent school or 2-year research-oriented masters at great school?
one-year course-based ms degree programs are typically "terminal degrees" designed for students who want to work in industry and not pursue a phd and an academic career. if you want to work in industry and are prepared to give up on any hope of an academic career, then you should seriously consider the one-year ms program. if you want to keep open the possibility of doing a phd later, you should go for the 2-year ms program with thesis.
comparison
should those more interested in research choose a postdoc or teaching position?
in my discipline, mathematics, there is a huge range of tenure track positions, from very strongly research oriented positions to positions that are entirely oriented towards teaching. most new phds would prefer to end up in a more research oriented position, but most tenure track faculty positions are not at that end of the spectrum. this means that a lot of new phds will ultimately have to settle for something less than the research oriented position that they have dreamed of. if you're only willing to accept a research oriented tenure track position and would not accept a teaching oriented position, then you should focus your efforts on getting a research oriented post-doc. if you are most interested in a research oriented position but would at least be willing to consider taking a somewhat more teaching oriented position, then you should make an attempt to get some teaching experience by doing some teaching during your post-doc or by taking a position that is designed to mix research and teaching. for example, dartmouth has named instructorships in mathematics with a teaching load of one course per quarter. there are also non tenure track faculty positions (typically called "visiting assistant professor"). these are a good way to get teaching experience, but it is extremely difficult to get any research accomplished while teaching a load of 3-4 courses per semester in such a position. these positions are sometimes created to temporarily fill the vacancy created when a tenured faculty member leaves or retires. sometimes the visiting assistant professor position turns into a tenure track position in that department, but you shouldn't count on this happening.
comparison
which one is more beneficial, working independently or collaborating with other researchers?
i'm trying to coin a phrase: interior point maximum. well, the phrase already exists: it means that the function you are trying to maximize, which is defined on some closed, bounded interval [a,b], does not have its maximum value at either a or b but rather at some point c somewhere strictly in between. most functions one studies in calculus have interior point maxima, though there are obvious exceptions: for example, if the function is increasing, its maximum is at b, and if it is decreasing, its maximum is at a. perhaps this is because the methods of calculus only speak to interior point maxima: the basic observation is that if the function is differentiable, an interior maximum must occur at a stationary point, i.e., a point at which the instantaneous rate of change is zero. (famously, the converse is not true; the precise statement is spelled out after this answer.) what's the point of this math lesson? it's this: when we step outside of math class we tend to completely forget about this phenomenon and try to stare off into space and figure out which of the two extreme points, a and b, is better. but in many real-world situations it is actually pretty obvious that the maximum must be at an interior point. to me at least, the current question is a case of this. if you never collaborate, then you never benefit from anyone else's expertise. in research we almost never do exactly what we want: rather we collect various pieces of what we want to do, and then have to make hard choices about how and when to combine those pieces into published work. if you can find someone else whose pieces are complementary to your pieces, then you both benefit tremendously from collaboration, because academia (justly) rates complete solutions more than twice as highly as half solutions. this is, to me, the best argument for collaboration, and it already shows that "no collaboration" is not going to be your optimal choice. another argument for collaboration, not nearly as good, is that it allows you to increase your multiplicity: in a given year, maybe you can write one paper all by yourself, or maybe you can write one fourth of four papers and put your name on all of them. in some academic cultures, depending upon how you play it, you will get more credit with the second option. however, there is no inherent advantage to this -- in other words, there is no added value to those outside your circle of collaborators -- so this is really rather specious. (but it works, to a certain extent...unfortunately.) another legitimate benefit of collaboration is that your collaborators get to know you and know your skills. i have several collaborators that don't write as many papers as i do and are perhaps not as high-profile in the community as i am. i wouldn't have thought they were anything special if i hadn't worked with them -- worked with them because they brought to the table key pieces that i could use to advance my work. whenever anyone asks me about these people, i say how great they are. if you always collaborate, then people begin to wonder whether you can in fact write a paper / complete an experiment / do one unit of substantial academic work by yourself. if you always collaborate with the same people, and especially if they are more senior than you and/or have other papers without you, then a lot of hard-nosed academics [including me] are going to suspect that you are not the brains of the operation and eventually wonder whether you may not have been gifted coauthorship.
the details of this must be entirely field dependent, but i am in a field in which senior people usually don't get added as coauthors unless their intellectual contribution was decisive [in many cases, this means most decisive], so if i see someone with a sequence of strong papers all of which are joint with their eminent thesis advisor and no others, then i really need to hear their thesis advisor describe specifically and cogently the value added by their student. (in some fields collaboration is not an option, it's a reality. but this seems to nullify the question: if a = b, you can maximize the function.) so it seems clear that it's an interior point maximum: it will be best for your research if you collaborate x% of the time for some 0 < x < 100. as with all interior maxima, one way to figure out x is: take a rough guess as to what you think a good value of x would be, and then explore the nearby space. definitely do at least one collaborative work and at least one solo work and then evaluate how they went. at the risk of ruining my meme, i will say though that in this case the amount of collaboration is less important -- if you make it safely between 0 and 100% -- than the type of collaboration. as above, you want to choose collaborations that qualitatively augment your work. you do not want to "trade papers" or get involved in projects just to have your name on one more paper. definitely make sure that you are bringing something to the table whenever you collaborate: you really don't want people wondering whether you've added anything of value.
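for completeness, the calculus fact alluded to above, stated precisely (standard first-semester material, nothing specific to the collaboration question):

```latex
% interior extremum theorem (Fermat's theorem on stationary points)
\text{If } f:[a,b]\to\mathbb{R} \text{ is differentiable and attains its maximum at } c\in(a,b),
\text{ then } f'(c)=0.
% the converse fails: f(x)=x^3 on [-1,1] has f'(0)=0, yet 0 is not a maximum.
```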
comparison
what are the advantages and disadvantages of doing a postdoc in singapore and uk?
to put the following into context, i teach at a uk university and have delivered courses at a singapore partner university for many years. in support of many of the above comments, you need have no concerns about the academic establishment in singapore, at any level, from primary through to university. education is top of singapore's agenda and this is apparent everywhere. teaching is in english throughout. singapore's universities are world-class. the living and working environments are excellent. cost of living (apart from housing) is lower than in the uk. the transport system is also excellent. cost of accommodation is the only negative. apartment rental costs are high - comparable to london. however, if you are appointed on expat terms the university will provide an apartment and the rent will be subsidised; all large companies that employ expat staff take account of accommodation costs. but you should make sure that you know what type of apartment is on offer and that it meets your family needs. 'landed properties' form a very small part of singapore's accommodation; the large majority are high-rise apartments. income tax is much lower than uk levels - you can check this out on the government's iras web page.
comparison
what's more important in choosing a phd program, advisor or institution?
obviously one needs a competent advisor with whom one is compatible. but assuming that both professors qualify, i think what matters most is the quality of the students who will be your peers. you need to surround yourself with students who, from day 1, expect nothing less of themselves than to produce novel scientific research of the highest caliber, present it at top meetings, publish it in top journals, and so forth. ultimately you will learn more from your peers than from your advisor. a sufficiently talented and ambitious cohort will hold the bar high for you and push you to excel, whereas a sufficiently talentless and unambitious cohort will help you make excuses for your own failures to reach your potential. in my experience, top schools with top graduate programs have the sorts of students you want to surround yourself with. second tier regional programs may, but i have yet to see it.
comparison
what is the difference between a teaching assistant and an instructor?
no, there really is no universal definition of the terms; i have been a "teaching assistant" and an "instructor" at the same school for basically the same position. the only thing that i would say is that the term "teaching assistant" tends to imply a position that does not have significant lecturing responsibilities, although the ta may be responsible for nearly everything else in the course (creating and grading homework and exam problems, interacting with students, conducting recitation sections, and so on). note that this does not mean that the ta might not carry out a "spot lecture" or two; but this is not an expectation of the position overall. in general, i would avoid this problem by doing two things: list on your cv the "job title" that the school assigned to the role you carried out, and provide a short list of the duties the position entailed. in this way, there is no ambiguity or misconception that can result, since you're providing all the information needed to understand the breadth of your teaching experience.
comparison
is it better to negotiate a faculty offer by email or phone?
negotiating the conditions of any future employment is an important conversation. most human communication is non-verbal; we communicate a lot through our tone of voice and body language. some people avoid face-to-face or telephone conversations, as they find the interaction awkward. ask yourself: if you were the search committee chair, would you want to employ someone as a lecturer who was uncomfortable in face-to-face communication? at least offer to have a telephone conversation, or skype, or a video conference with the other party, if a face-to-face meeting is impractical or inappropriate. this shows you are keen to engage in the most efficient means of human-to-human communication. then follow this up with an email clarifying the main points of your conversation.
comparison
should i take a tutor job or a research job in a company before going to grad school?
personally i would choose the company position: prior experience in the private sector (outside of teaching) is a big advantage if you decide to look for a non-academic job later. python is a popular programming tool; you might as well learn it now. data analysis is a big part of doing a physics phd. you will (probably) form better contacts - never underestimate the value of good connections. you will get plenty of opportunity to do teaching later if you wish, and i do not really see what it adds to your position now. however, i never really saw the appeal of teaching, so perhaps my view is coloured. a cv is just a way to get yourself an interview; it doesn't get you a job. definitely send your cv in.
comparison
doing ms in cs from low ranked university versus applying again next year?
please don't rely only on a ranking to figure out if a particular university is the right place for you. here is what you can look at instead:
- course offerings at the candidate university
- the schedule of classes for next fall and spring -- this is important, because some institutions list mouth-watering courses which are almost never offered in practice
- faculty profiles
- faculty publications
- opportunities for student research projects and collaboration with industry
- the institution's commitment to assisting students with their job search
- their calendar of events, to see what sort of seminar talks and cultural events take place there
- their list of student clubs, to see if you will find like-minded souls there
comparison
is it better to have just two good letters of recommendation, or two good letters and one bad letter, if three are expected?
is it better to have 2 good letters of recommendation, or 2 good and 1 bad letter? definitely, it's better to have two good letters. a bad letter is a very, very bad thing. they're not counting -- they're reading for understanding.
comparison
what is the difference between mba and masters in a business subject, and which is better for a future researcher?
an mba is a professional degree designed for students who will go to work in industry and not pursue further graduate study or research. students in mba programs typically all take the same courses in lock step, with very few electives or options. it generally isn't possible to specialize in a particular area in an mba program. in that sense, an mba program is very much like medical school or law school and totally unlike graduate study in engineering and the arts and sciences. students in mba programs usually have several years of full-time experience working in low-level corporate jobs after completing a bachelors degree. because they have had some exposure to the corporate/business world, the courses that they take draw on that experience. in comparison, the master's in management is a professionally oriented degree program aimed primarily at students who have just completed a bachelor's degree. these programs tend to be more theoretical simply because students have less practical experience to draw on. many business schools also offer professionally oriented masters degrees in more specialized technical areas (e.g. a master's degree in mathematical finance or operations research or analytics.) phd programs in business schools are typically very separate from mba programs, and the students in the phd program will typically take very few if any courses with the mba students. phd students take more theoretical course work, take advanced courses in an area of specialization (e.g. accounting, finance, marketing, etc.), take course work in research methods, and then conduct research and write a dissertation. if you're just completing a bachelor's degree and want to get a masters and go to work in industry, then a masters in management is the typical path. if you're just completing a bachelor's degree and want to become a business school professor, then a phd program would be appropriate. if you've had several years of experience in business and want to move up in management, then an mba would be the typical path for you.
comparison
what is the difference between research assistantship and teaching assistantship?
at my university, ras are paid for 20 hours of nominal work a week, usually through grant funds, to work on a research project. tas are paid for 10 hours per week per course with departmental or university funds to assist the professor of a course. their duties may include: teaching so-called recitation sections of the course where the week's lecture materials might be reviewed or homework questions discussed or clarified, holding office hours where students can come to ask questions about material they do not understand or get guidance on homework, and marking homework problem sets. taships generally do not involve doing research with a prof, though students with a taship may transition to an ra with a professor if they sufficiently impress one during their taship. the level of pay and benefits is the same (per hour) at my university. both tas and ras are generally restricted to 20 hours per week of nominal work, and health insurance benefits don't kick in until they reach that level.
comparison
is it better to repeat a caption or reference an identical caption?
it depends on several factors: can you expect readers to read this part of the caption only once and remember everything important for the rest of the reading, or do you expect that readers need to constantly consult the caption to understand your figure? do you expect figures to be read in order, or is it likely that a reader will look at a later figure in the series first? how much information does the caption contain? (this is not the same as the first point, as not all of this information needs to be important. also, this is not exactly about length; a short caption with a lot of numbers is worse than a long caption describing straightforward concepts.) do you need the space used by the repeated captions? positive answers to the respective first questions are in favour of referencing. but why reference at all, unless you need the space? because redundancy, in particular in technical aspects, is annoying and time-consuming for the reader. if a caption references another caption the reader still remembers, this is the quickest way of communicating it. otherwise the reader has to read the same (usually boring) text again. moreover, if there is an unneeded redundancy, i would expect that there is a reason for it, e.g., that you changed some details of what is described in the caption. this would usually make me frantically compare the two captions (and thus flip back and forth). if you have the space, i suggest a third way that takes the best of both worlds: "fig 7. 2×167 in 10–200 mm kcl: (a)–(c) as labeled in fig. 6, namely: (a) shows the reduced intensity for the varied sample conditions, with the inset showing a key region. (b) shows the scale applied to each data set as a function of the concentration. (c) shows the time dependence." this way, all the information is available for the reader who does not remember or has never read the earlier figure’s caption, but the reader who did knows that the rest of the caption is redundant.
comparison
why is there less funding or scholarship opportunity for business students compared to students from other backgrounds?
it may be useful to take a slightly larger perspective, rather than comparing just business vs. stem. while there is a great degree of variation around the world and from program to program, most higher-level degree funding can be clustered into three general categories: doctoral degrees in stem fields are generally funded by universities or external agencies. terminal "professional" degrees, including mba, md, jd, dds, etc. frequently expect students to be responsible for their own funding, often going deeply into debt to do so. doctoral degrees in humanities and liberal arts fields are in-between, where sometimes students are supported or partially supported and other times they are not. there's a great deal of variation, of course. for example, i believe that the education of medical doctors is supported by some nations (though i lack a reference at the moment). likewise, masters degrees are sometimes considered a "professional degree" that students are expected to fund, while other times they are lumped in with stem doctoral degrees and funded by the university or other sources. so, why should this be the case? it seems to me that a lot of this is simply independently developing markets: stem students are typically expected to contribute significantly to the investigation of science and engineering as part of their training, and since these contributions can be very valuable in those markets, it makes sense for external funding agencies to include students in their funding. professional degrees are often coupled with very high post-graduation salaries, while the training period is much more one-way, with the students spending most of their time learning rather than making novel (and readily fundable) contributions. humanities and liberal arts are also quite important societally, but they are not currently associated with markets flush with capital and an easy set of incentives to pour money into schools either during the program (as with stem) or afterward (as with professional degrees). as such, they have generally developed much more ad hoc and patchwork approaches to funding degrees.
comparison
what is the difference between a "i" or "ii" after a job title?
in a job title, "i" or "ii" usually denotes the level of experience. you will also see "assistant", "senior" and similar adjectives used. the idea is that employees can be hired at one of several levels of experience and that employees can advance through these levels as they gain experience. someone hired as an "analyst i" may be promoted to an "analyst ii" after some number of years. sometimes these kinds of job titles also have additional educational requirements (such as an advanced degree or certification) for the higher levels. typically there are different pay ranges that apply to employees as they advance through the classifications. the definitions of the different levels will vary from one employer to another, so there's really nothing more specific that we can say in answer to this question -- you'll have to check the definitions used by your employer to see what the requirements are for each level.
comparison
should i be a visiting student rather than a non-degree seeking graduate student?
enrolling as a non-degree-seeking graduate student in a u.s. mathematics graduate program is almost never a good idea, and this is in no way a mainstream option. (i'm talking about not seeking a degree anywhere. for comparison, it's not uncommon to be officially seeking a degree at university x but visiting university y while your advisor visits there.) it's easy to get the wrong impression from course catalogs, since they give short descriptions of options that may not reflect how they are used in practice. for example, i doubt the princeton math department ever admits anyone as a qualifying student; if it happens at all, it is exceptionally rare. i can imagine it might happen for a clearly brilliant student from a deprived background, but not for the vast majority of applicants. there's just too much competition for admission. if you are accepted as a non-degree-seeking graduate student: you won't be treated in any way like an ordinary graduate student. to the extent anyone in the department is aware of you, you'll be in a special category of "person who wasn't admitted to the graduate program but is paying a lot of money to take courses anyway", which is not a flattering description. in particular, you should not expect faculty to supervise your research or interact with you any more or differently than they treat the undergraduates in these classes. (it could happen, but i'd guess it probably won't.) it won't help with admission, compared with doing equally well in similar courses elsewhere. specifically, any admissions committee will have members who want to make sure this isn't used as an easier back door to admission, and they will be sure to enforce strict standards. i can graduate this year (as a junior) and become a "qualifying student", or become a "visiting student" and i can graduate a year later. are you sure you're interpreting these programs right? in this listing of the categories, visiting students are enrolled in graduate programs elsewhere, while qualifying students are non-degree-seeking students who are trying to make up for weak backgrounds in the hope of future admission. i'm not aware of any option for undergraduates to spend a year at princeton, except for some international exchange programs. however, i might well be missing some possibility.
comparison
is it more common to refer to subplots as a "frame" or a "panel"?
i can't recall having seen the word frame or panel used to refer to a subplot. more commonly: in the text, e.g., figure 1(a) shows [...]; figure 1(b), instead, shows [...]. in the caption, e.g., figure 1. comparison among blah blah: (a) function f [...]; (b) function g [...]
comparison
are masters programs generally easier to get into than phd?
not every stem master's program is going to be easier to get into than every phd program, but on the whole they are easier. universities are much more likely to take you if you're paying your own way (i.e., a masters) than if they have to fund you. in the us it's common for students who didn't do so well in undergrad or are from a lesser-known international school to pay their way through a masters first and then go on to a phd after proving their worth. that being said, do not discount how helpful research can be. a published paper or a good letter of recommendation from a known professor can go a long way toward erasing some bad grades. what counts in a phd program is your ability to do research.
comparison
should i get a bsc at a top-ranked university or a mcomp at a lower-ranked university?
if you can attend a top university, do so. having other excellent students around can provide a much better learning environment--to the point that in three years at a top place you may learn as much as in four years at a more mediocre institution. if you do well, you can always do another one or two years of graduate study after your bachelor's degree.
comparison
is it better to do undergraduate research in the summer, or during the school year?
i think the main question is a false dichotomy. most students don't have a choice between either doing research over the summer or doing it during the year. as long as it doesn't interfere with your grades or other activities, more research is always a good thing, no matter when it happens. that said, in my experience you should simply have a conversation with your professor (or direct supervisor, who will often be a grad student or post-doc) about time commitment and expectations. no one expects you to do the same amount of work in 10-15 hrs/week that you might do in 30 hrs/week, and no one expects you to work full-time while taking a full course load. as with many things in life, just make sure to communicate about expectations.
comparison
which is better, a paper with pedantic vocabulary or a paper that is easy to read?
there is a third way: be pedantic about your vocabulary in the right way to make your paper more readable. using established, clear, and consistent vocabulary and defining it when necessary is the best way to ensure that you are not misunderstood. the main advantage of “simple and easy words” is that they do not need to be explained to the reader, but this also entails that you rely on the reader interpreting these words the same way as you do, which, surprisingly often, may not be the case. defined vocabulary does not have this disadvantage. moreover, you at least need to mention the established, “pedantic” vocabulary for context and to avoid the impression of reinventing the wheel. by consistently sticking to it, you avoid switching between different terms for the same thing, which usually impedes intelligibility. however, you should also bear in mind that readers unfamiliar with this vocabulary may want to read your paper. for these readers, define the more uncommon words and cite papers explaining the basic underlying concepts. papers are not difficult to read because they contain new words, but because these words are not properly explained or because the reader does not understand the concepts represented by them. for example, if you write a paper on theoretical particle physics, readers will have to understand some elementary aspects of quantum theory to follow your thoughts. using vocabulary that can only appeal to readers without this basic understanding of quantum theory is pointless and only raises false expectations. finally, if you get to introduce new concepts, you can try to choose words for them that appeal to intuition, but this does not mean that you are relieved from the burden of explaining these words.
comparison
phd programme dilemma: doing it in a good faculty with strong supervisor or weaker supervisor but more interesting project?
based on what you've said i like option a. but of course you know more about this situation than can be conveyed in a post, and there is more information you can find out, to help make the right decision. for context, i am a math phd student just about to graduate. some points for option a: in my opinion, the most important thing in pursuing a phd is to be passionate about the work. i have known people who went about it differently, picking a topic that they thought would gain them a reputation so they could later do something interesting, but most of the time these people end up either not graduating, or taking a very very long time to graduate, or settling for a mediocre thesis just to get it over with (which makes it hard to find a job). what you are about to do is extremely challenging, and you need that personal drive to get through it. now, you can pursue something you are passionate about with both a and b. but the fact that you already know a project that gets you excited with a outweighs that the supervisor at b has a better reputation. especially since the adviser at a is young, maybe s/he just does not have a good reputation yet. someone interested in what you are interested in is valuable, and more rare than you may expect, and a collaboration can bring good things for both of you. when someone has a good reputation in academia, it usually is not a result of their skills as a mentor, or in finding jobs for their students, or in helping students find their place in the community. a professor might be good at those things, but it is not required for them to be known as a great scientist or mathematician. nonetheless a professor with such a reputation is going to attract a lot of students, often for the wrong reasons, and they must compete for his/her attention, also often for the wrong reasons. you don't want to join a fleet of satellites. schools that are concerned about their reputations are less likely to encourage exploration of unconventional topics. everything i've said though is coming from my more categorical impressions. you have the ability to get more information, and maybe what i've said does not even apply. so my main advice is to get more info until you are confident about a choice. one of the best ways to do that is to visit each school over a weekend and, in particular, buy some of the supervisors' students a few drinks.
comparison
better to give a poor/unfinished talk at a conference or cancel it?
when this happens to me, i just describe it as "work in progress" or a "research attempt" and present what i can with what i've got. what's wrong with that? that's how research works. "...hasn't panned out like i hoped it would when i accepted the talk." you can say this during your talk, and explain why it didn't pan out, and ask the audience if they can help. maybe someone in the audience (your next co-author perhaps?) knows how to proceed, and has the expertise you're lacking. if nobody can help, maybe that's an indication that there is an obstacle (which is important to know). you've said you're a phd student, without expertise in the side-topic. feel free to mention it in your talk "i'm a phd student in [some subject] at [some university] under [someone]." you said a post-doc invited you to give a talk (which i expect is not in the sense of an "invited speaker"), who presumably knows you're an early phd student, and presumably knows what early phd student talks are sometimes like. my suspicion is that this question is more about inexperience and lack of confidence than an inability to give a reasonable talk due to the work being unfinished.
comparison
is co-first author or co-corresponding author better for a postdoc's job prospects?
i would say co-first author: first authorship is more important: if a professor both led the work and made the biggest contribution to the paper, they will generally choose first authorship rather than last authorship. early in your career, making a big contribution to the research itself is likely to be most important, and first authorship shows that. the "last author" signifying the leader of the work is less widespread (one discussion here). even in fields that use this convention, it is not always used. thus, your last authorship might not mean as much. especially if someone knows you are not a professor leading your own lab, they might not give you much credit for this. "co-first authorship" is a clear idea: you are one of two people who equally made the largest contributions to the work. but "co-last authorship" is not so clear. what does it mean? especially if the other last author is the professor who runs the lab? it's not clear what this would say about your contribution. of course this is referring to fields where authorship ranking by contribution is used; this isn't universal.
comparison
what is the distinction between "graduate school" and "professional school"?
at least in my geographic region (texas, or the us southern states), a professional school implies a curriculum that targets specific skills for a particular vocation. in effect, this is akin to my awkward bastardization of several concepts: "teach a man to fish and he can become a fisherman ... help him learn how to think and he can pursue a career in whatever he wants." the former refers to a professional/vocational school. the latter refers to accredited universities with broader range of curriculum missions. granted, this reflects my biased perspective: admittedly, vocational schools have a (usually) noble and useful purpose, but proper universities can offer so much more to enrich the whole student, and potentially brighten the intellectual and career outlook for years to come.
comparison
is it better to focus on relearning mathematics fundamentals or learn material as needed to understand research papers?
why not both? if you aren't explicitly a mathematician but there is a lot of math used as a tool in your field to solve problems (this is often the case with my field), it is understood that some people are stronger in the math than others. the whole point of having a field where people of different educational backgrounds work together is that you don't have to be an expert at every sub-skill and sub-task - you can have your own unique strengths without starting over at the fundamentals for every subject in play. a strategy that i've found works very well is to read some literature to see what techniques are used by others in their related research. if a technique comes up a few times (like, say, applying dynamic time warping to analyze a time series), read into that bit to get a better high-level understanding of it - how it is used, its strengths and weaknesses, when it's not appropriate, etc. at this point you may find that the technique just isn't relevant to your research - so you can probably skip it for now and move on. sure, you don't understand it from top to bottom - but you just don't have time to learn everything in infinite detail! however, what if the technique seems really useful? well, some people don't bother to understand it at all and just blindly apply it, because, well, other researchers have, so it's probably fine. i'm not at all fond of this, and i would humbly suggest it leads to bad science, unreliable findings, and missed opportunities. so if a technique seems useful to you, learn some more about it. try to delve a little more deeply and find out just what the technique seems to actually do. what do some of the variables mean? how is the calculation performed overall - how does it behave based on some inputs compared to others, and why? again, you probably have limited time, so don't feel you have to prove everything from first principles. next, you still have some time to advance your understanding of the fundamentals. so especially as you read through the literature and useful techniques, ask what area is really hard for you. is probability stumping you, or calculus, or dynamic programming, or are the notations and implicit variable meanings alien to you? reserve a little of your time - perhaps a few hours a week at most - to strengthen yourself on the most fundamental issues. perform a trivial calculation by hand, or read a textbook explanation of the notations used. i've found the biggest pitfall is thinking you have to understand everything from square one right at the very beginning, and i have to fight the tendency to get sucked into minutiae. but then often as little as a few days pass and suddenly i realize i have a far better understanding of something than i thought i did, and it wasn't really as hard as it seemed at first. now this works for me, and for plenty of other people i know, but i cannot say it will be the best for you, in your field, with your personality and own unique traits and talents. as always ymmv - find what works best for you!
comparison
which is more important to get into top cs phd program: high gpa or industry r&d project experience?
neither. top cs phd programs are looking for strong evidence of future potential as an independent researcher. everything else is secondary. recommendation letters that speak directly to your research potential, in personal, technical, and credible detail, with direct comparisons with other students they have mentored who have gone on to strong phd programs, will serve you far better than either a perfect gpa or industry internships or even both. strong research results are even better. a better way to ask this question is... which is more important for finding a strong, supportive mentor and a fruitful topic for undergraduate research: a high gpa or industry internships? but even this is missing the mark. it's not your gpa that you want to maximize, but rather (1) your mastery of the material, and (2) your interactions with faculty. a recommendation letter that says nothing but "my class was hard; oshan got an a." is utterly useless. (we call them dwic letters, for "did well in class".) similarly, industry internships are only useful if they provide you with opportunities for intellectual creativity, or leverage for getting into a specific research project that you care about. in particular, unless you actually care enough about image processing to want to do research on it, an internship doing image processing is just spinning your wheels. the most important advice i can give is to talk to your potential research mentors as soon as possible. ask them what they look for in potential undergraduate researchers. (they might say "a high gpa" or "industry internships", and then you have your answer.) ask them about their track record in placing students in top phd programs. ask them what they would expect of you to justify a strong recommendation letter. ask them what sort of research they do, and how to get involved. if they don't have time to work with you, ask for pointers to interesting papers to read, problems to think about, projects to work on on your own. do not wait for permission; just start knocking on doors. today.
comparison
for admission of international students, do us and canadian universities prefer toefl over ielts?
in most cases, there is no preference. the vast majority of graduate programs in the us accept both ielts and toefl. a program that does have a preference will say so in its admission instructions. for example, bu engineering prefers toefl: in addition to the required credentials for applications, international students must demonstrate an understanding of english, including the ability to read and write with ease. international students are required to submit scores from the test of english as a foreign language (toefl) that meet our minimum requirements. if the toefl is not available, we will accept scores from the international english language testing system (ielts). and mit material science and engineering only accepts ielts: if your first language is not english, you will need to submit an ielts test score. please note: this department does not accept toefl scores under any circumstances. while stanford only accepts toefl: stanford does not accept ielts scores. and through spring 2016, nyu graduate school of arts and sciences only accepted toefl: do you accept the ielts instead of the toefl? no. (as of fall 2016 they appear to have moved to the "no preference" camp.) so you can check if the specific programs you are interested in have a preference.
comparison
when would polling for events be better than using observer pattern?
imagine you want to get notified about every engine cycle, e.g. to display an rpm measurement to the driver. observer pattern: the engine publishes an "engine cycle" event to all observers for each cycle; you create a listener that counts events and updates the rpm display. polling: the rpm display asks the engine at regular intervals for an engine cycle counter, and updates itself accordingly. in this case, the observer pattern would probably lose: the engine cycle is a high-frequency, high-priority process, and you don't want to delay or stall that process just to update a display. you also don't want to thrash the thread pool with engine cycle events. ps: i also use the polling pattern frequently in distributed programming. observer pattern: process a sends a message to process b that says "each time an event e occurs, send a message to process a". polling pattern: process a regularly sends a message to process b that says "if event e has occurred since the last time i polled, send me a message now". the polling pattern produces a bit more network load, but the observer pattern has downsides, too: if process a crashes, it will never unsubscribe, and process b will try to send notifications to it for all eternity, unless it can reliably detect remote process failures (not an easy thing to do). if event e is very frequent and/or the notifications carry a lot of data, then process a might get more event notifications than it can handle; with the polling pattern, it can just throttle the polling. in the observer pattern, high load can cause "ripples" through the whole system, and if you use blocking sockets, these ripples can go both ways.
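to make the contrast concrete, here is a minimal java sketch of the rpm example; every name in it (observedengine, polledengine, rpmdisplay, and so on) is hypothetical and chosen only for illustration:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicLong;

// observer variant: the engine pushes an event to every listener on every cycle
interface CycleListener {
    void onCycle();                              // runs inside the engine's hot path
}

class ObservedEngine {
    private final List<CycleListener> listeners = new CopyOnWriteArrayList<>();
    void addListener(CycleListener l) { listeners.add(l); }
    void cycle() {                               // called at engine frequency
        for (CycleListener l : listeners) l.onCycle();
    }
}

// polling variant: the engine only maintains a counter; nobody else's code runs here
class PolledEngine {
    private final AtomicLong cycles = new AtomicLong();
    void cycle() { cycles.incrementAndGet(); }
    long cycleCount() { return cycles.get(); }
}

class RpmDisplay {
    private long lastCount;
    // called by a ui timer at whatever rate the display chooses, e.g. twice a second
    void refresh(PolledEngine engine, double secondsSinceLastRefresh) {
        long now = engine.cycleCount();
        double rpm = (now - lastCount) / secondsSinceLastRefresh * 60.0;
        lastCount = now;
        System.out.printf("%.0f rpm%n", rpm);
    }
}
```

the point of the sketch is that the polling variant keeps the hot path (cycle()) trivial and lets the display pick its own refresh rate, while the observer variant runs arbitrary listener code inside every cycle.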
comparison
is razor or xslt better for my project?
i have successfully used xslt as a web presentation tier... in 1999. in the last 12 years, much better options have come along. do yourself a big favor, and use razor. it's a pleasure.
comparison
is it better to develop apps using a platform specific technology or web technologies?
the big shift toward html5/js came recently with the introduction of hardware acceleration in ie9 and now other browsers. before, you just couldn't get the performance out of html that you could with silverlight and flash: they had access to hardware and html didn't. the vast majority of mobile browsers support html5, and so do their webviews in apps, which is what things like phonegap take advantage of. so in the mobile space, depending upon exactly how much hardware you want to use, html5/css3/js is a perfectly good option. with windows 8, there are going to be html5 apps in the new tablet view, but native apps are still very much the core part of the experience (visual studio won't be html5, for example... i don't think). however, since the markup/js is hardware accelerated, it can now be every bit as fast as a native application. to your point about whether flash and silverlight have a place - windows phone 7's ui is based on silverlight, so i'm sure it'll be around for a while. also, these technologies are still the only way to get a rich web media experience on older browsers like ie7, which are still very much alive in the xp world. until every user you want to reach is off of those, you'll need a fallback method. flash is still great for that with video and graphics.
comparison
slf4j vs log4j which one to prefer?
slf4j is basically an abstraction layer. it is not a logging implementation. it means that if you're writing a library and you use slf4j, you can give that library to someone else to use and they can choose which logging implementation to use with slf4j e.g. log4j or the java logging api. it helps prevent projects from being dependent on lots of logging apis just because they use libraries that are dependent on them. so, to summarise: slf4j does not replace log4j, they work together. it removes the dependency on log4j from your library/app. take a look here: [ref] or even here: [ref] (apache commons logging is similar to slf4j) for more info on this concept.
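a small illustration of the "abstraction layer" point: library code written against the slf4j api only (the invoiceservice class is made up; logger and loggerfactory are the actual slf4j entry points):

```java
// the library depends only on the slf4j api (org.slf4j:slf4j-api), not on any implementation
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class InvoiceService {                        // hypothetical library class
    private static final Logger log = LoggerFactory.getLogger(InvoiceService.class);

    public void send(String invoiceId) {
        log.info("sending invoice {}", invoiceId);   // parameterized message, no string concatenation
        try {
            // ... actual work ...
        } catch (RuntimeException e) {
            // a throwable passed as the last argument gets its stack trace logged
            log.error("failed to send invoice {}", invoiceId, e);
            throw e;
        }
    }
}
```

the application that consumes this library then decides which implementation actually writes the logs by putting exactly one binding on its classpath (for example slf4j-simple, logback-classic, or the log4j binding); the library code above never changes.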
comparison
for user defined data, is it better to use a table of tables or dynamically created tables?
a generic database design limits the proliferation of new tables as the number of forms and data elements grows. on the other hand, even the simplest query involved 3 tables in our case, and queries indeed got very mind-boggling. after 3 years of this i came to the opinion that explicit tables were better. i'd rather deal with more tables that better reflect the business form/model - tables i can understand just by looking at them - than half the tables but with multitudes of meaningless join tables requiring astonishingly long, nested queries to make even the most rudimentary sense out of the data. you actually start memorizing the meaning/context of individual identity-column primary keys! finally, we had evolved to where we had that one table to rule them all, so as to get away from some of the inherent complexities (and bad coding and design decisions of the past) in the existing design and evolving user requirements. nonetheless, we were certain we'd have problems as the table size went into the million+ row range.
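to make the "even the simplest query involved 3 tables" point concrete, here is a sketch in java of the two query shapes; all table and column names below are invented for illustration and are not taken from the answer:

```java
// hypothetical schemas, only to show the shape of the queries described above
final class FormQueries {

    // generic "table of tables" design: form definitions, field definitions and values
    // live in three shared tables, so even fetching one record is a three-way join
    static final String GENERIC_DESIGN =
        "SELECT f.form_name, fd.field_name, v.field_value "
      + "FROM form f "
      + "JOIN form_field fd ON fd.form_id = f.form_id "
      + "JOIN form_value v ON v.field_id = fd.field_id "
      + "WHERE f.form_name = 'purchase_order' AND v.record_id = ?";

    // explicit design: one table per business form, columns mirror the form itself
    static final String EXPLICIT_DESIGN =
        "SELECT order_no, supplier, total_amount "
      + "FROM purchase_order WHERE order_id = ?";
}
```

the trade-off the answer describes is visible even at this size: the generic version adds no tables when a new form appears, but every question you ask of the data pays the join-and-decode tax.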
comparison
what does svn do better than git?
subversion is a central repository: while many people will want to have distributed repositories for the obvious benefits of speed and multiple copies, there are situations where a central repository is more desirable. for example, if you've got some critical piece of code that you don't want anyone to access, you'd probably not want to put it under git. many corporations want to keep their code centralized, and (i guess) all (serious) government projects are under central repositories.

subversion is conventional wisdom: that is to say, many people (especially managers and bosses) have the usual way of numbering versions and of seeing development as a "single line" along time hardcoded into their brains. no offense, but git's liberality is not easy to swallow. the first chapter of any git book tells you to blank out all the conventional ideas from your mind and start anew.

subversion does it one way, and nothing else: svn is a version control system. it has one way to do its job and everybody does it the same way. period. this makes it easy to transition to/from svn from/to other centralized vcs. git is not even a pure vcs -- it's a file system, has many topologies for how to set up repositories in different situations -- and there isn't any standard. that makes it harder to choose one.

other advantages are:
- svn supports empty directories
- svn has better windows support
- svn can check out/clone a sub-tree
- svn supports exclusive access control (svn lock), which is useful for hard-to-merge files
- svn supports binary files and large files more easily (and doesn't require copying old versions everywhere)
- adding a commit involves considerably fewer steps, since there isn't any pull/push and your local changes are always implicitly rebased on svn update
comparison
is it better to use pre-existing bad practices, or good practices that don't fit well with old code?
you should choose the better design if:
- you are going to be taking over a large part of future coding
- better design isn't more expensive to the client in the long run. for instance, i have witnessed multi-month "refactorings" for projects that were discontinued by the end of the year.

you should choose the "same bad style" if:
- you're just helping out. it's unrealistic to think you can take an existing group and will them to higher design standards if you're just a part-time fill-in on the project.
- better design is subjective and almost always has a learning curve. if that new design isn't learned by the rest of the team, then the project will end up with a mish-mash of styles, and your better-designed code may end up in the pile of stuff that nobody changes because they can't understand it. in your example above, what happens if there are updates to the third-party software?
- sacrificing "better" design will get you a tangible business advantage, like adding feature x badly to win a large contract, where missing the deadline would leave you with nothing.

this is where the historic conflict between management and the technical team comes in. like renovating a house, someone has to decide when to pay. you can pay down the debt initially by living with no electricity or plumbing for a year, or you can pay 3x as much with the benefit of having those utilities.
comparison
which is a better practice - helper methods as instance or static?
this really depends. if the values your helpers operate on are primitives, then static methods are a good choice, as péter pointed out. if they are complex, then solid applies, more specifically the s, the i and the d. example:

    class cookiejar {
        function takecookies(count:int):array<cookie> { ... }
        function countcookies():int { ... }
        function resupplycookies(cookies:array<cookie>):void { ... }
        // lots of stuff we don't care about now
    }

    class cookiefan {
        function gethunger():float;
        function eatcookies(cookies:array<cookie>):smile { ... }
    }

    class ourhouse {
        var jake:cookiefan;
        var jane:cookiefan;
        var cookies:cookiejar;

        function makeeverybodyashappyaspossible():void {
            // perform a lot of operations on jake, jane and the cookies
        }

        public function cookietime():void {
            makeeverybodyashappyaspossible();
        }
    }

this would be about your problem. you can make makeeverybodyashappyaspossible a static method that takes in the necessary parameters. another option is:

    interface cookiedistributor {
        function distributecookies(to:array<cookiefan>):array<smile>;
    }

    class happynessmaximizingdistributor implements cookiedistributor {
        var jar:cookiejar;
        function distributecookies(to:array<cookiefan>):array<smile> {
            // put the logic of makeeverybodyashappyaspossible here
        }
    }

    // and make a change here
    class ourhouse {
        var jake:cookiefan;
        var jane:cookiefan;
        var cookies:cookiedistributor;

        public function cookietime():void {
            cookies.distributecookies([jake, jane]);
        }
    }

now ourhouse need not know about the intricacies of cookie distribution rules. it only needs to know an object which implements a rule. the implementation is abstracted away into an object whose sole responsibility is to apply the rule. this object can be tested in isolation. ourhouse can be tested using a mere mock of the cookiedistributor. and you can easily decide to change cookie distribution rules. however, take care that you don't overdo it. for example, having a complex system of 30 classes act as the implementation of cookiedistributor, where each class merely fulfills a tiny task, doesn't really make sense. my interpretation of the srp is that it doesn't only dictate that each class may only have one responsibility, but also that a single responsibility should be carried out by a single class. in the case of primitives, or objects you use like primitives (for example objects representing points in space, matrices or something), static helper classes make a lot of sense. if you have the choice, and it really makes sense, then you might actually consider adding a method to the class representing the data, e.g. it's sensible for a point to have an add method. again, don't overdo it. so depending on your problem, there are different ways to go about it.
comparison
is it better to return null or empty values from functions/methods where the return value is not present?
stackoverflow has a good discussion about this exact topic in this q&a. in the top-rated answer, kronoz notes: returning null is usually the best idea if you intend to indicate that no data is available. an empty object implies data has been returned, whereas returning null clearly indicates that nothing has been returned. additionally, returning a null will result in a null exception if you attempt to access members in the object, which can be useful for highlighting buggy code - attempting to access a member of nothing makes no sense. accessing members of an empty object will not fail, meaning bugs can go undiscovered. personally, i like to return empty strings for functions that return strings, to minimize the amount of error handling that needs to be put in place. however, you'll need to make sure that the group that you're working with follows the same convention - otherwise the benefits of this decision won't be achieved. however, as the poster in the so answer noted, nulls should probably be returned if an object is expected, so that there is no doubt about whether data is being returned. in the end, there's no single best way of doing things. building a team consensus will ultimately drive your team's best practices.
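to make the trade-off concrete, here is a minimal java sketch (the repository and method names are invented for illustration, not taken from the discussion above):

    import java.util.Collections;
    import java.util.List;

    class CustomerRepository {
        private boolean found = false; // stand-in for a real lookup

        // option 1: null clearly signals "nothing was returned"; a forgotten
        // check fails fast with a NullPointerException at the call site
        List<String> findNamesOrNull(String city) {
            return found ? List.of("alice") : null;
        }

        // option 2: an empty list lets callers iterate without special cases,
        // but a "no data" result looks just like real data
        List<String> findNamesOrEmpty(String city) {
            return found ? List.of("alice") : Collections.emptyList();
        }
    }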
comparison
trial/free & full version vs. free app + in-app billing?
it's one codebase in either case; you are just using either compile-time or run-time conditionals in it. unfortunately java lacks compile-time conditionals, so you have to be creative there (java can optimize away if(static_final_variable), so you will just have two versions of one source file defining that variable). the chance that someone will crack your application is the same in either option and depends on the strength of the check that the user has a valid license. google provides an interface to check the license on the market through the market client, but it also depends on how tightly you integrate the checks in the application and therefore how hard you make it to disable them via disassembling. the license can be obtained in two ways, either by purchasing the app on the market (separate apk) or by in-app billing (unlocking features). the latter obviously involves some extra work, so that slightly favors the separate applications. on the other hand, you could do the billing in a single package yourself and avoid giving 30% to google, but that's a lot of work. one package with unlocking features will be slightly more convenient for the user, because settings and content will be carried over. how much this favors a single package depends on how much such content you expect the average user to have.
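here is a minimal java sketch of the static-final-variable trick mentioned above; BuildConfig is just an illustrative name for a class you would generate or swap per build, not a reference to any particular tool:

    // generated or swapped per build: one build sets this to true, the other to false
    final class BuildConfig {
        static final boolean FULL_VERSION = false;
        private BuildConfig() {}
    }

    class ExportFeature {
        void exportToPdf() {
            // because FULL_VERSION is a compile-time constant, the compiler can
            // drop the dead branch entirely in the free build
            if (BuildConfig.FULL_VERSION) {
                // ... real export code ...
            } else {
                // ... show an "upgrade to unlock" prompt ...
            }
        }
    }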
comparison
is it better to use preprocessor directive or if(constant) statement?
i think there is one advantage of using a #define that you didn't mention, and that is the fact that you can set the value on the command line (thus, set it from your one-step build script). other than that, it is generally better to avoid macros. they don't respect any scoping and that can cause problems. only very dumb compilers can't optimize a condition based on a compile-time constant. i don't know if that is a concern for your product (for example it might be important to keep the code small on embedded platforms).
comparison
java web frameworks (jsf vs wicket) third-party component suites in wicket?
most of the jsf hate blogs date from 2009 and before; jsf 2.0 fixed many of the issues. i don't think it is popular just because it's the standard - remember that ejb 2.x was the standard, and then spring came along. there are many web frameworks, but jsf is still very popular. also, recent jsf specs, starting with 2.0, are designed according to user feedback acquired since jsf 1.x.
comparison
in what types of programming environments is reactive management better than proactive management?
"logic would suggest that there is some benefit to reactive management" - there is: it's cheaper. consider the following: you find corrupted data in the system. how do you find the edge case that is causing the error? you need to first think of the edge case and/or go through all the code looking for possible edge cases that you may have missed. imagine the man-hours needed to do this with a massive code base. often that is what it takes to be proactive. what does it take to be reactive? put in a logging mechanism, and wait for it to happen again. many argue that dealing with bugs proactively is cheaper in the long run, but in many cases that's simply not the case. some bugs are just too difficult and time-consuming to pin down and kill if you don't "catch them in the act". so the decision is made to wait and see...
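as a rough illustration of the "put in a logging mechanism and wait" approach, here is a minimal java sketch (the class and method names are made up):

    import java.util.logging.Logger;

    class OrderImporter {
        private static final Logger LOG = Logger.getLogger(OrderImporter.class.getName());

        void importOrder(String rawRecord) {
            Order order = parse(rawRecord);
            if (!order.isValid()) {
                // reactive approach: capture enough context to reproduce the
                // edge case the next time it happens, instead of hunting for it up front
                LOG.warning("corrupted order detected, raw record: " + rawRecord);
                return;
            }
            // ... continue processing ...
        }

        private Order parse(String rawRecord) { return new Order(); }

        static class Order {
            boolean isValid() { return true; }
        }
    }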
comparison
is it better to target ios 5 and arc or an earlier version with mrc?
worrying about memory management: arc is a godsend. it doesn't solve every problem, but it's much better than having to do it all yourself or the short detour into garbage collection with mac os x. there are two things to keep in mind with it: it's a compiler feature: xcode provides arc support for building to ios 4 targets1. it's optional, even targeting ios 5. if you want to learn manual reference counting (mrc) just to make extra sure you know it, you can do so even targeting ios 5. but arc doesn't take away the ability to understand how memory management works, it just removes the tedium of having to declare release and retain everywhere. justin on stack overflow gave a good summary of the difference between arc and manual reference counting (mrc): if you don't want to learn mrc, then you may want to first try arc. a lot of people struggle with, or try to ignore, common practices of mrc (example: i've introduced a number of objc devs to the static analyzer). if you want to avoid those issues, arc will allow you to postpone your understanding; you cannot write nontrivial objc programs without understanding reference counting and object lifetimes and relationships, whether mrc, arc, or gc. arc and gc simply remove the implementation from your sources and do the right thing in most cases. with arc and gc, you will still need to give some guidance.

beyond whether or not you should use arc, you ought to consider support for the os version: does it really make sense to focus on version-specific features (like zeroing weak references) when there aren't a whole lot of people using that version? or worse yet, if everyone's using ios 3, how long do you have to wait until you can even start to use arc? this comes down to two things: device support and market share.

device support: thankfully, one of the benefits to developers with respect to ios development is that the latest version of the software runs on older devices, generally going back at least 2 years. so if you want to target ios 5, you'll be able to target the following devices: iphone 4s (released october 2011), ipad 2 (released march 2011), ipod touch (4th generation, released september 2010), iphone 4 (released june 2010), ipad (released april 2010), ipod touch (3rd generation, released september 2009), iphone 3gs (released june 2009) - which is a large set of options. if you target ios 4.2, you can hit every device since the iphone 3g, released back in july 2008.

market share: this brings up the other question: should one spend time learning anything other than the ios 5 sdk? it depends on what you want to do. if you want to just focus on the latest and greatest, use all the neat features available in the latest sdk, and damn market share (for now): by all means go for it. if you want to maximize market share now, i'd hold off for a few more months. marco arment, the creator of instapaper (a really popular ios app), publishes his usage stats from time to time and just released the latest report a few days ago. in it, he notes that ios 5 has a 45.1/48% ipad/iphone market share, while ios 4.2 (needed for cdma iphone 4s that haven't upgraded to ios 5 yet) has a 97/97.2% market share. generally, hitting 97% of the potential market is "close enough": i've seen it as a rule of thumb not just for ios development, but for web development as well. but one thing to consider is how long of a development cycle you're going to have.
if you're not planning on launching for a few months, ios 5 is not a bad choice, even if you're trying to hit a large portion of the potential market share. ios users tend to upgrade much more quickly than users on other platforms, for a variety of reasons, and there's no reason to believe the upgrade from ios 4.x to ios 5 will trend any differently. if you take ios 4.2's market share as a baseline, it was only released a year ago. it's not unreasonable to assume that by october of next year ios 5 will be well into the 90% range. conclusion: don't worry about memory management too much. arc is a great convenience, but it's not a huge paradigm shift from earlier versions. instead, worry about the other features and support issues. if you're launching today and need to hit the largest market share possible, target ios 4 and consider using mrc. otherwise, target ios 5 and consider using arc. 1. caveat: you lose out on some features if you need to target < ios 5, like zeroing weak references. if you want to go whole-hog into arc, you're probably better off targeting ios 5.
comparison
what are the relative advantages of dictionaries versus databases?
if you want to know whether there is a performance advantage, the best thing to do is measure it yourself. the performance depends a lot on the type of data, the language, the amount of data, etc. it's impossible to give a blanket statement as to when dictionaries are better than databases. again, it depends on the data, the language, etc. roughly speaking, dictionaries are better for simple and small datasets, and databases are good for complex and large data sets.
comparison
javascript client - which is likely to be better serverside? wcf or mvc3 controller with restful messages?
if the use case is pretty simple and you've already got to support a mvc site then i think mvc is a pretty good choice. i'd also generally agree that standard wcf is perhaps a bit too much ceremony and boilerplate and difficult to ioc in a sane and rational manner. all that said, you might want to check out the wcf web api. it is pretty simple to implement at first, plays well with your existing mvc site while also having some pretty powerful message handling facilities. your typical mvc controllers start to fall down a bit -- or at least the level of complexity increases dramatically -- when you get into stuff like etags, cache hinting and such. the web api bridges that gap nicely and it is definitely worth checking out.
comparison
what are graph datastores better at doing than other datastores and why?
highly connected graphs are not easy to model or query using relational databases. think about social graphs - bob is a friend of alice, alice is a friend of carol. how many friends of friends does bob have? modelling and querying this kind of data is what graph datastores are good at. another example: think about dr who episodes and a corpus of data about all the actors, characters and sets used in them. in a graph datastore you could query all the episodes in which an actor was on a specific set with a specific character - this is not easy to model or query in a relational database.
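to make the friend-of-friend example concrete, here is a rough java sketch of the kind of traversal a graph datastore performs naturally; in a relational database the same question means joining the friendship table to itself once per hop (all names here are illustrative only):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    class FriendGraph {
        // adjacency list: person -> direct friends
        private final Map<String, Set<String>> friends = new HashMap<>();

        void addFriendship(String a, String b) {
            friends.computeIfAbsent(a, k -> new HashSet<>()).add(b);
            friends.computeIfAbsent(b, k -> new HashSet<>()).add(a);
        }

        // friends-of-friends is just one more hop along the edges
        Set<String> friendsOfFriends(String person) {
            Set<String> result = new HashSet<>();
            for (String friend : friends.getOrDefault(person, Set.of())) {
                result.addAll(friends.getOrDefault(friend, Set.of()));
            }
            result.remove(person);                                   // not your own friend-of-friend
            result.removeAll(friends.getOrDefault(person, Set.of())); // exclude direct friends
            return result;
        }
    }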
comparison
is it better to use assert or illegalargumentexception for required method parameters?
beware! assertions are disabled at runtime unless you explicitly enable them (the -ea flag) when running your code. java assertions are not to be used in production code and should be restricted to private methods (see exception vs assertion), since private methods are expected to be known and used only by the developers. also, assert throws assertionerror, which extends error, not exception, and which normally indicates a very abnormal error (like "outofmemoryerror", which is hard to recover from, isn't it?) that you are not expected to be able to handle. run without the "enable assertions" flag, check with a debugger, and you'll see that you never step on the assertion check... since that code is simply not executed (again, when "-ea" is not set). it is better to use the second construction for public/protected methods, and if you want something that is done in one line of code, there is at least one way that i know of: i personally use the spring framework's assert class, which has a few methods for checking arguments and which throws "illegalargumentexception" on failure. basically, what you do is: assert.notnull(obj, "object was null"); ... which will in fact execute exactly the same code you wrote in your second example. there are a few other useful methods such as hastext and haslength in there. i don't like writing more code than necessary, so i'm happy when i reduce the number of written lines (2 lines > 1 line) :-)
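a small java sketch contrasting the two constructions; the assert.notnull call is spring's real org.springframework.util.Assert api, while the surrounding class and method names are invented:

    import org.springframework.util.Assert;

    class AccountService {
        // construction 1: stripped out unless the jvm runs with -ea,
        // so it must never guard a public api in production
        void creditWithAssert(Object account) {
            assert account != null : "account was null";
            // ...
        }

        // construction 2: always executed, throws IllegalArgumentException
        void creditWithCheck(Object account) {
            if (account == null) {
                throw new IllegalArgumentException("account was null");
            }
            // ...
        }

        // same behaviour as construction 2, but in one line via spring's helper
        void creditWithSpring(Object account) {
            Assert.notNull(account, "account was null");
            // ...
        }
    }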
comparison
which one is a better practice a static factory method of the same class or a separate factory class with methods?
the second one; the separate class. that separate class should also have an interface. the reason is that this is the option that allows swapping the factory most easily, which is good for lowering coupling and for testability.
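a minimal java sketch of that recommendation (interface plus separate factory class); all names here are invented for illustration:

    interface Connection {
        void send(String message);
    }

    // the factory gets its own interface so callers depend on an abstraction...
    interface ConnectionFactory {
        Connection create(String host);
    }

    // ...and the concrete factory can be swapped for a stub in tests, or for a
    // different implementation later, without touching the callers
    class TcpConnectionFactory implements ConnectionFactory {
        @Override
        public Connection create(String host) {
            return message -> System.out.println("sending to " + host + ": " + message);
        }
    }

    class MessageSender {
        private final ConnectionFactory factory; // injected, not hard-wired

        MessageSender(ConnectionFactory factory) {
            this.factory = factory;
        }

        void notifyHost(String host, String text) {
            factory.create(host).send(text);
        }
    }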
comparison
is it better to build html code string on the server or on the client side?
server-side pros: more controllable, easier to debug, less dependent on the client's browser. cons: more server load, higher network traffic and latency. client-side cons: depends on a decent js/dom implementation in the browser. pros: performance, performance, performance - less server load (thus faster server response), much less network traffic, and thanks to the previous two, much less latency. for example, linkedin's engineering team article "blazing fast node.js: 10 performance tips from linkedin mobile" discusses that issue as one of its points.
comparison
is it better to learn the dom or jquery first?
learn the dom. by doing that, you will have a better understanding and appreciation for what libraries like jquery do for you. this also means you'll be better suited in the long run if you must change tools. the dom will always be there, and is common ground for understanding the fundamental design choices for any given library that interacts with the dom. with all that given, i don't think there is anything wrong with learning the dom alongside jquery. especially if you have a project that requires jquery and needs to get done relatively soon. i think the important thing is that you learn how the dom works in order to keep yourself decoupled from any particular library.
comparison
is it better to use a database or a data structure for network stack?
if you can fit everything into available memory why not use one of the many hash map libraries/templates available for c/c++. you will get a massive performance boost (no disks, no io, no parsing no .....) and most of the apis available are pretty simple to use. have a look at this comparison to see which one would best suit your needs.
comparison
is it better to spend resources on a skilled team or good process practice?
if your team isn't skilled (as in your definition), you will get nothing done at all. in this case the process doesn't matter. "not skilled" in the sense of "not much experience" would be another matter. if your people are talented but don't have much experience with projects, then a good process may help avoid problems. testing and early feedback from customers, as agile provides, would most likely avoid many problems in such a situation.
comparison
which is a better design pattern for a database wrapper: save as you go or save when you're done?
if you need transactions in your application, you will need to offer some kind of explicit "save" option. if the applications don't need transactions, using a database (as opposed to, say, a nosql technology) may be redundant. this also means that the "save" operation should not be a method called on the mutated object. it should be a method on the transaction object, or the session.
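a minimal java sketch of the "save belongs on the session/transaction, not on the mutated object" idea; UnitOfWork is an invented name here, not a specific library api:

    import java.util.ArrayList;
    import java.util.List;

    class Customer {
        String email;
        // note: no save() method here - mutating the object does not touch the database
    }

    class UnitOfWork {
        private final List<Object> dirty = new ArrayList<>();

        void register(Object entity) {
            dirty.add(entity);
        }

        // the explicit "save when you're done" step: everything registered in this
        // unit of work is written out in a single transaction, or not at all
        void commit() {
            // begin transaction, flush all entries in 'dirty', then commit or roll back
            dirty.clear();
        }
    }

    class Example {
        void updateEmail(UnitOfWork uow, Customer c) {
            c.email = "new@example.com";
            uow.register(c);
            // nothing is persisted until the caller invokes uow.commit()
        }
    }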
comparison
is onerror handler better than exceptions?
i agree with paul equis's answer that exceptions should be preferred, but with a caveat. one major feature of exceptions is that they break control flow. this is usually desirable, but if it isn't then some other pattern might be useful for augmenting the exception system. for example, suppose you're writing a compiler. exceptions might not be the best choice here, because throwing an exception stops the compile process. this means that only that first error would be reported. if you want to keep reading the source code in order to try and find more errors (as the c# and vb compilers do), then some other system is needed for reporting errors to the outside world. the easiest way to take care of that would be saving the exceptions to a collection and then returning it. however, using an onerror delegate might be worthwhile if you want to give the caller an opportunity to give advice on how to proceed after each error. that sounds like an uncommon scenario to me, though. if you're not asking the caller to really micro-manage error handling, then using some flags to specify error-handling behavior would be less fiddly to work with.
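as a rough java sketch of the "collect errors and keep going" alternative described above (the compiler-ish names are made up):

    import java.util.ArrayList;
    import java.util.List;

    class ParseError {
        final int line;
        final String message;
        ParseError(int line, String message) { this.line = line; this.message = message; }
    }

    class SourceParser {
        // instead of throwing on the first problem (which would abort the whole
        // compile), errors are recorded and parsing continues with the next line
        List<ParseError> parse(List<String> lines) {
            List<ParseError> errors = new ArrayList<>();
            for (int i = 0; i < lines.size(); i++) {
                if (!looksValid(lines.get(i))) {
                    errors.add(new ParseError(i + 1, "syntax error"));
                    continue; // keep reading to find more errors
                }
                // ... build the ast for this line ...
            }
            return errors; // the caller decides how to report all of them
        }

        private boolean looksValid(String line) { return !line.trim().isEmpty(); }
    }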
comparison
what does c++ do better than d?
most of the things c++ "does" better than d are meta things: c++ has better compilers, better tools, more mature libraries, more bindings, more experts, more tutorials, etc. basically it has more and better of all the external things that you would expect from a more mature language. this is inarguable. as for the language itself, there are a few things that c++ does better than d in my opinion. there are probably more, but here are a few that i can list off the top of my head:

c++ has a better thought out type system. there are quite a few problems with the type system in d at the moment, which appear to be oversights in the design. for example, it is currently impossible to copy a const struct to a non-const struct if the struct contains class object references or pointers, due to the transitivity of const and the way postblit constructors work on value types. andrei says he knows how to solve this, but didn't give any details. the problem is certainly fixable (introducing c++-style copy constructors would be one fix), but it is a major problem in the language at present. another problem that has bugged me is the lack of logical const (i.e. no mutable like in c++). this is great for writing thread-safe code, but makes it difficult (impossible?) to do lazy initialisation within const objects (think of a const 'get' function which constructs and caches the returned value on first call). finally, given these existing problems, i'm worried about how the rest of the type system (pure, shared, etc.) will interact with everything else in the language once they are put to use. the standard library (phobos) currently makes very little use of d's advanced type system, so i think it is reasonable to question whether it will hold up under stress. i am skeptical, but optimistic. note that c++ has some type system warts (e.g. non-transitive const, requiring iterator as well as const_iterator) that make it quite ugly, but while c++'s type system is a little wrong in parts, it doesn't stop you from getting work done like d's sometimes does. edit: to clarify, i believe that c++ has a better thought out type system -- not necessarily a better one -- if that makes sense. essentially, in d i feel that there is a risk involved in using all aspects of its type system that isn't present in c++.

d is sometimes a little too convenient. one criticism that you often hear of c++ is that it hides some low-level issues from you, e.g. simple assignments like a = b; could be doing many things like calling conversion operators, calling overloaded assignment operators, etc., which can be difficult to see from the code. some people like this, some people don't. either way, in d it is worse (better?) due to things like opdispatch, @property, opapply and lazy, which have the potential to change innocent-looking code into things that you don't expect. i don't think this is a big issue personally, but some might find this off-putting.

d requires garbage collection. this could be seen as controversial because it is possible to run d without the gc. however, just because it is possible doesn't mean it is practical. without a gc, you lose a lot of d's features, and using the standard library would be like walking in a minefield (who knows which functions allocate memory?). personally, i think it is totally impractical to use d without a gc, and if you aren't a fan of gcs (like i am) then this can be quite off-putting.
naive array definitions in d allocate memory. this is a pet peeve of mine:

    int[3] a = [1, 2, 3];    // in d, this allocates then copies
    int a[3] = {1, 2, 3};    // in c++, this doesn't allocate

apparently, to avoid the allocation in d, you must do:

    static const int[3] statica = [1, 2, 3];  // in data segment
    int[3] a = statica;                       // non-allocating copy

these little 'behind your back' allocations are good examples of my previous two points. edit: note that this is a known issue that is being worked on. edit: this is now fixed; no allocation takes place. conclusion: i've focussed on the negatives of d vs c++ because that's what the question asked, but please don't see this post as a statement that c++ is better than d. i could easily make a larger post of places where d is better than c++. it's up to you to make the decision of which one to use.
comparison
how to better start learning programming - with imperative or declarative languages?
i used to teach prolog as a first language to freshmen (followed by scheme and java in the next semesters). it was great for some of them, and apparently completely impenetrable for many others. logic programming may be very close to mathematical logic, but many people are not, in fact, very good at formal logic - it's very different from the kind of reasoning (largely informal or even unconscious) that people actually use day-to-day. on the other hand, i have also taught java first, with much the same ratio of success! these days i believe that it depends way too much on the individual learner to make a blanket suggestion. i still think that starting with near-machine-level languages (to understand what is actually happening in the computer) and then progressing towards more abstract languages was the right way for me personally, but experience has convinced me that this is not the kind of recommendation that can be generalized.
comparison