Columns: id (string, 5–27 chars), question (string, 19–69.9k chars), title (string, 1–150 chars), tags (string, 1–118 chars), accepted_answer (string, 4–29.9k chars)
_webapps.106185
For instance, I'd like to share my resume (in 3 formats) with a recruiter. I'd like those links to remain valid at all times, but also stay up to date if I upload a new PDF / DOCX / ODT file to Google Docs. A symlink of some sort seems like the natural choice: that way I could point the link at the new file without having to send the recruiter an email saying "hey, here's the newest version." Does such a thing exist in Google Docs?
Is there anything like a symlink in Google Docs?
google apps;google documents
null
_softwareengineering.191913
I was recently talking with a recruiter who wants to put me at a company for a position of Developer in Test. He essentially made it sound like a position where you get to fiddle with new programming techniques and test bugs and improvements in software, but where you don't need to worry about standard deadlines. You get to be very creative in your work. But that description was still kinda vague to me. I have been a Web Developer for a number of years now, mostly working in PHP. So I wanted to know if others in the community know more about what these positions typically entail. I know that this might not be a subject appropriate for this forum, but it was the best fit I could find among Stack Exchange and I would really appreciate it if this wasn't closed, since there is really nowhere else here to ask about it. I have tried Googling it, but there isn't a lot of information out there. So what exactly is a Developer in Test?
What is a Developer in Test?
web development;php;testing;career development;engineering
I am a Software Development Engineer in Test, and have been at 2 separate companies. Currently I work for Microsoft. Broadly speaking, Bryan Oakley is correct: you write software that tests software. Beyond that, it depends on your level of experience, the scope of your responsibilities, and the type of software that the employer would be producing. An SDET position can include writing anything from the basics of feature-level verification tests, to writing and maintaining test infrastructure to run those tests. It's also not uncommon to have SDETs that specialize in focused testing for certain types of requirements (testing security, performance/scale, usability, etc. are examples that immediately spring to mind). The description that you received from the recruiter sounds like a poor selling technique. You're not fiddling; you have n days to get automated test coverage over x features deployed in y different supported environments in z languages. Oh, btw: those tests have to run fast enough for the devs to have a quick dev/test cycle because... No standard deadlines? You're in charge of the quality of the product and the release date was set by marketing 6 months ago. The dev team is 6 weeks late delivering a stable build to your test team, and the company isn't pushing that release date (again). Is the product or service stable enough to release to a couple million (billion?) people, on the same day? ...and if (when) customers call in with problems... Why (the hell) didn't you catch it first? I hope that gives you a bit of an example of what being an SDET is like.
_webapps.100569
I am using Google Maps in Chrome on Ubuntu on my HP laptop, but it's not showing place names. I don't know how the names disappeared. Please help me get them back.
Google maps not showing any names
google maps
null
_codereview.126468
I'm updating a stats bar on the page after a user action. The code works, but as you can see, it's very messy. Is there a shorter way of writing this function? Seems like a complete mess. I thought about using $.each a couple times to cycle through, but since I need to compare values from the first two array keys in the multi-dimensional array, it makes things much more complicated. Here's an example of the post object data: {team_avg: {overdue: 1, in_review: 2, in_progress: 1, assigned: 1}, user: {assigned: 5, overdue: 3, in_review: 4, in_progress: 1}}. Here's my current (mess) for the $.post callback: $.post("{{ route('task.load') }}", { _token: $('meta[name="csrf-token"]').attr('content') }, function(data) { $task_summary = $('.taskload-stats'); // update # of assigned $assigned = $task_summary.find('.assigned'); $assigned.html(data.user.assigned); if (data.user.assigned > data.team_avg.assigned) { $assigned.append(' <i data-tooltip aria-haspopup="true" title="Above Team Avg" class="fa fa-arrow-up above-avg" aria-hidden="true"></i>'); } else if (data.user.assigned === data.team_avg.assigned) { $assigned.append(' <span data-tooltip aria-haspopup="true" title="Equals Team Average" class="neutral">--</span>'); } else { $assigned.append(' <i data-tooltip aria-haspopup="true" title="Below Team Avg" class="fa fa-arrow-down below-avg" aria-hidden="true"></i>'); } // update # of in progress $in_progress = $task_summary.find('.in-progress'); $in_progress.html(data.user.in_progress); if (data.user.in_progress > data.team_avg.in_progress) { $in_progress.append(' <i data-tooltip aria-haspopup="true" title="Above Team Avg" class="fa fa-arrow-up above-avg" aria-hidden="true"></i>'); } else if (data.user.in_progress === data.team_avg.in_progress) { $in_progress.append(' <span data-tooltip aria-haspopup="true" title="Equals Team Average" class="neutral">--</span>'); } else { $in_progress.append(' <i data-tooltip aria-haspopup="true" title="Below Team Avg" class="fa fa-arrow-down below-avg" aria-hidden="true"></i>'); } // update # of in review $in_review = $task_summary.find('.in-review'); $in_review.html(data.user.in_review); if (data.user.in_review > data.team_avg.in_review) { $in_review.append(' <i data-tooltip aria-haspopup="true" title="Above Team Avg" class="fa fa-arrow-up above-avg" aria-hidden="true"></i>'); } else if (data.user.in_review === data.team_avg.in_review) { $in_review.append(' <span data-tooltip aria-haspopup="true" title="Equals Team Average" class="neutral">--</span>'); } else { $in_review.append(' <i data-tooltip aria-haspopup="true" title="Below Team Avg" class="fa fa-arrow-down below-avg" aria-hidden="true"></i>'); } // update # of overdue $overdue = $task_summary.find('.overdue'); $overdue.html(data.user.overdue); if (data.user.overdue > data.team_avg.overdue) { $overdue.append(' <i data-tooltip aria-haspopup="true" title="Above Team Avg" class="fa fa-arrow-up below-avg" aria-hidden="true"></i>'); } else if (data.user.overdue === data.team_avg.overdue) { $overdue.append(' <span data-tooltip aria-haspopup="true" title="Equals Team Average" class="neutral">--</span>'); } else { $overdue.append(' <i data-tooltip aria-haspopup="true" title="Below Team Avg" class="fa fa-arrow-down above-avg" aria-hidden="true"></i>'); } $(document).foundation('tooltip', 'reflow');});
Updating a stats bar on a page
javascript;jquery
That code is full of security holes. Completely avoid .html; create elements programmatically, not as strings of HTML text. Use .text to set the visible text of elements securely. What happens if data.user.assigned has this value: <script src="//malicious.host.c0m/stealsession.js"></script>Visible Text — then every user that views that page will load and run the malicious script and send their session cookie to the attacker, who can then impersonate you, because they have your session secret. The user will not be notified at all; they see "Visible Text". You should not repeat yourself so much. You should create an element, and clone that off for each similar duplication of it. This way, you can just set those common attributes once and create copies of it efficiently. You have everything named very nicely; you can leverage that and make it table driven. See jsfiddle: $(function() { function createIcons(data, scope, names) { var template = $('<i/>', { 'data-tooltip': '', 'aria-haspopup': 'true', 'class': 'fa', 'aria-hidden': true }); names.forEach(function(descriptor) { var parent = scope.find(descriptor.sel), userVal = data.user[descriptor.prop], teamAvg = data.team_avg[descriptor.prop]; parent.text(userVal); makeUpDownIcon(parent, template.clone(), userVal, teamAvg); }); } function makeUpDownIcon(outputParent, element, input, compareTo) { var title, classes; if (input > compareTo) { title = 'Above Team Avg'; classes = 'fa-arrow-up above-avg'; } else if (input === compareTo) { title = 'Equals Team Average'; classes = 'neutral'; } else { title = 'Below Team Avg'; classes = 'fa-arrow-down below-avg'; } return (element.attr('title', title) .addClass(classes) .appendTo(outputParent)); } //faked $.post("{{ route('task.load') }}", { _token: $('meta[name="csrf-token"]').attr('content') }, function(data) { var data = { user: { assigned: 11, in_progress: 10, in_review: 3, overdue: 44 }, team_avg: { assigned: 6, in_progress: 24, in_review: 2, overdue: 1 } }; $task_summary = $('.taskload-stats'); createIcons(data, $task_summary, [ { sel: '.assigned', prop: 'assigned' }, { sel: '.in-progress', prop: 'in_progress' }, { sel: '.in-review', prop: 'in_review' }, { sel: '.overdue', prop: 'overdue' } ]); //$(document).foundation('tooltip', 'reflow');}); Note that I forced in fake data because I cannot do the real ajax request on jsfiddle. I had to rip out that foundation thing to make it runnable too. You should have no trouble modifying that second part to actually use the post. EDIT: If you wanted to have a more complex template, say, a span with text and icon, then you could do something a bit like this: function createIcons(data, scope, names) { var template = $('<div/>'), caption = $('<span/>', { 'data-tooltip': '', 'aria-haspopup': 'true', 'class': '', 'aria-hidden': 'true', appendTo: template }), icon = $('<i/>', { appendTo: template }); Then before each clone, you can reach into the template and do caption.text(something) and icon.attr('class', 'fa fa-something'), then clone, then the clone will already be set, and no need to go .find into it. Note that .text blows away all of the content of the node, so I restructured it so .text won't kill the icon.
_unix.165658
I am trying to debug an application running in embedded Linux. The application uses qt-mobility. On my PC it works fine, but not on my device. When I start my application, I get this message in /var/log/messages: Nov 3 11:45:41 pdm360ng daemon.info bluetoothd[1435]: Discovery session 0x1074aab0 with :1.1 activated. On my PC (running Ubuntu 14.04), /var/log/syslog says: Nov 3 11:01:15 deeclu42 bluetoothd[25182]: Discovery session 0x7f0897203160 with :1.599 activated. So it prints a version of something (1.1 for embedded, 1.599 for my PC), and that version difference may be causing the problems I am seeing. bluetoothd is the same version in both cases: bluetoothd --version reports 4.101. So, what is printing that version in the log?
What is this a version of?
bluetooth
null
_unix.162638
I know chmod 777 allows read, write, and execute for user, group, and others, but what if I just do chmod 7? Is that only rwx for the user?
What is the result of running `chmod 7` on a file?
files;chmod
null
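As a quick check: a single-digit numeric mode fills only the lowest permission slot (others), so chmod 7 gives 0007 — rwx for others and nothing for user or group, not rwx for the user. A minimal Python sketch (equivalent to running `chmod 7` on a scratch file; the temp file is only for illustration):

```python
import os
import stat
import tempfile

# Create a scratch file and apply numeric mode 7 (octal 0007),
# which is exactly what the shell command `chmod 7 file` does.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 7)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                             # 0o7: rwx for others only
assert mode == 0o007
assert mode & stat.S_IRWXU == 0              # user: no permissions
assert mode & stat.S_IRWXO == stat.S_IRWXO   # others: rwx
os.remove(path)
```

In other words, omitted leading digits are treated as zero, so `chmod 7` is the same as `chmod 0007`.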
_ai.2795
I have been looking into Viv, an artificial intelligence agent in development. Based on what I understand, this AI can generate new code and execute it based on a query from the user. What I am curious to know is how this AI is able to learn to generate code based on some query. What kind of machine learning algorithms are involved in this process? One thing I considered is breaking down a dataset of programs by step. For example, code to take the average of 5 terms: 1 - Add all 5 terms together; 2 - Divide by 5. Then I would train an algorithm to convert text to code. That is as far as I have figured out. I haven't tried anything, however, because I'm not sure where to start. Anybody have any ideas on how to implement Viv? Here is a demonstration of Viv.
AI that can generate programs
neural networks;machine learning;deep learning;ai design;nlp
null
_hardwarecs.2622
There exist several Cherry MX Switch Testers. I am looking for a Topre Switch tester.Ideally, it should contain the same key switches as the ones used in Topre Realforce. The price should be cheaper than buying the full keyboard. I do not mind about shipping constraints.So far, I have only found this Switch Tester, which contains only one Topre Novatouch switch (and 5 Cherry MX: Black, Red, Brown, Blue, and Green switches). I would like more Topre switches.
Topre Switch tester
keyboards
null
_cs.22722
Suppose that I have a set of $N$ points in $k$-dimensional space ($k>1$), such as in this question, and that I need to find all pairs with a distance smaller than a certain threshold $t$. The brute-force method would require $N(N-1)$ distance calculations, which is not acceptable. I attack the problem by first sorting the points into the cells of a grid, such as in this answer, followed by brute force within each grid cell and a number of neighbours (which is easily calculated from the cell size $w*h$ and the maximum distance $t$). My solution seems to work acceptably well for my purposes, and the results appear to be correct. However, I'm neither a computer scientist nor a mathematician, and I'm not sure what tools I could use to calculate the optimal cell size. In fact, I developed the aforementioned possibly naive algorithm because it seemed like a reasonably okay method. I guess the optimal cell size depends in some way on $N$, on $t$, on the cost of the distance function, on the implementation of the sorting into cells, on the distribution of points, and on other things. How would I make a guess at the optimal values of $w$ and $h$, with or without a priori knowledge of the approximate number of pairs I expect to find? Does the answer change if the $N$ points are divided into two sets $S_1$ and $S_2$, and each pair shall consist of one element from each set? Not necessarily Euclidean: the points may, for example, be locations on a sphere, i.e. on Earth, with latitude and longitude.
How do I choose an optimal cell size when searching for close pairs of points, and using cells to implement this?
algorithms;computational geometry;matching
There's an enormous amount of work on data structures and algorithms for this sort of problem. I suggest you start by reading the general literature on this problem.Start by reading about algorithms for nearest neighbor search, including all nearest neighbors and the fixed-radius nearest neighbors problem. Those techniques are applicable to your problem.Then, read about quadtrees, octrees, k-d trees, VP trees, and the general category of BSP trees.
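The grid approach described in the question can be sketched briefly. A minimal Python version (assuming Euclidean distance and a square cell width equal to the threshold t, so any matching pair must lie in the same cell or an adjacent one):

```python
from collections import defaultdict
from itertools import product
from math import dist  # Euclidean distance, Python 3.8+

def close_pairs(points, t):
    """Return index pairs (i, j), i < j, with distance(points[i], points[j]) < t.

    Uses a uniform grid with cell width t: bucket every point by its cell,
    then compare each point only against points in the 3^k surrounding cells.
    """
    if not points:
        return set()
    grid = defaultdict(list)
    for i, p in enumerate(points):
        cell = tuple(int(c // t) for c in p)
        grid[cell].append(i)
    k = len(points[0])
    pairs = set()
    for cell, idxs in grid.items():
        # scan this cell and its 3^k - 1 neighbours
        for offset in product((-1, 0, 1), repeat=k):
            nbr = tuple(c + o for c, o in zip(cell, offset))
            for i in idxs:
                for j in grid.get(nbr, ()):
                    if i < j and dist(points[i], points[j]) < t:
                        pairs.add((i, j))
    return pairs
```

For the two-set variant, store only $S_1$ in the grid and probe it with the points of $S_2$; for latitude/longitude, swap dist for a great-circle distance and size the cells against that metric instead.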
_webmaster.95543
I am an HTML beginner and I'm making a website using HTML Editey. I know how to do things like make bold text or underline a letter or number, but I don't know how to make a clickable link. I want to make a clickable link, or some text that, when you click on it, takes you to another website. I tried Inspect in Chrome but I got confused and couldn't figure it out.
How to make a clickable link with HTML Editey
html;links;text editor
null
_codereview.46059
I just got into object-oriented programming, and it made me think about how I can make certain code as efficient as possible. Right now I am including my header and footer like the following: PHP require_once 'core/init.php'; require_once 'includes/header.php'; //Big block of content goes here require_once 'includes/footer.php'; I wonder if this is the neatest / most efficient way of doing that?
Neatest way of including a header and footer on every page
php;object oriented
According to a PHP blog post, yes, this is exactly the way that it should be done. Even w3schools (whom you normally shouldn't trust too much, since they're not related to the real W3C at all) recommends it. I like that you are using require instead of include. I am not so sure if you really need the _once, though. I recommend you read the documentation of the include function (which also applies to require, require_once and include_once) to make sure that you really are aware of how it works. Note that if you are using any PHP scripts inside the included files, any global variables (which you should try to avoid using too many of overall) also get included into the calling script.
_webapps.10608
Possible duplicate: Get e-mail addresses from Gmail messages received. I have a Google Apps Gmail account and I would like to extract all email addresses in the From and To (and Cc, Bcc and Reply-To, if possible) headers of every message. How can I do this? I've found a site (https://gmailextract.com/) that promises to do exactly what I need, but I don't think I can trust it with my password.
Extract email addresses from Gmail
gmail;email;google apps;google apps email
I would do it the old-fashioned offline way: 1) Download all mail with Gmail Backup, or with an IMAP client if you only want headers. 2) Use a regexp tool on the downloaded files or headers to find the email addresses and save the result into a text file. 3) Optionally import the file into Contacts in Gmail.
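Step 2 can also be done with Python's standard email module instead of a hand-rolled regexp, which handles quoted display names and address lists more robustly. A sketch, assuming the mail was downloaded as individual .eml files (the function name is ours):

```python
import email
from email.utils import getaddresses

def extract_addresses(eml_path):
    """Collect every address in the From/To/Cc/Bcc/Reply-To headers of one message."""
    with open(eml_path) as f:
        msg = email.message_from_file(f)
    headers = []
    for name in ("from", "to", "cc", "bcc", "reply-to"):
        headers.extend(msg.get_all(name, []))
    # getaddresses splits "Name <addr>" lists into (display_name, addr) pairs
    return sorted({addr for _, addr in getaddresses(headers) if addr})
```

Run it over every downloaded file and merge the resulting sets; the merged list can then be written to a text file for import into Gmail Contacts (step 3).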
_cogsci.8854
I recall a study that concluded the risk of falling into poverty / homelessness increases dramatically after experiencing 5 (or 7?) traumatic life events, such as: death of a loved one, abandonment by a spouse, bankruptcy, being fired, foreclosure, etc. (a threshold effect). That's about as clear a memory as I can form, and search engines don't help. Anybody know about this study? It might have been from New Zealand.
What study examined the effect of number of traumatic life events on falling into poverty?
social psychology;reference request;sociology
This effect was identified in a report by the New Zealand Ministry of Social Development (Jensen et al., 2006). It specifically indicated that individuals experiencing eight or more life shocks (negative life events) experienced significantly more negative socioeconomic outcomes. This effect is referenced on Wikipedia's Cycle of Poverty page (http://en.wikipedia.org/wiki/Cycle_of_poverty), which links to a news story on the report (Berry, 2006) and the report itself. A summary of the report's findings was later published in an academic journal (Jensen et al., 2007). Berry, R. (2006, July 12). Life shocks tip people into hardship. The New Zealand Herald. Retrieved from: http://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=10390891 Jensen, J., Krishnan, V., Hodgson, R., Sathiyandra, S., Templeton, R., Jones, D., ... & Beynon, P. (2006). New Zealand Living Standards 2004: Ngā Āhuatanga Noho o Aotearoa. Wellington: Ministry of Social Development. Retrieved from: http://media.nzherald.co.nz/webcontent/document/pdf/living-standards-2004.pdf Jensen, J., Sathiyandra, S., & Matangi-Want, M. (2007). The 2004 New Zealand Living Standards Survey: What does it signal about the importance of multiple disadvantage? Social Policy Journal of New Zealand, 30, 110-144.
_softwareengineering.328729
Evans introduces the concept of Aggregates in Chapter 6 of his book Domain-Driven Design. He further defines rules to translate that concept into an implementation (Evans 2009, pp. 128-129): The root ENTITY can hand references to the internal ENTITIES to other objects, but those objects can use them only transiently, and they may not hold on to the reference. After elaborating on other rules he summarizes them in this paragraph: Cluster the Entities and Value Objects into Aggregates and define boundaries around each. Choose one Entity to be the root of each Aggregate, and control all access to the objects inside the boundary through the root. Allow external objects to hold references to the root only. Transient references to internal members can be passed out for use within a single operation only. Because the root controls access, it cannot be blindsided by changes to the internals. This arrangement makes it practical to enforce all invariants for objects in the Aggregate and for the Aggregate as a whole in any state change. So what exactly does transient usage mean? My colleague's understanding is that only the aggregate root exposes a public interface to clients: clients have no opportunity to call any operation on an entity other than the aggregate root. My understanding of the cited sentences is different. I understand that they do explicitly allow clients to call operations on internal entities, but only after getting them from the root. So let's have a concrete example. Say a Cart consists of many Items, and each Item has a Quantity. The model should support the use case increase the quantity of one specific Item. No invariant affecting anything outside the Item can be violated. Does a model violate the cited rules when a client can do this by calling cart.item(itemId).increaseQuantity(), or should a client only be allowed to call cart.increaseItemQuantity(itemId)? What would be the benefit of the latter?
Can clients call methods on entities other than the aggregate root?
domain driven design
null
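The two designs in the question can be contrasted in a short sketch (Python for brevity; Cart and Item are the hypothetical classes from the question, not from Evans):

```python
class Item:
    def __init__(self, item_id, quantity=0):
        self.item_id = item_id
        self.quantity = quantity

    def increase_quantity(self, by=1):
        self.quantity += by


class Cart:
    """Aggregate root: all access to Items goes through the Cart."""

    def __init__(self):
        self._items = {}

    def add_item(self, item_id):
        self._items[item_id] = Item(item_id)

    # Style A: root-mediated operation (cart.increaseItemQuantity(itemId))
    def increase_item_quantity(self, item_id, by=1):
        self._items[item_id].increase_quantity(by)

    # Style B: hand out a transient reference for use in a single operation
    # (cart.item(itemId).increaseQuantity())
    def item(self, item_id):
        return self._items[item_id]
```

Style B is compatible with the cited rule as long as callers use the returned Item within a single operation and never store the reference; Style A additionally lets the root intercept every change, which starts to matter once an operation can violate an invariant that spans several Items.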
_codereview.11154
We implement a C++ class Proposition that represents a (possibly compound) propositional logic statement made up of named atomic variables combined with the operators AND, OR, NOT, IMPLIES and IFF. We then use it to find all the truth assignments of the following proposition: ((A and not B) implies C) and ((not A) iff (B and C)). Once everything is defined, the snippet of C++ code that evaluates this proposition is: auto proposition = ("A"_var && !"B"_var).implies("C"_var) && (!"A"_var).iff("B"_var && "C"_var); auto truth_assignments = proposition.evaluate_all({"A", "B", "C"}); Language features used include polymorphism, implicit sharing, recursive data types, operator overloading and (new in C++ 2011 and gcc 4.7) user-defined literals. // (C) 2012, Andrew Tomazos <andrew@tomazos.com>. Public domain.#include <cassert>#include <memory>#include <set>#include <vector>#include <string>#include <iostream>using namespace std;struct Proposition;// The expression...//// "foo"_var//// ...creates an atomic proposition variable with the name 'foo'Proposition operator"" _var (const char*, size_t);// Represents a compound propositionstruct Proposition{ // A.implies(B): means that A (antecedent) implies ==> B (consequent) Proposition implies(const Proposition& consequent) const; // A.iff(B): implies that A and B form an equivalence. A <==> B Proposition iff(const Proposition& equivalent) const; // !A: the negation of target A Proposition operator!() const; // A && B: the conjunction of A and B Proposition operator&&(const Proposition& conjunct) const; // A || B: the disjunction of A and B Proposition operator||(const Proposition& disjunct) const; // A.evaluate(T): Given a set T of variable names that are true (a truth assignment), // will return the truth {true, false} of the proposition bool evaluate(const set<string>& truth_assignment) const; // A.evaluate_all(S): Given a set S of variables, // will return the set of truth assignments that make this proposition true set<set<string>> evaluate_all(const set<string>& variables) const;private: struct Base { virtual bool evaluate(const set<string>& truth_assignment) const = 0; }; typedef shared_ptr<Base> pointer; pointer value; Proposition(const pointer& value_) : value(value_) {} struct Variable : Base { string name; virtual bool evaluate(const set<string>& truth_assignment) const { return truth_assignment.count(name); } }; struct Negation : Base { pointer target; bool evaluate(const set<string>& truth_assignment) const { return !target->evaluate(truth_assignment); } }; struct Conjunction : Base { pointer first_conjunct, second_conjunct; bool evaluate(const set<string>& truth_assignment) const { return first_conjunct->evaluate(truth_assignment) && second_conjunct->evaluate(truth_assignment); } }; struct Disjunction : Base { pointer first_disjunct, second_disjunct; bool evaluate(const set<string>& truth_assignment) const { return first_disjunct->evaluate(truth_assignment) || second_disjunct->evaluate(truth_assignment); } }; friend Proposition operator"" _var (const char* name, size_t sz);};Proposition operator"" _var (const char* name, size_t sz){ auto variable = make_shared<Proposition::Variable>(); variable->name = string(name, sz); return { variable };}Proposition Proposition::implies(const Proposition& consequent) const{ return (!*this) || consequent;};Proposition Proposition::iff(const Proposition& equivalent) const{ return this->implies(equivalent) && equivalent.implies(*this);}Proposition Proposition::operator!() const{ auto negation = make_shared<Negation>(); negation->target = value; return { negation };}Proposition Proposition::operator&&(const Proposition& conjunct) const{ auto conjunction = make_shared<Conjunction>(); conjunction->first_conjunct = value; conjunction->second_conjunct = conjunct.value; return { conjunction };}Proposition Proposition::operator||(const Proposition& disjunct) const{ auto disjunction = make_shared<Disjunction>(); disjunction->first_disjunct = value; disjunction->second_disjunct = disjunct.value; return { disjunction };}bool Proposition::evaluate(const set<string>& truth_assignment) const{ return value->evaluate(truth_assignment);}set<set<string>> Proposition::evaluate_all(const set<string>& variables) const{ set<set<string>> truth_assignments; vector<string> V(variables.begin(), variables.end()); size_t N = V.size(); for (size_t i = 0; i < (size_t(1) << N); ++i) { set<string> truth_assignment; for (size_t j = 0; j < N; ++j) if (i & (1 << j)) truth_assignment.insert(V[j]); if (evaluate(truth_assignment)) truth_assignments.insert(truth_assignment); } return truth_assignments;}int main(){ assert( ("foo"_var) .evaluate({"foo"})); // trivially true assert( ("foo"_var) .evaluate_all({"foo"}) == set<set<string>> {{"foo"}} ); assert( (!"foo"_var) .evaluate({})); // basic negation assert(! (!"foo"_var) .evaluate({"foo"})); // basic negation assert( (!"foo"_var) .evaluate_all({"foo"}) == set<set<string>> {{}} ); assert( (!!"foo"_var) .evaluate({"foo"})); // double negation assert( (!!"foo"_var) .evaluate_all({"foo"}) == set<set<string>> {{"foo"}} ); assert( ("foo"_var && "bar"_var) .evaluate({"foo", "bar"})); // conjunction assert(! ("foo"_var && "bar"_var) .evaluate({"bar"})); // conjunction assert(! ("foo"_var && "bar"_var) .evaluate({"foo"})); // conjunction assert(! ("foo"_var && "bar"_var) .evaluate({})); // conjunction assert( ("foo"_var && "bar"_var) .evaluate_all({"foo", "bar"}) == set<set<string>>({{"foo", "bar"}})); assert( ("foo"_var || "bar"_var) .evaluate({"foo", "bar"})); // disjunction assert( ("foo"_var || "bar"_var) .evaluate({"bar"})); // disjunction assert( ("foo"_var || "bar"_var) .evaluate({"foo"})); // disjunction assert(! ("foo"_var || "bar"_var) .evaluate({})); // disjunction assert( ("foo"_var || "bar"_var) .evaluate_all({"foo", "bar"}) == set<set<string>>({{"foo", "bar"}, {"foo"}, {"bar"}})); assert( ("foo"_var.implies("bar"_var)) .evaluate({"foo", "bar"})); // implication assert( ("foo"_var.implies("bar"_var)) .evaluate({"bar"})); // implication assert(! ("foo"_var.implies("bar"_var)) .evaluate({"foo"})); // implication assert( ("foo"_var.implies("bar"_var)) .evaluate({})); // implication assert( ("foo"_var.implies("bar"_var)) .evaluate_all({"foo", "bar"}) == set<set<string>>({{"foo", "bar"}, {"bar"}, {}})); assert( ("foo"_var.iff("bar"_var)) .evaluate({"foo", "bar"})); // equivalence assert(! ("foo"_var.iff("bar"_var)) .evaluate({"bar"})); // equivalence assert(! ("foo"_var.iff("bar"_var)) .evaluate({"foo"})); // equivalence assert( ("foo"_var.iff("bar"_var)) .evaluate({})); // equivalence assert( ("foo"_var.iff("bar"_var)) .evaluate_all({"foo", "bar"}) == set<set<string>>({{"foo", "bar"}, {}})); cout << "((A and not B) implies C) and ((not A) iff (B and C)):" << endl << endl; auto proposition = ("A"_var && !"B"_var).implies("C"_var) && (!"A"_var).iff("B"_var && "C"_var); auto truth_assignments = proposition.evaluate_all({"A", "B", "C"}); cout << "A B C" << endl; cout << "-----------" << endl; for (auto truth_assignment : truth_assignments) { for (auto variable : {"A", "B", "C"}) cout << (truth_assignment.count(variable) ? 1 : 0) << " "; cout << endl; }} The output is as follows: ((A and not B) implies C) and ((not A) iff (B and C)): A B C ----------- 1 1 0 1 0 1 0 1 1
C++11: Propositional Logic: Proposition Evaluator
c++;c++11
null
_codereview.138479
There are a plethora of streams and streambuffers available today, none are as fast as a memory streambuf, that you can use for testing. Shoot away!#ifndef MEMSTREAMBUF_HPP# define MEMSTREAMBUF_HPP# pragma once#include <cassert>#include <cstring>#include <array>#include <iostream>#include <streambuf>template <::std::size_t N>class memstreambuf : public ::std::streambuf{ ::std::array<char, N> buf_;public: memstreambuf() { setbuf(buf_.data(), buf_.size()); } ::std::streambuf* setbuf(char_type* const s, ::std::streamsize const n) final { auto const begin(s); auto const end(s + n); setg(begin, begin, end); setp(begin, end); return this; } pos_type seekpos(pos_type const pos, ::std::ios_base::openmode const which = ::std::ios_base::in | ::std::ios_base::out) final { switch (which) { case ::std::ios_base::in: if (pos < egptr() - eback()) { setg(eback(), eback() + pos, egptr()); return pos; } else { break; } case ::std::ios_base::out: if (pos < epptr() - pbase()) { setp(pbase(), epptr()); pbump(pos); return pos; } else { break; } default: assert(0); } return pos_type(off_type(-1)); } ::std::streamsize xsgetn(char_type* const s, ::std::streamsize const count) final { auto const size(::std::min(egptr() - gptr(), count)); ::std::memcpy(s, gptr(), size); gbump(size); return egptr() == gptr() ? traits_type::eof() : size; } ::std::streamsize xsputn(char_type const* s, ::std::streamsize const count) final { auto const size(::std::min(epptr() - pptr(), count)); ::std::memcpy(pptr(), s, size); pbump(size); return epptr() == pptr() ? traits_type::eof() : size; }};template <::std::size_t N = 1024>class memstream : public memstreambuf<N>, public ::std::istream, public ::std::ostream{public: memstream() : ::std::istream(this), ::std::ostream(this) { }};#endif // MEMSTREAMBUF_HPP
Memory streambuf and stream
c++;stream
Assert: I personally dislike using asserts. The problem for me is that they do different things in production and debug code. I want the same action in both. pos_type seekpos(pos_type const pos, ::std::ios_base::openmode const which) final { switch (which) { case ::std::ios_base::in: case ::std::ios_base::out: default: assert(0); } return pos_type(off_type(-1)); } So in this code, if we get to the default action, the debug version will assert (and stop the application) while production code will return -1. In my opinion, if you get to a point where your code should not be, then throw an exception that should not be caught. That will cause the application to exit (in a controlled way). But in this case I think the expected behavior is not to throw but to return -1. So I would make the default action a no-op. seekpos which: The which in your seekpos() has basically three settings. You only check for two of the three. switch (which) { case ::std::ios_base::in: case ::std::ios_base::out: // You forgot this case case ::std::ios_base::in | ::std::ios_base::out: // it means set the position of the input and output stream. // since this is the default value of `which` I would expect // this to happen most often (and would assert in your code). default: assert(0); } return pos_type(off_type(-1)); } xsgetn/xsputn eof: I think the return value of these functions can be incorrect. You should only return eof if you did not get/put any values. You return eof if you have filled/emptied all the data. ::std::streamsize xsgetn(char_type* const s, ::std::streamsize const count) final { // If there is no data to get then return eof if (egptr() == gptr()) { return traits_type::eof(); } auto const size(::std::min(egptr() - gptr(), count)); ::std::memcpy(s, gptr(), size); gbump(size); return size; // return the number of bytes read. } Overall Design: The input and output use the same buffer but are not linked together. 
If you have not written anything into the buffer, I would not expect you to be able to read from the buffer (as there is nothing to read). When you do write, I would only expect you to be able to read what has been written (no more). int main(){ memstream<1024> buffer; if (buffer.write("abcdef", 6)) { std::cout << "Write OK\n"; } char data[100]; if (buffer.read(data, 10)) { auto c = buffer.gcount(); std::cout << "Count: " << c << "\n"; std::cout << std::string(data, data+10) << "\n"; }} Result: Write OK Count: 10 abcdef So I wrote 6 characters. But I managed to read 10 characters. What were the last 4 characters? Circular Buffer: Once you have linked the two buffers correctly, you could get more from the buffer by making it circular. That means as you get to the end of the write buffer, you can circle around and start writing at the beginning again, if you have been reading from the buffer and there is space. Inheriting from the buffer: template <::std::size_t N = 1024>class memstream : public memstreambuf<N>, // This is unusual public ::std::istream, public ::std::ostream{public: memstream() : ::std::istream(this), ::std::ostream(this) { }}; Normally I have seen this as a member object rather than inheriting from the buffer. Though this technique saves a call to setbuf, I can live with it. BUT: if you are going to inherit from it then I would use private inheritance. template <::std::size_t N = 1024>class memstream : private memstreambuf<N>, // Note the private public ::std::istream, public ::std::ostream{public: memstream() : ::std::istream(this), ::std::ostream(this) { }}; This is because I don't want my object to behave like a stream and a stream buffer to people who use it. That could be confusing. So hide the buffer properties from external users (by using private). 
They can always get a reference to the buffer using the method rdbuf().

pragma once and include guards

There is no point in using both:

#ifndef MEMSTREAMBUF_HPP
# define MEMSTREAMBUF_HPP
# pragma once
....
#endif

I would use the header guards because not all compilers support #pragma once. Note: the space between # and the word is non-standard. Most compilers may be forgiving about it, but not all, so I would not do it.
_unix.222102
I have several data samples from which I need to drop the zeroes before and after the data sample. However, there are zeroes within the data sample that I must keep, for obvious reasons. How can I do this with awk or maybe sed? Thanks.

0000000000000004.4021.2017.4418.242.0819.9214.5621.206.64027.0432.2465.2812.0040.8030.4830.1630.24062.566.5629.76043.8413.4417.1254.4823.5230.7229.0411.0414.565.7631.6013.6811.2017.4417.44036.5616.6432.4018.4001049.841.6863.8419.285.7628.0012.640013613.2823.281.2019.1227.2802.8836.1627.4413.6036.3220.9615.8423.1210.24.9643.608.320061.6020.0031.3632.80072.3227.049.5221.282.0844.4811.2026.4019.9218.40078.3213.0438.886.2466.644.5625.1243.204.0058.0818.402.4820.3215.7624.96028.4028.6432.726.6414.7200000
Removing Zeroes before and after data sample
sed;awk
This will drop all zeros from the beginning and the end of the file while keeping zeros in the middle:

awk '/[^0]/{if (z)print substr(z,2);print;z="";f=1;next} f{z=z"\n"$0}' file

How it works

/[^0]/{if (z)print substr(z,2); print; z=""; f=1; next}

If the present line has any character on it other than zero, /[^0]/, then we do the following: If the variable z is non-empty, we print it, skipping its first character. We print the current line (the one with the non-zero). We set z back to an empty string. We set the flag f to 1 to signify that we have seen a line with a non-zero. We skip the rest of the commands and jump to start over on the next line.

f{z=z"\n"$0}

If we get to this command, that means that the line contains no non-zero character. If we have seen a non-zero line, in other words if f is 1, then we append to z a newline and the current line.

Example 1

Consider this file:

$ cat file
2002.08018.4000

The command produces the following output:

$ awk '/[^0]/{if (z)print substr(z,2);print;z="";f=1;next} f{z=z"\n"$0}' file
22.08018.40

Example 2

Using your input file:

$ awk '/[^0]/{if (z)print substr(z,2);print;z="";f=1;next} f{z=z"\n"$0}' file
4.4021.2017.4418.242.0819.9214.5621.206.64027.0432.2465.2812.0040.8030.4830.1630.24062.566.5629.76043.8413.4417.1254.4823.5230.7229.0411.0414.565.7631.6013.6811.2017.4417.44036.5616.6432.4018.4001049.841.6863.8419.285.7628.0012.640013613.2823.281.2019.1227.2802.8836.1627.4413.6036.3220.9615.8423.1210.24.9643.608.320061.6020.0031.3632.80072.3227.049.5221.282.0844.4811.2026.4019.9218.40078.3213.0438.886.2466.644.5625.1243.204.0058.0818.402.4820.3215.7624.96028.4028.6432.726.6414.72
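For readers who find the awk one-liner dense, the same hold-back-the-zero-lines idea can be sketched in Python (trim_zero_lines is a hypothetical helper name, not part of the awk answer):

```python
def trim_zero_lines(lines):
    """Drop all-zero lines before the first and after the last line that
    contains a non-zero character, keeping interior zero lines."""
    out = []        # lines committed to the output
    held = []       # zero lines held back until we know they are interior
    seen = False    # have we seen a non-zero line yet?
    for line in lines:
        if any(ch != '0' for ch in line):   # line has a non-zero character
            out.extend(held)                # held zeros were interior: keep them
            held = []
            out.append(line)
            seen = True
        elif seen:
            held.append(line)               # may be trailing; hold it back
    return out                              # trailing held lines are dropped

print(trim_zero_lines(["0", "2", "0", "18.40", "0", "0"]))  # prints ['2', '0', '18.40']
```

Like the awk /[^0]/ pattern, a line counts as non-zero if it contains any character other than 0, so decimal values such as 18.40 are kept.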
_codereview.67042
I have a function that converts any normal TrueType font to my own font file format, .bff. That function works correctly and I am not going to post it here (the function is also only compatible with a specific engine you may not have heard of). If I post the .bff file and you have an environment in which you could use putpixel(x, y) to put a pixel (e.g. a red one) at the x and y spot, then you can also test my rendering function.

Let me first explain my custom compression method. I call it Arlo compression, version name Arlo1, with a compression level over 60%. You have to understand the compression before taking a look at the rendering function.

Standard characters (dec 48+) represent whitespace: 5 will represent 5 spaces, and so forth. After each run of spaces a pixel is placed. So if there is 00 it puts two pixels right next to each other; if there is 20 it puts one pixel after 2 spaces and one pixel right after it. Important: to prevent a vertical line after each character, the rendering function is made to not put a pixel after the last space-representing character of the line (that is what causes problems - some characters need a pixel at the end and they are rendered cropped (without these last pixels) thanks to the vertical-line prevention condition).

A # character found in the bff means a new line of the character; a & character found in the bff means the offset for the next character. Characters follow the ASCII sequence, dec 32-126.

The function:

typedef unsigned char BYTE;

void bitfox_render_text(char *FONTNEX, char *STRING, int X_OFFSET, int Y_OFFSET){ FILE* fp = fopen(FONTNEX, "r"); char *buffer = STRING, *font; int fontsize, buffsize = strlen(buffer), strl; int line = 0, linep = 0, colp = 0; fseek(fp, EOF, SEEK_END); fontsize = ftell(fp); font = malloc(fontsize); rewind(fp); fread(font, sizeof(BYTE), fontsize, fp); fclose(fp); for(strl=0; strl<buffsize; strl++) { int chrOffset, chr = 32; // In-line code for finding the current index of characters for(chrOffset=0; chr != buffer[strl]; chrOffset++) {
if(font[chrOffset] == '&') { chr++; }} // <-- do { linep++; if (font[chrOffset+linep] == '#') { colp++; line = 0; } else if (font[chrOffset+(linep+1)] != '#') // if not NL and w/o ending supplement: { // there is a function that describes size and color of the brush here putpixel(X_OFFSET+(line+=(font[chrOffset+linep]-48)+1), Y_OFFSET+colp); } } while(font[chrOffset+(linep+1)] != '&'); } free(font);}

The function currently successfully reads only one character passed as a string. Don't mind the arguments that are currently unused. Despite the fact that this function is not fully functional, the only problem is the condition:

(font[chrOffset+(linep+1)] != '#') // if not NL and w/o ending supplement

It prevents putting pixels at the end of each line. If I remove that condition, there will be a vertical line at the end of each character; if it remains, some characters that need these pixels will be rendered cropped.

I will be doing constant edits to improve the level of clarification. Here is the download link of the .bff that corresponds to the .ttf file arial.ttf, size 11, Normal.

Summary: The .bff file, or Bitfox Font File, is a custom font file with a custom compression method applied (Arlo1). Each character from ASCII in the range 32 to 126, in sequence, is represented in raster form, where # indicates a new line and & indicates the next character. To indicate where a pixel is placed, the current character's data consists of ASCII characters (dec 48-126) that represent whitespace. The pixel is placed after the run of spaces determined by chr-48. For the sake of preventing unrequired pixels, the rendering function won't place a pixel when the last space-representing character of a line is read. That causes some characters to be cropped by 1 pixel at their end.
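To make the encoding concrete, here is a toy Python decoder for the Arlo1 rules described above (the glyph string is made up for illustration; this sketch ignores the end-of-line pixel-suppression quirk and simply places every encoded pixel):

```python
def decode_glyph(data):
    """Decode one Arlo1-encoded glyph into a list of (x, y) pixel positions.
    Each byte >= '0' means: skip (byte - 48) blanks, then place a pixel.
    '#' starts a new raster line; '&' ends the glyph."""
    pixels = []
    x = y = 0
    for ch in data:
        if ch == '&':            # next-character marker: this glyph is done
            break
        if ch == '#':            # new line of the character
            y += 1
            x = 0
            continue
        x += ord(ch) - 48        # the whitespace run before the pixel
        pixels.append((x, y))
        x += 1                   # step past the pixel just placed
    return pixels

# "00" -> two adjacent pixels; "20" -> a pixel after 2 blanks, then the next one
print(decode_glyph("00#20&"))  # prints [(0, 0), (1, 0), (2, 1), (3, 1)]
```

Comparing the output of a decoder like this with and without the last-pixel suppression may help isolate exactly which pixels the problematic condition drops.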
Function for rendering custom fonts
c;io;graphics
null
_webapps.88274
A year ago I liked page A.1 on Facebook. Today, I want to unlike it. However, A.1 has merged with A.2, and A.2 now loads when I try to visit A.1. I do not have A.2 liked (this is what displays on A.2), and A.2 does not link to A.1. I still have A.1 liked on my profile. How do I unlike A.1?
How do you unlike a merged page on Facebook?
facebook;facebook pages
I just figured out the answer to my issue. The solution is to not try to find the page that was liked, but to visit your profile likes sub-page and unlike the dangling reference from there via the dropdown element (unlike).
_webapps.66100
I used to use the Google Code Playground to visualize and try Google APIs samples. I found it very useful. But I can't find it anymore! Do you have any idea what happened to it and whether it has moved somewhere else? The link was https://code.google.com/apis/ajax/playground, which now returns: The requested URL /apis/ajax/playground was not found on this server. That's all we know. And it looked like:
Where did the Google code playground go?
google code;google chart api
The Google Developers Chart Gallery include examples of charts and buttons with the text CODE IT YOURSELF ON JSFIDDLE, so we could say that Google Code Playground, at least in relation to the Charts API, was moved to JSFIDDLE.
_vi.5718
I'm using vim-easy-align plugin and when in bash scripts I try to align $ at the beginning of variable names, the aligning adds spaces around the delimiter, i.e. <space>$<space> which of course makes those variables meaningless.Are there options or tips to temporarily or permanently disable this? for specific delimiters? That is, to allow spaces around = and other delimiters, but not the variable leaders such as $.Should I make a \$\w regex instead of just using the $?
easy-align spaces around delimiters
alignment;plugin easy align
FileType solution, just for reference to whoever can use it....---- Easy Align ---- {{{xmap ga <Plug>(EasyAlign)nmap ga <Plug>(EasyAlign)augroup FileType sh,perl let g:easy_align_delimiters = { \ 's': { \ 'pattern': '\$', \ 'ignore_groups': ['Comment'], \ 'left_margin': 0, \ 'right_margin': 0, \ 'indentation': 'shallow', \ 'stick_to_left': 0 \ }, \ '=': { \ 'pattern': '=', \ 'ignore_groups': ['Comment'], \ 'left_margin': 0, \ 'right_margin': 0, \ 'indentation': 'deep', \ 'stick_to_left': 0 \ } \} augroup END}}}
_unix.186198
I am installing Fedora 21 Server in a VM. It used to boot into a text/command-line interface, so I followed the steps here. In the last step, when I did vi /etc/inittab, the file reads inittab is no longer used. So, as instructed, I ran the following:

systemctl set-default graphical.target

but now when I reboot it gives me a blank screen with a blinking cursor, and I cannot type anything.
Not able to boot into graphical environment in fedora
fedora;system installation
null
_codereview.115529
Problem 21:

Let \$d(n)\$ be defined as the sum of proper divisors of \$n\$ (numbers less than \$n\$ which divide evenly into \$n\$). If \$d(a) = b\$ and \$d(b) = a\$, where \$a \neq b\$, then \$a\$ and \$b\$ are an amicable pair and each of \$a\$ and \$b\$ are called amicable numbers.

For example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110; therefore \$d(220) = 284\$. The proper divisors of 284 are 1, 2, 4, 71 and 142; so \$d(284) = 220\$.

Evaluate the sum of all the amicable numbers under 10000.

Am I abusing reduce or itertools here? I'm concerned this solution could be more readable. Also, is there a more efficient solution rather than calculating everything up front?

from itertools import chain, count
from operator import mul

def factorize(n):
    for factor in chain((2,), count(start=3,step=2)):
        if factor*factor > n:
            break
        exp = 0
        while n % factor == 0:
            exp += 1
            n //= factor
        if exp > 0:
            yield factor, exp
    if n > 1:
        yield n, 1

def sum_of_factors(n):
    """
    >>> sum_of_factors(220)
    284
    >>> sum_of_factors(284)
    220
    """
    total = reduce(mul, ((fac**(exp+1)-1)/(fac-1) for fac,exp in factorize(n)), 1)
    return total - n

if __name__ == '__main__':
    cache = {k: sum_of_factors(k) for k in xrange(1, 10000) }
    print sum(k for k, v in cache.iteritems() if cache.get(v, None) == k and v != k)
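On the efficiency question: one common alternative to factorising every number is a divisor-sum sieve, sketched here in Python 3 (unlike the Python 2 code above). It assumes, as happens to hold at this limit, that the partner of every amicable number below the limit is itself below the limit:

```python
def amicable_sum(limit):
    """Sum of all amicable numbers below `limit`, via a divisor-sum sieve."""
    d = [0] * limit                        # d[n] will hold the sum of proper divisors of n
    for i in range(1, limit // 2 + 1):
        for j in range(2 * i, limit, i):   # i is a proper divisor of every such j
            d[j] += i
    total = 0
    for a in range(2, limit):
        b = d[a]
        # amicable: d(a) = b, d(b) = a, a != b (assumes b < limit, see above)
        if b != a and b < limit and d[b] == a:
            total += a
    return total

print(amicable_sum(10000))  # 31626
```

This does O(n log n) additions up front instead of trial division per number, and it avoids the dictionary entirely.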
Project Euler 21: Sum of Amicable Numbers
python;programming challenge;python 2.7
null
_webapps.67458
I sent an email with a link to a file in my Google Drive. I have now updated my file, but Google Drive creates another file when I upload again. This means that the link in my sent email still points to the old file. How can the recipient of the email get the latest file from the previous link I sent? Thanks. By the way, will uploading to Dropbox solve the problem?
How can I update a file in Google drive which I have sent in an email?
google drive;dropbox
null
_unix.292277
Running a program in kernel mode forbids using the standard C library, because the only thing your program is linked to is the kernel itself. So I'm only allowed to use functions defined in the kernel. But the kernel itself is a program written in C and compiled for some particular architecture. It shouldn't use the C standard library, but it also shouldn't use any drivers, since drivers are loadable modules. So my question is: what actual C functions are used when writing a kernel? How can you interact with hardware other than through the kernel? Please don't just tell me to look at the sources; that's too advanced for me right now. Thank you.
How was the kernel written?
linux;kernel;linux kernel
null
_unix.150892
#!/bin/bashrm outmkfifo outnc -l 8080 < out | while read linedo echo hello > out echo $linedoneIf I browse to the IP of the machine this script is running on (using port 8080), I would expect to see the word 'hello' and then on the machine running the script, I would expect it to output lines from the request.However, nothing happens. The browser gets no response, and nothing is output to the server's terminal.Why doesn't it work, and what can I modify to make it work? I want to keep it to simple pipes, I don't want to use process substitution or anything like that.
Why doesn't this piped script work?
shell script;io redirection;pipe;fifo
null
_unix.27851
I just tried to install oh-my-zsh. I get the following error when I try to run rvm:zsh: command not found: rvmI also get the following error when I try to open a new tab:/Users/jack/.zshrc:source:34: no such file or directory: /Users/jack/.oh-my-zsh/oh-my-zsh.sh/Users/jack/.zshrc:source:38: no such file or directory: .bashrcHere's my .zshrc file:# Path to your oh-my-zsh configuration.ZSH=$HOME/.oh-my-zsh# Set name of the theme to load.# Look in ~/.oh-my-zsh/themes/# Optionally, if you set this to random, it'll load a random theme each# time that oh-my-zsh is loaded.ZSH_THEME=robbyrussell# Example aliases# alias zshconfig=mate ~/.zshrc# alias ohmyzsh=mate ~/.oh-my-zsh# Set to this to use case-sensitive completion# CASE_SENSITIVE=true# Comment this out to disable weekly auto-update checks# DISABLE_AUTO_UPDATE=true# Uncomment following line if you want to disable colors in ls# DISABLE_LS_COLORS=true# Uncomment following line if you want to disable autosetting terminal title.# DISABLE_AUTO_TITLE=true# Uncomment following line if you want red dots to be displayed while waiting for completion# COMPLETION_WAITING_DOTS=true# Which plugins would you like to load? (plugins can be found in ~/.oh-my-zsh/plugins/*)# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/# Example format: plugins=(rails git textmate ruby lighthouse)plugins=(git bundler brew gem rvm cscairns)source $ZSH/oh-my-zsh.sh# Customize to your needs...source .bashrcexport PATH=/usr/local/bin:$PATHWhat do I need to do to fix these errors?
After installing oh-my-zsh: ... /.zshrc:source:34: no such file or directory ... /.oh-my-zsh/oh-my-zsh.sh
bash;zsh;oh my zsh
null
_unix.144791
I want to execute a script on removal of a USB drive. In that script I want to restart the server. Is there any way to do this? I know I have to modify the rules in /etc/udev/rules.d/.
How to Run a script on USB removal?
linux;udev;usb drive
null
_cs.43115
I'm doing revision for a module on programming language semantics and I'm having trouble understanding the introduction of side-effects in expressions.

We assume a standard syntax for arithmetic expressions, with the four usual operations, plus the unary prefix and postfix operator $\texttt{++}$, defined as in language C, applicable to variables only as in $\texttt{V++}$, and the assignment statement $\texttt{V:=E}$.

We have seen how to define the denotation function $[\![\texttt{E}]\!]_{\mathrm{Exp}}$ when we have to evaluate an expression $\texttt{E}$ without side-effects, to return a value resulting from that evaluation.

In order to give a denotational semantics for expressions with side-effects, we need to change the type of the denotation function $[\![\texttt{E}]\!]_{\mathrm{Exp}}$ for expressions $\texttt{E}$, so that it returns both the value of the expression and the state as modified by the side-effects. I.e., we want to define a denotation function

$[\![\texttt{E}]\!]_{\mathrm{Exp}} : State \to Int \times State$

by induction on the form of expressions $\texttt{E}$. For example, in the case $\texttt{E}$ has the form $\texttt{E1+E2}$, we define:

$[\![\texttt{E1+E2}]\!]_{\mathrm{Exp}}(S) = (n_1 + n_2, S_2)$

where $(n_1, S_1) = [\![\texttt{E1}]\!]_{\mathrm{Exp}}(S)$ and $(n_2, S_2) = [\![\texttt{E2}]\!]_{\mathrm{Exp}}(S_1)$

This says: first evaluate the leftmost expression $\texttt{E1}$, giving the integer value $n_1$ and updated state $S_1$, then evaluate $\texttt{E2}$ in that updated state, giving the integer value $n_2$ and updated state $S_2$; the value of the expression is $n_1 + n_2$, and evaluation has the side effect of updating the state to $S_2$.

However this does not show when side effects actually take place, and I do not know how to write the denotation function for expressions that do produce side-effects.
Can you tell me how to complete the inductive definition of $[\![\texttt{E}]\!]_{\mathrm{Exp}}$, including the case where $\texttt{E}$ is of the form $\texttt{V++}$?Then, how should one change the definition of the denotation function $[\![\texttt{V:=E}]\!]_{\mathrm{Pgm}}$ for the assignment statement, to take account of the changes in the definition of $[\![\texttt{E}]\!]_{\mathrm{Exp}}$, which it necessarily uses?Are there other changes that would need to be made to the denotational semantics of the language?
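Not an answer to the coursework, but the state-threading described above can be prototyped directly. Below is a small Python model (the tuple-based expression encoding and the assign helper are my own notation, not part of the course material) in which postfix $\texttt{V++}$ returns the old value and threads the incremented state; prefix ++ would be analogous but return the new value:

```python
def eval_exp(e, s):
    """Denotation of expressions with side effects: State -> (Int, State).
    Expressions are tuples: ('num', n), ('var', v), ('+', e1, e2), ('post++', v)."""
    tag = e[0]
    if tag == 'num':
        return e[1], s                     # literals do not touch the state
    if tag == 'var':
        return s[e[1]], s
    if tag == '+':
        n1, s1 = eval_exp(e[1], s)         # evaluate E1 in S, yielding S1
        n2, s2 = eval_exp(e[2], s1)        # evaluate E2 in the updated state S1
        return n1 + n2, s2
    if tag == 'post++':
        v = e[1]
        n = s[v]
        return n, dict(s, **{v: n + 1})    # old value, state with V incremented
    raise ValueError(e)

def assign(v, e, s):
    """V := E  --  run E's side effects first, then bind V to its value."""
    n, s1 = eval_exp(e, s)
    return dict(s1, **{v: n})

# x++ + x in state {x: 1}: the left operand yields 1 and bumps x, so the right reads 2
print(eval_exp(('+', ('post++', 'x'), ('var', 'x')), {'x': 1}))  # (3, {'x': 2})
```

The assignment case shows the change the question asks about: since evaluating E now returns a state as well as a value, the statement semantics must thread that state through before performing the update.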
Denotational semantics of expressions with side effects
programming languages;semantics;imperative programming;denotational semantics
null
_softwareengineering.157825
We already have a fully operational web service which caters to requests from multiple platform devices. Each device sends only one request at a time, and immediately after a response for the request the device sends an applicative ACK. This ACK message is like a regular request message, and it's all via HTTP. During a performance/load test we discovered that the following situation happens a lot: regular requests get processed OK, but when the ACK message is sent, it is thrown away because the server can't handle too many requests. We accept the case that the server throws away requests because of overload, but we do not accept throwing away ACK messages. So essentially we want the server to process 2 requests at a time for each device. Is there any way to do it in the current situation? If not, what kind of changes do we need to make?
Designing a 3-phase commit web service
.net;web services;wcf
null
_codereview.132896
Below is a Ruby implementation of a random statistical event, based on a hash with the actual observed counts of outcomes.I'd be interested in feedback in particular on what techniques I might use to avoid a loop-based accumulator in the RandomEvent#predict! method. I'm also very curious as well about any other suggestions on refactoring, patterns and performance that might be applicable here. The statistics material itself might be somewhat beyond the scope of a review but I'd appreciate any thoughts on appropriate naming and more effective (deterministic) ways to test this.Specinclude Statisticsdescribe RandomEvent do context 'when an event has only one outcome' do it 'always happens' do expect(RandomEvent.from_hash(always: 1).predict!).to eq(:always) end end context 'when the event has multiple outcomes' do let(:trials) { 10_000 } subject(:event) do RandomEvent.from_hash(heads: 51, tails: 49) end it 'should distribute them' do coinflips = trials.times.map { event.predict! } heads_variance = (coinflips.count(:heads) - trials/2).abs tails_variance = (coinflips.count(:tails) - trials/2).abs expected_variance = trials/10 expect(heads_variance).to be < expected_variance expect(tails_variance).to be < expected_variance end endendImplementationclass RandomEvent def initialize @outcome_counts = {} end def add_outcome(outcome, count:) @outcome_counts[outcome] = count end def normalized_outcome_probabilities total_outcome_counts = @outcome_counts.values.reduce(&:+) @outcome_counts.inject({}) do |hash,(outcome,count)| hash[outcome] = count / total_outcome_counts.to_f hash end end def predict! acc = 0.0 roll = rand selected_outcome = nil normalized_outcome_probabilities.each do |outcome, probability| acc += probability if acc > roll selected_outcome = outcome break end end selected_outcome end def self.from_hash(outcome_counts_hash) event = new outcome_counts_hash.each do |outcome, count| event.add_outcome(outcome, count: count) end event endend
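The cumulative-accumulator technique used in predict! is language-neutral; for comparison, here is the same idea in Python (the rng parameter is a hypothetical hook that makes deterministic testing easy, addressing the question about more effective ways to test this):

```python
import random

def predict(outcome_counts, rng=random.random):
    """Pick a key with probability proportional to its count,
    by walking cumulative weights until they pass a random roll."""
    total = sum(outcome_counts.values())
    roll = rng() * total          # roll is uniform in [0, total)
    acc = 0
    for outcome, count in outcome_counts.items():
        acc += count
        if roll < acc:            # the roll landed in this outcome's slice
            return outcome

print(predict({'heads': 51, 'tails': 49}, rng=lambda: 0.0))  # prints heads
```

Note that the roll is scaled by the raw total, so no normalization pass is needed. An injectable random source gives deterministic unit tests instead of the statistical-variance assertions in the spec above; in Python 3.6+, the standard library's random.choices(outcomes, weights=counts) performs the same weighted selection.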
Random distribution in Ruby
object oriented;ruby;unit testing;random;statistics
First thing, it looks like RandomEvent.from_hash implements features of the initialize method. The acc variable in RandomEvent#predict! can easily be moved into an inject iterator.

Code:

class RandomEvent
  def initialize(outcome_counts = {})
    @outcome_counts = outcome_counts
  end

  def add_outcome(outcome, count)
    @outcome_counts[outcome] = count
  end

  def normalized_outcome_probabilities
    total_outcome_counts = @outcome_counts.values.reduce(:+).to_f
    @outcome_counts.map { |outcome, count| [outcome, count / total_outcome_counts] }.to_h
  end

  def predict!
    roll = rand
    normalized_outcome_probabilities.inject(0.0) do |acc, (outcome, probability)|
      break outcome if (acc += probability) > roll
      acc
    end
  end
end

Now instead of RandomEvent.from_hash(heads: 51, tails: 49) you can write RandomEvent.new(heads: 51, tails: 49)
_unix.104928
I have followed the following steps to get Optimus/Bumblebee configuration running on Fedora 20 (fresh basic installation) on my brand new laptop (based on msi barebone MS-16GC).I have listed all steps, very similar to this linkThe end result has x not booting, I can boot to a terminal. I feel I am stuck at the last step, please help:These are the steps I have taken1) My BIOS does not support switching on/off the nvidia card2) Fedora 20 was installed from live cd - kernel/software updated - kernel-devel and kernel-headers are installed, along with gcc-c++ and lshw. NVIDIA display driver version 331.20 is downloaded, but not installed yet. 3) Subsequent current kernel: 3.11.10-301.fc20.x86_644) lspci gives two devices of intrest00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)01:00.0 3D controller: NVIDIA Corporation GK106M [GeForce GTX 765M] (rev a1)lshw shows that the NVIDIA card uses nouveau *-display description: 3D controller product: GK106M [GeForce GTX 765M] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: bus_master cap_list rom configuration: driver=nouveau latency=0 resources: irq:16 memory:f6000000-f6ffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:e000(size=128) memory:f7000000-f707ffff*-display description: VGA compatible controller product: 4th Gen Core Processor Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 06 width: 64 bits clock: 33MHz capabilities: vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:43 memory:f7400000-f77fffff memory:b0000000-bfffffff ioport:f000(size=64)In preperation of driver NVIDIA driver install5) blacklist nouveau, by creating a file blacklist.conf in /etc/modprobe.d/ with the line `blacklist nouveauRebootmv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname 
-r)-nouveau.img
dracut /boot/initramfs-$(uname -r).img $(uname -r)
Reboot.
And in /etc/default/grub I added rdblacklist=nouveau to GRUB_CMDLINE_LINUX, followed by the grub2-mkconfig > /boot/grub2/grub.cfg command. Reboot the computer (just to be sure).
lshw now outputs the following - no sign that nouveau is disabled - the device is UNCLAIMED.
 *-display UNCLAIMED
       description: 3D controller
       product: GK106M [GeForce GTX 765M]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:01:00.0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: bus_master cap_list
       configuration: latency=0
       resources: memory:f6000000-f6ffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:e000(size=128) memory:f7000000-f707ffff
 *-display
       description: VGA compatible controller
       product: 4th Gen Core Processor Integrated Graphics Controller
       vendor: Intel Corporation
       physical id: 2
       bus info: pci@0000:00:02.0
       version: 06
       width: 64 bits
       clock: 33MHz
       capabilities: vga_controller bus_master cap_list rom
       configuration: driver=i915 latency=0
       resources: irq:43 memory:f7400000-f77fffff memory:b0000000-bfffffff ioport:f000(size=64)
6) Next I create a file in my home directory called .xinitrc containing the following lines:
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
exec gnome-session
7) Then I create a file in /etc/X11 called xorg.conf2 containing the following data:
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection
Section "Device"
    Identifier "intel"
    Driver "intel"
EndSection
Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection
Section "Device"
    Option "ConstrainCursor" "no"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
EndSection
Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    #Comment to output using hdmi cable
    Option "UseDisplayDevice" "none"
EndSection
8) Time to install the drivers /* I get stuck here */
Type chmod +x NVIDIA* and then ./NVIDIA*. Next I move xorg.conf2 to xorg.conf.
This installs fine, but when rebooting all I get is a black screen. I can log in to tty1; when I type startx from the command line, same problem, just a black screen. Please help!
Need help installing NVIDIA graphics driver for Optimus configuration
linux;fedora;graphics;nvidia
null
_softwareengineering.253705
I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example since I can't seem to describe it better than the title.Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes)Term has some print functionality but does not implement the print functions itself, rather they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer.A container (let's call it Box) has a Schedule object that can understand and use Term objects.Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term.A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometime the print function could also use data stored in Schedule, though. I'm calling this data shared.So, the question is, what is the best way to access this shared data. I have a lot of options since JS has closures and I'm not familiar enough to know if I should be using them or avoiding them in this case.Options: Create a local reference (term used lightly) to the shared data (data is not a primitive) when defining the print function by accessing the shared data through Schedule from Box. Example:var schedule = function(){ var sched = Schedule(); var t1 = Term( function(x){ // Term.print() return (x + sched.data).format(); });};Bind it to Term explicitly. (Pass it in Term's constructor or something). Or bind it in Sched after Box passes it. And then access it as an attribute of Term.Pass it in at the same time x is passed to the print function, (from sched). This is the most familiar way for my but it doesn't feel right given JS's closure ability.Do something weird like bind some context and arguments to print.I'm hoping the correct answer isn't purely subjective. 
If it is, then I guess the answer is just do whatever works. But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.

Edit

I'll post the solution I'm using, but I'd still welcome criticism: All print functions take, as arguments, anything term doesn't own. This way, term is not coupled to schedule in any way (obviously schedule is still dependent on term, though). This allows term to be initialized/constructed anywhere without needing knowledge of schedule. So, if term had an init() function it might take an object that looks something like this:

{ inc: moment.duration(1,"d"), periods: 3, class: "long", text: "Weekly", pRange: moment.duration(7,'d'), //*...other attr*// printInc: function(increments,period){ return moment(this.start).add(this.inc.product(increments) .add(this.startGap)) .add(this.pRange.product(period)) .format(DATEDISPLAYFORMAT); }, printLabel: function(datetime){ return (datetime).format(DATEDISPLAYFORMAT); }}

Where increments, period, and datetime would all be passed from whatever is using term's print methods (schedule in this case).
JS closures - Passing a function to a child, how should the shared object be accessed
design;object oriented;javascript;closures
null
_unix.297274
I understand how ionice can help you when you have multiple processes requesting access to the same disk resources, but how does it work when you have multiple disks?For example, you have one rsync operation moving data from Drive A -> Drive B, and another rsync moving data from Drive C -> Drive D. In theory, since they are not competing for resources, ionice'ing one of these rsync processes shouldn't change its throughput. Is this how it works, or will it still impact performance?Additionally, is there some upper limit on total I/O one might experience on a linux system that is independent of drive speed? Like if you hooked up 100 SSD drives, at some point would the OS run into a bottleneck aside from drive speed?
How does ionice work with multiple drives?
filesystems;performance;priority;ionice
On Linux, drives are scheduled independently from each other. You can even set the IO scheduling algorithm to be different for different drives on the same system, by writing to /sys/block/<device>/queue/scheduler. The bandwidth between memory and the disks can indeed become a bottleneck. This is why hardware RAID makes sense: the data is sent to the RAID controller once, as opposed to each disk separately. You can also increase this bandwidth by attaching those 100 SSDs to more than one computer, distributing the load between them. I'm not sure how the IO scheduler takes this into account, but I don't think it does.
_unix.362390
How do I get a terminal like this in Kali Linux? I searched for a long time but couldn't find anything relevant.
Colored arrow in Bash prompt
bash;terminal;gnome;prompt
null
_codereview.173643
I am creating a server client app where after the connection is done, the server and client will send packages back and forward. The Stream can be a NetworkStream or SslStream.I have created a Async ReadContinuously method and it seems to work, but I do not trust my own knowledge about Async yet. Can you guys tell me I am on the right track or not?Client: private async void ListenToServer() { bool exitbyerror = false; // Queue<TestServerDataPacket> queue = new Queue<TestServerDataPacket>(); try { await Task.Run(() => { // After this a Queue Reader must be created _packetReader.ReadContinuously(_netStream, _connection.ReceiveBufferSize, queue); // For Testing while (_connection?.Connected == true) { // Console.WriteLine(Client: ({0}) Packets in queue., queue.Count); // For Testing Thread.Sleep(2000); } }); } catch { exitbyerror = true; } // if (exitbyerror) { // } }PacketReader: private bool _readContinuously; public void ReadContinuously(Stream s, int bufferSize, Queue<TestServerDataPacket> packetQueue) { try { if (s == null) { throw new ArgumentNullException(Stream can not be null!); } if (packetQueue == null) { throw new ArgumentNullException(Queue<TestServerDataPacket> can not be null!); } // _readContinuously = true; // DoReadContinuously(s, bufferSize, packetQueue); } catch { throw; } } private async void DoReadContinuously(Stream s, int bufferSize, Queue<TestServerDataPacket> packetQueue) { // byte[] buffer = new byte[bufferSize]; // TestServerDataPacket packet; try { // Read Packet Length = 4 bytes int bytesReceived = 0; while (bytesReceived < 4) { // int byteread = await s.ReadAsync(buffer, bytesReceived, 4 - bytesReceived); // if (byteread == 0) { // 0 bytes read = end of stream / disconnected throw new Exception(Connection Closed!); } // bytesReceived += byteread; } bytesReceived = 0; // Get Packet Size int packetSize = BitConverter.ToInt32(buffer, 0); // Create Packet Byte Array byte[] packetbytes; // Read Data using (MemoryStream memoryStream = new 
MemoryStream()) { // Read Data while (bytesReceived < packetSize) { // Adjust Buffer size to catch only the packet and nothing else if (buffer.Length > (packetSize - bytesReceived)) { buffer = new byte[(packetSize - bytesReceived)]; } // int count; if ((count = await s.ReadAsync(buffer, 0, buffer.Length)) > 0) { // Save Data memoryStream.Write(buffer, 0, buffer.Length); // Count bytesReceived += count; } } // Get Packet Bytes Array packetbytes = memoryStream.GetBuffer(); } // Create Packet DeserializeData(packetbytes, out packet); } catch { throw; } // if (packet != null) { packetQueue.Enqueue(packet); } // if (_readContinuously) { DoReadContinuously(s, bufferSize, packetQueue); } }Working test server:namespace TestServer{ public class Program { static void Main(string[] args) { TestServer server = new TestServer(); TestClient client = new TestClient(); Console.ReadLine(); } } public class TestServer { private readonly TcpListener _listener; public TestServer() { IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Any, 45654); _listener = new TcpListener(localEndPoint); _listener.Start(100); AcceptConnections(); } private async void AcceptConnections() { await Task.Run(async () => { try { Socket s = await _listener.AcceptSocketAsync(); if (s != null) { Console.WriteLine(Server: Client Connected); TestServerConnection c = new TestServerConnection(s); } } catch { // } }); } } public class TestServerConnection { private readonly Socket _connection; private readonly TestPacketSender _packetSender; public TestServerConnection(Socket s) { _connection = s; _packetSender = new TestPacketSender(new NetworkStream(s, FileAccess.ReadWrite)); Task.Factory.StartNew(ListenToClient); } private async Task ListenToClient() { bool exitbyerror = false; try { await Task.Run(() => { while (_connection?.Connected == true) { Thread.Sleep(10000); Console.WriteLine(Server: Sending Hi); _packetSender.Send(new TestServerDataPacket(2000)); // Int 2000 is Hi } }); } catch { exitbyerror = true; 
} // if (exitbyerror) { // } } } public class TestClient { private readonly Socket _connection; private NetworkStream _netStream; private TestPacketReader _packetReader; public TestClient() { _packetReader = new TestPacketReader(); IPEndPoint remoteEndPoint = new IPEndPoint(IPAddress.Parse(127.0.0.1), 45654); // Create a TCP/IP socket. _connection = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp) { ReceiveBufferSize = (8 * 1024), SendBufferSize = (8 * 1024), NoDelay = true }; // Connect to the remote endpoint. _connection.Connect(remoteEndPoint); _netStream = new NetworkStream(_connection, FileAccess.ReadWrite); Console.WriteLine(Client: Connected to server.); Task.Factory.StartNew(ListenToServer); } private async void ListenToServer() { bool exitbyerror = false; // Queue<TestServerDataPacket> queue = new Queue<TestServerDataPacket>(); try { await Task.Run(() => { // After this a Queue Reader must be created _packetReader.ReadContinuously(_netStream, _connection.ReceiveBufferSize, queue); // For Testing while (_connection?.Connected == true) { // Console.WriteLine(Client: ({0}) Packets in queue., queue.Count); // For Testing Thread.Sleep(2000); } }); } catch { exitbyerror = true; } // if (exitbyerror) { // } } } public class TestPacketSender { // private readonly Stream _stream; private readonly object _writingToStream = new object(); // public TestPacketSender(Stream stream) { _stream = stream; } // public bool Send(TestServerDataPacket packet) { if (_stream == null) { throw new ArgumentNullException(Stream can not be null); } if (packet == null) { throw new ArgumentNullException(TestServerDataPacket can not be null!); } // lock (_writingToStream) { return SendToStream(_stream, packet); } } private bool SendToStream(Stream s, TestServerDataPacket packet) { try { // Byte Array containing Packet Size and Packet byte[] buffer; // Fill buffer (Packet Size + Packet Content) SerializeData(packet, out buffer); // Write Packet to the Stream 
s.Write(buffer, 0, buffer.Length); s.Flush(); // return true; } catch { // return false; } } // private void SerializeData(TestServerDataPacket packet, out byte[] buffer) { if (packet == null) { buffer = new byte[0]; return; } byte[] packetbytes; BinaryFormatter formatter = new BinaryFormatter(); using (MemoryStream ms = new MemoryStream()) { // formatter.Serialize(ms, packet); // packetbytes = ms.ToArray(); } // buffer = CreatePacket(packetbytes); } private byte[] CreatePacket(byte[] packetbytes) { // Get the packet length byte[] lengthPrefix = BitConverter.GetBytes(packetbytes.Length); // byte[] totalpacket = new byte[lengthPrefix.Length + packetbytes.Length]; // Combine the packet length and the packet data lengthPrefix.CopyTo(totalpacket, 0); packetbytes.CopyTo(totalpacket, lengthPrefix.Length); // return totalpacket; } } public class TestPacketReader { private bool _readContinuously; public void ReadContinuously(Stream s, int bufferSize, Queue<TestServerDataPacket> packetQueue) { try { if (s == null) { throw new ArgumentNullException(Stream can not be null!); } if (packetQueue == null) { throw new ArgumentNullException(Queue<TestServerDataPacket> can not be null!); } // _readContinuously = true; // DoReadContinuously(s, bufferSize, packetQueue); } catch { throw; } } private async void DoReadContinuously(Stream s, int bufferSize, Queue<TestServerDataPacket> packetQueue) { // byte[] buffer = new byte[bufferSize]; // TestServerDataPacket packet; try { // Read Packet Length = 4 bytes int bytesReceived = 0; while (bytesReceived < 4) { // int byteread = await s.ReadAsync(buffer, bytesReceived, 4 - bytesReceived); // if (byteread == 0) { // 0 bytes read = end of stream / disconnected throw new Exception(Connection Closed!); } // bytesReceived += byteread; } bytesReceived = 0; // Get Packet Size int packetSize = BitConverter.ToInt32(buffer, 0); // Create Packet Byte Array byte[] packetbytes; // Read Data using (MemoryStream memoryStream = new MemoryStream()) { // Read 
Data while (bytesReceived < packetSize) { // Adjust Buffer size to catch only the packet and nothing else if (buffer.Length > (packetSize - bytesReceived)) { buffer = new byte[(packetSize - bytesReceived)]; } // int count; if ((count = await s.ReadAsync(buffer, 0, buffer.Length)) > 0) { // Save Data memoryStream.Write(buffer, 0, buffer.Length); // Count bytesReceived += count; } } // Get Packet Bytes Array packetbytes = memoryStream.GetBuffer(); } // Create Packet DeserializeData(packetbytes, out packet); } catch { throw; } // if (packet != null) { packetQueue.Enqueue(packet); } // if (_readContinuously) { DoReadContinuously(s, bufferSize, packetQueue); } } private void DeserializeData(byte[] data, out TestServerDataPacket packet) { BinaryFormatter formatter = new BinaryFormatter(); using (MemoryStream stream = new MemoryStream()) { // stream.Write(data, 0, data.Length); stream.Seek(0, SeekOrigin.Begin); // packet = (TestServerDataPacket)formatter.Deserialize(stream); } } } [Serializable] public class TestServerDataPacket { // Unique Id public readonly Guid Id; // Type public readonly TestServerPacketType Type; // Sugnal/Message public readonly int Signal = 0; // public TestServerDataPacket(int signal) { Id = Guid.NewGuid(); Signal = signal; Type = TestServerPacketType.Signal; } } public enum TestServerPacketType { Signal }}
ReadAsync: Continuously reads stream and spits out Packets
c#;socket;stream;async await
null
_unix.347270
I have extracted the user name to perform a test:

w | grep ^usera | wc -l

which will show 1 if usera has an open session. But now I need a more generic use case: extract the user group.

Example: extract the user group; if group=admin, then count with wc -l how many users from group admin have an active session.
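One way to sketch this (a hedged example, assuming getent(1) and who(1) are available; the group name admin and the helper name are just illustrations):

```shell
# Count how many members of a given group currently have an active session.
# Hypothetical helper; the group name is passed as the first argument.
count_group_sessions() {
    getent group "$1" | cut -d: -f4 | tr ',' '\n' |
    while read -r user; do
        # emit one line per member that appears in `who` output
        [ -n "$user" ] && who | awk -v u="$user" '$1 == u' | head -n 1
    done | wc -l
}

count_group_sessions admin
```

The same getent-based extraction also answers the first half of the question: getent group admin | cut -d: -f4 lists the group's members by themselves.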
Extract user group while trying to connect using ssh
linux;shell script;command line
null
_unix.39342
I want to see if my process makes a lot of context switches. I also want to see how manipulating task groups affects the number of context switches.
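For reference, on Linux the per-process counters are exposed in /proc; a hedged sketch (the shell's own PID is used only as an example):

```shell
# Voluntary and non-voluntary context switches of a single process (Linux).
pid=$$          # example: the current shell; substitute any PID of interest
grep ctxt_switches /proc/$pid/status

# A sampled view over time is available from pidstat (sysstat package):
#   pidstat -w -p $pid 1
```

voluntary_ctxt_switches counts switches where the process gave up the CPU itself (e.g. blocking on I/O), while nonvoluntary_ctxt_switches counts preemptions by the scheduler.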
How to see how many context switches a process makes?
linux;shell;process
null
_datascience.20363
How is CountVectorizer used in a real production environment?

Do you keep training the model with new features/vocabulary every day, save the vocabulary into a flat file, and reload it the next day? Do you use a pipeline to streamline the process? What is the best practice?

We are going to implement a combination of CountVectorizer, TF-IDF and some machine learning algorithm in a production system soon, and any tips or practical experiences will be appreciated. Thanks!
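Regarding persistence: the common pattern is to fit the vectorizer once on training data, serialize the fitted object, and reload it at serving time rather than re-fitting daily. A hedged sketch of the idea with a minimal stand-in vectorizer (TinyCountVectorizer is made up for illustration; with scikit-learn you would pickle/joblib-dump the fitted CountVectorizer or the whole Pipeline the same way):

```python
# Production-style pattern: fit once, persist the fitted vectorizer,
# reload at serving time instead of re-fitting.
import pickle

class TinyCountVectorizer:
    def fit(self, docs):
        # build a fixed vocabulary from the training corpus
        vocab = sorted({w for d in docs for w in d.split()})
        self.vocabulary_ = {w: i for i, w in enumerate(vocab)}
        return self

    def transform(self, docs):
        rows = []
        for d in docs:
            row = [0] * len(self.vocabulary_)
            for w in d.split():
                i = self.vocabulary_.get(w)
                if i is not None:   # unseen words are ignored, like CountVectorizer
                    row[i] += 1
            rows.append(row)
        return rows

# training time: fit once, persist
vec = TinyCountVectorizer().fit(["spam ham", "ham eggs"])
blob = pickle.dumps(vec)

# serving time: reload the fitted object, never re-fit on the fly
vec2 = pickle.loads(blob)
print(vec2.transform(["ham ham spam"]))  # -> [[0, 2, 1]]
```

When the vocabulary needs to grow, the usual practice is to re-fit offline on the enlarged corpus and roll out a new serialized artifact, so that training and serving always share the exact same feature space.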
how is countvectorizer used in real production environment?
nlp;preprocessing
null
_webapps.60917
Is there a way to have posts appear in both your Google Plus (business) Page and personal posts without posting twice?

For example, if I post to my Page stream, I want that post to also show up in my personal stream and vice versa.
Post both to Google Plus Personal and Page?
google plus
No. You will have to post twice or post once and reshare that post.
_softwareengineering.203527
My customer has his own graphics designer he wants to use to style the web application we're building in ASP.NET MVC 4. Our solution is in Bitbucket, but if he can't run it, what choices do we have? I doubt he uses Visual Studio 2012.

One idea is for us to publish our solution to a file system, send it to him, and have him create a local IIS website on his machine (assuming he isn't using a Mac). Mocking data or pointing to a test SQL in Azure isn't a problem. Then he can make changes to .css and .cshtml files. Will this even work? The point is that he needs to be able to test his changes. I know he can modify the views and just check in, but he needs to deliver a working design, so that seems inefficient.

The graphics designer will have access to our test site so he can see how it works and what data and fields we have. Another idea is for him to build a static mock site using just HTML/CSS. Later I'd integrate his styles into the customer's solution, split his HTML into partial views which we use, and add Razor syntax. Again, we'd like to leverage the graphics designer for all of this.

Is there a best practice documented around this subject? How do other teams deal with this situation?
Best way for an external (remote) graphics designer to style ASP.NET MVC 4 app?
web development;asp.net mvc;collaboration;web design
null
_webmaster.88447
We are using Salesforce's Visualforce to run one of our websites. We use Moz to monitor SEO issues.

Moz has identified more pages with issues than we actually have - see the attached image. We have only a few thousand pages at best. However, Google and Moz are taking keywords, adding them to the end of the domain, and creating URLs that don't exist. Recently I made all of these URLs redirect to the home page to see if that resolved the issue - it didn't, and the URLs that don't exist are still being crawled.

What do I do - this is affecting my rank. Please help?
False Duplicate Content On Moz Report
seo;htaccess;web development;301 redirect;duplicate content
The CMS was just trying to serve broken pages on unreal URLs. I had to add a small PHP script that identified whether the URL was valid or not. If the URL was invalid, it would redirect to a 404 page.
_webmaster.18421
Let's say I keep every word of file names capitalized. For example, Home.php or MultipleWordTitle.html.

Is there any way for Apache to redirect requests for incorrectly-cased URLs to the capitalized pages, preferably using .htaccess? Simply rewriting the URL to make the first letter capitalized won't work, since some files have multiple capital letters.

Ideally, I'd like the change to be reflected on the user's side, so it would show up as MultipleWordTitle.html in his/her address bar. It's okay if that can't be done, though.
Can Apache Correct URL Case?
apache;htaccess;url;url rewriting
There are many reasons not to ignore the all-lowercase convention but, if you believe you have a good reason for doing so, you can use Apache's mod_speling to ignore case problems in the request and then specify the canonical URL in the document itself, or issue a redirect as LazyOne described.
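To make that concrete, a hedged configuration sketch (directive names are from mod_speling; CheckCaseOnly is available in newer Apache versions, 2.4+ if memory serves, and the module must be enabled first, e.g. a2enmod speling on Debian/Ubuntu):

```apache
# httpd.conf / .htaccess fragment (assumes mod_speling is loaded)
<IfModule mod_speling.c>
    CheckSpelling On
    CheckCaseOnly On    # correct only capitalization, not other misspellings
</IfModule>
```

mod_speling replies with a redirect to the correctly-cased URL when it finds a single close match, so the canonical spelling also shows up in the visitor's address bar, as the asker wanted.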
_softwareengineering.198309
For some time in personal projects I have been using XSL to convert my raw XML data into human-friendly HTML/CSS (in simple projects, I have no JavaScript, so let's leave that out of the equation for simplicity).

Now I'm trying to understand the MVC architectural pattern (not my first experience with it, but it is taking some work to go from understanding it basically to understanding it well), and I'm wondering if there is an analogy between the two.

XML: data model; lacks the complexity/logic of a full-blown model component, but intent seems similar
XSL: converts raw data for viewing—seems like a controller
HTML/CSS (rendered): the viewable output

Is this analogy fitting? What in it matches well and what does not?

(One dissimilarity, I suppose, is that in my example I am not getting any input back from the view—only producing output.)
Is XML, HTML/CSS, XSL analogous to Model, View, Controller?
mvc;html;css;xml;xslt
In this case, I would suggest that you don't really have a controller per se. The XML is the model and the XSL (by way of producing an HTML output) is a view on that data. If you had some mechanism which took some user input and filtered (or caused to be filtered) the raw XML prior to the XSL transformation, then you might consider that mechanism to be your controller.
_unix.192325
I want to display text on top of the user's screen (as an upper layer). I know that there are solutions like xmessage that can display the text in a box, but I need it to be displayed without a box, over the entire screen if possible.

I am running Raspbian. Is there any solution/software that could do this?
How display a text for users in the entire screen
x11;raspbian;display
xosd, which is available in Raspbian, can display text on top of the current X screen. It takes its input from a file or from the standard input:

echo Hello | osd_cat -p middle -A center

It's an old-style X11 application so its configuration can be verbose; changing the font in particular looks like

echo Hello | osd_cat -p middle -A center -f '-*-lucidatypewriter-bold-*-*-*-*-240'

or even, strictly speaking,

echo Hello | osd_cat -p middle -A center -f '-*-lucidatypewriter-bold-*-*-*-*-240-*-*-*-*-*-*'

You can customise the colour, add a shadow and/or outline, change the delay, even add a progress bar.
_unix.365999
This is what the tutorial shows:

You can see the /application/nginx -> /application/nginx-1.8.0

But when I follow the steps:

[root@localhost nginx-1.8.0]# ll /application/nginx
lrwxrwxrwx. 1 root root 12 5 19 04:01 /application/nginx -> nginx-1.8.0/

It is nginx-1.8.0/; there is no /application in front of it, even though the symbolic link nginx-1.8.0 certainly resolves under /application.

My operating system is CentOS 7.2. The tutorial's operating system is CentOS 6.8.

Is the difference from the tutorial caused by the operating system?
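This behaviour is not a CentOS 6 vs 7 difference: ls -l simply prints the target string exactly as it was given to ln -s. A hedged demonstration (using /tmp so nothing real is touched):

```shell
# A symlink stores its target verbatim; relative vs absolute is decided
# by how the link was created, not by the OS version.
mkdir -p /tmp/application/nginx-1.8.0
cd /tmp/application
ln -sfn nginx-1.8.0/ nginx-rel                      # relative target
ln -sfn /tmp/application/nginx-1.8.0 nginx-abs      # absolute target
ls -l nginx-rel nginx-abs
```

So the tutorial's author most likely created the link with an absolute target (something like ln -s /application/nginx-1.8.0 /application/nginx), while the command you ran used a relative one; both links resolve to the same directory.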
In my CentOS 7.2 the symbolic link is not the whole directory
centos;symlink
null
_softwareengineering.151375
Years ago, I was surprised when I discovered that Intel sells Visual Studio compatible compilers. I tried it in particular for C/C++ as well as fantastic diagnostic tools. But the code was simply not that computationally intensive to notice the difference. The only impression was: did Intel really do it for me just now, wow, amazing tools with nanoseconds resolution, unbelievable. But the trial ended and the team never seriously considered a purchase.

From your experience, if license cost does not matter, which vendor is the winner?

It is not a broad or vague question or an attempt to spark a holy war. This sort of question is about two very visible tools. Nobody likes when tools have any mysteries or surprises. And choices between best and best are always the pain. I also understand the grass is always greener argument. I want to hear all the what-if stories.

What if Intel just locally optimizes it for the chip stepping of the month, and not every hardware target will actually work as well as Microsoft compiled? What if AMD hardware is the target and everything will slow down for no reason? Or, on the other hand, what if Intel's hardware has so many unnoticeable opportunities that Microsoft compiler writers are too slow to adopt them and never implement them in the compiler? What if both are exactly the same, actually a single codebase just wrapped into two different boxes and licensed to both vendors by some third-party shop?

And so on. But someone knows some answers.
Are Intel compilers really better than the Microsoft ones?
compiler
WARNING: Answer based on own experience - YMMV

If the code is really computationally expensive, yes, definitely. I have seen an improvement of over 20x with the former Intel C++ Compiler (now Intel Studio if I recall correctly) vs the standard Microsoft Visual C++ Compiler. It's true the code was very far from perfect and that may have played a role (actually that's why we bothered using the Intel compiler, it was easier than refactoring the giant codebase); also the CPU used to run the code was an Intel Core 2 Quad, which is the perfect CPU for such a thing, but the results were shocking. The compiler itself contains myriads of ways to optimize code, including targeting a specific CPU in terms of, say, SSE capabilities. It really makes -O2/-O3 run away ashamed. And that was before using the profiler.

Note, however, that turning on really aggressive optimizations will make the compilation take quite some time; two hours for a large project is not impossible at all. Also, with high levels of optimization, there's a higher chance of an error in the code manifesting itself (this can be observed with gcc -O3, too). For a project you know well, this might be a plus, since you'll find and fix any eventual bugs you didn't catch earlier, but when compiling a hairy mess, you just cross your fingers and pray to the x86 gods.

Something about performance on AMD machines: it's not as good as on Intel CPUs, but it's still way better than the MS C++ compiler (again, from my experience). The reason is that you can also target a generic CPU with SSE2 support (for example). Then AMD CPUs with SSE2 will not be discriminated against much. The Intel compiler on an Intel CPU really steals the show, though. It's not all double rainbows and shiny unicorns, however. There have been some heavy accusations about binaries not running at all on non-GenuineIntel CPUs and (this one is admitted) artificially induced inferior performance on CPUs by other vendors.
Also note this is information from at least 3 years ago and its validity as of now is unknown, BUT the new product descriptions give binaries carte blanche to run as slowly as Intel sees fit on non-Intel CPUs.

I don't know what it is about Intel and why they make such good numeric computation tools, but have a look at this, too: http://julialang.org/. There is a comparison, and if you look at the last row, MATLAB shines by defeating both C code and Julia; what strikes me is that the authors think the reason is Intel's Math Kernel Library.

I realize this sounds a lot like an advertisement for the Intel Compiler toolkit, but in my experience it really did the job well, and even simple logic dictates that the guys who make CPUs should know best how to program for them. IMO, the Intel C++ compiler squeezes every last bit of performance gain possible.
_unix.379653
I am using zsh.

I will right-click copy something from the zsh window, and then right-click paste it. I always lose some amount of characters and the capitalization of the last character flips. e.g.

echo this is a long message

pastes as

(empty line)
this is a long messaG

and

vim hello.txt

becomes

m hello.tX

What could be causing this and how do I fix it?
Paste has odd behavior in shell
shell;zsh;clipboard
null
_unix.293460
I must have read about 65 web pages about this issue, and tried them all, but so far I can't get it to work. I can access my exchange email using Thunderbird, with the imap server set to localhost at port 1143, and the smtp server again set to localhost, this time with port 1025. These port values are set in my davmail setup, and as I say this is what I use with Thunderbird.

The relevant portion of my .muttrc file is

set realname = 'Jim Bloggs'
set imap_user = 'AD\12345'
set imap_pass = 'My Password'
set from = 'Jim.Bloggs@jimsmail.org'

# REMOTE FOLDERS
set folder = 'imap://12345@localhost:1143/Inbox'
set spoolfile ='imap://12345@localhost:1143/'
set trash = 'imap://12345@localhost:1143/Trash'

# LOCAL FOLDERS FOR CACHED HEADERS AND CERTIFICATES
set header_cache =~/.mutt/jim_bloggs/cache/headers
set message_cachedir =~/.mutt/jim_bloggs/cache/bodies
set certificate_file =~/.mutt/jim_bloggs/certificates

# SMTP SETTINGS
set smtp_url = 'smtp://12345@localhost:1025/'
set ssl_starttls=yes
set smtp_pass = 'My Password' # use the same password as for IMAP

Currently what happens is this: it logs in and starts to download mail (and takes a v..e..r..y long time - about 15 or 20 minutes - to download 1902 headers), and then hangs on Sorting mailbox...

I am using mutt quite happily for accessing several gmail accounts, and I would like to use it to access my exchange mail as well. But how...?
Reading exchange email with mutt and davmail?
arch linux;mutt;exchange;davmail
null
_softwareengineering.302588
I am trying to understand the subtleties of Publisher-subscriber and similar notification mechanisms. I was checking Cocoa NSNotificationCenter, but it's unclear to me if it can be defined as a PubSub, strictly speaking. Is NSNotificationCenter a PubSub?
Is NSNotificationCenter an implementation of a PubSub model?
pubsub
null
_cstheory.11382
I noticed that regular languages over the alphabet $\Sigma$ can be naturally thought of as a poset, and indeed a lattice. Moreover, concatenation together with the empty language $\epsilon$ defines a strict monoidal structure on this category that is distributive over joins (I'm not sure about meets). Is this a useful construct in theory or practice of regular languages? Are there some nice adjunctions to be found, e.g. can we define the Kleene star as one?

This is a copy of a question asked at the Compilers course at Coursera:
https://class.coursera.org/compilers/forum/thread?thread_id=311
Regular languages from category-theoretical point of view
fl.formal languages;regular language;ct.category theory
There has been a lot done applying category theory to regular languages and automata. One starting point is the recent papers:

Bialgebraic Review of Deterministic Automata, Regular Expressions and Languages by Bart Jacobs
A Bialgebraic Approach to Automata and Formal Language Theory by James Worthington

In the first of these papers, the structure of regular expressions is treated algebraically and the languages generated are dealt with coalgebraically. These two views are integrated in a bialgebraic setting. A bialgebra is an algebra-coalgebra pair with a suitable distributive law capturing the interplay between the syntactic terms (the regular expressions) and the computational behaviour (languages generated). The basis of this paper is algebra and coalgebra, as treated in computer science under the umbrellas of universal algebra and coalgebra, rather than what one sees in mathematics (groups etc).

The second paper uses techniques that come from the more traditional mathematical treatment of algebra (modules etc) and coalgebra, but I'm afraid that I don't know the details.

Neither treats Kleene star as an adjunction, as far as I can tell.

More generally, there is a lot of work applying category theory to automata instead of regular expressions. A sample of this work includes:

Bloom S.L.; Sabadini N.; Walters R.F.C. Matrices, Machines and Behaviors. Applied Categorical Structures, Volume 4, Number 4, December 1996, pp. 343-360(18)
Michael A. Arbib, Ernest G. Manes: A categorist's view of automata and systems. Category Theory Applied to Computation and Control 1974: 51-64
M.A. Arbib and E.G. Manes. Adjoint machines, state-behaviour machines, and duality. Journal of Pure and Applied Algebra, 6:313-344, 1975.
M.A. Arbib and E.G. Manes. Machines in a category.
Journal of Pure and Applied Algebra, 19:9-20, 1980.

Jiří Adámek and Věra Trnková's book Automata and Algebras in Categories, as pointed out in a comment.

Finally, there's the work on iteration theories, Iteration theories: the equational logic of iterative processes by Stephen L. Bloom and Zoltán Ésik, which focusses on iteration (e.g., Kleene star), but from a more general perspective, where regular languages are just one thing that falls under the theory.
_unix.30659
I want to change the image depth of a bitmap for testing purposes. Right now I am trying to get a 2-bit palette image, and a 4444 hicolor image. I have a true color bitmap. I used the below command line

convert -depth 2 /media/bitmap/rule.bmp lut2bpp.bmp

then when I used identify I got this

Image: lut2bpp.bmp
Format: BMP (Microsoft Windows bitmap image)
Class: PseudoClass
Geometry: 720x480
Type: Palette
Endianess: Undefined
Colorspace: RGB
Channel depth:
  Red: 8-bits
  Green: 8-bits
  Blue: 8-bits

It changed it to a palette, which is great; how do I change the channel depth? And how about changing that true color 24-bit image to a hicolor 4444 image?
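On the 4444 hicolor part: independent of which ImageMagick options end up producing it, RGBA4444 just means 4 bits per channel packed into 16 bits. A hedged sketch of that quantization (the function names are made up for illustration):

```python
# Quantize an 8-bit RGBA pixel down to 4 bits per channel (RGBA4444) and back.
def to_4444(r, g, b, a):
    # keep the top 4 bits of each 8-bit channel and pack into 16 bits
    return ((r >> 4) << 12) | ((g >> 4) << 8) | ((b >> 4) << 4) | (a >> 4)

def from_4444(p):
    # expand each 4-bit channel back to 8 bits by replicating the nibble
    def expand(n):
        return (n << 4) | n
    return (expand((p >> 12) & 0xF), expand((p >> 8) & 0xF),
            expand((p >> 4) & 0xF), expand(p & 0xF))

packed = to_4444(0xDE, 0xAD, 0xBE, 0xEF)
print(hex(packed))                          # -> 0xdabe
print([hex(c) for c in from_4444(packed)])  # -> ['0xdd', '0xaa', '0xbb', '0xee']
```

The round trip shows why 4444 looks banded: each channel collapses to 16 possible levels.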
Using Image Magick Convert to Change Channel Depth?
linux;command line;conversion;imagemagick
null
_unix.208838
I'm trying to retrieve the group ID of two groups (syslog and utmp) by name using an Ansible task. For testing purposes I have created a playbook to retrieve the information from the Ansible host itself.

---
- name: My playbook
  hosts: enabled
  sudo: True
  connection: local
  gather_facts: False
  tasks:
    - name: Determine GIDs
      shell: getent group {{ item }} | cut -d : -f 3
      register: gid_{{item}}
      failed_when: gid_{{item}}.rc != 0
      changed_when: false
      with_items:
        - syslog
        - utmp

Unfortunately I get the following error when running the playbook:

fatal: [hostname] => error while evaluating conditional: gid_syslog.rc != 0

How can I consolidate a task like this one into a parametrized form while registering separate variables, one per item, for later use? So the goal is to have variables based on the group name which can then be used in later tasks.

I'm using the int filter on gid_syslog.stdout and gid_utmp.stdout to do some calculation based on the GID in later tasks. I also tried using gid.{{item}} and gid[item] instead of gid_{{item}} to no avail.

The following works fine in contrast to the above:

---
- name: My playbook
  hosts: enabled
  sudo: True
  connection: local
  gather_facts: False
  tasks:
    - name: Determine syslog GID
      shell: getent group syslog | cut -d : -f 3
      register: gid_syslog
      failed_when: gid_syslog.rc != 0
      changed_when: false
    - name: Determine utmp GID
      shell: getent group utmp | cut -d : -f 3
      register: gid_utmp
      failed_when: gid_utmp.rc != 0
      changed_when: false
How would I register a dynamically named variable in an Ansible task?
ansible
I suppose there's no easy way for that. A register combined with a with_items loop just puts all the results into an array variable, .results. Try the following tasks:

  tasks:
    - name: Determine GIDs
      shell: getent group {{ item }} | cut -d : -f 3
      register: gids
      changed_when: false
      with_items:
        - syslog
        - utmp
    - debug:
        var: gids
    - assert:
        that:
          - item.rc == 0
      with_items: gids.results
    - set_fact:
        gid_syslog: "{{gids.results[0]}}"
        gid_utmp: "{{gids.results[1]}}"
    - debug:
        msg: "{{gid_syslog.stdout}} {{gid_utmp.stdout}}"

You cannot use variable expansion in set_fact keys either, like this:

    - set_fact:
        gid_{{item.item}}: "{{item}}"
      with_items: gids.results
_softwareengineering.299907
Sometimes I find it useful to have a single class with multiple instances (configured differently via their properties), rather than multiple classes (inheritance).

??? Pattern

Single class (Fruit)
Different fruit are instances of Fruit, with properties configured correctly.
Behavior implemented as blocks.

class Fruit {
    var name: String
    var color: UIColor
    var averageWeight: Double
    var eat: () -> ()
}

class FruitFactory {
    static func apple() -> Fruit {
        let fruit = Fruit()
        fruit.name = "Apple"
        fruit.color = UIColor.redColor()
        fruit.averageWeight = 50
        fruit.eat = {
            washFruit(fruit)
            takeBite(fruit)
        }
        return fruit
    }

    static func orange() -> Fruit {
        let fruit = Fruit()
        fruit.name = "Orange"
        fruit.color = UIColor.orangeColor()
        fruit.averageWeight = 70
        fruit.eat = {
            peelFruit(fruit)
            takeBite(fruit)
        }
        return fruit
    }
}

Inheritance Pattern

For reference, the same could have been implemented using inheritance:

Multiple classes (Fruit, Apple, Orange)
Different fruit are classes that inherit from Fruit.
Behavior implemented using standard methods that are overridden in subclasses.

class Fruit {
    var name: String
    var color: UIColor
    var averageWeight: Double

    func eat() {
        // abstract method
    }
}

class Apple: Fruit {
    var name = "Apple"
    var color = UIColor.redColor()
    var averageWeight = 50

    override func eat() {
        washFruit(self)
        takeBite(self)
    }
}

class Orange: Fruit {
    var name = "Orange"
    var color = UIColor.orangeColor()
    var averageWeight = 70

    override func eat() {
        peelFruit(self)
        takeBite(self)
    }
}

class FruitFactory {
    static func apple() -> Fruit {
        return Apple()
    }

    static func orange() -> Fruit {
        return Orange()
    }
}

What is the first pattern called? Are there any resources to help me decide when to use one of these patterns over the other?

Off the top of my head, I can think of at least one reason to prefer inheritance: imagine we need to add a new property to Apple, but not to Orange (e.g. averageCoreWeight). If you use inheritance this is trivial. If you use the first pattern, you will be left with a property that is only sometimes used.
What is the pattern that uses multiple instances rather than multiple classes called? When would I use it?
design patterns;inheritance
null
_webmaster.31814
Possible Duplicate: What is duplicate content and how can I avoid being penalized for it on my site?

My company's site's home page was not specifically optimized for any location. Now I am planning to optimize it for Boston, and create ten or so other landing pages for other locations we serve. If we made these new pages by copying the original Boston one and changing the location's name (s/Boston/Montreal/), would Google consider them duplicate pages and penalize us? What is the best practice for this?
Does Google penalize pseudo-duplicate pages for different locations?
seo;google;search engines;duplicate content
null
_webmaster.52096
Keyword research tools like Keyword Planner seem to fulfill two basic functions:

1. Generate a list of possible keywords
2. Provide estimates (CPC, traffic, ...) to whittle down this list to the most effective keywords

Do I need the second step? Is there any downside in uploading a huge list with thousands of keywords and just waiting to see how they perform? It's pay per click, so I'm not losing money on low-performing keywords. Ultimately I'm only interested in conversions, and that's a metric that can't be estimated by the tools anyway.

Edit: As Joshak points out, I need to remove all obviously non-converting keywords. What about other keywords that could theoretically convert but the estimates show it's unlikely? For example zero traffic, or a very high CPC so it's unlikely I will win the bid. It would be more work to remove them, and there is a slight chance that they bring conversions. Is there any downside in using them?
Should I enter only effective keywords into AdWords?
google adwords
Generally the step 1 you refer to generates a lot of keywords that are marginally relevant or keywords that have no chance of converting, these keywords will still likely drive clicks so step 2 helps you save some (or a lot) of money when you begin your campaigns. It doesn't take long to weed out those keywords that are obviously not helpful as well as gather ideas for negative keywords that will help your campaigns perform to the fullest. In short, even after you whittle down the list there will still be more keywords to weed out once you get real data, but I wouldn't just load the list from step 1 unless you have money to burn.
_unix.325549
I am working with an XML file that looks a bit like this

<w:ins w:id="0" w:author="Nick" w:date="2016-11-23T00:16:00Z">
<w:r w:rsidR="009C39E2">
<w:rPr>
<w:ins w:id="1" w:author="Nick" w:date="2016-11-23T00:16:00Z">

I am trying to delete everything involving w:date so the product would look like

<w:ins w:id="0" w:author="Nick">
<w:r w:rsidR="009C39E2">
<w:rPr>
<w:ins w:id="1" w:author="Nick">

Currently, I am trying this incorrect sed command:

sed 's/w:date=.*//g'

I know this is wrong but I am not sure how I would go about fixing this.

EDIT:

cat testing.txt
<w:ins w:id="0" w:author="Nick" w:date="2016-11-23T00:16:00Z">
<w:r w:rsidR="009C39E2">
<w:rPr>
<w:ins w:id="1" w:author="Nick" w:date="2016-11-23T00:16:00Z">

sed 's/ w:date="[^"]*"//g' testing.txt
<w:ins w:id="0" w:author="Nick">
Sed command for deleting an inclusive range of characters
sed;regular expression
Your expression is too greedy. You want to match the attribute, a quote, some non-quote characters, then the ending quote:

sed 's/ w:date="[^"]*"//g' file
# ..............^^^^

<w:ins w:id="0" w:author="Nick">
<w:r w:rsidR="009C39E2">
<w:rPr>
<w:ins w:id="1" w:author="Nick">
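A quick reproduction of the fix against a small sample file (a hedged sketch; plain POSIX sed is enough here, and the file path is arbitrary):

```shell
# Recreate a two-line sample and strip the quoted w:date attribute.
cat > /tmp/wdate-sample.xml <<'EOF'
<w:ins w:id="0" w:author="Nick" w:date="2016-11-23T00:16:00Z">
<w:r w:rsidR="009C39E2">
EOF
sed 's/ w:date="[^"]*"//g' /tmp/wdate-sample.xml
```

which prints <w:ins w:id="0" w:author="Nick"> followed by the untouched second line: the negated bracket expression stops at the first closing quote instead of eating the rest of the tag.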
_webmaster.86885
When I try to Create a similar ad on Facebook, there is the option to choose an existing ad set in which to publish the ad. But these settings are never updated on the ad creation page, and all ads I create also create a new ad set.

How do I create a new ad within an existing ad set?

I tried the Power Editor in Google Chrome, but every new ad was rejected even before upload for not complying with something Instagram-related, even if it was an exact duplicate of an existing (accepted and active) ad.
Creating new ad in existing ad set does not work
advertising;facebook
Worked fine the next morning. I guess the problem was that these functions rely on Javascript, and dynamic JavaScript functionality often fails without error (here in Germany) when the USA wake up and Facebook gets really busy. So if you have a similar problem, try when America sleeps. Solved the problem for me.
_codereview.153271
This is a continued discussion from (4 sum challenge) by return count only.

Problem

Given four lists A, B, C, D of integer values, compute how many tuples (i, j, k, l) there are such that A[i] + B[j] + C[k] + D[l] is zero.

To make the problem a bit easier, all of A, B, C, D have the same length N where \$0 \le N \le 500\$. All integers are in the range of \$-2^{28}\$ to \$2^{28} - 1\$ and the result is guaranteed to be at most \$2^{31} - 1\$.

Example:

Input:
A = [ 1, 2]
B = [-2,-1]
C = [-1, 2]
D = [ 0, 2]

Output:
2

Explanation:
The two tuples are:
(0, 0, 0, 1) -> A[0] + B[0] + C[0] + D[1] = 1 + (-2) + (-1) + 2 = 0
(1, 1, 0, 0) -> A[1] + B[1] + C[0] + D[0] = 2 + (-1) + (-1) + 0 = 0

I'm wondering if there are any ideas for a solution with less than \$O(n^2)\$ time complexity.

Source code in Python 2.7:

from collections import defaultdict

def four_sum(A, B, C, D):
    sum_map = defaultdict(int)
    result = 0
    for i in A:
        for j in B:
            sum_map[i+j] += 1
    for i in C:
        for j in D:
            if -(i+j) in sum_map:
                result += sum_map[-(i+j)]
    return result

if __name__ == "__main__":
    A = [1, 2]
    B = [-2, -1]
    C = [-1, 2]
    D = [0, 2]
    print four_sum(A, A, C, D)
4 sum challenge (part 2)
python;algorithm;programming challenge;python 2.7
Proof that it cannot be done (at least based on our current understanding) in much better than \$O(n^2)\$:Suppose A = B = C and D is a list of zeros. Then the problem reduces to finding three numbers in A that sum to zero. This is the famous 3SUM problem, for which we do not have a much better general solution than \$O(n^2)\$.
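The reduction is easy to check mechanically. Here is a standalone Python 3 sketch (the question's code is Python 2, so `four_sum` is re-implemented here rather than imported):

```python
from collections import defaultdict

def four_sum(A, B, C, D):
    # Same O(n^2) idea as the question: count pairwise sums of A x B,
    # then look up the negated pairwise sums of C x D.
    sums = defaultdict(int)
    for a in A:
        for b in B:
            sums[a + b] += 1
    return sum(sums[-(c + d)] for c in C for d in D)

# The answer's 3SUM reduction: with A = B = C and D = [0], four_sum
# counts exactly the ordered index triples (i, j, k) with
# A[i] + A[j] + A[k] == 0.
A = [-1, 0, 1, 2, -4]
print(four_sum(A, A, A, [0]))  # 13 ordered zero-sum triples
```

So any 4SUM-counting algorithm that is much faster than \$O(n^2)\$ would also solve 3SUM faster than the best known general bound.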
_softwareengineering.321361
I'm looking for suggestions on how to read large JavaScript codebases, for example, of a framework. For example, let's say P5js, but this applies to any large framework (i.e. like AngularJS, Ember, etc.)

My goal is to be able to look through a JavaScript framework's source code and be able to understand what various functions do and how they work. I want to be able to investigate the inner workings of the framework and understand what its important objects and variables are.

The problem is that the files are so large, functions that are exposed through the documentation internally call several more layers of private functions, and an assortment of internal objects and data structures are referred to. This is true for most frameworks I've examined. On top of that, there are also events, watchers and other mechanisms that make it harder to track what is happening under the hood.

With Java, this was a lot easier for me - though still time consuming - because I could open the project in Eclipse and easily navigate through the call stack, call hierarchies, identify types, parameters, etc. With JavaScript it just seems impossible.

So, what are some good techniques you could recommend for reading and understanding large (multi-thousand line) frameworks, particularly in JavaScript (though general cross-language techniques are also welcome)?
What are some good approaches for reading JavaScript code?
javascript;frameworks;reverse engineering;reading code
null
_unix.291304
I installed RHEL 7.2 and forgot to select the options before loading the OS, so my machine booted to console mode (runlevel 3). I installed some dependency packages; after a reboot I got GUI mode, but after every subsequent reboot the machine comes back up in console mode and I run the command init 5 for GUI mode. My question is: how do I set runlevel 5 as the default?
How to switch console mode to GUI mode on RHEL7?
rhel
null
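A hedged sketch of the usual fix: on systemd-based RHEL 7 the old runlevels are aliases for targets (runlevel 5 corresponds to graphical.target), and the default is set per target rather than in /etc/inittab:

```
systemctl get-default                    # show the current default target
systemctl set-default graphical.target  # make runlevel 5 (GUI) the default boot target
systemctl isolate graphical.target      # switch immediately, like init 5
```

This assumes the graphical target's dependencies (an X server and a display manager) are actually installed, which matches the question's observation that init 5 already works.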
_reverseengineering.13248
I tried firmware-mod-kit's extract-firmware.sh script and I receive the following output which ends with No supported filesystem found.The firmware belongs to the TL-WR740Nv5 router.The filesystem of the router is Squashfs 4.0.Here's the output:http://pastebin.com/FM9uE47tWhat do I do?
Firmware-mod-kit says No supported filesystem along with strange and long output
binary analysis;firmware;binary
null
_cogsci.3939
I'm working with a dataset wherein participants rate five different attributes of six device variants; the attribute ratings of the different variants are very tightly correlated, suggesting that this dataset has a problem with halo error--participants form an overall impression of the quality of the device, and then instead of reassessing the device for each attribute, they answer each attribute with their overall evaluation of the device. What strategies are used to combat this effect, either before execution in the design of the study or after execution in the analysis? Citations for evidence for any strategies are particularly desired.
Minimizing Halo Error
methodology;statistics;bias;survey;halo effect
Murphy & Cleveland (1995) mention that a good way to reduce rater errors in general is to inform raters of the existence and nature of these errors and then simply urge them to avoid them. While this reduces rater errors, it also decreases the accuracy of ratings, though. These findings come from the literature on performance assessment, where halo is usually thought of as the opposite of accuracy. The unexpected association has been termed the halo-accuracy paradox.

Some authors have proposed that the paradoxical effect is due to different operational definitions of halo (Fisicaro, 1988). Apparently there is some evidence that the paradoxical effect vanishes when this problem is taken care of.

An interesting explanation comes from Latham (Woehr & Huffcutt, 1994). For him it's all in the way that raters are informed about halo. If raters are told that halo is a global tendency across different ratings and that it is a bad thing, then raters are going to avoid exactly that. But that does not make the ratings more accurate. So when training raters one has to be careful not to create this kind of effect.

In contrast, a meta-analysis by Woehr & Huffcutt (1994) that investigates the effectiveness of different kinds of rater training does not find the paradoxical effect. Instead, rater training moderately decreases halo and increases accuracy. The authors also found support for Latham's hypothesis: the mean effect size when rater training was in accordance with his view was bigger compared to the other cases.

Still, Murphy and Cleveland (1995) are quite radical in their proposal. To them, there are serious problems with all operational definitions of halo (and rater errors in general). Therefore measurements of rater errors should be abandoned altogether. Hence, in their view it doesn't make sense to speak of a paradox.

References:

Fisicaro, S. A. (1988). A reexamination of the relation between halo error and accuracy. Journal of Applied Psychology, 73(2), 239.

Murphy, K. R., & Cleveland, J. N. (1995). Understanding performance appraisal: Social, organizational, and goal-based perspectives. Thousand Oaks: Sage.

Woehr, D. J., & Huffcutt, A. I. (1994). Rater training for performance appraisal: A quantitative review. Journal of Occupational and Organizational Psychology, 67(3), 189-205.
_unix.226438
The getpid system call returns the process id of the invoking process.How does the kernel figure out which process is invoking the system call ?
How does getpid work?
process;c;system calls
The kernel does job scheduling and provides system calls. When a process is running, the kernel schedules its runtime - in particular, it assigns a PID to it. Such information is stored inside the kernel address space, in data structures (e.g. inside a task struct).

Thus, when a process calls the getpid() system call, the kernel just has to look in the task structure of the calling (i.e. currently running) process.
_unix.247851
I found this line of script in a package added by Composer:

dir=$(echo $dir | sed 's/ /\ /g')

I tried in the Git Bash:

$ echo $(echo foo\bar\ foo/baz/ qux\\bax\\ | sed 's/ /\ /g')
foo\bar\ foo/baz/ qux\bax\

Can you explain how this works? I can't see a match for the double backslash.

EDIT:

Now I see my mistake. Turning the double backslash into one backslash in echo has nothing to do with sed. I don't have od in the Git Bash, but I tried:

$ echo foo\bar\ foo/baz/ qux\\bax\\ >in.txt
$ echo $(echo foo\bar\ foo/baz/ qux\\bax\\ | sed 's/ /\ /g') >out.txt
$ cmp -l in.txt out.txt
27  40  12
cmp: EOF on out.txt

The out.txt is one character shorter than in.txt. But I still don't get what sed 's/ /\ /g' actually does and why.

The entire context might be useful for viewers:

#!/usr/bin/env sh
dir=$(d=${0%[/\\]*}; cd $d; cd ../squizlabs/php_codesniffer/scripts && pwd)

# See if we are running in Cygwin by checking for cygpath program
if command -v 'cygpath' >/dev/null 2>&1; then
    # Cygwin paths start with /cygdrive/ which will break windows PHP,
    # so we need to translate the dir path to windows format. However
    # we could be using cygwin PHP which does not require this, so we
    # test if the path to PHP starts with /cygdrive/ rather than /usr/bin
    if [[ $(which php) == /cygdrive/* ]]; then
        dir=$(cygpath -m $dir);
    fi
fi

dir=$(echo $dir | sed 's/ /\ /g')
${dir}/phpcs $@
How works sed 's/ /\ /g'
sed
null
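The behaviour can be pinned down with a small experiment (GNU sed assumed; `printf` is used instead of `echo` to keep backslashes out of the shell's way):

```shell
# 1) In the replacement, GNU sed treats "\ " as just a space, so the
#    script's 's/ /\ /g' leaves spaces unchanged:
printf '%s\n' 'a b' | sed 's/ /\ /g'      # -> a b

# 2) To really turn every space into backslash+space (escaping spaces
#    in a path, which looks like the script's intent), the backslash
#    itself must be doubled:
printf '%s\n' 'a b' | sed 's/ /\\ /g'     # -> a\ b

# 3) The "disappearing" backslash in the question happens in the shell,
#    before sed ever runs: an unquoted \\ reaches the command as \ .
printf '%s\n' qux\\bax                    # -> qux\bax
```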
_codereview.60334
I am building a classifieds website here in Portugal, and I'm now in the security phase. So far, I have made what I think is a good measure against SQL injection:

$firstname = chunk_split(mysql_real_escape_string($_POST[firstname]),1,'.');
$lastname = chunk_split(mysql_real_escape_string($_POST[lastname]),1,'.');
$email = chunk_split(mysql_real_escape_string($_POST[email]),1,'.');
etc...

This will save valid and invalid emails, for example, like this:

good email: u.s.e.r.@.e.m.a.i.l.h.o.s.t...c.o.m.
bad email:

Example of text in the email field:

Y'; UPDATE table SET email = 'hacker@ymail.com' WHERE email = 'joe@ymail.com';

and after being stored in the database:

Y.\.'.;. .U.P.D.A.T.E. .t.a.b.l.e. .S.E.T. .e.m.a.i.l. .=. .\.'.h.a.c.k.e.r.@.y.m.a.i.l...c.o.m.\'. .W.H.E.R.E. .e.m.a.i.l. .=. .\.'.j.o.e.@.y.m.a.i.l...c.o.m.\.'.;.

So, I don't care if the size of the stored string is bigger, because I think this is a solid approach and a very fast one. And if I had to use other ways, the scripts would probably consume in time what is here exchanged for size.

But now I have the serious problem of XSS attacks. I'm trying to prevent only attacks based on text, not images or JavaScript, because I will not have untrusted data there.

The solution I found is based on the last example, and looks like this:

$str = "<script>alert('XSS attack');</script>";
$ad_title = chunk_split($str,1,"<span style='font-size:0px;'>.</span>");

So when the page loads, the users will see the inserted text and the alert will not work:

<script>alert('fabio');</script>

This is the source code:

<<span style='font-size:0px;'>.</span>s<span style='font-size:0px;'>.</span>c<span style='font-size:0px;'>.</span>r<span style='font-size:0px;'>.</span>i<span style='font-size:0px;'>.</span>p<span style='font-size:0px;'>.</span>t<span style='font-size:0px;'>.</span>><span style='font-size:0px;'>.</span>a<span style='font-size:0px;'>.</span>l<span style='font-size:0px;'>.</span>e<span style='font-size:0px;'>.</span>r<span style='font-size:0px;'>.</span>t<span style='font-size:0px;'>.</span>(<span style='font-size:0px;'>.</span>'<span style='font-size:0px;'>.</span>f<span style='font-size:0px;'>.</span>a<span style='font-size:0px;'>.</span>b<span style='font-size:0px;'>.</span>i<span style='font-size:0px;'>.</span>o<span style='font-size:0px;'>.</span>'<span style='font-size:0px;'>.</span>)<span style='font-size:0px;'>.</span>;<span style='font-size:0px;'>.</span><<span style='font-size:0px;'>.</span>/<span style='font-size:0px;'>.</span>s<span style='font-size:0px;'>.</span>c<span style='font-size:0px;'>.</span>r<span style='font-size:0px;'>.</span>i<span style='font-size:0px;'>.</span>p<span style='font-size:0px;'>.</span>t<span style='font-size:0px;'>.</span>><span style='font-size:0px;'>.</span>

Without being concerned about all the code it produces, my question is: is this enough or actually safe?
Is this safe against major XSS attacks?
javascript;html;security
Is it safe? Maybe. Is it the right way to do it? No.

Even if it were safe, there's a big problem with the SQL, and that's that you're inserting the dots after escaping, rather than before. I notice that for the input ', it will be escaped to \', and then your dot-insertion turns it into \.'.. Before, the \ was escaping the ', but now it isn't. Now it's escaping the .. I don't know if a malicious entity could do anything bad with that, but I wouldn't take any chances.

And here are some problems with the HTML:

Copy-and-pasting will catch those dots.
Search engines will catch those dots.
You break any Unicode characters, e.g. you turn  into ?.?.?.?.?.?.?.?.?.. (But I only have an old version of PHP installed; maybe newer versions are more intelligent, but in that case, you might have other problems like putting a combining character onto your >)

And that's not to mention the gigantic size increase, and potentially having invalid HTML all around; it's just not a very good way to do it.

So what's the right way to do it? Well, for inserting data into the database, you should be using PDO with prepared statements. Then you don't have to deal with SQL escaping at all: you prepare a query with placeholders, and send in the placeholder data, and since the data doesn't have to touch the query, you don't have to worry about that at all.

Sidebar: using PDO

You said that it'd be a lot of work to use PDO. Well, I've never found it particularly difficult. Your code perhaps looks like this:

mysql_connect('localhost', 'myapp', 'letmein');
mysql_select_db('myapp');
// ...
mysql_query("insert into users (firstname, lastname, email) values ('$firstname', '$lastname', '$email')");

But it's not actually that hard to use PDO. That code could be translated to use prepared statements and PDO like so:

$db = new PDO('mysql:host=localhost;dbname=myapp', 'myapp', 'letmein');
// ...
$stmt = $db->prepare('insert into users (firstname, lastname, email) values (:firstname, :lastname, :email)');
$stmt->bindValue('firstname', $_POST['firstname']); // look ma, no escaping!
$stmt->bindValue('lastname', $_POST['lastname']);
$stmt->bindValue('email', $_POST['email']);
$stmt->execute();

And besides being more secure, you're also moving onto something that the PHP developers have committed to keep in place: the mysql_* functions will be removed, as the PHP documentation says:

Warning: This extension is deprecated as of PHP 5.5.0, and will be removed in the future. Instead, the MySQLi or PDO_MySQL extension should be used. See also MySQL: choosing an API guide and related FAQ for more information.

End of sidebar

On the HTML side, using htmlentities or htmlspecialchars is the standard way to do it.

Then, finally, I'd like to point out one thing about how you're accessing POST data: you're currently using $_POST[firstname]. It's supposed to be an expression that's between the brackets, but you have the name of the field literally in there. It turns out that that's okay currently, as undefined constants evaluate to their name. But that does generate a notice, and depending on your error reporting level, you will probably see lots of notices in your error log saying that this is bad and/or deprecated. You should explicitly quote it: $_POST['firstname'].
_computergraphics.1566
I'm in the process of making a tool that requires a rendered texture to follow the contours of a piece of clothing. An example would be this website: https://knyttan.com/editor/jumper-editor/. The effect here is achieved by using a colour map:

I looked at the shaders that are used for this and it seems that the texture offset is calculated based on the colour channels from this map. Now I was wondering if this is a completely bespoke way of doing this, or if this is a known technique, and if it is, what is it called?
Help me find out what this texture mapping technique is called
texture;webgl
What you see in the image is called a UV map. That is, it is simply texture coordinates to be looked up, encoded in an image. The same thing happens in all texture lookups in 3D: there is an underlying sampler that decides where to pick the texture color from.

Image 1: Image showing UV map of two overlapped triangles and sampled texture with same UV coords

Here are the sources for those images, please do not overwrite the sources.

Demo showing the UV map
Demo showing the texture
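The indirection can be sketched as a tiny WebGL (GLSL ES) fragment shader; the uniform and varying names here are hypothetical, not taken from the site's actual shaders:

```
precision mediump float;
uniform sampler2D u_uvMap;   // red/green channels store the lookup coordinates
uniform sampler2D u_texture; // the artwork to be warped onto the garment
varying vec2 v_texCoord;

void main() {
    // First fetch *where* to sample from the colour map, then sample there.
    vec2 uv = texture2D(u_uvMap, v_texCoord).rg;
    gl_FragColor = texture2D(u_texture, uv);
}
```

Because the coordinates come from an image rather than from vertex attributes, the warp can follow per-pixel detail like knit folds.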
_webmaster.71164
I have a site at a big hosting company and I checked the apache log generated by the site and I saw requests like this in it:GET hostname/~username/proj/favicon.ico HTTP/1.1Where hostname is the hostname of the site, username is my username at the hosting company, and proj is the directory where the site is hosted.What surprises me is that the request shows the actual directory structure instead of the actual request (GET hostname/favicon.ico HTTP/1.1). Is this normal? Shouldn't it show the actual HTTP request, instead of this translated request path?I find it strange that the actual HTTP request does not appear in the log at all. Surely, the browsers don't know my username and the project directory at the hosting company, so it can't be the actual HTTP request what the client made.
Is it normal when the request line of apache log contains a ~username component?
apache;apache log files
null
_cstheory.19906
A language $L$ is called

i) locally testable in the strict sense iff there exist $P, S, I \subseteq X^*$ such that
$$ w \in L \mbox{ iff } pref^k(w) \in P, suffix^k(w) \in S, infix^k(w) \subseteq I$$
for some $k > 0$.

ii) locally testable iff for $u,v \in X^*$ the following holds: if $pref^k(u) = pref^k(v), suffix^k(u) = suffix^k(v), infix^k(u) = infix^k(v)$ then
$$ u \in L \mbox{ iff } v \in L.$$
Meaning: if two words coincide in their infixes, suffix and prefix up to a specific length $k > 0$, then they are either both in the language or both not.

iii) the class of locally testable events with order is defined as the smallest class of languages containing the locally testable languages and closed under the boolean operations union, intersection and complementation. (This could be equivalently defined with locally testable in the strict sense instead of locally testable.)

In what sense do they differ? That iii) contains more languages is clear; for example the language of words containing $00$ followed by $01$ is in iii) but not in ii) or i), I think (because it involves some kind of order in requiring that $01$ needs to follow $00$). But in what sense are ii) and i) different? What is a language contained in ii) but not in i)?
Difference between locally testable and its boolean closure
fl.formal languages;automata theory
In ii), you say that $u$ being in $L$ can be deduced only by knowing $pref^k(u)$, $inf^k(u)$ and $suff^k(u)$. This means that $L$ can be given by a set $E\subseteq (X^*\times 2^{X^*}\times X^*)$, namely $u\in L$ iff $(pref^k(u),inf^k(u),suff^k(u))\in E$.

The reason why condition i) is stronger is that it forces $E$ to be of the form $P\times 2^I\times S$.

An example of a language which is in ii) but not i) is therefore given by $E=\{ (a,X^*,a) : a\in X\}$. This means that $L$ is just the language of words whose last letter is equal to the first. $L$ is locally testable with $k=1$ for condition ii), but it is not for any $k$ for condition i).
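The $k=1$ case of this example can be checked mechanically; here is a small Python sketch (for $k=1$ only, while the answer's claim covers every $k$):

```python
def profile(w, k=1):
    """The k = 1 'scanner' view of a word: (prefix, set of infixes, suffix)."""
    return (w[:k], frozenset(w[i:i + k] for i in range(len(w) - k + 1)), w[-k:])

def in_L(w):
    # L from the answer, E = {(a, X*, a) : a in X}:
    # first letter equals last letter.
    return w[0] == w[-1]

# Condition ii) holds for k = 1: words with equal profiles agree on membership.
print(profile("aba") == profile("abba"), in_L("aba") == in_L("abba"))

# Condition i) fails for k = 1: "aa" and "bb" are in L, so any strict-sense
# definition would need "a" in P, "b" in S, and {"a", "b"} a subset of I --
# which would force "ab" into L, a contradiction:
print(in_L("aa"), in_L("bb"), in_L("ab"))
```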
_unix.320795
I have a cluster of 23 machines. CentOS 6.7 They have been successfully authenticating to AD via SSSD for over a year.Unbeknownst to me, someone moved one computer out of the search base OU and renamed it from lowercase to uppercase. It is now the only machine that cannot authenticate to AD.sssd.conf has case_sensitive = false ive also changed ldap_sasl_authid from lowercase to upper to match AD but still cannot connect after clearing /var/lib/sss/db and sssd restart. getent passwd only shows local accounts.When compared to other machines that are still authenticating, every service and config file is the same.Getting pushback from network folks on the rename. Is there any way to make this work ?
SSSD not authenticating to AD
active directory;sssd
null
_softwareengineering.279798
I'm creating a web application that is visually enormous. I'm talking 2 million pixels wide and 2 million pixels tall (about). My goal is to show dynamically changing spots all over the site. For loading's sake, would it be best to load a single large, low quality image per screen view? or show about 2000 html elements per screen view?I have about 255 sections on the site, each section is 120,390px square. I only show the correct section on the screen, which means I would only ever see 4 sections at a time. Inside of these sections I have little spots that are about 30px square. each little square is being pulled from a database to get the color, which could potentially change at any time. But these squares are only pulled when the user's view goes there. So at most, it would show around 2-3000 squares. Again, would it be less processing power and faster to load a single picture (let's say the picture is about 5000px wide, with the colors on it, it would be around 50k at low quality) or would it be faster/better to load every square individually?
Web App - Better to load Large, low quality image or many html elements?
html;image processing
null
_unix.268378
While reading about environment variables, the one I came across was LOGNAME. I'd like to know the difference between this variable and whatever the command logname returns - as both of them differed in what they returned.

-bash-3.2$ logname
user11
-bash-3.2$ echo $LOGNAME
user1

Although, whoami returns the same user as LOGNAME:

-bash-3.2$ whoami
user1
Difference between logname and $LOGNAME
users;environment variables;whoami
logname looks up the user that owns the tty (by reading it from /var/run/utmp), while $LOGNAME is an env variable that contains the user that executes the current shell process. You can easily verify this with the following commands:

# ssh guido@localhost
# whoami
guido
# w
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
guido    pts/3    localhost        13:02    0.00s  0.12s  0.03s sshd: guido [priv]
# echo $LOGNAME
guido
# sudo su
$ whoami
root
$ echo $LOGNAME
root
$ logname
guido
$ ps aux | grep bash
root      1145  0.5  0.1 110176  3604 pts/3   S    13:11   0:00 bash
root      1161  0.0  0.0 103304   844 pts/3   S+   13:11   0:00 grep bash
guido    28363  0.0  0.1 110048  3516 pts/3   Ss   13:02   0:00 -bash
_webmaster.38084
Last week I moved all the images on coffeeandvanilla.com to a CDN (maxcdn.coffeeandvanilla.com).

The problem I'm having is that although the sitemap generated by the Yoast WordPress SEO plugin points images to the correct location, Google only indexes images from the category and page sitemaps but 0 images from the posts sitemap (see screenshot https://dl.dropbox.com/u/4635252/sitemap.png).

This website had been doing quite well with Google image search before the change; visits from Google image search have dropped from ~200/day to 11 yesterday.

Here is an example entry from the generated posts.xml sitemap: http://pastebin.com/vcMRf9VW

Can anyone suggest where the problem lies? Why have I lost all my Google image juice? Should I just wait some more? How long before really worrying?
seo;google;images;google image search
Have you tried updating Yoast? It did have a rather nasty image sitemap bug:

1.2.8 Bug fixes: Fix for images not showing up in XML sitemap.

EDIT: Final answer

I suspected that may be the case. I can't take any credit for this, but please find someone with the same problem and the resultant solution from the Yoast plugin creator:

function wpseo_cdn_filter( $uri ) {
    return str_replace( 'http://example.com', 'http://cdn.example.com', $uri );
}
add_filter( 'wpseo_xml_sitemap_img_src', 'wpseo_cdn_filter' );

CDN Images in Sitemap
_unix.293739
I have a few PDF files which suffer the problem shown in the image:(the blue part is mouse-selected, the right part in black is how the entire document looks!)I tried /prepressing and even -dSUBSTFONTing them with ghostscript but it didn't help, and I'm clueless now.Any suggestions? ... Thanks.
evince: Bad PDF font rendering
fonts;pdf;evince;ghostscript
null
_unix.42877
I am working on some batch scripts involving the following:

Run some non-terminating sub-processes (asynchronously)
Wait for t seconds
Perform other task X for some time
Terminate subprocesses

Ideally, I would like to be able to differentiate the stdout of the sub-processes which has been emitted before X from that which has been emitted after X.

A few ideas come to mind, although I have no idea as to how I would implement them:

Discard stdout for t seconds
Insert some text (for instance, 'Task X started') to visually separate the sections
Split stdout into various output streams
Discard stdout of a command for t seconds
shell script;process management;stdout
While you could complicate the matter with exec and extra file descriptor wrangling, your second suggestion is the simplest. Before starting X, echo a marker string into the log file.

All those commands would be appending to the same file, so maybe it would be a good idea to prepend all output of X with a marker, so you can tell its output apart from that of the still-running previous commands. Something along the lines of:

{ X; } | sed 's,^,[X say] ,'

This would make further analysis much simpler. It is not safe, and for very verbose programs race conditions would happen often.

If you're willing to take the chance to break one log line and can interrupt the first batch of apps without consequence, this would work too:

{ Y; } >> log &
sleep $t
kill -STOP %%   # last job, like the same as %1
echo -e "\nX started" >> log
kill -CONT %%
{ X; } >> log2
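A deterministic toy version of the marker idea (hypothetical file and marker names; the real background jobs are replaced by plain echos so it runs anywhere):

```shell
log=$(mktemp)

# Phase 1: output that arrives before X starts.
{ echo "sub-process line 1"; echo "sub-process line 2"; } >> "$log"

# The marker separating the two phases.
echo '--- X started ---' >> "$log"

# Phase 2: X's output, prefixed so it can be told apart even if the
# sub-processes keep writing to the same file.
{ echo "doing X"; } | sed 's/^/[X say] /' >> "$log"

# Afterwards, everything from the marker on is the "during/after X" part:
sed -n '/^--- X started ---$/,$p' "$log"
rm -f "$log"
```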
_codereview.159399
I took the source code from this question, (thank you gaessaki for the motivation!) and did a lot of refactoring.

I use 4 projects:

Common: contains mainly interfaces and enumerations referenced by other projects.
Model: Knows nothing of the others, just maintains the game logic and state.
Presenter: References the Model directly and accesses the View via the IView interface. Acts as mediator between them.
View: References the Presenter. Is pretty dumb, can only query the board situation and the game status and display them. For every action (playing a move, restarting, etc.) it simply informs the presenter.

The View is actually the executable project which references all others and in its Program.cs instantiates a Model, a View, a Presenter and connects them. But other than that, the visual components only touch the presenter. I did not want to create another project just for this instantiation.

The intention is to be able to interchange the view (e.g. use console or WPF), keeping the model and the presenter.

I appreciate your comments. I have experience with C# and programming in general, but not with MVP. I think I tend to overcomplicate the design.

Help classes

ExceptionBuilder

using System;

namespace Mfanou.Common {
    public static class ExceptionBuilder {
        public static void CheckArgumentRangeInclusive(string varName, int value, int lowerRange, int upperRange) {
            if (value < lowerRange || value > upperRange)
                throw new ArgumentOutOfRangeException(varName);
        }
    }
}

Common

GameAction enumeration

namespace Mfanou.TicTacToe.Common {
    public enum GameAction {
        Restart,
        Exit
    }
}

Move enumeration

namespace Mfanou.TicTacToe.Common {
    public enum Move {
        ShowPreview,
        HidePreview,
        Play
    }
}

IPlayer

namespace Mfanou.TicTacToe.Common {
    public interface IPlayer {
        int Id { get; }
    }
}

ISquareContent

namespace Mfanou.TicTacToe.Common {
    public interface ISquareContent {
        bool IsEmpty { get; }

        /// <summary>
        /// Player whose piece is on the square.
        /// Valid only when IsEmpty is false.
        /// </summary>
        IPlayer Player { get; }

        /// <summary>
        /// True if the piece is a move preview.
        /// Valid only when IsEmpty is false.
        /// </summary>
        bool IsPiecePreview { get; }

        /// <summary>
        /// True if the piece is part of a game-winning piece sequence.
        /// Valid only when IsEmpty is false.
        /// </summary>
        bool IsWinning { get; }
    }
}

SquarePosition

using Mfanou.Common;

namespace Mfanou.TicTacToe.Common {
    public class SquarePosition {
        public static readonly int ROWCOL_MIN = 1;
        public static readonly int ROWCOL_MAX = 3;

        public SquarePosition(int row, int col) {
            CheckRowColRange(nameof(row), row);
            CheckRowColRange(nameof(col), col);
            Row = row;
            Column = col;
        }

        public int Row { get; }
        public int Column { get; }

        public override bool Equals(object obj) {
            if (obj == null || GetType() != obj.GetType()) return false;
            return Equals((SquarePosition)obj);
        }

        public bool Equals(SquarePosition sp) => Row == sp.Row && Column == sp.Column;

        public override int GetHashCode() => 1024 * Row.GetHashCode() + Column.GetHashCode();

        private void CheckRowColRange(string varName, int value) {
            ExceptionBuilder.CheckArgumentRangeInclusive(varName, value, ROWCOL_MIN, ROWCOL_MAX);
        }
    }
}

IGameStatus

namespace Mfanou.TicTacToe.Common {
    public interface IGameStatus {
        bool IsOver { get; }

        /// <summary>Valid only when IsEmpty is true.</summary>
        bool IsTie { get; }

        /// <summary>Valid only when IsEmpty is true and IsTie is false.</summary>
        IPlayer WinningPlayer { get; }
    }
}

IView

namespace Mfanou.TicTacToe.Common {
    public interface IView {
        void RefreshBoard();
        bool ConfirmAction(GameAction action);
        void Exit();
    }
}

Model

Game (main class, it being the actual model)

using Mfanou.TicTacToe.Common;
using System;

namespace Mfanou.TicTacToe.Model {
    public class Game {
        public Game() {
            MoveFactory = new MoveFactory(this);
            GameActionFactory = new GameActionFactory(this);
            Board = new Board();
            Turn = new Turn<IPlayer>(Player.GetAll());
            new RestartAction(this).Execute();
        }

        public event Action OnExit;

        public IGameStatus Status => InternalStatus;
        public MoveFactory MoveFactory { get; private set; }
        public GameActionFactory GameActionFactory { get; private set; }

        public ISquareContent GetSquareContent(SquarePosition position) => Board.GetSquare(position).Content;

        internal Board Board { get; }
        internal Turn<IPlayer> Turn { get; }
        internal GameStatus InternalStatus { get; set; }

        internal void Exit() {
            OnExit?.Invoke();
        }

        internal void UpdateStatus() {
            InternalStatus = StatusJudge.GetStatus(Board);
        }
    }
}

Board

using Mfanou.TicTacToe.Common;
using System.Collections.Generic;
using System.Linq;

namespace Mfanou.TicTacToe.Model {
    internal class Board {
        public Board() {
            _squares = new List<Square>();
            Reset();
        }

        public IEnumerable<Square> Squares => _squares;

        public IEnumerable<IEnumerable<Square>> RowsColumnsAndDiagonals => Rows.Concat(Columns).Concat(Diagonals);

        public Square GetSquare(SquarePosition position) => _squares.Where(sq => sq.Position.Equals(position)).Single();

        public void Reset() {
            _squares.Clear();
            for (int r = SquarePosition.ROWCOL_MIN; r <= SquarePosition.ROWCOL_MAX; r++)
                for (int c = SquarePosition.ROWCOL_MIN; c <= SquarePosition.ROWCOL_MAX; c++)
                    _squares.Add(new Square(new SquarePosition(r, c)));
        }

        private List<Square> _squares;

        private IEnumerable<IEnumerable<Square>> Rows => Squares.GroupBy(sq => sq.Position.Row);
        private IEnumerable<IEnumerable<Square>> Columns => Squares.GroupBy(sq => sq.Position.Column);

        private IEnumerable<IEnumerable<Square>> Diagonals {
            get {
                // Top left - bottom right diagonal: row equals column.
                yield return Squares.Where(sq => sq.Position.Row == sq.Position.Column);
                // Bottom left - top right diagonal: sum of row and column is constant.
                yield return Squares.Where(sq => sq.Position.Row + sq.Position.Column == SquarePosition.ROWCOL_MAX + SquarePosition.ROWCOL_MIN);
            }
        }
    }
}

GameAction subdir

To the GameAction enumeration in Common corresponds a simple hierarchy of action classes deriving from the GameAction abstract class. The Presenter uses the GameActionFactory provided by Game to translate the GameAction enum to a GameAction descendant class and then ask for a confirmation and/or execute it.

GameAction

using System.Linq;

namespace Mfanou.TicTacToe.Model {
    public abstract class GameAction {
        public GameAction(Game game) {
            Game = game;
        }

        public bool NeedsConfirmation() {
            if (Game.InternalStatus.IsOver) return false;
            bool boardHasMoves = Game.Board.Squares.Any((sq) => sq.Content.HasMove);
            return boardHasMoves;
        }

        public abstract void Execute();

        protected Game Game { get; }
    }
}

ExitAction

namespace Mfanou.TicTacToe.Model {
    internal class ExitAction : GameAction {
        internal ExitAction(Game game) : base(game) {}

        public override void Execute() {
            Game.Exit();
        }
    }
}

RestartAction

namespace Mfanou.TicTacToe.Model {
    internal class RestartAction : GameAction {
        internal RestartAction(Game game) : base(game) {}

        public override void Execute() {
            Game.Board.Reset();
            Game.Turn.Reset();
            Game.UpdateStatus();
        }
    }
}

GameActionFactory

using System;

namespace Mfanou.TicTacToe.Model {
    public class GameActionFactory {
        public GameActionFactory(Game game) {
            _game = game;
        }

        public GameAction CreateGameAction(Common.GameAction action) {
            GameAction gameAction;
            switch (action) {
                case Common.GameAction.Restart:
                    gameAction = new RestartAction(_game);
                    break;
                case Common.GameAction.Exit:
                    gameAction = new ExitAction(_game);
                    break;
                default:
                    throw new NotImplementedException();
            }
            return gameAction;
        }

        private Game _game;
    }
}

GameStatus subdir

GameStatus

using Mfanou.TicTacToe.Common;
using System;
using System.Collections.Generic;

namespace Mfanou.TicTacToe.Model {
    internal class GameStatus : IGameStatus {
        public static GameStatus Running() {
            return new GameStatus() { _isOver = false };
        }

        public static GameStatus Tie() {
            return new GameStatus() { _isOver = true, _isTie = true };
        }

        public static GameStatus Winner(IPlayer winner, IEnumerable<SquarePosition> winningSquares) {
            return new GameStatus() { _isOver = true, _isTie = false, _winner = winner, _winningSquares = winningSquares };
        }

        public bool IsOver => _isOver;

        /// <exception cref="InvalidOperationException">When game is not over.</exception>
        public bool IsTie {
            get {
                if (!IsOver) throw new InvalidOperationException();
                return _isTie;
            }
        }

        /// <exception cref="InvalidOperationException">When game is not over, or is a tie.</exception>
        public IPlayer WinningPlayer {
            get {
                if (!IsOver || IsTie) throw new InvalidOperationException();
                return _winner;
            }
        }

        /// <exception cref="InvalidOperationException">When game is not over, or is a tie.</exception>
        public IEnumerable<SquarePosition> WinningSquares {
            get {
                if (!IsOver || IsTie) throw new InvalidOperationException();
                return _winningSquares;
            }
        }

        private GameStatus() {}

        private bool _isOver;
        private bool _isTie;
        private IPlayer _winner;
        private IEnumerable<SquarePosition> _winningSquares;
    }
}

StatusJudge

using Mfanou.TicTacToe.Common;
using System.Collections.Generic;
using System.Linq;

namespace Mfanou.TicTacToe.Model {
    /// <summary>Contains logic for getting the game status.</summary>
    internal static class StatusJudge {
        public static GameStatus GetStatus(Board board) {
            // For each row, column, diagonal...
            foreach (IEnumerable<Square> squares in board.RowsColumnsAndDiagonals) {
                IEnumerable<SquarePosition> positions = squares.Select((sq) => sq.Position);
                IEnumerable<SquareContent> contents = squares.Select((sq) => sq.Content);

                // ...if all its squares are covered by the same player, it's a win.
                SquareContent singleContent = contents.Distinct().Count() == 1 ? contents.First() : null;
                if (singleContent != null && singleContent.HasMove)
                    return GameStatus.Winner(singleContent.Player, positions);
            }

            bool isBoardFull = board.Squares.All((sq) => sq.Content.HasMove);
            return isBoardFull ? GameStatus.Tie() : GameStatus.Running();
        }
    }
}

Move subdir

To the Move enumeration in Common corresponds here a class hierarchy deriving from the abstract Move class. The logic of whether a move is allowed and of its effects is thus extracted from the Game class. The presenter uses the MoveFactory provided by the Game to translate the Move enumeration to a Move class descendant, which can then be queried for allowing and executing a move requested by the form.

Move

using Mfanou.TicTacToe.Common;
using System;

namespace Mfanou.TicTacToe.Model {
    public abstract class Move {
        public Move(Game game, SquarePosition position) {
            Game = game;
            Position = position;
        }

        public SquarePosition Position { get; }

        public abstract bool CanExecute();

        public void Execute() {
            if (!CanExecute()) throw new InvalidOperationException();
            DoExecute();
        }

        protected Game Game;

        protected abstract void DoExecute();
    }
}

ShowPreviewMove

using Mfanou.TicTacToe.Common;
using System.Linq;

namespace Mfanou.TicTacToe.Model {
    internal class ShowPreviewMove : Move {
        public ShowPreviewMove(Game game, SquarePosition position) : base(game, position) {}

        public override bool CanExecute() {
            // No other preview should exist on the board.
            if (Game.Board.Squares.Any(sq => !sq.Content.IsEmpty && sq.Content.IsPiecePreview))
                return false;

            // Square should be empty.
return Game.Board.GetSquare(Position).Content.IsEmpty; } protected override void DoExecute() { Game.Board.GetSquare(Position).Content = SquareContent.WithPiecePreview(Game.Turn.Current); } }}HidePreviewMoveusing Mfanou.TicTacToe.Common;namespace Mfanou.TicTacToe.Model { internal class HidePreviewMove : Move { public HidePreviewMove(Game game, SquarePosition position) : base(game, position) {} public override bool CanExecute() { var targetContent = Game.Board.GetSquare(Position).Content; return !targetContent.IsEmpty && targetContent.IsPiecePreview; } protected override void DoExecute() { Game.Board.GetSquare(Position).Content = SquareContent.Empty(); } }}PlayMoveusing Mfanou.TicTacToe.Common;using System.Linq;namespace Mfanou.TicTacToe.Model { internal class PlayMove : Move { public PlayMove(Game game, SquarePosition position) : base(game, position) {} public override bool CanExecute() { // If there is a preview, it can only be played in the previewed square. var previewSquare = Game.Board.Squares. Where(sq => !sq.Content.IsEmpty && sq.Content.IsPiecePreview).SingleOrDefault(); if (previewSquare != null) return previewSquare.Position.Equals(Position); // No preview: It can only be played in an empty square. return Game.Board.GetSquare(Position).Content.IsEmpty; } protected override void DoExecute() { Game.Board.GetSquare(Position).Content = SquareContent.WithPiece(Game.Turn.Current); Game.UpdateStatus(); if (Game.InternalStatus.IsOver) { // If this move just won the game, highlight the winning squares. if (!Game.InternalStatus.IsTie) Game.InternalStatus.WinningSquares.ToList(). 
ForEach(sp => Game.Board.GetSquare(sp).Content = SquareContent.WithPieceWinning(Game.Turn.Current)); } else Game.Turn.MoveToNext(); } }}MoveFactoryusing Mfanou.TicTacToe.Common;using System;namespace Mfanou.TicTacToe.Model { public class MoveFactory { public MoveFactory(Game game) { _game = game; } public Move CreateMove(Common.Move action, SquarePosition position) { Move move; switch (action) { case Common.Move.ShowPreview: move = new ShowPreviewMove(_game, position); break; case Common.Move.HidePreview: move = new HidePreviewMove(_game, position); break; case Common.Move.Play: move = new PlayMove(_game, position); break; default: throw new NotImplementedException(); } return move; } private Game _game; }}Player subdirPlayerusing Mfanou.TicTacToe.Common;using System.Collections.Generic;namespace Mfanou.TicTacToe.Model { internal class Player : IPlayer { public static IEnumerable<IPlayer> GetAll() { for (int i=1; i<=NUM_PLAYERS; i++) yield return new Player(i); } public int Id { get; } public override bool Equals(object obj) { if (obj == null || GetType() != obj.GetType()) return false; return Equals((Player)obj); } public bool Equals(Player p) { return (Id == p.Id); } public override int GetHashCode() { return Id.GetHashCode(); } private static readonly int NUM_PLAYERS = 2; private Player(int id) { Id = id; } }}Turnusing Mfanou.TicTacToe.Common;using System.Collections.Generic;using System.Linq;namespace Mfanou.TicTacToe.Model { internal class Turn<T> { public Turn(IEnumerable<T> players) { _players = players.ToArray(); Reset(); } public IEnumerable<T> Players => _players; public T Current => _players[_indexCurrent]; public void Reset() { _indexCurrent = 0; } public void MoveToNext() { _indexCurrent++; if (_indexCurrent >= _players.Length) _indexCurrent = 0; } private T[] _players; private int _indexCurrent; }}Square subdirSquareusing Mfanou.TicTacToe.Common;namespace Mfanou.TicTacToe.Model { internal class Square { public Square(SquarePosition position) { 
Position = position; Content = SquareContent.Empty(); } public SquarePosition Position { get; } public SquareContent Content { get; set; } }}SquareContentusing Mfanou.TicTacToe.Common;using System;namespace Mfanou.TicTacToe.Model { internal class SquareContent : ISquareContent { public static SquareContent Empty() { return new SquareContent() { IsEmpty = true }; } public static SquareContent WithPiece(IPlayer player) { return new SquareContent() { IsEmpty = false, Player = player }; } public static SquareContent WithPiecePreview(IPlayer player) { return new SquareContent() { IsEmpty = false, Player = player, IsPiecePreview = true }; } public static SquareContent WithPieceWinning(IPlayer player) { return new SquareContent() { IsEmpty = false, Player = player, IsWinning = true }; } public bool IsEmpty { get; private set; } public bool HasMove => !IsEmpty && !IsPiecePreview; /// <exception cref=InvalidOperationException>When the square is empty.</exception> public IPlayer Player { get { if (IsEmpty) throw new InvalidOperationException(); return _player; } private set { _player = value; } } public bool IsPiecePreview { get { if (IsEmpty) throw new InvalidOperationException(); return _isPreview; } private set { _isPreview = value; } } public bool IsWinning { get { if (IsEmpty) throw new InvalidOperationException(); return _isWinning; } private set { _isWinning = value; } } public override bool Equals(object obj) { if (obj == null || GetType() != obj.GetType()) return false; return Equals((SquareContent)obj); } public bool Equals(SquareContent sc) { if (IsEmpty && sc.IsEmpty) return true; if (!IsEmpty && !sc.IsEmpty && Player == sc.Player && IsPiecePreview == sc.IsPiecePreview) return true; // Exactly one of {this,sc} IsEmpty. return false; } public override int GetHashCode() { return IsEmpty ? 
IsEmpty.GetHashCode() : IsEmpty.GetHashCode() * 1024 + Player.GetHashCode(); } public SquareContent Clone() { var sc = new SquareContent(); sc.IsEmpty = IsEmpty; if (!IsEmpty) { sc.Player = Player; sc.IsPiecePreview = IsPiecePreview; sc.IsWinning = IsWinning; } return sc; } private SquareContent() {} private IPlayer _player; private bool _isPreview; private bool _isWinning; }}PresenterGamePresenterusing Mfanou.TicTacToe.Common;using Mfanou.TicTacToe.Model;namespace Mfanou.TicTacToe.Presenter { public class GamePresenter { public GamePresenter(Game model, IView view) { Model = model; Model.OnExit += ExitGame; View = view; } public IGameStatus GameStatus => Model.Status; public ISquareContent GetSquareContent(SquarePosition position) => Model.GetSquareContent(position); public void RequestAction(Common.GameAction action) { Model.GameAction gameAction = Model.GameActionFactory.CreateGameAction(action); if (gameAction.NeedsConfirmation() && !View.ConfirmAction(action)) return; gameAction.Execute(); View.RefreshBoard(); } public void RequestMove(Common.Move action, SquarePosition position) { Model.Move move = Model.MoveFactory.CreateMove(action, position); if (!move.CanExecute()) return; move.Execute(); View.RefreshBoard(); } private Game Model { get; set; } private IView View { get; set; } private void ExitGame() { View.Exit(); } }}ViewProgram.csusing Mfanou.TicTacToe.Model;using Mfanou.TicTacToe.Presenter;using System;using System.Windows.Forms;namespace Mfanou.TicTacToe.UI.WinForms { internal static class Program { [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(CreateMainForm()); } static Form CreateMainForm() { var model = new Game(); var view = new TicTacToeForm(); var presenter = new GamePresenter(model, view); view.Presenter = presenter; return view; } }}TicTacToeFormusing Mfanou.TicTacToe.Common;using Mfanou.TicTacToe.Presenter;using Mfanou.UI.Winforms;using System;using 
System.Drawing;using System.Windows.Forms;namespace Mfanou.TicTacToe.UI.WinForms { public partial class TicTacToeForm : MyForm, IView { public TicTacToeForm() { InitializeComponent(); Size = VisualFormatter.FormDefaultSize; MinimumSize = VisualFormatter.FormMinimumSize; Text = VisualFormatter.GAME_TITLE; CreateMainPanel(); CreateMenu(); FormClosing += Form_Closing; } public GamePresenter Presenter { get { return _presenter; } set { _presenter = value; RefreshBoard(); } } public bool ConfirmAction(GameAction action) => VisualFormatter.ConfirmAction(action); public void RefreshBoard() { var status = Presenter.GameStatus; BoardPanel.Enabled = !status.IsOver; foreach (Control ctrl in BoardPanel.Controls) VisualFormatter.FormatSquare(ctrl, Presenter.GetSquareContent(GetSquarePosition(ctrl))); ResultLabel.Text = VisualFormatter.GameResult(status); } public void Exit() { _formOrderedToClose = true; Close(); } private GamePresenter _presenter; private bool _formOrderedToClose = false; private TableLayoutPanel BoardPanel; private Label ResultLabel; private void CreateMenu() { var RestartSubmenuItem = new ToolStripMenuItem() { Text = &Restart, ShortcutKeys = Keys.Control | Keys.N, }; var ExitSubmenuItem = new ToolStripMenuItem() { ShortcutKeys = Keys.Control | Keys.X, Text = E&xit, }; var GameMenuItem = new ToolStripMenuItem() { Text = &Game }; GameMenuItem.DropDownItems.AddRange(new ToolStripItem[] { RestartSubmenuItem, ExitSubmenuItem }); var LicenseSubmenuItem = new ToolStripMenuItem() { Text = &License }; var HelpMenuItem = new ToolStripMenuItem() { Text = &Help }; HelpMenuItem.DropDownItems.AddRange(new ToolStripItem[] { LicenseSubmenuItem }); MainMenuStrip = new MenuStrip(); Controls.Add(MainMenuStrip); MainMenuStrip.Items.AddRange(new ToolStripItem[] { GameMenuItem, HelpMenuItem }); RestartSubmenuItem.Click += RestartToolStripMenuItem_Click; ExitSubmenuItem.Click += ExitToolStripMenuItem_Click; LicenseSubmenuItem.Click += LicenseToolStripMenuItem_Click; } private void 
CreateMainPanel() { var MainPanel = new TableLayoutPanel() { Dock = DockStyle.Fill, Margin = new Padding(0) }; Controls.Add(MainPanel); MainPanel.RowStyles.Add(new RowStyle(SizeType.Percent, 100)); MainPanel.RowStyles.Add(new RowStyle(SizeType.Absolute, 20)); BoardPanel = CreateBoardPanel(); MainPanel.Controls.Add(BoardPanel, column: 0, row: 0); ResultLabel = CreateResultLabel(); MainPanel.Controls.Add(ResultLabel, column: 0, row: 1); } private TableLayoutPanel CreateBoardPanel() { var panel = new TableLayoutPanel() { Dock = DockStyle.Fill, Margin = new Padding(0), }; int size = SquarePosition.ROWCOL_MAX - SquarePosition.ROWCOL_MIN + 1; Construct2DGridInPanel(panel, size); return panel; } private void Construct2DGridInPanel(TableLayoutPanel panel, int size) { panel.ColumnCount = size; panel.RowCount = size; for (int i = 0; i < size; i++) { panel.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 100)); panel.RowStyles.Add(new RowStyle(SizeType.Percent, 100)); } for (int row = 0; row < size; row++) for (int col = 0; col < size; col++) panel.Controls.Add(CreateSquare(), row, col); } private Control CreateSquare() { var square = new Button() { FlatStyle = FlatStyle.Popup, Dock = DockStyle.Fill, Font = VisualFormatter.SquareFont, }; square.MouseEnter += Square_MouseEnter; square.MouseLeave += Square_MouseLeave; square.MouseClick += Square_MouseClick; return square; } private Label CreateResultLabel() { return new Label() { AutoSize = true, Dock = DockStyle.Left, Font = VisualFormatter.ResultLabelFont, ForeColor = Color.Red, }; } private SquarePosition GetSquarePosition(Control ctrl) { return new SquarePosition( SquarePosition.ROWCOL_MIN + BoardPanel.GetRow(ctrl), SquarePosition.ROWCOL_MIN + BoardPanel.GetColumn(ctrl) ); } private void Form_Closing(object sender, FormClosingEventArgs e) { // Form allowed to close only if ordered by presenter. 
if (!_formOrderedToClose) { Presenter.RequestAction(GameAction.Exit); e.Cancel = true; } } private void Square_MouseEnter(object sender, EventArgs e) { Presenter.RequestMove(Common.Move.ShowPreview, GetSquarePosition(sender as Control)); } private void Square_MouseLeave(object sender, EventArgs e) { Presenter.RequestMove(Common.Move.HidePreview, GetSquarePosition(sender as Control)); } private void Square_MouseClick(object sender, MouseEventArgs e) { Presenter.RequestMove(Common.Move.Play, GetSquarePosition(sender as Control)); } private void RestartToolStripMenuItem_Click(object sender, EventArgs e) { Presenter.RequestAction(GameAction.Restart); } private void ExitToolStripMenuItem_Click(object sender, EventArgs e) { Presenter.RequestAction(GameAction.Exit); } private void LicenseToolStripMenuItem_Click(object sender, EventArgs e) { new LicenseForm().ShowDialog(); } }}LicenseFormusing System;using System.IO; namespace Mfanou.TicTacToe.UI.WinForms { internal partial class LicenseForm : MyForm { public LicenseForm() { InitializeComponent(); Text = License; textBoxLicense.Text = File.ReadAllText( Path.Combine(AppDomain.CurrentDomain.BaseDirectory, Resources, License.txt)); } } }VisualFormatterusing Mfanou.TicTacToe.Common;using System;using System.Collections.Generic;using System.Drawing;using System.Linq;using System.Windows.Forms;namespace Mfanou.TicTacToe.UI.WinForms { internal class VisualFormatter { public static readonly string GAME_TITLE = Tic Tac Toe; public static Font SquareFont => new Font(Arial, 48F, FontStyle.Regular, GraphicsUnit.Point, 0); public static Font ResultLabelFont => new Font(Microsoft Sans Serif, 10F, FontStyle.Bold, GraphicsUnit.Point, 0); public static Size FormDefaultSize = new Size(500, 549); public static Size FormMinimumSize = new Size(331, 362); public static void FormatSquare(Control square, ISquareContent content) { Color STANDARD_FCOLOR = SystemColors.ControlText; Color STANDARD_BCOLOR = SystemColors.Window; if (content.IsEmpty) { 
square.Text = string.Empty; square.ForeColor = STANDARD_FCOLOR; square.BackColor = STANDARD_BCOLOR; } else { VisualPlayer player = ToVisualPlayer(content.Player); square.Text = player.BoardSquareMark.ToString(); square.ForeColor = content.IsPiecePreview ? player.MovePreviewForeColor : player.MoveForeColor; square.BackColor = content.IsWinning ? player.WinBackColor : STANDARD_BCOLOR; } } public static bool ConfirmAction(GameAction action) { const string CONFIRMATION = Confirmation; const string GAME_NOT_OVER = Game is not over.\nAre you sure you want to {0}?; var GameActionDescr = new Dictionary<GameAction, string>() { { GameAction.Restart, restart }, { GameAction.Exit, exit }, }; return MessageBox.Show( string.Format(GAME_NOT_OVER, GameActionDescr[action]), CONFIRMATION, MessageBoxButtons.YesNo, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2 ) == DialogResult.Yes; } public static string GameResult(IGameStatus status) { const string GAMERESULT_TIE = It's a tie!; const string GAMERESULT_WINNER = Player {0} wins!; if (!status.IsOver) return string.Empty; if (status.IsTie) return GAMERESULT_TIE; return string.Format(GAMERESULT_WINNER, ToVisualPlayer(status.WinningPlayer).Name); } private static VisualPlayer ToVisualPlayer(IPlayer player) { var VisualPlayers = new List<VisualPlayer>() { new VisualPlayer() { Id = 1, Name = X, BoardSquareMark = 'X', MovePreviewForeColor = Color.LightBlue, MoveForeColor = Color.Blue, WinBackColor = Color.LightBlue, }, new VisualPlayer() { Id = 2, Name = O, BoardSquareMark = 'O', MovePreviewForeColor = Color.LightCoral, MoveForeColor = Color.Crimson, WinBackColor = Color.LightCoral, }, new VisualPlayer() { Id = 3, Name = +, BoardSquareMark = '+', MovePreviewForeColor = Color.LightGreen, MoveForeColor = Color.Green, WinBackColor = Color.LightGreen, }, new VisualPlayer() { Id = 4, Name = $, BoardSquareMark = '$', MovePreviewForeColor = Color.PaleTurquoise, MoveForeColor = Color.DarkTurquoise, WinBackColor = Color.PaleTurquoise, }, 
        };
        if (VisualPlayers.Count(vp => vp.Id == player.Id) == 0) throw new ArgumentOutOfRangeException();
        return VisualPlayers.Single(vp => vp.Id == player.Id);
      }
  }
}

VisualPlayer

using Mfanou.TicTacToe.Common;
using System.Drawing;

namespace Mfanou.TicTacToe.UI.WinForms {
  internal class VisualPlayer : IPlayer {
    public int Id { get; set; }
    public string Name;
    public char BoardSquareMark;
    public Color MovePreviewForeColor;
    public Color MoveForeColor;
    public Color WinBackColor;
  }
}

My general comments:

- The messiest thing seems to be the TicTacToeForm. I experimented with code (instead of using the designer) for creating the menu and the grid of the board.
- ShowLicense should probably be an action, but then it should be a third kind.
- I try to keep the code explaining itself, so I used comments only where I thought it was not very clear from the code itself and/or the class/method/variable names what happens.
- I have on my todo list to extract all strings (including menu captions) to a resource and to write unit tests (none yet. q-:)

(This is a long question. Thank you anyway for reaching that far!)

P.S.: After reading the code here myself: instead of exposing GameActionFactory and MoveFactory in Model, I could simply expose the functions CreateGameAction and CreateMove. Two public classes less.
TicTacToe in MVP Winforms
c#;.net;tic tac toe;mvp
To begin with...

public static class ExceptionBuilder {
  public static void CheckArgumentRangeInclusive(string varName, int value, int lowerRange, int upperRange) {
    if (value < lowerRange || value > upperRange)
      throw new ArgumentOutOfRangeException(varName);
  }
}

This is not a builder. It's a validator, so I suggest naming it something like ArgumentValidator and the method ValidateArgumentRangeInclusive.

public enum Move {
  ShowPreview,
  HidePreview,
  Play
}

To me, ShowPreview and HidePreview are rather view options than something that has anything to do with Move.

public static readonly int ROWCOL_MIN = 1;
public static readonly int ROWCOL_MAX = 3;

We don't use UPPER_CASE for constants in C#, and the name ROWCOL isn't clear. Is it row or column? You can put them inside a static class to give them a better meaning.

private void CheckRowColRange(string varName, int value) {
  ExceptionBuilder.CheckArgumentRangeInclusive(varName, value, ROWCOL_MIN, ROWCOL_MAX);
}

This method can be made static because it does not require any state information from the owning class.

public interface IGameStatus {
  bool IsOver { get; }
  /// <summary>Valid only when IsEmpty is true.</summary>
  bool IsTie { get; }
  /// <summary>Valid only when IsEmpty is true and IsTie is false.</summary>
  IPlayer WinningPlayer { get; }
}

There is no IsEmpty.

public Game() {
  MoveFactory = new MoveFactory(this);
  GameActionFactory = new GameActionFactory(this);
  Board = new Board();
  Turn = new Turn<IPlayer>(Player.GetAll());
  new RestartAction(this).Execute();
}

The constructor should not be doing anything but initializing data. Something like new RestartAction(this).Execute(); is a very bad idea, and I'd be really surprised if I created an instance of a Game and it already did something even though it's not fully created yet.
What's even worse, the RestartAction dependency is not passed as such via the constructor, so there is no way to override it for testing.

public IGameStatus Status => InternalStatus;
internal GameStatus InternalStatus { get; set; }

This doesn't seem right. Why would you make the internal status settable and the public one not? This looks like hacking something.
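To make the testability point concrete — this is my illustration, not code from the question — the same idea can be sketched in Python: the start-up action becomes an injected dependency with a sensible default, so a test can substitute a recording double instead of the real restart. All class and attribute names here are hypothetical.

```python
class RestartAction:
    """Default start-up behaviour (stand-in for the C# RestartAction)."""
    def __init__(self, game):
        self.game = game

    def execute(self):
        self.game.board = [None] * 9  # clear the board


class Game:
    # The action is injected via the constructor, so a test can override it.
    def __init__(self, restart_action_factory=RestartAction):
        self.board = ['x'] * 9  # pretend there is stale state to clear
        self._restart = restart_action_factory(self)

    def start(self):
        # Work happens in an explicit method, not inside __init__.
        self._restart.execute()


class RecordingAction:
    """Test double: records that it ran instead of touching the board."""
    def __init__(self, game):
        self.game = game
        self.executed = False

    def execute(self):
        self.executed = True


game = Game(restart_action_factory=RecordingAction)
game.start()
print(game._restart.executed)  # True: the injected double ran, board untouched
```

The production code path stays `Game()` followed by `start()`; only tests pass a different factory.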
_unix.136229
Using st with dwm, depending on the way I select text (e.g. mouse vs keyboard), and where the text is (e.g. document body vs address bar), copying text from firefox and pasting it into st does not always work.Are there two different clipboards?Is there a way to unify them?
Copy/paste does not always work from Firefox to terminal
x11;firefox;clipboard
null
_cogsci.6406
Unfortunately the school I'm starting at this coming fall doesn't offer a cognitive science degree (note: I've just read that they do have a cognitive science lab in the psychology department), but seeing as cognitive science is a multidisciplinary field, I was wondering whether or not it would be unwise to focus on one particular sub-field (say, philosophy or computer science) and try to work my way into cognitive science from there.The problem is that I have a very strong background in computer science, and I'm up in the air about whether I want to study philosophy or linguistics. Based on my understanding, all of these fall under the umbrella of cognitive science, am I correct?Is it unheard of for someone to move into cognitive science from a linguistic or philosophical background? What do you suggest I do?
Can someone move into cognitive science from a linguistic or philosophical background?
study cognitive sciences
It is common for psychology departments to have a cognitive specialization available at the graduate level. At the undergrad level, I would expect to be forced to choose between psychology, neuroscience, linguistics, or something else. That is, cognitive science does not include philosophy or computer science by most academic systems for demarcating scientific domains. In scientific reality, all these domains bleed into one another of course, but you can probably expect to have to choose among them for any degree program. One can often pursue interdisciplinary study to some extent too (e.g., double-major, minor, or have two graduate advisors).It is not unheard of to approach any degree from any other degree / background as far as I'm aware. Of course it's less likely for a creative writing major to apply for an organic chemistry doctoral program as compared to a biology major, but that's not to say either option is impossible. I don't think it would be exceptionally difficult to transition into a cognitive science from a philosophy background, and I imagine it would be easier still to transition from linguistics to psychology or neuroscience, but it's probably the easiest to stick with a single degree program the whole way. Easiest is not necessarily best, of course.Consider how competitive you can be for each program, and compare with how badly you want to pursue each and what the world needs most from you. Consider what career options are for each, and which suits you best. Travel into the future and ask your future self what choices you regret...No other method is foolproof, unfortunately. Fortunately, career change is always an option, if not always easy.
_unix.336629
Does this mean that my HDD will not work? My /boot partition is formatted with XFS. I wanted to repair it, but I don't know if the data would get lost.

xfs_repair /dev/sdxx

Do I need to run this?
Unbootable system I/O error
hard disk;troubleshooting;xfs;badblocks
null
_codereview.135570
I wrote an age verification module in Python 2.5. How can I improve on current_year? import time perhaps?

current_year = 2016
year_of_birth = int(raw_input('Enter Year Of Birth: '))
age = current_year - year_of_birth
mytext = 'You are %s years old.'
print(mytext % age)

if age < 18:
    print('YOU SHALL NOT PASS!')
else:
    print('Welcome To The Portal.')
Age verification module in Python
python;datetime;validation
Current year

If you don't want the current year to be hardcoded, you could use the method today() from datetime.date.

from datetime import date

current_year = date.today().year

User input

You should always put your user input request in a try/except block, because you never know what the user will think and do. I'd go with:

def ask_for_birth_year():
    while True:
        try:
            return int(raw_input('Enter Year Of Birth: '))
        except ValueError:
            print('This is not a number, try again.')

This way it will keep asking until the user enters a proper number.

UPDATE (following comment):

If you need some restriction on the input number, you could try this kind of structure:

def ask_for_birth_year():
    while True:
        try:
            nb = int(raw_input('Enter Year Of Birth: '))
            if nb < 0:  # can be any condition you want, to say 'nb' is invalid
                print('Invalid year')
            else:  # if we arrive here, 'nb' is a positive number, we can stop asking
                break
        except ValueError:
            print('This is not a number, try again.')
    return nb

Other remarks

Since age is an integer, you'll prefer using %d instead of %s (strings) in your print call.

mytext = 'You are %d years old.'

It is also recommended that you put all level-0 code under a __name__ == '__main__' condition, to avoid having it launched later when you import this module. This is a good habit to take; you can read about it in the brand new StackOverflow Documentation here.

if __name__ == '__main__':
    # do stuff

Finally, the limit age (18) is what we call a magic number. It should be avoided, if you plan on making your code grow, and replaced by a meaningful constant.

# at the beginning
LIMIT_AGE = 18

# your 'if' statement
if age < LIMIT_AGE:
    ...

Altogether

from datetime import date

LIMIT_AGE = 18

def ask_for_birth_year():
    while True:
        try:
            nb = int(raw_input('Enter Year Of Birth: '))
            if nb < 0:
                print('Invalid year')
            else:
                break
        except ValueError:
            print('This is not a number, try again.')
    return nb

def print_message(age):
    mytext = 'You are %d years old.'
    print(mytext % age)
    if age < LIMIT_AGE:
        print('YOU SHALL NOT PASS!')
    else:
        print('Welcome To The Portal.')

if __name__ == '__main__':
    year_of_birth = ask_for_birth_year()
    current_year = date.today().year
    age = current_year - year_of_birth
    print_message(age)
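One caveat worth adding (my note, not part of the original answer): subtracting years alone overstates the age for anyone whose birthday hasn't occurred yet this year. If full birth dates are available, the exact age can be computed like this — a small sketch using only datetime.date:

```python
from datetime import date

def age_on(birth_date, today):
    # Subtract one year if this year's birthday hasn't happened yet;
    # comparing (month, day) tuples handles the ordering cleanly.
    before_birthday = (today.month, today.day) < (birth_date.month, birth_date.day)
    return today.year - birth_date.year - before_birthday

print(age_on(date(1998, 6, 15), date(2016, 6, 14)))  # 17: birthday not yet reached
print(age_on(date(1998, 6, 15), date(2016, 6, 15)))  # 18: birthday is today
```

This works because a bool is an int in Python, so subtracting `before_birthday` removes exactly one year when needed.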
_vi.2326
When I write VHDL, Vim uses a mix of tabs and spaces which aim to align columns beneath the last parenthesis. For example, Vim will produce something like

Inst_IMem: IMem PORT MAP(
                            CLK => clk,
                            ADDR => foo,
                            DATA => bar
                        );

instead of

Inst_IMem: IMem PORT MAP(
    CLK => clk,
    ADDR => foo,
    DATA => bar
);

How can I make Vim indent VHDL as it would indent programming languages such as C and Java? I.e. a new indentation level (either a tab or, say, four spaces) for every new nesting level.
Indenting VHDL as other programming languages
indentation
This seems to be fairly simple, you only need to use:

:let g:vhdl_indent_genportmap = 0

And you're done :-)

I found this in /usr/share/vim/vim74/indent/vhdl.vim:

" option to disable alignment of generic/port mappings
if !exists("g:vhdl_indent_genportmap")
  let g:vhdl_indent_genportmap = 1
endif

Which is used further below:

if g:vhdl_indent_genportmap
  return ind2 + stridx(prevs_noi, '(') + &sw
else
  return ind2 + &sw
endif

So if it's off (0), it will only indent a single shiftwidth; if it's on (1), it uses the location of the ( on the previous line + shiftwidth.

This (and some other things) is also documented in :help ft-vhdl-indent; I found this page by typing :help vhdl (a number of filetypes have their own help pages).
_softwareengineering.293815
I am working in a project in which at many point I need to change the code and fix the bug of the system but how can I inform other team members about this change? Usually I add single line comment to that particular point or create task in eclipse and write bug fix as follows,//Fixed PRJ110-345 called checkAndClear to clear myObjectmyObject = myObject.checkAndClear();But this does not look good when I change something in properties or sql file of the project.Moreover I need to search for this PRJ110-345 to find the bug fixed or use regular expression to find all issue number but those might have solved by other team member. What is the best way to keep track of issues in the code among the team member ? Sometimes I can not differentiate between my solved bugs and other team member's bug fix. Currently we don't have different accounts for issue tracking system (JIRA) and team leader just distributes the Bugs among team member and it's difficult for me to track my fixes every time.
Track Bug fixes in code
java;issue tracking
You shouldn't track bug fixes in the code. It might make sense to track some unfixed bugs in the code, as a warning to other developers that look at that code that it has bugs that you didn't get around to fixing it. Something like://Currently crashes - see PRJ110-345myObject.initialize();But noting in the code that the bug has been fixed is pointless, because the bug is something that shouldn't have been part of the code in the first place - why documenting something that isn't there and shouldn't be there? I can easily introduce new bugs to your code by randomly typing commands into it - for example I can set myObject to null at random places. Are you going to document on every line why myObject isn't set to null on that line?The place to put these comments is in the source control - and it sounds like you are not using source control, so you should start using one - I suggest Git. A source control (among other things it does) represents the history of your project as commits - each commit is a diff that (together with previous commits) describes how the code looked before the change and how it looks after the change, and allows you to enter a commit message that describes the change - what you did and/or why. This is where you document the bug fix.So, instead of looking at comments in the code you look at commit messages and see what your teammates did - here is how it looks in on BitBucket. That way, your code stays clean but you still get access to all the info.
_webapps.88694
The Webclipper is too much. I don't want the whole webpage, just the URL and a comment. Any ideas?
Way to save URL + comment into Evernote?
url;evernote
null
_unix.251871
I have a shell script which makes some analysis and prepares (i.e.: writes) some commands to run in a separated file.So I have something like that:echo my_command_to_run >> /tmp/file_command_to_run.txtI have the feeling that the program is slower and slower.Is it possible that the program takes longer when the file is bigger (~3M of lines)?I am also storing some stuff in memory, so this is also probably a source of my problem, but I just want to know if I need to redirect the output in different files. (e.g.: write several files of 2000 lines)EDIT:My script is preparing the move of ~64M (millions) files into a much better architecture. So I go through all the different structured folders, and prepare the move.I have such array in memory:topic1 -> /path/to/my/foldertopic1_number_of_files -> nbso my array is also getting bigger because I have several entries (max ~ 4'000). Otherwise this is always the same OPs which are run.Only my array and my file are getting bigger.EDIT2: Below is my scriptNote: I have several folders containing maximum 100'000 files inside.I can have : folder1 -> (source1__description1, source1__description2, source2__description3)Goal: have something like that:source1/folder1 -> (source1__description1, source1__description2)source2/folder1 -> (source2__description3, ...)Current performances:~900'000 lines inserted in 14 hours <=> this will take around 40 days to prepare all the move commands#!/bin/bashargument=$1if [[ -n $argument ]] && [[ -e $argument ]]; then html_folder=$argument echo We will move [folder]/files from your parameters: '$html_folder'else html_folder=/var/files/html_files/ echo NO PARAMETER (or folder does not exist) - We will move [folder]/files from $html_folderfi######################## create the list ########################filename=/var/files/html_files/list_folder.txt # list generated with ls -1 -f (this doesn't take everything in memory)ls -1 -fp $html_folder | grep '/$' | grep 'folder'> $filename#################### END 
create the list ########################echo # --------------------------------------------------------------# -------------- Global variables for moving part --------------# -------------------------------------------------------------- # Variables for storing the folder/files tree declare -A folder_array # array of folder '/files/publisher_html/10.3390' => 4 (i.e.: 4 folders for mdpi) declare -A folder_files_array # array of files in last folder '/files/publisher_html/10.3390' => 51 (i.e.: 51 files in the 4th folders for mdpi) storageFolder=/files/publisher_html/ nb_limit=100000 # max number of file per folder file_nb=0 current_folder=# --------------------------------------------------------------# --------------------------------------------------------------# --------------------------------------------------------------# --------------------------------------------------------------# -------------- Global functions for moving part --------------# -------------------------------------------------------------- countNumberOfFilesPerFolder () { nb=0 if [[ -e $1 ]]; then nb=$(ls -1fp $1 | grep -v '/$' | wc -l ) fi echo $nb } createFolderIfNeeded () { # $1 # first arg (/path/to/htmlfiles/10.3390) tmp_folder= nb_folder=1 nb_files=0 if [[ ! -e $1 ]]; then # if folder doesn't exist sudo mkdir -p $1/folder$nb_folder ; # create the folders if don't exist else #echo THE FOLDER $tmp_folder ALREADY EXISTED...BE AWARE!!! 
if [[ -e ${folder_array[$1]} ]]; then nb_folder=${folder_array[$1]} # take the value from memory if available else nb_folder=$(ls -1f $1 | grep folder | wc -l ) fi if (($nb_folder==0)); then # if no subfolder for the publisher folder nb_folder=1 nb_files=0 sudo mkdir -p $1/folder$nb_folder # simply create the first folder else # if [[ -e ${folder_files_array[$1]} ]]; then if [[ ${folder_files_array[$1]} ]]; then nb_files=${folder_files_array[$1]} # value from memory #echo value from MEEEEEM: $1 => $nb_files else nb_files=`countNumberOfFilesPerFolder $1/folder$nb_folder` #echo value from COOOOOOUNT: $1 => $nb_files fi if (($nb_files >= $nb_limit)); then # create a new folder + reset memory value ((nb_folder++)) nb_files=0 sudo mkdir -p $1/folder$nb_folder #`createFolderIfNeeded $1/folder$nb_folder` # NO CORRECT -> will create a subfolder fi fi fi #((nb_files++)) folder_files_array[$1]=$nb_files folder_array[$1]=$nb_folder current_folder=$1/folder$nb_folder # change the global variable } extractPrefix() { whotest[0]='test' || (echo 'Failure: arrays not supported in this version of bash.' && exit 2) array=(${1//__/ }) prefix=${array[0]} echo $prefix }# --------------------------------------------------------------# --------------------------------------------------------------# --------------------------------------------------------------toMoveFolder=$html_foldertoMove/toMoveFileIndex=1toMoveCmdNumber=0maxCmdInFile=2000if [[ ! 
-e $toMoveFolder ]]; then # if folder doesn't exist sudo mkdir -p $toMoveFolder ; # create the foldersficd $html_folderwhile read -r folder # for each folderdo if [[ -e $folder ]]; then echo Will manage folder: $folder# ---------------------------------------------------------------------------------------------------# -------------------------------------- MOVE INDIVIDUAL FILES --------------------------------------# --------------------------------------------------------------------------------------------------- argument=$html_folder$folder cpt=0 #argument=$1 if [[ -n $argument ]] && [[ -e $argument ]]; then html_files_folder=$argument else html_files_folder=/var/files/html_files/html_files/ fi ######################## create the list ######################## htmlList=/var/files/html_files/list_html.txt # list generated with ls -1 -f (this doesn't take everything in memory) ls -1f $html_files_folder > $htmlList # no need to exclude the . and .. (we exclude from the foreach) #################### END create the list ######################## echo current_folder=$storageFolder # probably useless while read -r line do name=$line if [[ $name != . ]] && [[ $name != .. ]]; then # don't take the folder itself prefix=`extractPrefix $name` if [ -n $prefix ]; then # change the global $current_folder # + create new subfolder if needed # + increment nb of files in folder createFolderIfNeeded $storageFolder$prefix ((cpt++)) if(( $toMoveCmdNumber >= $maxCmdInFile )); then toMoveCmdNumber=0 ((toMoveFileIndex++)) fi echo sudo mv $html_files_folder$name $current_folder/$name | sed -r 's/[\(\)]+/\\&/g' >> $toMoveFoldercommand_$toMoveFileIndex.txt ((toMoveCmdNumber++)) ((folder_files_array[$storageFolder$prefix]++)) if (( $cpt % 50 == 0 ));then echo echo Remind: folder -> $current_folder/ echo ${#folder_array[@]} publishers in memory! 
fi echo #$cpt - $name (${folder_files_array[$storageFolder$prefix]} files) else echo ERROR -> $name has not been moved as expected fi fi # >> $toMoveFolderfile$toMoveFileIndex.txt # <== does not take the toMoveFileIndex variation in consideration done < $htmlList # useful if we use the while echo Folder $html_files_folder has been processed echo # ---------------------------------------------------------------------------------------------------# ---------------------------------------------------------------------------------------------------# --------------------------------------------------------------------------------------------------- else # END if [[ -e $folder ]]; then echo ; echo ERROR -> folder $folder does NOT exist!; echo continue fidone < $filename # useful if we use the whileecho The script to prepare the move of the html files FROM FOLDER in other folders finished!echo echo echo FOLDER ARRAY AT THE END: for i in ${!folder_array[@]}; do echo folder : $i => nb_folder: ${folder_array[$i]} / nb__file in last folder: ${folder_files_array[$i]}; doneecho echo echo This is the end of the scriptAnd the partitions:$df -h/dev/sdb1 2.0T 370G 1.7T 19% /var/filesX.X.X.X:/files 11T 2.8T 7.2T 28% /filesLAST EDIT:After further analysis, I found that /var/files/html_files/ was a symlink to /files/html_files/So the source and destination were actually the same (remote) server.I placed my script to run on the remote server, and it seems to be much faster.Thanks for your help and interesting comments!
is linux redirect >> slower with bigger files?
io redirection
I just want to know if I need to redirect the output in different files. (e.g.: write several files of 2000 lines)

Splitting into a larger number of files will not necessarily equal faster execution. Three simple test cases illustrate this. These three cases print 3M lines each. They are listed in order of execution speed, fastest to slowest.

One redirection outside of the loop:

for i in $(seq $((3000000/2000))); do seq 2000; done > file

Appending to the same file, inside the loop:

for i in $(seq $((3000000/2000))); do seq 2000 >> file; done

Splitting output to multiple files:

for i in $(seq $((3000000/2000))); do seq 2000 > file$i; done

The latter commands consistently take more user and system time than the former commands. From this we can conclude that splitting into a larger number of files does not guarantee a performance increase in this simple case. The opposite is true.

Number of I/O Operations

The performance depends not only on the size of the file but also on the number of I/O operations. When appending (>>), even more I/O calls take place in order to seek to the end of the file.

This first script performs the I/O operation (>>) outside the for loop:

$ cat outloop.sh
#!/bin/sh
>file
for i in $(seq 1 ${1:?})
do
    echo $i
done >> file

This script, on the other hand, performs the I/O operation (>>) on each iteration, inside the for loop:

$ cat inloop.sh
#!/bin/sh
>file
for i in $(seq 1 ${1:?})
do
    echo $i >> file
done

Run and compare; see how the location of the >> operator affects performance:

$ x=500000; time sh outloop.sh $x; time sh inloop.sh $x

real    0m1.227s
user    0m0.389s
sys     0m0.859s

real    0m2.996s
user    0m0.809s
sys     0m2.197s

Placing the redirection operator outside the loop doubles the performance when writing 500000 lines (on my system).
_unix.3125
I am building a Linux kernel, via the Debian linux-2.6 source package. Now there's CONFIG_VZ_FAIRSCHED=y in a sub-config, which gets merged into the final .config, where apparently also y gets used:

# grep FAIRSCHED debian/config/**/*
debian/config/featureset-openvz/config:CONFIG_VZ_FAIRSCHED=y

The .config used during build:

# grep FAIRSCHED debian/build/build_amd64_openvz_amd64/.config
CONFIG_VZ_FAIRSCHED=y

I could understand the warning if n were now being used, but nothing appears to have been changed?!

This is the output during the make -f debian/rules.gen binary-arch_amd64_openvz_amd64 binary-indep call:

make[2]: Entering directory `/var/lib/vz/private/linux.nobackup/linux-2.6/debian/build/source_amd64_openvz'
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/basic/docproc
  HOSTCC  scripts/basic/hash
  GEN     /var/lib/vz/private/linux.nobackup/linux-2.6/debian/build/build_amd64_openvz_amd64/Makefile
  HOSTCC  scripts/kconfig/conf.o
  HOSTCC  scripts/kconfig/kxgettext.o
  SHIPPED scripts/kconfig/zconf.tab.c
  SHIPPED scripts/kconfig/lex.zconf.c
  SHIPPED scripts/kconfig/zconf.hash.c
  HOSTCC  scripts/kconfig/zconf.tab.o
  HOSTLD  scripts/kconfig/conf
scripts/kconfig/conf -R arch/x86/Kconfig
.config:3518:warning: override: VZ_FAIRSCHED changes choice state

What is this warning referring to?
What does warning: override: VZ_FAIRSCHED changes choice state mean?
kernel;configuration
null
_unix.94036
I have put together this script for recording the microphone, the desktop audio and the screen using ffmpeg:DATE=`which date`RESO=2560x1440FPS=30PRESET=ultrafastDIRECTORY=$HOME/Video/FILENAME=videocast`$DATE +%d%m%Y_%H.%M.%S`.mkvffmpeg -y -vsync 1 \-f pulse -ac 2 -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \-f pulse -ac 1 -ar 25000 -i alsa_input.usb-0d8c_C-Media_USB_Headphone_Set-00-Set.analog-mono \-filter_complex aresample=async=1,amix=duration=shortest,apad \-f x11grab -r $FPS -s $RESO -i :0.0 \-acodec libvorbis \-vcodec libx264 -pix_fmt yuv420p -preset $PRESET -threads 0 \$DIRECTORY$FILENAMEEverything is recorded and between the screen and the microphone sound there are no issues what so ever, however the desktop audio falls behind badly. It begins in sync but gets worse over time during playback, also in ffplay. It does not matter what application playing sound: both Youtube-videos in the browser, desktop sounds and Rhythmbox (playing a couple of seconds of song then stops, wait and repeat) gets out of sync.The terminal output complain about ALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred22.73 bitrate=10384.5kbits/s ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred and similar but I do not know what that means.Full terminal output here:ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers built on Aug 11 2013 14:52:28 with gcc 4.8.1 (GCC) 20130725 (prerelease) configuration: --prefix=/usr --disable-debug --disable-static --enable-avresample --enable-dxva2 --enable-fontconfig --enable-gpl --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-pic --enable-postproc --enable-runtime-cpudetect 
--enable-shared --enable-swresample --enable-vdpau --enable-version3 --enable-x11grab libavutil 52. 38.100 / 52. 38.100 libavcodec 55. 18.102 / 55. 18.102 libavformat 55. 12.100 / 55. 12.100 libavdevice 55. 3.100 / 55. 3.100 libavfilter 3. 79.101 / 3. 79.101 libavresample 1. 1. 0 / 1. 1. 0 libswscale 2. 3.100 / 2. 3.100 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 3.100 / 52. 3.100Guessed Channel Layout for Input Stream #0.0 : stereoInput #0, pulse, from 'alsa_output.pci-0000_00_1b.0.analog-stereo.monitor': Duration: N/A, start: 0.014093, bitrate: 1536 kb/s Stream #0:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/sGuessed Channel Layout for Input Stream #1.0 : monoInput #1, pulse, from 'alsa_input.usb-0d8c_C-Media_USB_Headphone_Set-00-Set.analog-mono': Duration: N/A, start: 0.006172, bitrate: 400 kb/s Stream #1:0: Audio: pcm_s16le, 25000 Hz, mono, s16, 400 kb/s[x11grab @ 0x218a6e0] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 2560 height: 1440[x11grab @ 0x218a6e0] shared memory extension foundInput #2, x11grab, from ':0.0': Duration: N/A, start: 1379021580.184321, bitrate: N/A Stream #2:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 2560x1440, -2147483 kb/s, 30 tbr, 1000k tbn, 30 tbc[libx264 @ 0x21ae560] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX[libx264 @ 0x21ae560] profile Constrained Baseline, level 5.0[libx264 @ 0x21ae560] 264 - core 133 r2339 585324f - H.264/MPEG-4 AVC codec - Copyleft 2003-2013 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0Output #0, matroska, to 
'/home/anders/Video/videocast12092013_23.33.00.mkv': Metadata: encoder : Lavf55.12.100 Stream #0:0: Audio: vorbis (libvorbis) (oV[0][0] / 0x566F), 25000 Hz, mono, fltp Stream #0:1: Video: h264 (libx264) (H264 / 0x34363248), yuv420p, 2560x1440, q=-1--1, 1k tbn, 30 tbcStream mapping: Stream #0:0 (pcm_s16le) -> aresample (graph 0) Stream #1:0 (pcm_s16le) -> amix:input1 (graph 0) amix (graph 0) -> Stream #0:0 (libvorbis) Stream #2:0 -> #0:1 (rawvideo -> libx264)Press [q] to stop, [?] for helpALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred22.73 bitrate=10384.5kbits/s ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurredALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred3.22 bitrate=10423.3kbits/s ALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred25.25 bitrate=11011.0kbits/s ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurredALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred5.76 bitrate=11013.7kbits/s ALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred27.25 bitrate=11175.4kbits/s ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred7.76 bitrate=11168.7kbits/s ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred8.24 bitrate=11176.4kbits/s ALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred55.48 bitrate=11243.8kbits/s ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurredALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurredframe=12871 fps= 30 q=-1.0 Lsize= 542369kB time=00:07:09.31 bitrate=10349.3kbits/s video:539762kB audio:2363kB subtitle:0 global headers:3kB muxing overhead 0.044476%[libx264 @ 0x21ae560] frame I:52 Avg QP:15.46 size:725888[libx264 @ 0x21ae560] frame P:12819 Avg QP:18.26 size: 40172[libx264 @ 0x21ae560] mb I I16..4: 100.0% 0.0% 0.0%[libx264 @ 0x21ae560] mb P I16..4: 2.6% 0.0% 0.0% P16..4: 18.1% 0.0% 0.0% 0.0% 0.0% skip:79.3%[libx264 @ 0x21ae560] coded y,uvDC,uvAC intra: 57.8% 49.8% 25.3% inter: 8.9% 8.7% 2.2%[libx264 @ 0x21ae560] i16 v,h,dc,p: 23% 29% 32% 16%[libx264 @ 0x21ae560] i8c dc,h,v,p: 45% 28% 18% 
9%[libx264 @ 0x21ae560] kb/s:10306.26Please help me, I am really close to get this working! UPDATE: The desktop audio is out of sync when skipping filter_complex and microphone also, bit in a smaller amount. Using copy instead of libvorbis does not change anything either.
Desktop audio falls behind when recording microphone + desktop audio + screen using ffmpeg
audio;pulseaudio;ffmpeg;screencasting
Not sure if this will fix it for you, but I have a script that I haven't had problems with. Comparing our two scripts, the only differences I can see are:

- my filter_complex is just amerge
- I force the use of 4 threads
- My audio codec is mp3lame

I'm thinking the audio codec change is the most relevant difference. I think that some audio codecs get interlaced with the video somehow so they can't get out of sync. Unfortunately I'm no video engineer so I can't be so sure.

Here is my script:

#!/usr/bin/bash

# video information
INRES=1920x1080
OUTRES=1280x720
FPS=24
QUAL=fast
FILE_OUT=$1

# audio information
PULSE_IN=alsa_input.pci-0000_00_1b.0.analog-stereo
PULSE_OUT=alsa_output.pci-0000_00_1b.0.analog-stereo.monitor

ffmpeg -f x11grab -s $INRES -r $FPS -i :0.0 \
    -f pulse -i $PULSE_IN -f pulse -i $PULSE_OUT \
    -filter_complex amerge \
    -vcodec libx264 -crf 30 -preset $QUAL -s $OUTRES \
    -acodec libmp3lame -ab 96k -ar 44100 -threads 4 -pix_fmt yuv420p \
    -f flv $FILE_OUT
_unix.374199
Assume a text string my_string:

$ my_string="foo bar=1ab baz=222;"

I would like to extract the alphanumeric string between the keyword baz and the semi-colon. How should I modify the following grep code, which uses regex assertions, so that it also excludes the trailing semi-colon?

$ echo $my_string | grep -oP '(?<='baz=').*'
222;
Extracting string via grep regex assertions
grep;regular expression;string
Unless the string that you want to extract may itself contain ;, the simplest thing is probably to replace . (which matches any single character) with [^;] (which matches any character excluding ;)$ printf '%s\n' $my_string | grep -oP '(?<='baz=')[^;]*'222With grep linked to libpcre 7.2 or newer, you can also simplify the lookbehind using the \K form:$ printf '%s\n' $my_string | grep -oP 'baz=\K[^;]*'222Those will print all occurrences in the string and assume the matching text doesn't contain newline characters (since grep processes each line of input separately).
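For comparison, the same extraction can be done in Python's re module (an illustration only; the variable names s, m, and m2 are mine, not from the post). The lookbehind and the negated character class behave the same way as in the grep -P version, and a capture group plays the role of the \K shortcut:

```python
import re

s = "foo bar=1ab baz=222;"

# lookbehind, mirroring grep -oP '(?<=baz=)[^;]*'
m = re.findall(r"(?<=baz=)[^;]*", s)

# capture group instead of lookbehind (the \K-style form)
m2 = re.findall(r"baz=([^;]*)", s)
```

Both yield the value without the trailing semi-colon.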
_unix.103551
I'm trying to install BLT2.4z, with Tcl/Tk 8.4. When I run the command make I see this:

(cd src; make all)
gcc -c -Wall -O6 -I. -I. -I/Users/scarter/tk8.4.20/unix/include -I/Users/scarter/tcl8.4.20/unix/include bltAlloc.c
error: invalid value '6' in '-O6'
make[1]: *** [bltAlloc.o] Error 1

What's going on here?
BLT2.z installation
make
null
_cs.66526
Given an undirected graph $G$ of $N$ nodes and a starting node $s$, I want to build, with dynamic programming, a table whose $(k, n)$-th entry is $1$ if node $n$ is reachable in exactly $k$ steps from node $s$.

Specifically, for the first row of the table, I set $(1, n)$ to $1$ if node $n$ is connected to node $s$. Then for the second row, I set $(2, n)$ to $1$ if node $n$ is connected to any of the nodes that have value $1$ in the first row. I can stop the construction whenever I encounter a repeated row. In other words, I want to build the table up to the point where further rows would be redundant, because exact copies of them already appear higher in the table. So $k$ should be expressed in terms of $N$.

I want to check whether this table construction can be done in polynomial time. I think it boils down to whether the number of rows is polynomially bounded. Is it?

My intuition says yes, because in the worst case, where the $N$ nodes are connected in series one after another and the starting node $s$ is at one end, I can still reach the other end in $N-1$ steps.
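For concreteness, here is a minimal Python sketch of the construction I have in mind (the adjacency-list representation and the stop-on-first-repeated-row mechanics are my own illustrative choices). Note that the sketch only implements the construction; it does not settle whether the number of distinct rows is polynomially bounded, which is exactly my question:

```python
def reachability_table(adj, s):
    """Build the rows of the (k, n) table: row k marks the nodes reachable
    from s in exactly k steps.  Stops as soon as a row repeats, since from
    that point on the row sequence cycles.  `adj` is an adjacency list
    (adj[v] = neighbours of v), `s` is the start node."""
    n = len(adj)
    # row for k = 1: neighbours of s
    row = [1 if v in adj[s] else 0 for v in range(n)]
    rows, seen = [], set()
    while tuple(row) not in seen:
        seen.add(tuple(row))
        rows.append(row)
        # row for k+1: node v is reachable in k+1 steps iff some
        # neighbour of v is reachable in k steps
        row = [1 if any(row[u] for u in adj[v]) else 0 for v in range(n)]
    return rows
```

On the serial worst case mentioned above (a path graph with $s$ at one end), the rows alternate parity along the path and the construction stops after a few rows.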
Nodes reached with exactly $k$ steps in an undirected graph?
graphs;graph theory
null
_cstheory.19459
What is informational density, and why does the numeral system with base e (2.71828...) have the maximum informational density? How do you calculate the informational density of a given numeral system?
Numeral system information density
it.information theory
Say you want to represent integer values from 0 to 999. You need 3 digits, each with 10 states: 30 states overall. When using binary digits, you need 10 digits with 2 states each: 20 states overall. So the binary representation is more dense. In the general case, p*ln(N)/ln(p) states are required. This formula has its minimum at p=e. However, this is pure theoretical speculation. A numeral system with base 3 is slightly more dense than one with base 2, and 3-based computers were really built, but they turned out to be more complex than binary computers.
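The p*ln(N)/ln(p) formula above can be checked numerically with a short Python sketch (the choice N=1000 is arbitrary; any N > 1 gives the same ordering, since N only scales the curve):

```python
import math

def total_states(p, N):
    """Total states needed to count from 0 to N-1 in base p:
    number of digits, log(N)/log(p), times p states per digit --
    the p*ln(N)/ln(p) formula from the answer."""
    return p * math.log(N) / math.log(p)

# compare a few bases; the real-valued minimiser of p/ln(p) is p = e
costs = {p: total_states(p, 1000) for p in (2, math.e, 3, 10)}
```

Among the integer bases, 3 comes out slightly cheaper than 2, matching the remark about ternary computers.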
_cs.69084
I was reading about CRC coding from two books:Data Communication and Networking by Forouzan Page 294Computer Network by Tanenbaum Page 188They use following notations:$d(x)$: dataword to be sent (as a polynomial)$c(x)$: codeword sent (as a polynomial)$e(x)$: error (as a polynomial)$c(x)+e(x)$: codeword sent with error introduced (if any)$g(x)$: generator polynomial to be used at CRC encoder (for creating c(x)) and decoder (for checking if error is introduced or not during transmission)Forouzan then states following:A single-bit error is $e(x)=x^i$, where i is the position of the bit. If a single-bit error is caught, then $e(x)=x^i$ is not divisible by $g(x)$. (Note that when we say not divisible, we mean that there is a remainder.) If $g(x)$ has at least two terms and the coefficient of $x^0$ is not zero (the rightmost bit is 1), then $e(x)$ cannot be divided by $g(x)$ and all single bits errors can be caught.He then gives example of generator:$g(x)=x+1$ saying that it can catch all single bit error (with which I have some doubts)$g(x)=x^3$ saying that all single-bit errors in positions 1 to 3 are caught rest are left uncaught (with which I dont have any confusion).For example, consider the below example for generator $g(x)=x+1$:Tanenbaum says:If $g(x)$ contains two or more terms, $e(x)$ will never divide into $g(x)$, so all single-bit errors will be detected.My doubts:Q1. Forouzan states additional requirement to Tanenbaum that the coefficient of $x^0$ is not zero (the rightmost bit is 1). Whats correct? and why? Will generator $g(x)=x^3+x^2$ (which satisfies Tanenbaum's statement) capture all single bit errors? Do we always need $+1$ in $g(x)$ for capturing any single bit error?Q2. I feel I dont understand the reason/logic behind statements made by both authors, thats why I am not able to decide on myself whether $+1$ should be there in $g(x)$ or not. Whats the logic then behind both of statements (whichever is correct)? 
More precisely: why is $e(x)$ with a single-bit error not divisible by $g(x)$ if $g(x)$ has at least two terms and the coefficient of $x^0$ is not zero, as Forouzan says, or if $g(x)$ contains two or more terms, as Tanenbaum says (whichever is correct)?

I must be missing something very basic here.
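To make Q1 concrete, here is a quick GF(2) sanity check I can run (polynomials encoded as Python bitmasks, bit i being the coefficient of $x^i$; this is just the computation behind the doubt, not an answer). It divides $e(x)=x^i$ by the $g(x)=x^3+x^2$ example from Q1, which satisfies Tanenbaum's two-term condition but not Forouzan's $x^0$ condition, and records any $i$ for which the remainder is zero:

```python
def gf2_mod(dividend, divisor):
    """Remainder of polynomial division over GF(2); polynomials are
    bitmasks (bit i = coefficient of x^i), so addition is XOR."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        # cancel the leading term of the dividend
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

g = 0b1100  # x^3 + x^2: two or more terms, but coefficient of x^0 is 0
misses = [i for i in range(64) if gf2_mod(1 << i, g) == 0]
```

For this particular $g(x)$ the list of misses comes out empty for $i < 64$, i.e. no single-bit error polynomial $x^i$ is divisible by it; whether the $x^0$ coefficient is required in general is exactly what I am asking.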
Polynomial generator required to detect single bit error in Cyclic Redundancy Check codes
computer networks;polynomials;crc
null
_webapps.105746
I am presently working on an application that very specifically requires me to have page numbers in the top left hand corner. As I do not have Word on my present computer, and am most familiar with Docs, I would like to use it for this application. Unfortunately, the only options Docs seems to allow me off of the Insert tab are top and bottom right hand corners for page numbers. Is there a hack or app that grants a way around this? Or am I just missing something?
How to move page numbers in a Google Doc?
google documents
null
_unix.52991
After a long struggle I finally seem to have installed the non-free wireless firmware for my wireless NIC. I'm trying to set up a file server, so I want to configure the network to be static. Would one of you guys mind helping me?For example I don't know what my /etc/network/interfaces file should look like, currently it looks like this:auto loiface lo inet loopbackallow-hotplug wlan1iface wlan1 inet static address 192.168.10.111 netmask 255.255.255.0 network 192.168.10.0 broadcast 192.168.10.255 gateway 192.168.10.1 # wireless-* options are implemented by the wireless-tools package wireless-mode managed wireless-essid Optimus Pwn wpa-psk s:roonwolf # I changed this from wiresless-key1 or something like that dns-* options are implemented by the resolvconf package, if installed dns-nameservers 192.168.10.1 dns-search localdomainMy ifconfig command looks like this:lo Link encap: Local Loopbackinet addr: 127.0.0.1 Mask: 255.0.0.0inet6 addr: ::1/128 Scope: HostUP LOOPBACK RUNNING MTU:16436 Metric:1RX packets: 95 errors: 0 dropped: 0 overruns: 0 frame: 0TX packets: 95 errors: 0 dropped: 0 overruns: 0 carrier: 0collisions: 0 txqueuelen: 0RX bytes: 10376 (10.1KiB) TX bytes: 10376 (10.1 KiB)wlan1 Link encap: Ethernet HWaddr 00:18:f3:85:99:07inet addr:192.168.10.111 Bcast:192.168.10.255 Mask:255.255.255.0UP BROADCAST MULTICAST MTU:1500 Metric:1RX packets:0 errors:0 dropped:0 overruns:0 frame:0TX packets:0 errors:0 dropped:0 overruns:0 carrier: 0collisions:0 txqueuelen:1000RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)Here's what I get when I iwlist my ssid:wlan1 Scan completed : Cell 01 - Address: 00:14:D1:A4:0A:36 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=70/70 Signal level=-17 dBm Encryption key:on ESSID:Optimus Pwn Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=00000003ff6381c1 Extra: Last beacon: 100ms ago IE: Unknown: 000B4F7074696D75732050776E IE: Unknown: 
010882848B960C121824 IE: Unknown: 030106 IE: Unknown: 0706555320010B1B IE: Unknown: 200100 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (1) : TKIP Authentication Suites (1) : PSK IE: Unknown: 2A0100 IE: Unknown: 32043048606C IE: Unknown: DD180050F2020101070003A4000027A4000042435E0062322F00 IE: Unknown: DD1E00904C334C101BFFFF000000000000000000000000000000000000000000 IE: Unknown: 2D1A4C101BFFFF000000000000000000000000000000000000000000 IE: Unknown: DD1A00904C3406001900000000000000000000000000000000000000 IE: Unknown: 3D1606001900000000000000000000000000000000000000 IE: Unknown: DD0900037F01010000FF7FWhen I ping 192.168.10.101(My primary desktop) I getPING 192.168.10.101 (192.168.10.101) 56(84) bytes of data.From 192.168.10.111 icmp_seq=2 Destination Host UnreachableWhen I ping google.com I get(after a lengthy pause):ping: unknown host google.comWhat exactly am I doing wrong here? Should I restart the network?
Configuring Wireless Network
linux;debian;networking;configuration;wifi
Have you tried Network Manager? It's easy to set up static IPs for wireless networks using the GUI. Once you get things working there, if you want the connection available all the time even when you're not logged in (e.g. for a file server), just select the Connect Automatically and Available to all users checkboxes.If you're allergic to GUIs, you can configure the connection by creating a file in the /etc/NetworkManager/system-connections/ directory, as described on this page.
_softwareengineering.254714
Is the following method considered to be doing one thing only? I'm wondering about that since it takes an optional argument.

public function findErrors($name = null)
{
    if ($name) {
        return isset($this->errors[$name]) ? $this->errors[$name] : [];
    }

    return $this->errors ?: [];
}

If not, would it be better / would it matter to have it separated like the following:

public function findErrors()
{
    return $this->errors ?: [];
}

public function findErrorsOf($name)
{
    return isset($this->errors[$name]) ? $this->errors[$name] : [];
}
Does this function do one thing only?
design patterns;php;functions;methods
No, not really. It's clearly got two independent paths. It would be better to separate them into two (possibly overloaded) functions to better decouple them. Better yet, I would look to eliminate the name specific behavior unless it's really that common. What happens if you want to find errors since Thursday? What about errors that contain the word Banana? You shouldn't go back to edit this code every time you have a new search criteria - and you shouldn't have one way to find names and another way to find bananas.
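A rough Python analogue of that split (hypothetical function names; the predicate variant is only a sketch of the "new search criteria" point above, not code from the post). Instead of editing the finder for every new kind of search, the criterion is passed in:

```python
def find_errors(errors):
    """All recorded errors: one list of messages per name."""
    return list(errors.values())

def find_errors_of(errors, name):
    """Errors recorded under one name, or [] if there are none --
    the findErrorsOf half of the split."""
    return errors.get(name, [])

def find_errors_where(errors, predicate):
    """Generic search: keep the entries for which predicate(name,
    messages) holds, so Thursday/banana-style criteria need no new
    method."""
    return {k: v for k, v in errors.items() if predicate(k, v)}
```

For example, the banana search from the answer becomes find_errors_where(errors, lambda k, v: "banana" in k).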
_codereview.37736
Some time ago I created a markdown parser in clojure and I would like to get some feedback, since I'm a clojure noob in the first place (is the code understandable? / is it idiomatic? / can some things be improved?). So I'm looking for feedback on best practices and design pattern usage (performance isn't my main concern).

The most relevant parts are:

blocks.clj

(ns mdclj.blocks
  (:use [clojure.string :only [blank? split]]
        [mdclj.spans :only [parse-spans]]
        [mdclj.misc]))

(defn- collect-prefixed-lines [lines prefix]
  (when-let [[prefixed remaining] (partition-while #(startswith % prefix) lines)]
    [(map #(to-string (drop (count prefix) %)) prefixed) remaining]))

(defn- line-seperated [lines]
  (when-let [[par [r & rrest :as remaining]] (partition-while (complement blank?) lines)]
    (list par rrest)))

(declare parse-blocks)

(defn- create-block-map [type content & extra]
  (into {:type type :content content} extra))

(defn- clean-heading-string [line]
  (-> line
      (to-string)
      (clojure.string/trim)
      (clojure.string/replace #" #*$" "") ;; match space followed by any number of #s
      (clojure.string/trim)))

(defn match-heading [[head & remaining :as text]]
  (let [headings (map vector (range 1 6) (iterate #(str \# %) "#")) ;; ([1 "#"] [2 "##"] [3 "###"] ...)
        [size rest] (some (fn [[index pattern]]
                            (let [rest (startswith head pattern)]
                              (when (seq rest) [index rest])))
                          headings)]
    (when (not (nil? rest))
      [(create-block-map ::heading (parse-spans (clean-heading-string rest)) {:size size}) remaining])))

(defn- match-underline-heading [[caption underline & remaining :as text]]
  (let [current (set underline)
        marker [\- \=]
        markers (mapcat #(list #{\space %} #{%}) marker)]
    (when (and (some #(= % current) markers)
               (some #(startswith underline [%]) marker)
               (< (count (partition-by identity underline)) 3))
      [(create-block-map ::heading (parse-spans caption) remaining {:size 1}) remaining])))

(defn- match-horizontal-rule [[rule & remaining :as text]]
  (let [s (set rule)
        marker [\- \*]
        markers (mapcat #(list #{\space %} #{%}) marker)]
    (when (and (some #(= % s) markers)
               (> (some #(get (frequencies rule) %) marker) 2))
      [{:type ::hrule} remaining])))

(defn- match-codeblock [text]
  (when-let [[code remaining] (collect-prefixed-lines text "    ")]
    [(create-block-map ::codeblock code) remaining]))

(defn- match-blockquote [text]
  (when-let [[quote remaining] (collect-prefixed-lines text "> ")]
    [(create-block-map ::blockquote (parse-blocks quote)) remaining]))

(defn- match-paragraph [text]
  (when-let [[lines remaining] (line-seperated text)]
    [(create-block-map ::paragraph (parse-spans (clojure.string/join "\n" lines))) remaining]))

(defn- match-empty [[head & remaining :as text]]
  (when (and (blank? head) (seq remaining))
    (parse-blocks remaining)))

(def ^:private block-matcher
  [match-heading match-underline-heading match-horizontal-rule
   match-codeblock match-blockquote match-paragraph match-empty])

(defn- parse-blocks [lines]
  (lazy-seq
    (when-let [[result remaining] (some #(% lines) block-matcher)]
      (cons result (parse-blocks remaining)))))

(defn parse-text [text]
  (parse-blocks (seq (clojure.string/split-lines text))))

spans.clj

(ns mdclj.spans
  (:use [mdclj.misc]))

(def ^:private formatter
  [["`" ::inlinecode] ["**" ::strong] ["__" ::strong] ["*" ::emphasis] ["_" ::emphasis]])

(defn- apply-formatter [text [pattern spantype]]
  "Checks if text starts with the given pattern.
   If so, return the spantype, the text enclosed in the pattern, and the remaining text"
  (when-let [[body remaining] (delimited text pattern)]
    [spantype body remaining]))

(defn- get-spantype [text]
  (let [[spantype body remaining :as match] (some #(apply-formatter text %) formatter)]
    (if (some-every-pred startswith [body remaining] ["*" "_"])
      [spantype (-> body (vec) (conj (first remaining))) (rest remaining)]
      match)))

(defn- make-literal [acc]
  "Creates a literal span from the acc"
  {:type ::literal :content (to-string (reverse acc))})

(declare parse-spans)

(defn- span-emit [literal-text span]
  "Creates a vector containing a literal span created from literal-text and 'span' if literal-text, else 'span'"
  (if (seq literal-text)
    [(make-literal literal-text) span] ;; if non-empty literal before next span
    [span]))

(defn- concat-spans [acc span remaining]
  (concat (span-emit acc span) (parse-spans [] remaining)))

(defn- parse-span-body
  ([body] (parse-span-body nil body))
  ([spantype body]
   (if (in? [::inlinecode ::image] spantype)
     (to-string body)
     (parse-spans [] body)))) ;; all spans except inlinecode and image can be nested

(defn- match-span [acc text] ;; matches ::inlinecode ::strong ::emphasis
  (when-let [[spantype body remaining :as match] (get-spantype text)] ;; get the first matching span
    (let [span {:type spantype :content (parse-span-body spantype body)}]
      (concat-spans acc span remaining))))

(defn- extract-link-title [text]
  (reduce #(clojure.string/replace % %2 "")
          (to-string text)
          [#"\"$" #"'$" #"^\"" #"^'"]))

(defn- parse-link-text [linktext]
  (let [[link title] (clojure.string/split (to-string linktext) #" " 2)]
    (if (seq title)
      {:url link :title (extract-link-title title)}
      {:url link})))

(defn- match-link-impl [acc text type]
  (when-let [[linkbody remaining :as body] (bracketed text "[" "]")]
    (when-let [[linktext remaining :as link] (bracketed remaining "(" ")")]
      (concat-spans acc
                    (into {:type type :content (parse-span-body type linkbody)}
                          (parse-link-text linktext))
                    remaining))))

(defn- match-link [acc text]
  (match-link-impl acc text ::link))

(defn- match-inline-image [acc [exmark & remaining :as text]]
  (when (= exmark \!)
    (match-link-impl acc remaining ::image)))

(defn- match-break [acc text]
  (when-let [remaining (some #(startswith text %) ["  \n\r" "  \n" "  \r"])] ;; match hard-breaks
    (concat-spans acc {:type ::hard-break} remaining)))

(defn- match-literal [acc [t & trest :as text]]
  (cond
    (seq trest) (parse-spans (cons t acc) trest) ;; accumulate literal body (unparsed text left)
    (seq text) (list (make-literal (cons t acc))))) ;; emit literal (at end of text: no trest left)

(def ^:private span-matcher
  [match-span match-link match-inline-image match-break match-literal])

(defn parse-spans
  ([text] (parse-spans [] text))
  ([acc text] (some #(% acc text) span-matcher)))

misc.clj

(ns mdclj.misc)

(defn in?
  "true if seq contains elm"
  [seq elm]
  (some #(= elm %) seq))

(defn startswith [coll prefix]
  "Checks if coll starts with prefix. If so, returns the rest of coll, otherwise nil"
  (let [[t & trest] coll
        [p & prest] prefix]
    (cond
      (and (= p t) ((some-fn seq) trest prest)) (recur trest prest)
      (= p t) '()
      (nil? prefix) coll)))

(defn partition-while
  ([f coll] (partition-while f [] coll))
  ([f acc [head & tail :as coll]]
   (cond
     (f head) (recur f (cons head acc) tail)
     (seq acc) (list (reverse acc) coll))))

(defn- bracketed-body [closing acc text]
  "Searches for the sequence 'closing' in text and returns a list containing the elements before and after it"
  (let [[t & trest] text
        r (startswith text closing)]
    (cond
      (not (nil? r)) (list (reverse acc) r)
      (seq text) (recur closing (cons t acc) trest))))

(defn bracketed [coll opening closing]
  "Checks if coll starts with opening and ends with closing. If so, returns a list of the elements between 'opening' and 'closing', and the remaining elements"
  (when-let [remaining (startswith coll opening)]
    (bracketed-body closing '() remaining)))

(defn delimited [coll pattern]
  "Checks if coll starts with pattern and also contains pattern.
   If so, returns a list of the elements between the pattern and the remaining elements"
  (bracketed coll pattern pattern))

(defn to-string [coll]
  "Takes a coll of chars and returns a string"
  (apply str coll))

(defn some-every-pred [f ands ors]
  "Builds a list of partial function predicates with function f and all values in ands
   and returns if any argument in ors fullfills all those predicates"
  (let [preds (map #(partial f %) ands)]
    (some true? (map #((apply every-pred preds) %) ors))))

Some highlights:

(def ^:private block-matcher
  [match-heading match-underline-heading match-horizontal-rule
   match-codeblock match-blockquote match-paragraph match-empty])

(defn- parse-blocks [lines]
  (lazy-seq
    (when-let [[result remaining] (some #(% lines) block-matcher)]
      (cons result (parse-blocks remaining)))))

This piece always seemed somewhat strange to me. Is using a list of functions and when-let idiomatic here? Are there alternatives?

(defn- create-block-map [type content & extra]
  (into {:type type :content content} extra))

I'm using this function to create hashmaps in a certain format. Is this an idiomatic approach?

P.S.: While looking at the code myself, I spot two minor things: I can use when-not instead of (when (not (..., and clojure.string/join coll instead of (apply str coll).
Idiomatic clojure code in a markdown parser
parsing;clojure;markdown
This is overall very impressive! Here are my thoughts:It looks like you've pretty much written your parser from scratch -- you have it set up so that it takes text as an input and dissects it line by line, looking for blocks and spans, and labeling and converting them appropriately. This is great, but a much easier way to go about this would be to use a parsing library like instaparse. Using this approach, you define a simple grammar as either a string or a separate text file, and then use instaparse to turn it into a custom parser, then you just use the parser as a function on the text, returning a parse tree that contains all of the information you need, in either hiccup or enlive format. There's a little bit of a learning curve if you've never defined a grammar before, but I found it pretty easy to learn, and instaparse is one of the most intuitive parsing libraries out of the handful that I tried. I would recommend this method -- it lets you worry more about defining your grammar and leave the implementation details to instaparse. At this point you've already done most (all?) of the work manually, so you might want to stick with the structure you have in place, but you should at least consider re-doing it with a parsing library -- it would at least make it easier to add new Markdown features.I think your block-matcher/parse-blocks section is elegant and idiomatic as far as I'm concerned. It's a nice demonstration of first-class functions in Clojure. The only thing is, I'm not sure that you need to wrap it in a lazy-seq, since don't you need to realize the entire sequence? I haven't looked super thoroughly at your code, but I'm assuming you would use this function to parse all the lines of text from the input, so this sequence might not necessarily need to be lazy. It all depends on how you're using that function, though.I think your create-block-map function is nice and idiomatic, too. 
It's such a simple function that you could potentially do without it, and just have all of your match-* functions return something like this:

[{:type ::heading :content (parse-spans (clean-heading-string rest)) :size size} remaining]

But you have so many different match-* functions, it would get tedious having that show up under every single one of them, so I think you did the right thing by pulling it out into a separate function, thereby enabling you to express the above as just, e.g., (create-block-map ::heading (parse-spans (clean-heading-string rest)) {:size size}). My only suggestion would be to consider renaming it to just block-map for the sake of simplicity -- that's just a minor aesthetic preference, though.

Lastly, I saw this bit in your match-heading function:

(when (not (nil? rest))
  [(create-block-map ...

You could simplify this to just:

(when rest
  [(create-block-map ...

Since you're only using rest in a boolean context (it's either nil or it's a non-nil value) within a when expression, you can just use the value of rest itself. If it's not nil or false, the rest of the expression will be returned, otherwise nil will be returned. The only reason you might not want to do it this way is if you still want the rest of the when expression to be returned if the value of rest is false -- i.e., literally anything but nil. If that's not a concern, though, (when rest ... is more concise and idiomatic.

Hope that helps. You're off to a great start!
_webmaster.10841
Whether clickjacking is an ethically responsible way of earning advertisement revenues is a subjective discussion and should not be discussed here.However, it appears that quite a lot of popular sites generate popups when you click either of their links or buttons. An example is the Party Poker advertisement (I am sure many of you will have seen this one).I wonder though, what kind of advertisement companies allow such techniques? Surely Google Adsense does not? But which do, and are they reliable partners?Update: an example of an organization using clickjacking to earn advertisement revenues is The Pirate Bay. When clicking on a torrent link, advertisements of Party Poker will popup.
Advertisement programs that allow clickjacking (earning advertisement revenues by popups generated by clicks on the website)?
google adsense;advertising
null
_unix.47444
The problem is that I fail to SSH to a remote machine via its hostname, while using its IP address works. The hostname returned by the command hostname is california_desert, while the name returned by the command nslookup $IP_address is pcpp3238782. They do not match each other. I think that's why I cannot connect to the remote machine using its hostname. I have checked /etc/hosts, /etc/hostname, and /etc/sysconfig/network: all set the hostname to california_desert. I also checked /etc/resolv.conf; the name server is set to the right one. I also tried strace, but found no new clue. Can anybody please help?
Fail to ssh to remote machine via hostname
linux;ssh;hostname
The problem here is that the hostname and hosts files are only used for the computer they're on. In order for other computers to be able to use the hostname, it needs to be in the DNS zone for the domain.Think of it like this - you get a phone, and it has a phone number 555-5555. You now know that to call California_desert, you need to dial 555-5555. But nobody else knows this. In order for others to know how to reach you, you need to register your phone number in the directory. DNS is that directory service.Of course, you can also tell a friend that your number is 555-5555 and then they can call you directly without looking it up in the directory. For a unix system, this would be like adding the hostname and ip for California_desert to the hosts file on every server that wants to connect to it.
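To make the "tell a friend your number directly" workaround from the answer concrete: if DNS can't be updated, the mapping can be added to the hosts file of each client machine instead. The IP address below is a placeholder — substitute california_desert's real address:

```
# /etc/hosts on the machine that wants to connect
# (placeholder IP -- use california_desert's real address)
192.0.2.10    california_desert
```

After that, ssh california_desert on that client resolves via the hosts file with no DNS change. The proper fix, as described above, is still to register the name in the DNS zone so every host can resolve it.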
_cs.69129
Let's say we have a pipeline with 20 stages. If the test for the jump condition is done at stage 14 and the prediction turns out to be wrong, then the instructions that entered those 14 stages should not have been processed, so the processor loses 14 cycles. Since the jump instruction entered the pipeline 14 stages earlier, I deduced from this behaviour that 14 cycles were lost. Am I correct? If not, why?
Branch wrong prediction pipeline
cpu;cpu pipelines
It's not correct. The instruction that would determine the result of the condition is read at cycle x. The conditional branch is read at cycle x + k. k can be any value, depending on the code. For example I can have a comparison compare x and 0, then half a dozen unrelated instructions, then an instruction branch if the comparison result is 'greater'. You are saying the correct condition is determined at cycle x + 14. You lose 14 - k cycles.You may find out 14 cycles later that the branch was predicted wrongly, or 8 cycles later, or in the next cycle. What is lost is the cycles between branch prediction and the point where the correct way became known, which is variable. In this situation, compilers will often try to issue the comparison as early as possible before the branch instruction, to minimise the penalty for an incorrect branch. To clarify: If a compare instruction is read at cycle x, and the result of the comparison is available at cycle x+14, and a conditional branch instruction is read at cycle x + k, then it doesn't matter at which stage in the pipeline the incorrect branch is detected. What matters is the time from the start of the conditional branch, to the time of detection of the incorrect branch, and that time is variable. If the compare instruction is issued early enough then the branch isn't even predicted because the result of the compare instruction is known.
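To make the arithmetic concrete, here is a small hypothetical helper — it does not model any real CPU, it just encodes the answer's "you lose 14 - k cycles" relation, with names chosen purely for illustration:

```cpp
// Hypothetical model: cycles lost on a mispredicted branch, where the
// condition-producing instruction's result becomes known `resolve_delay`
// cycles after it is issued, and the branch itself is issued `k` cycles
// after that instruction.
int mispredictionPenalty(int resolve_delay, int k) {
    int penalty = resolve_delay - k;
    // If the branch is issued after the condition is already resolved,
    // prediction is unnecessary and nothing is wasted.
    return penalty > 0 ? penalty : 0;
}
```

With the question's numbers: a branch issued 6 cycles after its compare wastes 8 cycles on a misprediction, while a branch issued once the result is already available wastes none — which is why compilers try to schedule the comparison as early as possible before the branch.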
_codereview.66446
I've made a program that outputs the most common words in a txt file. Does anybody know how to optimize it so that it would work for bigger files and run faster?

#include <iostream>
#include <string>
#include <fstream>
#include <cstdlib>
#include <vector>
#include <algorithm>
#include <math.h>

using namespace std;

int main(){
    ifstream in("file.txt");
    if(!in){
        cerr << "Could not open file.txt.";
        return EXIT_FAILURE;
    }
    string str, str2, strn, tab[10000], tab2[10000];
    int i, k, j, n, l, tabl;
    char c = 179;
    vector<int> tabs;
    vector<string> stringi;
    while(getline(in, str2)){
        str += str2;
        str += ' ';
    }
    k = 0;
    for(i = 0; i < str.length(); i++){
        if(str[i] != ' ' && str[i] != '.' && str[i] != '\t' && str[i] != ',' && str[i] != ';' && str[i] != ':' && str[i] != '}' && str[i] != '{'){
            tab[k] += tolower(str[i]);
        }else{
            k++;
        }
        if(str[i] == '.' || str[i] == '\t' || str[i] == ',' || str[i] == ';' || str[i] == ':' || str[i] == '}' || str[i] == '{') {
            k--;
        }
    }
    tabl = k;
    k = 0;
    for(i = 0; i < tabl; i++){
        for(j = 0; j < tabl; j++){
            if(tab[i] == tab[j]){
                k++;
            }
        }
        tabs.push_back(k);
        k = 0;
    }
    for(i = 0; i < tabl; i++){
        for(j = 0; j < tabl-1; j++){
            if(tab[j] < tab[j+1]){
                n = tabs.at(j);
                tabs.at(j) = tabs.at(j+1);
                tabs.at(j+1) = n;
                strn = tab[j];
                tab[j] = tab[j+1];
                tab[j+1] = strn;
            }
        }
    }
    for(i = 0; i < tabl; i++){
        for(j = 0; j < tabl-1; j++){
            if(tabs.at(j) < tabs.at(j+1)){
                n = tabs.at(j);
                tabs.at(j) = tabs.at(j+1);
                tabs.at(j+1) = n;
                strn = tab[j];
                tab[j] = tab[j+1];
                tab[j+1] = strn;
            }
        }
    }
    tab2[0] = tab[0];
    for(i = 0; i < tabl; i++){
        if(tab[i] != tab[i+1]){
            tab2[i] = tab[i+1];
        }
    }
    k = 1;
    l++;
    for(i = 0; i < tabl; i++){
        if(!tab2[i].empty()){
            l++;
        }
    }
    cout << "------------------------------------" << endl;
    cout << "|--->TABLE OF MOST COMMON WORDS<---|" << endl;
    cout << "------------------------------------" << endl;
    for(i = 0; i < tabl; i++){
        if(!tab2[i].empty() && k <= 20 ){
            cout << c << k++ << ". " << '\t' << c << tab2[i] << '\t' << c << "*" << tabs.at(i+1) << '\t' << c << roundf(((float)tabs.at(i+1)*100/l)*100)/100 << "%" << endl;
        }
    }
    cout << "------------------------------------" << endl;
    cout << "|----->Dif. strings: " << '\t' << l << " <-------|" << endl;
    cout << "------------------------------------" << endl;
    return 0;
}

Output image:
Most common words in a text file
c++;strings;file
Overall

You seem to have stuffed everything into a single function. This makes it harder than necessary to follow. A couple of functions to break things up into manageable units would probably be a good idea (it's part of self-documenting code).

Your naming convention is also a bit shoddy.

    string str, str2, strn, tab[10000], tab2[10000];
    int i, k, j, n, l, tabl;
    char c = 179;
    vector<int> tabs;
    vector<string> stringi;

None of these names conveys any meaning of what it is being used for. Just like functions, variables should be given meaningful names so that reading the code becomes self-explanatory.

    std::string inputLine;
    std::getline(std::cin, inputLine);

There is no real reason to use built-in C-arrays. std::vector and std::array are always going to be a better alternative (unless you are yourself building a container).

Basic Code Review

Prefer to use C++ header files:

    #include <math.h>
    // Prefer
    #include <cmath>

It is guaranteed to know about namespaces and put the functions into them. It also includes some templated maths functions that are not available from C (that you may or may not get when using math.h).

How many times must we say this! Did you not read any of the other previous reviews? Don't do this:

    using namespace std;

See: Why is "using namespace std;" considered bad practice?

Why are you reading the whole file into memory?

    while(getline(in, str2)){
        str += str2;
        str += ' ';
    }

All this is doing is reading the whole file into memory (inefficiently), replacing newlines with spaces. Your main problem with this is going to be the continuous reallocation of the str buffer as it grows. If you know the size you can do this much better with a read():

    std::size_t size = getFileSize(in);
    std::string buffer(size, '\0');
    in.read(&buffer[0], size);

I would not even do this. It is relatively efficient to read a word at a time and process that. The standard stream reads space-separated words quite easily with operator>>.

Here you are manually parsing the input buffer and removing punctuation (or a limited set of it).

    for(i = 0; i < str.length(); i++){
        if(str[i] != ' ' && str[i] != '.' && str[i] != '\t' && str[i] != ',' && str[i] != ';' && str[i] != ':' && str[i] != '}' && str[i] != '{'){
            tab[k] += tolower(str[i]);
        }else{
            k++;
        }
        if(str[i] == '.' || str[i] == '\t' || str[i] == ',' || str[i] == ';' || str[i] == ':' || str[i] == '}' || str[i] == '{') {
            k--;
        }
    }
    tabl = k;

There are a couple of things you can do to make this more efficient and easier to read. First let's make a function to hold the test:

    bool isMyPunct(unsigned char x)
    {
        return x == '.'  || x == '\t' || x == ','  || x == ';' ||
               x == ':'  || x == '}'  || x == '{';
    }

Notice how much easier that is to read. Now you can modify your code to:

    for(i = 0; i < str.length(); i++){
        if(str[i] != ' ' && !isMyPunct(str[i])){
            tab[k] += tolower(str[i]);
        }else{
            k++;
        }
        if(isMyPunct(str[i])) {
            k--;
        }
    }
    tabl = k;

Back to isMyPunct(). The best optimization here is that we can convert multiple tests into a single test by using a table lookup:

    bool isMyPunct(unsigned char x)
    {
        static char const punctTestTable[] = {
            0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, // '\t'
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, // ',' '.'
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, // ':' ';'
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, // '{' '}'
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
        };
        return punctTestTable[x];
    }

But doing this all manually is a pain. You can get the stream to do it for you by using a specialized locale that treats your punctuation like spaces. See How to tokenzie (words) classifying punctuation as space

The rest of the code is incomprehensible. I could probably sit down and work it out. But the point is that it is hard. You want people to be able to read your code at first glance and understand at least the gist of what you are doing.

    k = 0;
    for(i = 0; i < tabl; i++){
        for(j = 0; j < tabl; j++){
            if(tab[i] == tab[j]){
                k++;
            }
        }
        tabs.push_back(k);
        k = 0;
    }
    .....

Prefer "\n" to std::endl:

    cout << "------------------------------------" << endl;

It does not force a flush when you don't need one. Note: you hardly ever need to force a flush; it is much more efficient to let the system flush at appropriate times.

Re-Think

Version 1:

    int main()
    {
        std::ifstream inputFile;
        inputFile.imbue(std::locale(std::locale(), new IgnorePuct(std::locale())));
        inputFile.open("file.txt");

        std::map<std::string, std::size_t> countOfWords;
        TopTenWordsWithCount topTen;

        std::string word;
        while(inputFile >> word)
        {
            std::size_t& count = countOfWords[word];
            ++count;
            topTen.add(word, count);
        }
        std::cout << topTen;
    }

Two things to implement:

TopTenWordsWithCount: Use the C++ heap to keep an ordered set of words.
IgnorePuct: See How to tokenzie (words) classifying punctuation as space
_softwareengineering.77061
In many languages, super() lets you call the parent method which you have overridden. I've been using super in my Javascript (with a fake object-oriented implementation) to run common code for a long time without problems. Now I've finally hit a case where calling a protected base class method would have been better than calling super.

Here's a concrete example: Grandparent -> Parent -> Kid

I needed to add a Kid. It's identical to Parent except for one method called someMethod. Kid's someMethod needed to do Grandparent's common stuff, but not Parent's specific stuff. So Kid's someMethod could not call super, because that would trigger Parent-specific code. Instead of changing my grand class structure, I just made Kid duplicate Grandparent's common code.

Grandparent {
    someMethod {
        do common stuff
    };
}

Parent: Grandparent {
    someMethod {
        super();
        do parent specific stuff
    }
}

Kid: Parent {
    someMethod {
        duplicate of common stuff
        do kid specific stuff
    }
}

In my book, duplicate code is bad. If only Grandparent wrote its common stuff as a protected method, I wouldn't need to duplicate code.

Grandparent {
    protected commonMethod
    virtual someMethod
}

Parent: Grandparent {
    someMethod {
        commonMethod
        do parent specific stuff
    }
}

Kid: Parent {
    someMethod {
        commonMethod
        do kid specific stuff
    }
}

Should I just completely stop using super and switch to protected methods for running common code? Are there any pitfalls to protected methods for running common stuff?
Super vs protected method for running common code
object oriented;inheritance
No, you shouldn't stop using super, you should stop implementing bad OO designs.Notwithstanding the fact that we're not talking about true inheritance, why are you inheriting from the parent if you don't want its functionality? You're breaking the abstraction. The leaf-level class shouldn't know anything about the actual implementation of the parent, certainly not enough to know that it doesn't want the parent specific stuff to be done.More generally, you should be honouring the Liskov Substitution Principle whenever possible and only creating derived types that can actually be substituted wholesale for the base types. This seems to have been violated here; the parent has altered the contract of the grandparent in such a way that further generations of descendants have to disable it in order to function correctly.As far as I can tell, your kid should be a descendant of grandparent.Separately:There are other ways to avoid duplication. Not everything has to be based on inheritance; composition and plain old referencing are just fine. Make sure you're not violating the SRP.At least in most OO implementations, protected and virtual are not mutually exclusive. An override method is still free to call protected methods of its base, so if you foresee a need for derived classes to use specific functionality of the base, then abstract it into a protected method. That has absolutely no bearing on whether or not another method should be virtual.I think the root of the problem really lies in this sentence:I've been using super in my Javascript... to run common codeInheritance is not primarily a tool for code reuse. It's a tool for... well, inheritance, i.e. 
when you need to be able to substitute one implementation for another without loss of fidelity (polymorphism) or when the base class won't have all the information it needs to execute a particular operation and it needs to be able to delegate this to a derived class (abstraction).If you're using inheritance for the sole purpose of jamming in code that's frequently used, then you've completely misunderstood the concept; use composition for that, or hell, just write a bunch of functions. There's no reason to try to fake inheritance when the best possible result is still inferior to simpler alternatives.