Columns: id (string, length 5 to 27), question (string, length 19 to 69.9k), title (string, length 1 to 150), tags (string, length 1 to 118), accepted_answer (string, length 4 to 29.9k)
_unix.370888
Per a security request, I need to set up the Ubuntu desktops on my network so that flash drives and other USB storage devices cannot be used, while USB mice and keyboards keep working normally. Is there a simple and effective way to block these devices?
Lock USB Flash Drive on Ubuntu Linux
ubuntu;usb drive;lock
null
_codereview.82165
I need to write a function in Python that makes sure the user entered a valid JavaScript origin. If I understand it correctly, the origin includes the scheme, hostname and port (port and scheme might be implicit, defaulting to 80 and http respectively), so would this be a correct way to validate it?

import urlparse

def validate_javascript_origin(origin):
    parsed = urlparse.urlsplit(origin)
    if parsed.scheme and parsed.scheme not in ["http", "https"]:
        raise ValueError("Only the http and https url schemes are supported.")
    if not parsed.netloc:
        raise ValueError("The origin must include a hostname.")
    if parsed.path or parsed.query or parsed.fragment:
        raise ValueError("The origin must not contain a path, query string or fragment.")

The origin will be used to pass as the preferredOrigin to window.postMessage. The main thing I'm worried about is that I'm not sure how credentials in the url (username:password@example.com) are handled. Going to http://username@frederikcreemers.be and getting location.origin in JavaScript returns http://frederikcreemers.be, so the origin doesn't include credentials. Would it be sufficient to add a condition like this to the function above:

    if "@" in parsed.netloc:
        raise ValueError("The origin must not contain credentials.")
Validating JavaScript origins
python;validation
First off, an easy task: add a docstring to your function, validate_javascript_origin, describing the function and its arguments, preferably in detail. Finally, if you're worried about input like username@site.end, you should add the check for the @ character. If it isn't valid input, or the input is interpreted in the wrong way, you should most definitely add this check.
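For illustration, here is a minimal sketch of what the reviewed function might look like with the suggested docstring and @ check folded in (the function name and exception messages are taken from the question's code; nothing beyond that is implied by the answer):

import urlparse  # Python 2; use urllib.parse on Python 3


def validate_javascript_origin(origin):
    """Validate that `origin` is a plausible JavaScript origin.

    An origin consists of a scheme, a hostname and an optional port.
    Raises ValueError if the string uses an unsupported scheme or
    contains anything else (credentials, path, query string or fragment).
    """
    parsed = urlparse.urlsplit(origin)
    if parsed.scheme and parsed.scheme not in ["http", "https"]:
        raise ValueError("Only the http and https url schemes are supported.")
    if not parsed.netloc:
        raise ValueError("The origin must include a hostname.")
    if "@" in parsed.netloc:
        raise ValueError("The origin must not contain credentials.")
    if parsed.path or parsed.query or parsed.fragment:
        raise ValueError("The origin must not contain a path, query string or fragment.")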
_cs.57030
Sentiment analysis using machine learning is a hot topic. Assuming a person has no problem obtaining a training data set, what is the best way to build the classifier (possibly a Naive Bayes classifier)?
What is the best and easiest way to create a classifier for sentiment analysis
machine learning;natural language processing;classification
null
_cs.32822
In "Is concurrent language CCS or CSP turing-equivalent in language power?", the answer says that CCS or CSP is Turing-complete. But that does not seem to answer whether CCS or CSP is Turing-equivalent. According to my understanding, Turing-equivalence and Turing-completeness are different things, but I may be confused.
Concurrent programming language being Turing-equivalent and difference between Turing-complete and equivalent
computability;programming languages;parallel computing;concurrency;computation models
null
_unix.3092
I would like to organize all incoming email into the following directory structure based on the date of the email:

ROOT --+-- YYYYMMDD --+-- HH --+-- mm --+-- YYYYMMDD-HHmmSS-000001
       |              |        |        |
       |              |        |        |   ....
       |              |        |        |
       |              |        |        +-- YYYYMMDD-HHmmSS-NNNNNN
       |              |        |
       |              |        +-- mm --
       |              |
       |              +-- HH --+-- mm --

Note that each email will be stored as a separate file and the name of the file is YYYYMMDD-HHmmss-NNNNN, where NNNNN is a running number. Can procmail or maildrop do this? If not, what other options are there? Thanks in advance.
Organize Email by Date Using procmail or maildrop
email
null
_unix.217773
I'm running a command like so, with the $OUTPUT variable saving the results of the command. But I also want to save the command itself to a variable for inclusion in a status email.

OUTPUT=$(php -f $LOCATION/somefile.php -- -process $INPUTFILE 2>&1)

The first part works. Then I tried this:

IMPORTCOMMAND='php -f' $LOCATION'/somefile.php -- -process'$INPUTFILE

But instead of saving the string to the variable, it seems to be just executing the command a second time.

EDIT: Here is a mockup of how I create my email body. I have single quotes around regular strings and then double quotes around bash variables.

BODY='<b color=red>Output:</b><br />'$OUTPUT'<b color=red>Command:</b> '$IMPORTCOMMAND

After that I try to replace newlines with html like so:

BODY=${BODY//$'\n'/<br />}   # changed $BODY= to BODY= per yaegashi's suggestion

The following error disappeared after I followed yaegashi's suggestion:

/usr/local/bin/some-script.sh: line 59: <b: command not found

But the original error remains, at the IMPORTCOMMAND variable assignment.
Save command string into bash variable
shell script
I think you have too many quotes and in the wrong places. The following will try to run somefile.php and not set IMPORTCOMMAND, as there is a space between php -f and somefile.php:

IMPORTCOMMAND='php -f' $LOCATION'/somefile.php -- -process'$INPUTFILE

Should be:

import_command="php -f $LOCATION/somefile.php -- -process $INPUTFILE"
output=$( ${import_command} 2>&1 )

OR (if 'LOCATION' or 'INPUTFILE' contain spaces)

import_command=( php -f "$LOCATION/somefile.php" -- -process "$INPUTFILE" )
output=$( "${import_command[@]}" 2>&1 )

And display it:

BODY="<b color='red'>Output:</b><br />${output} <b color='red'>Command:</b> ${import_command[*]}"
_unix.319825
I can find a similar topic, "How to configure certain programs to always open in full screen?", but it does not solve my question. I wonder which aspect the question is related to: the distro, or the desktop session? I use Fedora and GNOME 3.
How to configure an application to be started in full screen mode?
fedora;fullscreen
You can install devilspie2 and create a config file ~/.config/devilspie2/max.lua with the following content, which would start every Iceweasel maximized:

-- Make Iceweasel always start maximized.
if (get_application_name() == "Iceweasel") then
    maximize();
end

Src: readme

Then make sure to start devilspie2 together with your desktop environment.
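One common way to do that last step (an illustrative sketch, not part of the original answer) is a freedesktop autostart entry such as ~/.config/autostart/devilspie2.desktop:

[Desktop Entry]
Type=Application
Name=devilspie2
Exec=devilspie2
Comment=Apply window-matching rules at login

With that file in place, GNOME will launch devilspie2 automatically when the session starts.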
_cogsci.16987
I am working on a project related to Brain-Computer Interfaces and I came across this problem: what are the tools and techniques used to classify the EEG signals extracted from the NeuroSky MindWave Mobile headset? I just need a bird's-eye view of the techniques used. Obviously the signal extracted from the electrode contains all the EEG bands present in the EEG spectrum (alpha, beta, gamma, etc.), so how can I classify it? I read about some approaches, like using an FFT and extracting band powers, but I didn't understand them very well, so can someone please tell me how it is done? I really appreciate your help. Thank you.
Classification of EEG Signals
theoretical neuroscience;eeg;brain training;brain computer interface;brain waves
null
_unix.345433
I have a file like here, file.txt:

bbb-ccc-cccc#
aasdf asdas asdasa fgdg
asdfa asfdas adfaq asfa
afdaf fafa fafd afafa
bbb-ccc-cccc#

I want to take the word ending in # and I want to add it to each line as the first word. I am a beginner at unix scripting.

sed 's/bbb-ccc-cccc#/^/' < file.txt > newfile.txt

I don't know the word before the # sign ahead of time, so my point is: find the word ending with # and put it at the beginning of each line. For this file.txt I need like here:

bbb-ccc-cccc#
bbb-ccc-cccc# aasdf asdas asdasa fgdg
bbb-ccc-cccc# asdfa asfdas adfaq asfa
bbb-ccc-cccc# afdaf fafa fafd afafa
bbb-ccc-cccc#
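For what it's worth, a sketch of one possible approach with awk (an illustration, not an accepted answer; it assumes the marker word is the entire first line, as in the sample above):

awk 'NR == 1 { prefix = $0 }          # remember the word ending in "#"
     $0 == prefix { print; next }     # lines that are only the marker stay as-is
     { print prefix, $0 }             # prepend the marker to every other line
    ' file.txt > newfile.txt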
Add specific word to each line
text processing;sed
null
_cstheory.27235
Could there be a logically consistent theory supporting the transmission of non-physical information to a point in time previous to the time it was sent using a computer network (quantum theory, etc)? I'm working on a sci-fi story and need some legit science to back up just such an occurrence - so there's no limit re: real world application.
The Arrow of Time in a Non-Physical Realm
soft question;quantum information;physics;ni.networking internet
For the consequences of such transmission to theoretical computer science (the only aspect of your question that is on-topic here) see Aaronson and Watrous's Closed Timelike Curves Make Quantum and Classical Computing Equivalent.
_unix.314890
I have created a long running screen session with many windows and the C-a A command to rename a window is not working. What is the text command for renaming a window?I have tried :caption string windowname but it doesn't work. Is that the right command or am I missing something?
What is the text command for naming a window in gnu-screen?
gnu screen;window title
That is the title command, e.g.:

title bad-window

In the manual:

title [windowtitle]
Set the name of the current window to windowtitle. If no name is specified, screen prompts for one. This command was known as aka in previous releases.

If the shortcut is not working, of course, the long name may not work either.
_unix.350397
Hello, I'm trying to set up a static IP on a minimal-install VM. The IP of the VM should be 172.20.x.100, but when I configure the /etc/sysconfig/network-scripts/ifcfg-eth0 file to have an IPADDR of 172.20.x.100 and I restart the network with systemctl, I get an error saying "Error, some other host already uses address 172.20.x.100". When I change the IP to any other value, the IP does resolve and it validates. I have checked all of the other hosts on my network and none has that IP on any interface. Although I did find a file on the main host called /etc/sysconfig/network-scripts/ifcfg-br1 and it does have the IP 172.20.x.100 in the IPADDR field, when I do ifconfig on that interface it does not show that IP; instead it shows 172.20.x.1, which is the correct address. I'm using my main host as the gateway of my network. This is also a cloned VM and I have encountered several issues before: I couldn't ping my gateway because the MAC address of the virtual machine and the MAC in the ifcfg-eth0 file were different, but I changed it to the MAC Virt-Manager gave me and it worked. Now the only issue is getting my VM to obtain 172.20.x.100 as the IP address. Are you familiar with this issue?

UPDATE: It seems like the issue is still the MAC address. When I issue

arping -c 2 -w 3 -D -I eth0 172.20.x.100

the reply is

Unicast reply from 172.20.x.100 from 0.0.0.0 eth0
Unicast reply from 172.20.x.100 [52:54:00:D0:5D:3A]

but when I run ifconfig eth0 on the VM the MAC is 52:54:00:4b:c2:30.

Static configuration of the VM, /etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
NETMASK=255.255.255.0
IPADDR=172.20.x.100
GATEWAY=172.20.x.1
DNS1=172.20.x.1
DNS2=8.8.8.8
HWADDR=52:54:00:4b:c2:30
PEERDNS=yes
TYPE=Ethernet
IPV6INIT=no

ifconfig on the main host:

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
      inet 10.40.84.195 netmask 255.255.255.0 broadcast 10.40.84.255
      inet6 fe80::d4de:7ab0:3cf4:e2ca prefixlen 64 scopeid 0x20<link>
      ether ec:b1:d7:38:c7:07 txqueuelen 1000 (Ethernet)
      RX packets 162478 bytes 70643148 (67.3 MiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 37498 bytes 6406695 (6.1 MiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 20 memory 0xef100000-ef120000

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
      inet 127.0.0.1 netmask 255.0.0.0
      inet6 ::1 prefixlen 128 scopeid 0x10<host>
      loop txqueuelen 1 (Local Loopback)
      RX packets 189 bytes 21522 (21.0 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 189 bytes 21522 (21.0 KiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
      inet 192.168.51.1 netmask 255.255.255.0 broadcast 192.168.51.255
      ether 52:54:00:7b:f7:52 txqueuelen 1000 (Ethernet)
      RX packets 34 bytes 1948 (1.9 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 6 bytes 2374 (2.3 KiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
      inet 172.20.x.1 netmask 255.255.255.0 broadcast 172.20.x.255
      ether 52:54:00:d0:5d:3a txqueuelen 1000 (Ethernet)
      RX packets 664 bytes 91395 (89.2 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 397 bytes 493153 (481.5 KiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Content of br1:

DEVICE=br1
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=172.20.x.100
GATEWAY=172.20.x.1
STP=on
DELAY=0.0

Error message of the VM: (screenshot not included)
Static IP Some other host already uses address
centos;networking;network interface
null
_webmaster.935
Currently my company has 3 blogs, and what I did was install three instances of WordPress over Apache/MySQL, in different directories. The problem is that I have a Slicehost VPS with 256 MB RAM running Ubuntu 8.04, and MySQL is crashing Linux or making it very slow and unresponsive. Is there some kind of optimal setup for this scenario? I know that my server is too cheap, but I'm not sure either if an upgrade to 512 MB will fix things. I'm thinking about migrating to nginx, but what about MySQL? Is there any solution to this? Is this the right site to post this question, or is it Server Fault? Thanks
Multi site wordpress setup
wordpress;mysql;nginx
A. Apache is a memory hog. You would get a huge performance boost if you switched to nginx with PHP via FastCGI.

B. If MySQL is really your biggest problem (doubt it), you can disable InnoDB if you don't actually need InnoDB support for something else. Edit /etc/mysql/my.cnf and add a line saying skip-innodb, then run /etc/init.d/mysql restart. That should save you a few dozen megs of RAM.

C. You're not cheap, Slicehost are the cheapskates. Linode (main competitor) is now $19.95 for a 512 MB slice. Yeah, and it was a free upgrade, too. http://blog.linode.com/2010/06/16/linode-turns-7-big-ram-increase/
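To illustrate point B, the change would look roughly like this in /etc/mysql/my.cnf (a sketch for the MySQL 5.x releases of that era; newer MySQL versions no longer allow disabling InnoDB):

[mysqld]
# ... existing settings ...
skip-innodb

followed by /etc/init.d/mysql restart to apply it.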
_unix.268583
I'm trying to establish communication between an Android mobile app, a Wi-Fi router and a Wi-Fi device. How can I do it? Here the Wi-Fi device is home-built and it is connected to the router; both are static (router and device), and only the Android mobile is roaming. Example: when you send a message through WhatsApp, it first goes to the WhatsApp server --> router (identifying your IP address) --> second Android device. Is there any way to remove or replace that intermediate server with something else, so that I can send a signal to my home-built Wi-Fi device via the router while sitting anywhere in the country? Suggestions are welcome. Thanks. MORE INFO: the message is nothing but an ON/OFF signal for one or more devices.
Is it possible to connect an Android mobile app, a Wi-Fi router and a Wi-Fi device?
android
null
_cs.67673
In a related post: Algorithm for solving binary quadratic Diophantine equations (BQDE) and its CTC there was a conclusion that with regard to general binary quadratic diophantine equation (with all non-zero coefficients), the solutions (if existent) can be found in exponential time. However, is this also the case for simple hyperbolic case (i.e. where $A$ and $C$ are zero, leading to form: $Bxy + Dx + Ey + F = 0$)? Alpertron (https://www.alpertron.com.ar/METHODS.HTM) shows a method for such a case which involves finding all integer divisors of $DE-BF$ - a task which can be tricky for large coefficients. For certain, sufficiently large coefficients Alpertron's method would imply that this can be computed in sub-exponential time (using GNFS).Is there any algorithm to be used for SHCDE which would find the solutions in less than exponential time (unlike standard ones for general BQDE)?
solving simple hyperbolic DE
algorithms;complexity theory
null
_unix.35447
I would like to benchmark a few Xen constellations:

PV DomU
HVM DomU with PV network and disk drivers
HVM DomU with emulated IDE and PV network

The base will be the same VM with the same disk/cpu/os/network setup, based on the same original PV clone. I am especially interested in specific VM numbers:

For CPU: ability to switch process context (context switches per second?)
For Memory: pure RAM memory throughput, read/write
For Disk: latency of read/write operations
For Network: ability to handle many small packets at the same time

Is there a good (free) Linux tool to test these? Is there a specific benchmark covering virtual machines? I am NOT interested in

CPU speed benchmark (calculations)
Disk transfer rate
Network bandwidth utilization
VM benchmark tools?
linux;xen;virtualization;benchmark
null
_softwareengineering.319045
I have 2 sets--inputs and outputs--of 70 32-bit integer variables and 70 bools (140 vars altogether). These need to be accessed and modified from 3 threads. What is an appropriate design pattern to facilitate thread-safe read-write access to each of these 140 variables without locking all of them under a single mutex (which I expect will result in bad performance)?

Some details about the performance requirements:
Thread 1 (CAN Serial Communication) receives packets from hardware sensors every 1ms that contain the updated value for one of the 70 input shared variables; the thread updates the variable with that value. Also, every 5ms Thread 1 needs to make a copy of all the 70 output variables.
Thread 2 (Controller) creates a copy of all input variables every 10ms, as well as overwrites all the output variables.
Thread 3 (GUI) makes a copy of all input and output variables every 500ms.
The system runs on an ARM Cortex-A8 600MHz.

One solution is to create a mutex lock for each of the 140 variables, but this feels like a hack. I would then wrap the variables in a class with 140 getters and setters, which also seems ugly.

A side-note about std::atomic: the other alternative is std::atomic. But I feel it is an advanced and complicated feature; for example, I was told on IRC that the following example snippet is not thread-safe, despite looking intuitively like it should be:

struct MyStruct {
    std::atomic<int> a;
    std::atomic<int> b;
};

std::atomic<MyStruct> atomic_struct;
atomic_struct.a = 1;
atomic_struct.b = 2;

// Make a copy of `atomic_struct`
MyStruct normal_struct;
normal_struct = atomic_struct;

// Edit the values of the copied struct and copy the changes back to the `atomic_struct`.
normal_struct.a = 100;
normal_struct.b = 200;
atomic_struct = normal_struct;
How to facilitate thread-safe access to large set of shared variables?
c++;performance;multithreading;qt
Comments turned into an answer:

You are right to worry about performance with locking everything under one mutex, but the better solution is to make sure there is as little going on as possible inside the lock.

Thread 1 should have the value and index ready and really only be doing a single write.
Thread 2 would operate on an unshared local instance of the class, and swap it with the shared one.
Thread 3 also has an unshared local instance that it copies the share into each update.
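A minimal sketch of what that might look like (all names are illustrative, not from the question; one mutex guards the shared state, and each thread keeps its own local copy so the critical sections stay tiny):

#include <array>
#include <cstdint>
#include <mutex>

// Plain-old-data snapshot of the shared state; cheap to copy.
struct IoState {
    std::array<int32_t, 70> inputs{};
    std::array<bool, 70>    flags{};
};

class SharedIo {
public:
    // Thread 1: single indexed write, done under the lock.
    void setInput(std::size_t i, int32_t value) {
        std::lock_guard<std::mutex> lock(mutex_);
        state_.inputs[i] = value;
    }

    // Threads 1-3: copy everything out in one short critical section.
    IoState snapshot() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return state_;
    }

    // Thread 2: prepare a new state outside the lock, then swap it in.
    void publish(const IoState& next) {
        std::lock_guard<std::mutex> lock(mutex_);
        state_ = next;
    }

private:
    mutable std::mutex mutex_;
    IoState state_;
};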
_unix.19681
Possible Duplicate: Open Sakura in home directory

I love XTerm but I would like to enable tabbing (multiple terminals in one window separated by tabs). The terminal Sakura is based on XTerm and tabbing is enabled. I can open XTerm in my home directory by executing xterm -e 'cd ~/ && /bin/bash'. However, this doesn't work with Sakura (I replaced xterm by sakura). I also tried sakura -e 'cd ~/' but it doesn't work either. Any ideas would be very much appreciated. Thanks!
Open Sakura in home directory
terminal
null
_softwareengineering.230778
I'm a fan of Dependency Injection; however, I don't know how loosely coupled both public and private methods inside a class should be. To picture it better: when I have both projectId and userId as private properties in my class, and both of them have their own decent setter methods which also do input validation (throwing an error in case of unexpected input), isn't it better that all the class's methods rely on these properties instead of getting them via method arguments and repeating the whole validation, etc. again? In this case the methods will be tightly bound to the class itself (and probably the constructor as well), but the advantage is that all the methods can easily rely on the setter methods, and they can always assume that the proper data is available to process, because the setter would have thrown an exception otherwise. This seems quite helpful to me, and I can make all the classes and libraries loosely coupled in a way that they don't depend on each other. However, my question is: can a method inside a class depend on the class properties themselves or not? Is it considered a bad practice?
Should I consider loosely-coupling for class methods as well?
design;object oriented;dependency injection
If the methods of a class cannot depend on the variables of that class, then who can? If all of your methods were completely decoupled from all other methods and variables, then all classes involved would be stateless and all methods might as well be static.

Of course there are always trade-offs when it comes to design, but I think it simply comes down to the idea of cohesion -- if your private properties increase the cohesion within the class (and if they are indeed always valid and ready to be consumed at all times), then yes, by all means use them, otherwise get rid of them.

(Whether or not you actually want to be using setters is another question entirely...)
_webmaster.108234
There must be a way, since services such as RankTrackr.com and WhiteSpark can get pretty accurate results. Is there some sort of parameter I can add to the URL to emulate a local search for a particular city in the United States?
How to emulate a Google local search query?
seo;google search;search engines;serps;local seo
Yes, you run queries through IP addresses for the localities you're interested in. Many of the larger proxy services allow you to choose from multiple metropolitan areas in most developed countries.Obviously, companies doing this at scale (i.e. commercial rank trackers) have to source vast numbers of IP addresses and use them carefully to maintain a light footprint because, as pointed out in comments on your question, it's against Google's ToS.
_unix.271449
I would like to build a custom EDID using the kernel sources, but I need to use several modelines: more refresh rates for the same resolution. Is that possible using the method in the kernel sources? Could you please show me how my 1920x1080.S file should look?
Build custom EDID with several modelines
kernel;compiling;monitors;edid
null
_webapps.31059
I would like to allow people to comment on my posts on my timeline (since I post a lot of articles and things that I read, and enjoy discussing them), but I also want to prevent people from posting their own links/messages directly onto my timeline. Is this possible?
Allow comments but no posts on my Facebook timeline
facebook;facebook timeline
null
_unix.276406
I want to close a program through the command line (say Firefox or Thunderbird). The program is working just fine and in theory I could just go FILE > CLOSE. However, I want to do this through the command line so that is not an option. I could kill the process (e.g. pkill firefox), but from the sound of it, that is quite a brutal way to close a program. In fact, I am used to using this as a last resort, especially when a program hangs. In all honesty, I don't know if this is a proper way of quitting a program. Is it? Or are there better ways of closing a program?
What is a non-aggressive way of killing a process?
kill
kill sends signals to processes; it defaults to sending the TERM signal. The TERM signal can be 'caught' by processes, i.e. they can watch for it, and when it's received they can take action.

In many cases, Linux processes will behave properly when sent the TERM signal - i.e. they will tidy themselves up and then close down cleanly. So kill is a perfectly valid way of shutting many processes down, assuming the developers have properly handled the situation. Whether it works for any given process depends on the developer.

Only some signals like KILL cannot be caught. You send a KILL using kill by running kill -9, which is far more disruptive to the process because it gets no chance to clean up.
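For concreteness, a small sketch of the escalation order this implies (the PID and the process name firefox are just examples):

# Polite: ask the process to shut down cleanly (SIGTERM is the default)
kill 1234            # or: pkill firefox
# Equivalent, spelled out
kill -TERM 1234
# Last resort only, if it ignores TERM or is hung: cannot be caught, no cleanup
kill -9 1234         # same as: kill -KILL 1234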
_cstheory.10728
Consider the following model: an n-bit string r=r1...rn is chosen uniformly at random. Next, each index i∈{1,...,n} is put into a set A with independent probability 1/2. Finally, an adversary is allowed, for each i∈A separately, to flip ri if it wants to.My question is this: can the resulting string (call it r') be used by an RP or BPP algorithm as its only source of randomness? Assume that the adversary knows in advance the entire BPP algorithm, the string r, and the set A, and that it has unlimited computation time. Also assume (obviously) that the BPP algorithm knows neither the adversary's flip decisions nor A.I'm well-aware that there's a long line of work on precisely this sort of question, from Umesh Vazirani's work on semi-random sources (a different but related model), to more recent work on extractors, mergers, and condensers. So my question is simply whether any of that work yields the thing I want! The literature on weak random sources is so large, with so many subtly-different models, that someone who knows that literature can probably save me a lot of time. Thanks in advance!
Running a BPP algorithm with a half-random, half-adversarial string
cc.complexity theory;randomized algorithms;derandomization;extractors
What you need is a seeded extractor with the following parameters: seed of length $O(\log n)$, crude randomness $n/2$, and output length $n^{\Omega(1)}$. These are known. While I'm not up to date with the most recent surveys, I believe that section 3 of Ronen's survey is enough.The only thing you will need to show is that your source has sufficient min-entropy, i.e. no n-bit string gets a probability of more than $2^{-n/2}$, which I think is clear in your setting.
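For readers unfamiliar with the term, a brief sketch of the min-entropy condition being invoked (this is the standard definition, not something stated in the answer): a source $X$ over $\{0,1\}^n$ has min-entropy at least $k$ if $$H_\infty(X) = -\log_2 \max_{x \in \{0,1\}^n} \Pr[X = x] \ge k,$$ i.e. no single string has probability above $2^{-k}$. Here the relevant claim is $k \approx n/2$: conditioned on the set $A$, the adversary cannot influence the $n - |A|$ uniformly random bits outside $A$, and $|A|$ is about $n/2$.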
_softwareengineering.355466
When you compile a C source file into an object file, the function names in the object file will be decorated. Each calling convention will have a different decoration. For example, the following __stdcall function:

void __stdcall stdcallFunction(int i)
{
    int j = 12345;
}

Will be decorated like this in the object file:

_stdcallFunction@4

And the following __cdecl function:

void __cdecl cdeclFunction(int i)
{
    int j = 12345;
}

Will be decorated like this in the object file:

_cdeclFunction

Now my question is, why is name decoration used? I mean, why not have the function stdcallFunction be saved in the object file simply as stdcallFunction and not as _stdcallFunction@4?

I think the reason is the following: Say I created a library (a .lib library and not a .c library) that contains the above two functions. Now I want to call the function stdcallFunction in this library from my C source file, so I would do the following:

void __stdcall stdcallFunction(int i);
stdcallFunction(123);

This will compile fine. But if I did the following (changed the calling convention for the function declaration):

void __cdecl stdcallFunction(int i);
stdcallFunction(123);

Then this will produce a compilation error.

So the reason for using name decoration is for the compiler to make sure that I am using the correct calling convention when calling a function that exists in a library (the name decoration is simply an indication of what the calling convention of a function in a library is). Am I correct?
Why are function names decorated in C?
c
null
_webapps.85916
If I share a photo album on Facebook with a custom audience, will their mutual friends be able to see those photos? I want to upload family albums and I want only selected people to see them. If a person in my custom audience comments on a post, can their friends (or our mutual friends) then see the post? (People who aren't in the custom audience.)
How secure is Facebook's custom search?
facebook;facebook privacy
null
_webmaster.15728
Go ahead, laugh. I forgot to remove the default admin/admin account on my blog. Somebody got in and has replaced my homepage with some internet graffiti. I've used .htaccess to replace the page with a 403 error, but no matter what I do, my WordPress homepage is this hacker thing. How can I set up my server so that ONLY MYSELF can view it while I'm fixing this via .htaccess? What steps should I take to eradicate them from my server? If I delete the ENTIRE website and change all the passwords, is he completely gone? Thanks.
Site overthrown by Turkish hackers
security;htaccess;server
How can I setup my server so that ONLY MYSELF can view it while I'm fixing this via .htaccess?

Use basic authentication to block the site from everyone who doesn't have the login and password (i.e. anyone who is not you).

What steps should I take to eradicate them from my server? If I delete the ENTIRE website and change all the passwords, is he completely gone?

If you completely uninstall the software, delete everything associated with it (e.g. directories) and re-install it from scratch you should be fine.
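As an illustration of the basic-authentication suggestion (paths and the user name here are placeholders, not from the answer), the .htaccess at the site root could look something like:

AuthType Basic
AuthName "Under maintenance"
AuthUserFile /home/youruser/.htpasswd
Require valid-user

with the password file created once via: htpasswd -c /home/youruser/.htpasswd youruser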
_unix.289934
During an update of my Raspbian installation (Pi rev. B+) over SSH I lost connection. After a hard reboot (which may have happened during or after finishing the update) the udev kernel device manager fails on boot. According to journalctl (-xb), everything non-grey (white and red):

systemd-udev-trigger.service: main process exited, code=killed, status=11/SEGV
Failed to start udev Coldplug all Devices.
Unit systemd-udev-trigger.service entered failed state.
<snip>
systemd-udevd.service: main process exited, code=killed, status=11/SEGV
Failed to start udev Kernel Device Manager.

Those last two lines are repeated a couple of times.

Unit systemd-udevd.service entered failed state.

The result of udev failing becomes apparent later on:

random: nonblocking pool is initialized
Job dev-mmcblk0p1.device/start timed out.
Timed out waiting for device dev-mmcblk0p1.device.
<snip>
Dependency failed for /boot.
<snip>
Dependency failed for Local File Systems.
<snip>
Dependency Failed for File System Check on /dev/mmcblk0p1.
<snip>
Job dev-ttyAMA0.device/start timed out.
Timed out waiting for device dev-ttyAMA0.device.

The system is very broken at the moment. I checked the micro-SD for faults with fsck and so far without finding anything wrong. How do I repair this mess?
Interrupted apt upgrade broke udev and systemd
debian;package management;systemd;udev
null
_codereview.72087
Consider the following:

#include <iostream>

struct State { virtual ~State() = default; };

struct Drunk : State {
    void singWhileDrunk() { std::cout << "Singing while drunk.\n"; }
};

struct Person {
    State* state;
    void singWhileDrunk() { dynamic_cast<Drunk*>(state)->singWhileDrunk(); } // Is this good?
};

int main() {
    Person bob;
    bob.state = new Drunk;
    bob.singWhileDrunk();
    // dynamic_cast<Drunk*>(bob.state)->singWhileDrunk(); // Using this is better?
}

What I wonder is if Person::singWhileDrunk() should really be defined in Person or not. singWhileDrunk() only has true meaning if the person is drunk, so to define it in Person seems wrong to me. However, it does simplify the code in main(), especially if it is to be used a lot. (dynamic_cast<Drunk*>(bob.state)->singWhileDrunk(); is clearly more typing, and may run into difficulties if I want to redefine it everywhere it is used, e.g. change dynamic_cast to static_cast). Another issue I have is that in my program I have many different types of states, each with their own special functions, and to define them all in Person will really bloat the Person class with MANY, MANY functions that don't even seem to belong in Person. So there seem to be pros and cons to both choices and I would like to hear what others have to say about this. This is just an example, of course. In reality, I have states like FlySpellState, with the function flies(), which also seems to have no place in Person (since people cannot fly normally), though it could be.
Defining a certain member function
c++
null
_datascience.8697
In machine learning, one can use Euclidean distance to measure a cluster$$\mu\in R^k,$$ over data points $$\{x_i\}^N_{i=1}\in R^{k\times N},$$ with the measure $$\text{error}_i = ||\mu-x_i||^2_2$$The total error can be calculated as$$\text{error} = \sum^N_{i=1}error_i$$ To estimate the parameters, one formulates the optimization problem$$\min_{\mu_{new}} \big{(}\text{error}\big{)}$$This has two underlying assumptions: first, that samples are independent, and, second, that the dimensions are independent for each sample. Can anyone explain how I should analyze when either or both assumptions are not satisfied?
about the error additivity
machine learning;data mining
null
_softwareengineering.290903
I need to build a system that can handle a fairly high amount of delayed tasks (e.g. scheduled emails). For non-delayed tasks I would go for something like RabbitMQ. But, is it ok to let tasks lingering in the queue for extended amounts of time, like days?Would it make more sense to store the tasks in a database and then periodically check whether there are tasks which need to be processed?
Message queue vs database for delayed tasks
architecture;database;message queue
Systems such as RabbitMQ (or in your case maybe look into Kafka) can offer persistence, or guaranteed delivery, as well as configurable TTL on messages. However, they are not designed as a long-term persistent storage solution, and if we're talking about days, I would actually store the job in a DB as you suggest.
_codereview.119768
I built my portfolio page using Bootstrap and jQuery, but on lower performance computers the animations seem choppy. I am interested in JavaScript optimization and was hoping you all had some ideas on how to more efficiently execute my code. You can see it live here: bgottschling.github.io.HTML:<!DOCTYPE html><html > <head> <meta charset=UTF-8> <title>Brandon Gottschling's Portfolio</title> <meta http-equiv=X-UA-Compatible content=IE=edge> <meta name=viewport content=width=device-width, initial-scale=1> <!-- Font Awesome --> <link rel=stylesheet href=https://maxcdn.bootstrapcdn.com/font-awesome/4.4.0/css/font-awesome.min.css type='text/css'> <!-- Font MFizz --> <link rel=stylesheet href=http://cdn.ovispot.com/c/font-mfizz/1.2/font-mfizz.css type='text/css'> <link rel='stylesheet prefetch' href='http://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css'> <link rel='stylesheet prefetch' href='http://cdnjs.cloudflare.com/ajax/libs/animate.css/3.2.3/animate.min.css'> <link rel=stylesheet href=css/style.css> </head> <body> <div class=container-fluid all> <nav class=navbar navbar-default navbar-fixed-top> <div class=container-fluid> <!-- Brand and toggle get grouped for better mobile display --> <div class=navbar-header> <button type=button class=navbar-toggle collapsed data-toggle=collapse data-target=.navbar-collapse> <span class=sr-only>Toggle Navigation</span> <span class=icon-bar></span> <span class=icon-bar></span> <span class=icon-bar></span> </button> <a class=navbar-brand href=#1>Brandon Gottschling</a> </div> <!-- Collect the nav links, forms, and other content for toggling --> <div class=collapse navbar-collapse> <ul class=nav navbar-nav navbar-right> <li id=home><a href=#1><span class=glyphicon glyphicon-home></span> Home</a></li> <li id=about><a href=#2><i class=fa fa-info-circle nav-icon></i> About</a></li> <li id=portfolio><a href=#3><i class=fa fa-folder-open nav-icon></i> Portfolio</a></li> <li id=contact><a href=#4><i class=fa fa-envelope nav-icon></i> Contact</a></li> </ul> </div> <!-- /.navbar-collapse --> </div> <!-- /.container-fluid --> </nav> <br/> <div class=row> <div class=jumbotron home id=1> <img class=image-border img-responsive text-center src=http://i1382.photobucket.com/albums/ah249/alyssa_marie21/facebrandon_zpsdsvir6wl.jpg alt=Brandon Gottschling in a sweater!> <h2 class=text-center>Brandon Gottschling </h2> <h3 class=text-center>Full Stack Developer</h3> <h4 class=text-center>Atlanta, Georgia</4> </div> </div> <div class=row> <div class=container well about id=2> <h2 class=text-center title-text>About Me</h2> <p class=> I am very passionate about technology and how it advances us as a civilization. Currently in my career I am employed as a Product Specialist supporting a content management system at <a href=http://www.vertafore.com/>Vertafore</a>, an insurance software company. I have life long aspirations to become a software developer. I currently use <strong>HTML5</strong>, <strong>CSS3</strong>, <strong>JavaScript</strong> and other JS frameworks like <strong>Bootstrap</strong>, <strong>JQuery</strong>, <strong>AngularJS</strong>, <strong>ExpressJS</strong>, and <strong>NodeJS</strong>. I also have experience with <strong>MongoDB</strong>, and <strong>T-SQL</strong>. What interests me the most about the JavaScript language is that it allows you to develop front and back-end applications all using one language. I find the MEAN stack, as they call it, practical due to the fact that you are not flipping between different languages. 
Not to mention its leverage of HTTP for scalability, availability, and versatility. What I mean by this is that you can develop robust applications with next to no footprint, readily available wherever there is an internet connection and a web browser. To me, something about that seems powerful. </p> </div> </div> <div class=row> <div class=container well portfolio id=3> <h2 class= text-center title-text>Portfolio</h2> <div class=row> <div class=col-md-4> <a href=http://codepen.io/brandon-gottschling/full/XmLvmo/ class=thumbnail target=_blank> <img src=http://i1382.photobucket.com/albums/ah271/Brandon_Gottschling/thumbnail1_zpsdbbhlko6.png alt= class=img-thumbnail> <div class=caption> <p>Quote-O-Matic</p> </div> </a> </div> <div class=col-md-4> <a href=# class=thumbnail> <img src=http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg alt= class=img-thumbnail> <div class=caption> <p>Project #2</p> </div> </a> </div> <div class=col-md-4> <a href=# class=thumbnail> <img src=http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg alt= class=img-thumbnail> <div class=caption> <p>Project #3</p> </div> </a> </div> <div class=col-md-4> <a href=# class=thumbnail> <img src=http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg alt= class=img-thumbnail> <div class=caption> <p>Project #4</p> </div> </a> </div> <div class=col-md-4> <a href=# class=thumbnail> <img src=http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg alt= class=img-thumbnail> <div class=caption> <p>Project #5</p> </div> </a> </div> <div class=col-md-4> <a href=# class=thumbnail> <img src=http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg alt= class=img-thumbnail> <div class=caption> <p>Porject #6</p> </div> </a> </div> </div> </div> <div class=row> <div class=container well contact id=4> <div class= title-text text-center> <h2>Contact Me</h2> <h4>Let My Passion Be Your Product</h4> </div> <div class=row social_buttons> <div class=col-sm-offset-1 col-md-2 text-center linkedin> <a href=https://www.linkedin.com/in/bgottschling class=btn btn-default btn-lg center-block role=button target=_blank><i class=fa fa-linkedin></i> LinkedIn</a> </div> <div class=col-md-2 text-center> <a href=https://github.com/bgottschling class=btn btn-default btn-lg center-block role=button target=_blank><i class=fa fa-github></i> Github</a> </div> <div class=col-md-3 text-center> <a href=http://www.freecodecamp.com/bgottschling class=btn btn-default btn-lg center-block role=button target=_blank><i class=fa fa-fire></i> freeCodeCamp</a> </div> <div class=col-md-2 text-ceneter> <a href=http://codepen.io/brandon-gottschling class=btn btn-default btn-lg center-block role=button target=_blank><i class=fa fa-codepen></i> Codepen</a> </div> </div> </div> </div> <div class=footer> <div class=container> <p class=>Copyright Brandon Gottschling 2015. 
All Rights Reserved</p> </div> </div> </div> <script src='http://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js'></script> <script src='http://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js'></script> <script src=js/index.js></script> </body></html>CSS:body { background: #A9E7F8;}.image-border { border-radius: 50% 5% 50% 5%; height: 15%; width: 15%; margin: 0 auto;}.about { background: #A8FBAD; font-size: 15px; height: 100%;}.portfolio { background: #FFD5AA;}.contact { background: #B2B9FA;}.footer { color: #FFFFFF;}.img-thumbnail { max-height: 346px; max-width: 200px;}.linkedin { margin-left: 12%;}.title-text { margin-bottom: 3%; }JS:$(document).ready( $(.navbar-right li).hover( function() { if (!$(this).hasClass('animated')) { $(this).dequeue().stop().animate({ width: 120px }); } }, function() { $(this).addClass('animated').animate({ width: 103px }, normal, linear, function() { $(this).removeClass('animated').dequeue(); } ); } ), $(#home).hover( function() { $(.home).addClass(animated bounce); }, function() { $(.home).removeClass(animated bounce); }), $(#about).hover( function() { $(.about).addClass(animated bounce); }, function() { $(.about).removeClass(animated bounce); }), $(#portfolio).hover( function() { $(.portfolio).addClass(animated bounce); }, function() { $(.portfolio).removeClass(animated bounce); }), $(#contact).hover( function() { $(.contact).addClass(animated bounce); }, function() { $(.contact).removeClass(animated bounce); }));
Page for personal portfolio animations
javascript;jquery;css;html5
With jQuery it's usually faster to not use the shorthand methods for event binding. There should be a performance increase if you change your hover methods to something like the following:

$("#contact")
    .on("mouseenter", function () {
        $(".contact").addClass("animated bounce");
    })
    .on("mouseleave", function () {
        $(".contact").removeClass("animated bounce");
    })

I would also try to avoid jQuery animations. Alternatives might be GSAP or velocity.js (there are many others). Also if you used one of them you might not need jQuery ;)

EDIT
You should also move the .row containing your Contact Me out of its parent (also .row) so they are on the same level. At the moment it's the reason your page has a horizontal overflow.

EDIT 2
Nice, happy to help. A further improvement would be to replace

$("#" + $(this).attr("id"))

with

$("#" + this.id)

(the same thing goes for the class selector in the mouseleave). If you use the same jQuery object several times it is best to reference it in a variable and use that. It's faster than creating the object each time.

EDIT 3
An even greater improvement would be to replace

$("#" + $(this).attr("id"))

with

$(this)

(I had to laugh when I realized it ;)
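Putting the edits together, one way the four hover bindings from the page could collapse into a single loop (a sketch; it assumes the nav item ids and section class names match, as they do in the posted HTML):

["home", "about", "portfolio", "contact"].forEach(function (id) {
    $("#" + id)
        .on("mouseenter", function () { $("." + this.id).addClass("animated bounce"); })
        .on("mouseleave", function () { $("." + this.id).removeClass("animated bounce"); });
});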
_unix.62227
I would like to use my headset's microphone both on my Linux laptop and a Windows 7 PC simultaneously, for different VoIP applications. I'm currently running Ubuntu with PulseAudio on the laptop to which the headset is connected, and I've heard that there are Windows implementations of PulseAudio. Therefore it should theoretically be possible to make the microphone available over my LAN as a PCM stream. I'm asking now if someone has more detailed insights into whether this is actually doable with current software, or opinions from people with more knowledge about the PulseAudio system on whether further investigation is likely to pay off. Ideas on doing it in any way without PulseAudio are also acceptable.
Making a Linux audio recording device available on a Windows 7 PC over network
networking;audio;windows;pulseaudio
null
_unix.252795
Is it possible to perform a visudo sanity check for a file in the context of the other files included from /etc/sudoers.d?

Scenario: I want to add a new file to /etc/sudoers.d. The file itself is correct and it passes the visudo -c parser. It does, however, contain a Cmnd_Alias line which conflicts with another file in /etc/sudoers.d. If moved to /etc/sudoers.d it would break the sudo command with an Alias '<name>' already defined near line error.

Question: Is there any method I could employ to check whether the new file would break sudo after placing it in sudoers.d? Or is there any method to make sudo ignore/stop processing included files if an error is encountered?
Visudo sanity check for the whole ecosystem of included files?
sudo
You might try checking a concatenation of /etc/sudoers and the proposed new file:

sudo bash -c 'visudo -cf <(cat /etc/sudoers /path/to/to-be-added-file)'

Since visudo will parse the #includedir in /etc/sudoers, all the relevant files in sudoers.d would be checked as well.

$ sudo cat /etc/sudoers.d/foo
Cmnd_Alias FOO = /bin/bar
$ sudo cat /tmp/bar
Cmnd_Alias FOO = /bin/foo
$ sudo bash -c 'visudo -cf <(cat /etc/sudoers /tmp/bar)'
>>> /dev/fd/63: Alias `FOO' already defined near line 31 <<<
parse error in /dev/fd/63 near line 31
_unix.387818
I'm running lvcreate --size $snapshot_size --snapshot --name mdb-snap-00 /dev/vg0/mongodb to create a snapshot of our mongo partition. $snapshot_size is 362M. However, after creating it, lsblk gives me

nvme0n1                   259:0  0 442.4G 0 disk
  vg0-mongodb-real        252:1  0 221.2G 0 lvm
    vg0-mongodb           252:0  0 221.2G 0 lvm  /mnt/data
    vg0-mdb--snap--00     252:3  0 221.2G 0 lvm
  vg0-mdb--snap--00-cow   252:2  0   364M 0 lvm
    vg0-mdb--snap--00     252:3  0 221.2G 0 lvm

This is an issue for me, because I'm trying to dd the snapshot, pipe it to gzip and pipe that to an AWS bucket, but it times out every time. I just learnt that this happens because it's trying to do the whole 221G disk, even though the data on it and the specified snapshot size are only 362M.

EDIT
root@ip-10-0-97-77:~# lvs
  LV          VG  Attr       LSize   Pool Origin  Data%  Meta% Move Log Cpy%Sync Convert
  mdb-snap-00 vg0 swi-a-s--- 364.00m      mongodb 0.48
  mongodb     vg0 owi-aos--- 221.15g
lvcreate snapshot creates a larger snapshot than the snapshot size
rhel;lvm
Logical volumes are multiples of the PE (physical extent) size, which is by default 4 MiB. 362 cannot be divided by 4, thus LVM rounds the size up to 364. I don't think it is easily possible to change the PE size. You could create a 362 MiB file, though, put a loop device onto it and configure the snapshot manually (dmsetup), having it point there. But that is probably something for experienced users.

You are reading more than the intended 362/364 MiB because you are reading from the wrong device. The snapshot device is a copy of the original one, thus it has the same size. You have to read from the COW device directly.
_unix.231975
I have some device connected to a serial port. Actually, it is an Arduino-based temperature sensor. I wish to write a script which will connect to the serial port, send a command to the device, receive its answer, print it to stdout and exit. What is the correct way to do this?

Usually, when accessing a local program, it is OK to redirect its output. For example, this is how I read CPU temperature:

datetime=$(date +%Y%m%d%H%M%S)
cputemp=$(sensors atk0110-acpi-0 | sed "s/CPU Temperature:[^0-9]*\([0-9\.]\+\).*/\1/;tx;d;:x")
echo "$datetime\t$cputemp"

Unfortunately, $() relies on an explicit program end, which is not the case with serial communication. A serial server is always online and has no explicit sessions. Of course, I can check for line feeds. But is this the correct approach? Maybe I should write my Arduino program so that it sends Ctrl-Z after each response or something?
Connect to serial, issue a command, read result, capture it and exit
serial port;serial console
As you say, you cannot read end-of-file from a serial port. (Ctrl-Z is a Microsoft thing). So usually, you read until you have the wanted number of characters, or until you find a delimiter like newline that signals the end of the data. For example, I have a USB serial port with output connected back to input, so any writes to the device simply come straight back. The following script can retrieve what the device sends:

#!/bin/bash
tty=/dev/ttyUSB0
exec 4<$tty 5>$tty
stty -F $tty 9600 -echo
echo abcdef >&5
read reply <&4
echo reply is $reply

The script connects the serial device as file descriptor 4 for input, 5 for output, sets the speed and stops the echo you get for a tty, then writes abcdef and newline to the device. Each character will be sent back immediately, but the kernel driver will buffer up some input, so I don't need to start to read from the device before doing the write. The read ends by default when it sees a newline, and saves it in the variable reply, which is then echoed to stdout. You can put this script inside a v=$() type usage.

If your serial device data does not end with a newline, you can specify a different delimiter to the bash read with -d. Or if the reply is of a constant length, you can specify a length with -n. If you have binary data, you probably should add raw to the stty command to stop any special treatment of input.
_softwareengineering.208257
I have the following line of code in my application:

return "Service is alive since: " + TimeUnit.MILLISECONDS.toMinutes(mxBean.getUptime()) + " minutes";

It uses the following package:

import java.util.concurrent.TimeUnit;

My application is a web application. Does it mean that I have something logically wrong if I use something from the concurrent package in a web application?
Utilizing a Java Concurrent Utility from a Web App
java;design;packages
null
_codereview.134224
A few years back I interviewed with a company for a JavaScript position. After a couple of warm-up challenges I was presented with this: Please write a function that calls back with true if all promises have resolved successfully, or false if at least one promise has rejected.

Given 'Promise' API:

promise.then(
  function resolve() { /* called when some async thing was successful */ },
  function reject() { /* the async thing failed */ }
);

and also was given the following function structure and mocking code:

function all (promises, callback) {
  // TODO call back with `true` if all promises resolve(), or `false` if a promise has reject()ed
  promises.forEach(function (promise) {
    promise.then(
      function resolve() {},
      function reject() {}
    );
  });
}

// Some mocking code (NO NEED TO READ THIS):
function P() {
  return {
    then: function (resolve, reject) {
      setTimeout(function() {
        (5*Math.random()|0) ? resolve() : reject()
      }, Math.random()*1000);
    }
  };
}

var promises = [1,2,3,4,5].map(P);

all(promises, function (success) {
  console.log('The promises have ' + ( success ? '' : 'not ' ) + 'all resolved!' )
});

The gist of it: I had to write an almost Promise.all() method that would check if all async functions finished and how they finished (resolve/reject). I didn't finish the challenge in the allocated time frame, so I failed (and years later, looking at the code I've written to finally solve it, if I was the interviewer, I would have failed me even if I was done in time...)

My current implementation
A few days ago I found the challenge buried on the hard drive and decided to give it a go (I timed myself to finish it in time). This does not re-include the mocking code from above, but it is included in the jsbin below.

function all (promises, callback) {
  const promisesStatus = [];
  const allPromisesChecked = (promisesArray = promises, promisesStatusArray = promisesStatus) =>
    promisesStatusArray.length === promisesArray.length;
  const allPromisesPassed = (promisesArray = promisesStatus) => {
    if (promisesArray.filter(value => !value).length === 0) {
      return true;
    }
    return false;
  };
  promises.forEach(function (promise) {
    promise.then(
      function resolve() {
        promisesStatus.push(true);
        if (allPromisesChecked() && allPromisesPassed()) {
          callback(true);
        }
      },
      function reject() {
        promisesStatus.push(false);
        if (allPromisesChecked() && !allPromisesPassed()) {
          callback(false);
        }
      }
    );
  });
};

JSBin: https://jsbin.com/dajini/edit?js,console

Questions
Keeping in mind that this is to be done under the clock and under the interviewer's eyes (pair programming), hence under stress...
Implementation - Leaving aside minor performance optimizations, could I have done it better? Another way that I am unaware of?
Time - How long does it take you? (ballpark it) - Originally, I had to code two functions that dealt with string manipulations + this one in under 1 hour
Pseudo Promise.all() polyfill
javascript;interview questions;promise
Implementation
The biggest thing that jumped out at me is your all function takes a callback instead of returning a Promise like Promise.all would do. (EDIT: it looks like the interview asked that of you, so that makes sense then. See my edit below)
When you're handling the reject branch of each promise (in the .then call), there's no reason to do anything fancy. As soon as you encounter an error, you can immediately reject the outer promise (or in your case, callback with an Error).
allPromisesChecked and allPromisesPassed get a little verbose, but as long as it works it kinda doesn't matter.
This interested me so I took a shot at implementing it. Here's my code

// Promise.all polyfill
function all(promises) {
  return new Promise(function(resolve, reject) {
    var count = promises.length
    var result = []
    var checkDone = function() { if (--count === 0) resolve(result) }
    promises.forEach(function(p, i) {
      p.then(function(x) { result[i] = x }, reject).then(checkDone)
    })
  })
}

// delay helper for creating promises that resolve after ms milliseconds
function delay(ms, value) {
  return new Promise(function(pass) { setTimeout(pass, ms, value) })
}

// basic boilerplate to check an answer
function checkAnswer(promises) {
  all(promises).then(
    function(xs) { console.log(xs) },
    function(err) { console.log(err.message) }
  )
}

// resolved promises wait for one another but ensure order is kept
checkAnswer([
  delay(100, 'a'),
  delay(200, 'b'),
  delay(50, 'c'),
  delay(1000, 'd')
])

// check that error rejects asap
checkAnswer([
  delay(100, 'a'),
  delay(200, 'b'),
  Promise.reject(Error('bad things happened')),
  delay(50, 'c'),
  delay(1000, 'd')
])

Time
This took me about 10 minutes. If someone already has experience with Promises, I would expect they could come up with a working solution in less than 30 minutes. If you've never seen Promises before, maybe 60 minutes?

EDIT
Re-reading the question, I see that my original answer is an actual polyfill of Promise.all, not what the interview asked of you. Here's a dramatically simplified function that is essentially useless except for answering the interview question.

// Promise.all wannabe
// ([Promise], (bool -> void)) -> void
function all(promises, callback) {
  var count = promises.length
  promises.forEach(function(p, i) {
    p.then(
      function() { if (--count === 0) callback(true) },
      function() { callback(false) }
    )
  })
}

// delay helper for creating promises that resolve after ms milliseconds
function delay(ms, value) {
  return new Promise(function(pass) { setTimeout(pass, ms, value) })
}

// basic boilerplate to check an answer
function checkAnswer(label, promises) {
  all(promises, function(result) { console.log(label, result) })
}

// resolved promises wait for one another but ensure order is kept
checkAnswer('example1', [
  delay(100, 'a'),
  delay(200, 'b'),
  delay(50, 'c'),
  delay(1000, 'd')
])

// check that error rejects asap
checkAnswer('example2', [
  delay(100, 'a'),
  delay(200, 'b'),
  Promise.reject(Error('bad things happened')),
  delay(50, 'c'),
  delay(1000, 'd')
])

Implementation
So in hindsight, I do have some more critique to offer. Considering the function only has to return true or false, there's no reason to make it complex. Basically you just have to count the resolve branches until it reaches the count of promises provided as input. If a reject happens, you can immediately return false. There's no need for any other code.
_unix.327803
I always get a message: IMAP Authentication canceled. And then: Retrying plain authentication after [ALERT] application-specific. When I look at my Google security settings I can't find any option to create an application-specific password to associate with Alpine on my laptop. https://productforums.google.com/forum/#!topic/gmail/bSQZVxRIjb0
pine (Alpine) with GMail 2-step Authentication enabled?
email;authentication;imap;alpine
Do as recommended by Andreatsh in the comments. Go to http://myaccount.google.com, then Sign-in & security -> Signing in to Google -> App password. Once you create the one-time password you will also have to run:

touch ~/.pine-passfile

This makes it so that when you enter the Gmail folder in Alpine you will be asked if you want to save the password.
_cstheory.10386
Assume you are given a matrix$$ X= \begin{bmatrix} x_1^1 & x_1^2 & \dots & x_1^m \\ x_2^1 & x_2^2 & \dots & x_2^m \\ \vdots & \vdots & \ddots & \vdots \\ x_n^1 & x_n^2 & \dots & x_n^m \end{bmatrix} $$such that all $x_i^j \in \big\{0,1\big\}$, $\vee$ is the logical OR, and:$$ \forall i,j, \quad x_{i}^j = \begin{cases} x_{i+1}^{j}\vee x_{i+1}^{j-1} & \text{if }\:j\neq1,\\ x_{i+1}^1\vee x_{i}^m & \text{otherwise}. \end{cases} $$This is quite similar to the Pascal triangle with binomials, except here we are dealing with $0/1$ variables and regular addition is replaced by the logical OR.The problem now is to minimize: $$S=\sum_{i,j} x_i^j,$$ where the trivial case $S=0$ with all $x_i^j=0$ is not an option. The sum is the one in the integers: $0<S\leq n m$.EDIT: What can we say about the case where the $\vee$ operator is no longer the logical OR, but is defined by: $0\vee0=0$, $1\vee0=0\vee1=1$ and $1\vee1\in\{0,1\}$.Does this problem reduces to another one? Maybe there are references that I am not aware of. Thanks for your help.
Minimization on a binary matrix
cc.complexity theory;reference request;optimization
null
_unix.337662
Vim-plug installation on Ubuntu 16.10I'm a new user of Ubuntu, and I'd like a little bit of help with plugins. According to this website, I installed vim-plug with this command: curl -fLo ~/.vim/autoload/plug.vim --create-dirs \ https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim I have also created the directory ~/.vim/plugged as suggested. So far I know I have to install my plugins inside (in ~/.vimrc file): call plug#begin('~/.vim/plugged')call plug#end()It is indicated I have to make the content of Download plug.vim available inside the 'autoload' directory. Question 1: What is the 'autoload' directory here?In fact, I want to install vim-plug to install several plugins like nerdtree. The way I understand the procedure is to go over the website https://github.com/scrooloose/nerdtree, and take only the part scrooloose/nerdtree to install the plugin : call plug#begin('~/.vim/plugged')Plug 'scrooloose/nerdtree'call plug#end()then execute :PlugInstall.Question 2 : Where do I have an issue (if there are any)?
Managing plugins in Ubuntu
ubuntu;vim;plugin
null
_unix.231832
ShellCheck show the following error for this line of code:printf '%d' $(($(< $1) + 1)) > $1Make sure not to read and write the same file in the same pipelineIs this really a problem? Could reading and writing the same file result in a race condition?
Shellcheck complains that I should not to read and write the same file in the same pipeline
bash;shell;io redirection;shellcheck
Yes, reading and writing the same file in parallel could result in a race condition: an input and an output redirection for the same file on the same command would truncate the file before it is read. But no, that isn't what's happening here; it's a false positive in ShellCheck. In this command the file is read by the $(< $1) substitution inside the arithmetic expansion, and all expansions (arithmetic, variable, command, as well as splitting and globbing) are performed before redirections are set up. So by the time > $1 opens (and truncates) the file, the reading is already finished.
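To make that ordering concrete, here is a tiny sketch of the same principle in Python rather than shell (the file name and counter value are made up for the illustration): opening a file for writing truncates it immediately, so reading it afterwards loses the data, while reading the value first, as the arithmetic expansion effectively does, is safe.

import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "counter")
with open(path, "w") as f:
    f.write("41")

# Unsafe ordering, the analogue of `cmd < file > file`:
# opening for writing truncates the file before anything is read from it.
with open(path, "w") as out:
    with open(path) as inp:
        old = inp.read()            # reads "" because the truncation already happened
    out.write(str(int(old or "0") + 1))
print(open(path).read())            # prints 1, the previous value 41 is lost

# Safe ordering, the analogue of the expansion happening before > $1 is opened:
with open(path, "w") as f:
    f.write("41")
old = open(path).read()             # the read completes first
with open(path, "w") as out:
    out.write(str(int(old) + 1))
print(open(path).read())            # prints 42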
_cs.1018
The main idea of k-Nearest-Neighbours is to take into account the $k$ nearest points and decide the classification of the data by majority vote. If so, then it should not have problems with higher-dimensional data, because methods like locality-sensitive hashing can efficiently find nearest neighbours. In addition, feature selection with Bayesian networks can reduce the dimension of the data and make learning easier. However, this review paper by John Lafferty in statistical learning points out that non-parametric learning in high-dimensional feature spaces is still a challenge and unsolved. What is going wrong?
Non-Parametric Methods Like K-Nearest-Neighbours in High Dimensional Feature Space
machine learning;artificial intelligence
This problem is known as the curse of dimensionality. Basically, as you increase the number of dimensions, $d$, points in the space generally tend to become far from all other points. This makes partitioning the space (such as is necessary for classification or clustering) very difficult.You can see this for yourself very easily. I generated $50$ random $d$-dimensional points in the unit hypercube at 20 evenly selected values of $d$ from $1..1000$. For each value of $d$ I computed the distance from the first point to all others and took the average of these distances. Plotting this, we can see that average distance is increasing with dimensionality even though the space in which we are generating the points in each dimension remains the same.Average distance vs. dimensionality
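The experiment above is easy to reproduce. Here is a minimal sketch of it, assuming NumPy and the same setup as described (50 random points per dimensionality, average distance from the first point to the others):

import numpy as np

rng = np.random.default_rng(0)
for d in np.linspace(1, 1000, 20, dtype=int):
    points = rng.random((50, d))                     # 50 random points in the d-dimensional unit hypercube
    dists = np.linalg.norm(points[1:] - points[0], axis=1)
    print(d, round(dists.mean(), 2))                 # the average distance keeps growing with d (roughly like sqrt(d/6))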
_softwareengineering.311472
Let me sketch the situation:I have multiple users, with certain properties (2 enums)For each user I need to fetch data, for some with some basic filtering, for some extended filtering (= basic filtering + extra filtering). I'd like to do that not separate for every user, but I'd rather group the users and do it in two queries.For every user, I need to filter that data depending on the values of the enums. I will always need to do GetFirstData() (method depending on first enum), GetLastData() (method depending on second enum), CheckData() (depending on both enums). I've been looking at the Strategy Pattern, but it seems that's more designed to implement one behavior. I want to combine my behaviors to avoid making the combinations between all GetFirstData and GetLastData, is there any pattern to do this better? I've been thinking on just using 2 delegates and assign the corresponding methods depending on the values of the enums. Would this be the cleanest way?Little example of what I mean:public class User{ public Enum1 FirstEnum {get; set;} public Enum2 SecondEnum {get; set;} ...}public IEnumerable<Data> Filter(int userId, Expression extraFilter){ var data = GetData(userId); if(extraFilter != null) data = data.Where(extraFilter); return data;}public Data GetFirstData(IEnumerable<Data> data);public Data GetLastData(IEnumerable<Data> data);public bool CheckData(IEnumerable<Data> data);My endresult could do something like this:public class EndResult{ public Data FirstResult {get; set;} public Data SecondResult {get; set;} public Func<IEnumerable<Data>,Data> GetFirstData {get; set;} public Func<IEnumerable<Data>,Data> GetLastData {get; set;} public bool ExtendeFiltering {get; set;} public EndResult(User user) { switch(user.enum1) { case: GetFirstData = specificFunction; ExtendedFiltering = true; ... } //Second for GetLastData; } public void Execute() { GetData(); CheckData(); GetFirstData(); GetLastData(); }}Edit: For future readers who are curious, I didn't use delegates (not directly at least). I created 2 interface IFirst and ILast with a corresponding method. In my static create method defined on my processor class, I do the logic to create an instance of those interfaces based on certain conditions. The reason I left the path of directly using delegates is because it turned out I needed more parameters than just User for some of them. So I resorted to different implementations based on the parameters I need in the constructor of those classes.
Strategy Pattern not sufficient for my problem?
c#;design patterns;delegates
Using delegates in the suggested manner is a specific form of the strategy pattern, which can always be used if the strategy objects are as simple as a function (see here, for example). However, the essence of your code is that you have an object of type EndResult which encapsulates how a set of operations interact (whether the operations are given by strategy objects or strategy delegates does not matter). This is called the Mediator pattern; the Mediator here provides what is called the context in this description of the strategy pattern. Deciding between the different strategies will typically be the task of a factory method. Whether you make that method part of your mediator (as in your example, in the constructor of EndResult) or prefer to delegate this to another class is something you need to decide for yourself. It depends on things like the overall size and structure of that class and its surroundings, and how much separation of concerns you really need. I would typically start with a small solution (both in one class) and refactor as soon as the class starts getting too large.
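As a rough illustration of that arrangement (strategies as plain functions, a factory method that picks them, and a mediator that coordinates the calls), here is a small sketch; it is in Python rather than C#, and the enum values and strategy names are invented for the example:

from dataclasses import dataclass
from typing import Callable, Iterable

# Two interchangeable strategies, kept as plain functions (delegates).
def first_by_amount(data: Iterable[int]) -> int:
    return max(data)

def first_by_date(data: Iterable[int]) -> int:
    return min(data)

def pick_first_strategy(enum1: str) -> Callable[[Iterable[int]], int]:
    # Factory method: maps the enum value to a strategy once, up front.
    return {"AMOUNT": first_by_amount, "DATE": first_by_date}[enum1]

@dataclass
class EndResult:
    # Mediator: holds the chosen strategies and coordinates how they are applied.
    get_first: Callable[[Iterable[int]], int]
    extended_filtering: bool

    def execute(self, data: Iterable[int]) -> int:
        rows = [x for x in data if x >= 0] if self.extended_filtering else list(data)
        return self.get_first(rows)

# Usage: the factory decides once, execute() just runs the chosen combination.
result = EndResult(get_first=pick_first_strategy("AMOUNT"), extended_filtering=True)
print(result.execute([3, -1, 7, 2]))   # 7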
_softwareengineering.103178
Very commonly we have feature requests for fields that only one customer wants. This, at best, clutters the application's code. Often when we look in their database a few months after adding the fields, we can see that they are not actually even using the extra fields. Also, it's quite an old application so adding a single field requires multiple code changes, changing reports, and making sure that it doesn't affect other customers who do not need to see the field.How can we make sure that a customer actually needs these feature requests? How do we politely say you don't really need that?Currently we are beginning to charge for certain feature requests. (Previously, feature requests were free usually) Is there anything else we can do?
How to handle can you add just a few more fields type of requests from customers?
customer relations;feature requests
null
_unix.367197
I've looked into sorting the result of du before and only ever seen suggestions to sort the result such as du | sort.This is acceptable for most uses but it is specifically unhelpful when listing multiple directories with hardlinks. For example I have an incremental backup:If du doubly count's hard links the content looks like this# du -hl --max-depth 1 /backup/saturn/ | sort -k 23.2G /backup/saturn/456M /backup/saturn/2017-05-19458M /backup/saturn/2017-05-20461M /backup/saturn/2017-05-21464M /backup/saturn/2017-05-22462M /backup/saturn/2017-05-23462M /backup/saturn/2017-05-24465M /backup/saturn/2017-05-25But these results aren't true because each dated dir shares a lot of hard links to other dir's files.... It's an incremental backup.But the more meaningful result looks like this.# du -h --max-depth 1 /backup/saturn/ | sort -k 2666M /backup/saturn/29M /backup/saturn/2017-05-1953M /backup/saturn/2017-05-2025M /backup/saturn/2017-05-2140M /backup/saturn/2017-05-22462M /backup/saturn/2017-05-2314M /backup/saturn/2017-05-2446M /backup/saturn/2017-05-25This is a little nonsensical because it has evaluated the dirs in an arbitrary order and so gives much less meaningful information on how much has changed from one date to the next.So I'm looking for a way to control the order du evaluates directories.
How to change du recursion order
shell script;disk usage
null
_webmaster.4095
Today I saw my website's PageRank go down from 2 to 0. It happened once before as well. I don't remember spamming anywhere and I don't have too many posts on my forum. I heard they are very strict about forum websites and frequently try to keep the rank down. Has anyone of you experienced this?
Why does Google remove PageRank sometimes?
google;pagerank
null
_codereview.83293
I'd like to know if I translated a piece of code correctly from C++ to Delphi. It looks like it is working, but I have a feeling that I'm reading and writing into memory that I'm not supposed to using Delphi.Given C++ code:struct tile_map{ int32 CountX; int32 CountY; uint32 *Tiles;};inline uint32GetTileValueUnchecked(tile_map *TileMap, int32 TileX, int32 TileY){ uint32 TileMapValue = TileMap->Tiles[TileY*TileMap->CountX + TileX]; return(TileMapValue);}uint32 Tiles00[9][17] = { {1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1}, {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, };// More tile map declarations ... // uint32 Tiles01[9][17] = ...// uint32 Tiles10[9][17] = ...// uint32 Tiles11[9][17] = ... tile_map TileMaps[2][2]; TileMaps[0][0].CountX = 17; TileMaps[0][0].CountY = 9; TileMaps[0][0].Tiles = (uint32 *)Tiles00; TileMaps[0][1] = TileMaps[0][0]; TileMaps[0][1].Tiles = (uint32 *)Tiles01; TileMaps[1][0] = TileMaps[0][0]; TileMaps[1][0].Tiles = (uint32 *)Tiles10; TileMaps[1][1] = TileMaps[0][0]; TileMaps[1][1].Tiles = (uint32 *)Tiles11;// Usage int32 PlayerTileX = 2; int32 PlayerTileY = 2; uint32 TileMapValue = GetTileValueUnchecked(&TileMap[1][1], PlayerTileX, PlayerTileY);Delphi translation:program Project1;{$APPTYPE CONSOLE}type Puint32 = ^uint32; tile_map = record CountX : int32; CountY : int32; Tiles : Puint32; end; Ptile_map = ^tile_map;{$POINTERMATH ON} function GetTileValueUnchecked(TileMap : Ptile_map; TileX, TileY : int32) : uint32; inline;begin result := TileMap^.Tiles[TileY * TileMap^.CountX + TileX];end;const //in the future these will be read from file, so const for now Tiles00: array [0..8, 0..16] of uint32 = ( (1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1), (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1), (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1), (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1), (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1), (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1), (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1), (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) ); // More tile map declarations ... //Tiles01: array [0..8, 0..16] of uint32 = ... //Tiles10: array [0..8, 0..16] of uint32 = ... //Tiles11: array [0..8, 0..16] of uint32 = ...var TileMaps : array [0..1, 0..1] of tile_map; PlayerTileX, PlayerTileY : int32; TileMapValue : uint32;begin TileMaps[0][0].CountX := 17; TileMaps[0][0].CountY := 9; TileMaps[0][0].Tiles := Addr(Tiles00); TileMaps[0][1] := TileMaps[0][0]; TileMaps[0][1].Tiles := Addr(Tiles01); TileMaps[1][0] := TileMaps[0][0]; TileMaps[1][0].Tiles := Addr(Tiles10); TileMaps[1][1] := TileMaps[0][0]; TileMaps[1][1].Tiles := Addr(Tiles11); // Usage PlayerTileX := 2; PlayerTileY := 2; TileMapValue = GetTileValueUnchecked(@TileMaps[1][1], PlayerTileX, PlayerTileY);end.
Translating array pointer access from C++ to Delphi
c++;matrix;delphi
null
_webmaster.85388
We have a GoDaddy account and are using a redirect as a placeholder for our main site as dictated by management. They want dev and staging to access our application but everything else to go to the redirect.We want to have access to our application via a staging subdomain. I can't seem to get this thing to work. Between the GoDaddy Redirect, the DNS Zone file, the sites-enabled (on our server) and the hosts files, I am stuck. We're running on an AWS AMI running Ubuntu 14.04.3 LTS.Here's what I'm trying to do (text and numbers have been changed to protect the innocent):If you type in:http://dev.phishmenot.com -> The site in our development directory on our server.http://staging.phismenot.com -> The site in our staging directory on our server.http://[anything else].phishmenot.com -> (REDIRECT 301: KickoffLabs website)Our configuration is this:Then, we have the actual zone file:My hosts file (on our AWS server):127.0.0.1 localhost67.4.67.45 dev.phishmenot.com67.4.67.45 staging.phishmenot.com# The following lines are desirable for IPv6 capable hosts::1 ip6-localhost ip6-loopbackfe00::0 ip6-localnetff00::0 ip6-mcastprefixff02::1 ip6-allnodesff02::2 ip6-allroutersff02::3 ip6-allhostsAnd here's my staging.phishmenot.com.conf file, sitting in the sites-available directory, with the symlink appropriately sitting in the sites-enabled directory:<VirtualHost staging.phishmenot.com:80> ServerAdmin admin@phishmenot.comServerName phishmenot.com ServerAlias staging.phishmenot.comDocumentRoot /var/www/staging/current ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined</VirtualHost>We're using KickoffLabs to do some of our site promotion and I followed their directions for setup. What is strange is that I DID get the dev site to work. I just can't remember how I did it. I think I just kept tweaking things until it worked and then had to get back to development.I am most assuredly NOT a Unix or Server expert. Call me an informed amateur. I would like to get this configured in a standard way so when we get someone who DOES know what they're doing, it will be remotely recognizable.UPDATE:I tried the changes you suggested and I now can not reach my www.phishmenot.com site OR the www.kickofflabs.com site. Here's my .htaccess file; real names and addresses edited out, but consistent with this question:RewriteEngine onRewriteBase /RewriteCond %{HTTP_HOST} !^(dev|staging).phishmenot.com$RewriteRule ^(.*) http://proxy.kickofflabs.com/$1 [QSA,L,R=301]# Hide the application and system directories by redirecting the request to index.phpRewriteRule ^(application|system|\.svn) index.php/$1 [L]RewriteCond %{REQUEST_FILENAME} !-fRewriteCond %{REQUEST_FILENAME} !-dRewriteRule ^(.*)$ index.php/$1 [QSA,L]Header set Expires Thu, 19 Nov 1981 08:52:00 GMHeader set Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0Header set Pragma no-cachePer the suggestion, I removed the redirect from GoDaddy, hopefully handling all through the .htaccess file. I also removed the CNAME record for www directing to the kickoff labs proxy. Now staging redirects to www.phishmenot.com which no longer addresses any server. I thought the @ in the a record was supposed to catch anything not listed as an A or CNAME record. NOTE: The additional mod_rewrite commands are supporting our application framework, CodeIgniter. If there is a conflict, I can make adjustments.
Overriding a redirect with an A record -- What am I doing wrong?
redirects;dns;godaddy;ubuntu
null
_webapps.75509
I am running a podcast and every week we do a show; however, it is not always on the same day and there are new guests each week, so I don't want to make it a regular meeting. That being said, what I really want to be able to do is say "invite the normal group" and have it invite everyone in that group so I don't forget anyone.
Google Calendar Invite a standard group
google calendar
null
_softwareengineering.170137
We are initiating a Spring + Primefaces project and the first problem we have encountered concerns storing the XHTML pages into the WEB-INF folder.When we use a faces form in a view located inside the WEB-INF folder, then the commandButton does not execute the managed bean method. <h:form id=loginForm> <p:commandButton action=#{LoginMgr.doLogin()} value=Login/> </h:form>Our bean:<bean id=LoginMgr name=LoginMgr class=com.tesipro.channelmanager.business.implemented.CMLoginManager></bean>In fact we think the problem is that with JSF, the pages are rendered using a link to the same page as the action of the form, so if the page is located in WEB-INF it is not publicly accessible.We know that having all our XHTML views in the web folder instead of WEB-INF actually solves the issue, but we would like to store that pages into WEB-INF.
Execute a Managed bean from a JSF view in WEB-INF folder
spring;jsf
Does storing it in app/WEB-INF/view folder help? We do have several actions defined in XHTMLs in app/WEB-INF/view folder. Not invoking a spring bean from there, though, but calling a web-flow action.
_softwareengineering.291207
This SDK has an interface like this:

public interface Contract {
    void update(..);
    void action(..);
    void delete(..);
}

Now, we need to change it to something like this:

public interface Contract {
    String update(..);
    String action(..);
    String delete(..);
}

The interface is implemented internally by the SDK; that is not a problem. The scope of the question is beyond the case where somebody has implemented this interface outside the SDK as well. Does this change introduce a breaking change?
Could changing the return type from void to string introduce breaking changes?
java;refactoring;interfaces
null
_softwareengineering.314118
I'm just getting started with the actor pattern, coming from the UnitOfWork pattern. Let's say I want to use the actor pattern for an Employee Management System to mark whether employees are present or absent. The problem I'm facing is:

Should I create a class, say EmployeesActor, instantiate it and use it as a single actor to manage all the employees? The messages would then specify which particular employee is to be marked present.

OR

Should I create a base class EmployeeActor and then create an object of this class for each employee in the system? This way messages would go directly to the corresponding EmployeeActor object.
In Actor Pattern (AKKA.Net) Should actors be classes OR Objects?
.net;actor model;akka
I would suggest you create something like an EmployeesManager. As a manager, this one class in particular does not have state, but is used to manage other actors that can have state. It will receive all messages. Once the manager receives a command that a particular employee be marked as present, it will get the actor reference by providing the ID, similar to a GetByID call in DDD (assuming you are persisting the same), and do an actor.Tell() to send the command to that actor reference. (It's been a year since you posted this question. I would love to see a bigger community so newbies like me/us can get more insight.)
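A minimal sketch of that routing idea, using plain Python objects rather than Akka.NET actors (the class names and the MarkPresent message are made up for the illustration):

class EmployeeActor:
    # One stateful "actor" per employee.
    def __init__(self, employee_id):
        self.employee_id = employee_id
        self.present = False

    def tell(self, message):
        if message == "MarkPresent":
            self.present = True
        elif message == "MarkAbsent":
            self.present = False

class EmployeesManager:
    # Holds no business state of its own; it only keeps references to the
    # per-employee actors, looks them up by ID, and forwards the command.
    def __init__(self):
        self._actors = {}

    def handle(self, employee_id, message):
        actor = self._actors.setdefault(employee_id, EmployeeActor(employee_id))
        actor.tell(message)

manager = EmployeesManager()
manager.handle(42, "MarkPresent")
print(manager._actors[42].present)   # True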
_cs.67013
The Little Book of Semaphores (2nd ed., by Allen B. Downey, section 4.5.1) as well as Wiki (link) mention that a trivial solution (as shown below) to the 'Cigarette smokers problem' will cause a deadlock. Somehow I couldn't wrap my head around it. Please help me understand how deadlock can occur for the solution below.

P.S. We assume the agent code cannot be modified and we're free to use semaphores and other variables as needed.

Code sample (from the book):

     Agent A             Agent B             Agent C
1    agentSem.wait()     agentSem.wait()     agentSem.wait()
2    tobacco.signal()    paper.signal()      tobacco.signal()
3    paper.signal()      match.signal()      match.signal()

     Smoker I            Smoker II           Smoker III
1    tobacco.wait()      paper.wait()        tobacco.wait()
2    paper.wait()        match.wait()        match.wait()
3    agentSem.signal()   agentSem.signal()   agentSem.signal()

Assume semaphores 'tobacco', 'paper', and 'match' are initialized with zero, and 'agentSem' is initialized with one.
Why deadlock in cigarette smokers problem
concurrency;deadlocks
Agent A consumes agentSem, and produces tobacco and paper. That might make Smoker I smoke, but he is late: Smoker II already took the paper and Smoker III took the tobacco. Now all the smokers are stuck, and the agents as well.
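A small sketch of that unlucky interleaving, written with Python threading semaphores rather than the book's pseudocode; the acquire timeouts are only there so the demonstration terminates and reports who got stuck instead of hanging forever:

import threading

agentSem = threading.Semaphore(1)
tobacco = threading.Semaphore(0)
paper = threading.Semaphore(0)
match = threading.Semaphore(0)

# Agent A runs: it takes agentSem and produces one tobacco and one paper.
agentSem.acquire()
tobacco.release()
paper.release()

def smoker(name, first, second):
    # Each smoker waits for its two ingredients; acquire() with a timeout
    # stands in for a wait that would otherwise block forever.
    if not first.acquire(timeout=0.5):
        print(name, "stuck on its first wait")
        return
    if not second.acquire(timeout=0.5):
        print(name, "took one ingredient, stuck on its second wait")
        return
    agentSem.release()
    print(name, "smoked")

# The unlucky schedule: Smoker II grabs the paper, Smoker III grabs the tobacco,
# so nobody ever collects two ingredients and agentSem is never released.
threads = [
    threading.Thread(target=smoker, args=("Smoker II", paper, match)),
    threading.Thread(target=smoker, args=("Smoker III", tobacco, match)),
    threading.Thread(target=smoker, args=("Smoker I", tobacco, paper)),
]
for t in threads:
    t.start()
    t.join()   # run them one at a time to force the interleaving described above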
_unix.272563
I am configuring a firewall on my Linux Mint 17.2 to maximize security, mainly to put my mind at rest that no one will be able to do anything malicious. In gufw, there are options for allowing or blocking incoming or outgoing. By denying incoming, what kinds of communications am I blocking? In other words, what does incoming refer to? (And out of curiosity, what does outgoing refer to?) I assume that incoming is when another computer attempts to connect to you in some manner, while outgoing is just the opposite - when you attempt to connect to another computer. However, I am not sure, so I would appreciate an answer here.
What does the Incoming option refer to in ufw?
firewall;ufw
null
_webapps.57808
I have two columns - they look like the following:YESNO | AMOUNT________________N | 13N | 22Y | 13The AMOUNT column is a value output from a condition, =IF(E16=1,13,IF(E16=2,22,IF(E16=3,30)))I would like to SUBTRACT 3 from the AMOUNT column IF the YESNO column has a value of Y and I'm just not sure how to arrange this.Any suggestions? What sort of function should I be looking at to make this happen?
Check cell value then subtract based on outcome
google spreadsheets
Make a new column with the result of the new calculation, and use a simple IF command:

=IF(A2 = "Y", B2 - 3, B2)

If you don't want to make a new column, just append - IF(A2 = "Y", 3, 0) to the end of your command:

=IF(E16=1,13,IF(E16=2,22,IF(E16=3,30))) - IF(A2 = "Y", 3, 0)

You will of course have to replace A2 with whatever your YESNO column is.
_webmaster.74177
I would like it to parse standard markdown as parsed in github. I don't like the special syntax of {% highlight java %}.I understand it can parse textile and markdown but I don't get how to change the format of a single blog post to use this syntax with his incompatible with some other systems:```java some java code ```I've noticed that it kind of works, but its syntax highlighting is not as beautiful as with {% highlight java %}. I wish it to be the same.
How to have a blog post in Jekyll parse markdown code with ```java instead of {% highlight java %}
jekyll;markdown
null
_unix.239177
This is my sed command:

while ...;do
    sed -r ${counter}s/^\S+ /$line /g $in > $out;
    ....
done

Unfortunately this command isn't doing anything when called from within a bash script/loop. So I thought to check if the variables are being resolved the right way:

do echo sed -r ${counter}s/^\S+/$line/g $in > $out;

which printed this to the console:

sed -r <line number>/^\S+/<replace pattern>/g <infile> > <outfile>

When executing this very command (without the ) from the console, I get this:

sed: -e expression #1, char 8: unterminated `s' command

I guess this is because the ' are missing around the pattern. So how do I combine double (for resolving variables in the sed command) and single (for completing the search/replace pattern) quotation marks when calling this from a bash script?
Use sed replace with line number from variable
bash;sed
null
_unix.198848
My roommate has a really old 1280x1024 VGA display that the driver sets to 1600x1200 by default and it causes it to display a message saying it can't display the input. I can ctrl+alt+f1 and use xrandr -d :0 to find out the output that's being used but every time I do xrandr --output CRT1 --mode 1280x1024_60.00 it says that it can't find the display. The mode is displayed when I do xrandr -d :0 so I already know it's been added. I can configure it to work properly if I connect our TV as a secondary display but the second I disconnect it, it resets to 1600x1200. I need to get it set to 1280x1024 all the time so he can use his PC.
Change display output with xrandr?
linux;display;x server;display settings
So after installing other things to fix the drivers, the crash message went away, and the fix ended up being adding Modes "1280x1024" to the "Display" SubSection of the "Screen" section in xorg.conf.
_codereview.94166
A lottery draw consists of choosing 6 different values from 1 to 50.I want to calculate the number of combinations with gaps of at least 3 between every pair of values.I have used the following Python 2.7 script:def func(array,length,min,max,gap): if len(array) == length: # print array return 1 count = 0 for n in range(min,max+1): count += func(array+[n],length,n+gap,max,gap) return count;print func([],6,1,50,3)Questions:Are there any coding improvements that I can apply?Is there a different method with which we can do it more efficiently?Although more suitable for math.stackexchange.com, is there a straightforward formula?
Count the number of combinations with gap-restriction for every pair of values
python;recursion;combinatorics
There's no docstring. How do I use this function? What arguments do I pass? What does it return?It would be very easy to make this code portable to Python 3: just put parentheses around the argument to print.There's no need to terminate a statement with a semicolon in Python.The name func does not give any hint as to what the function might do. A name like combinations_with_gap would be clearer.When you have code that accumulates a sum like this:count = 0for n in A: count += Byou can use the built-in sum:count = sum(B for n in A)If you're just counting combinations (rather than generating the combinations themselves), then you don't need the array variable, just its length.Presumably the caller is supposed to always pass the empty list [] for array and 1 for min. In that case, why make them do it? It would be easier for the caller if the function just took the arguments you need, and used a local function to do the work, like this:def combinations_with_gap(k, max, gap): Return the number of ways of choosing k of the numbers from 1 to max such that no two numbers are closer than gap. def c(length, min): if length == k: return 1 return sum(c(length + 1, n + gap) for n in range(min, max + 1)) return c(0, 1)This runs in just over 4 seconds on my computer:>>> from timeit import timeit>>> timeit(lambda:combinations_with_gap(6, 50, 3), number=1)4.182408502034377Where is it spending its time? If you trace the calls to the inner function c, you'll see that the same values for length and min occur many times. This is a waste of effort: having computed c(5, 44), say, it would be a good idea to remember the result and reuse it instead of computing it again.One way to do this is to memoize the function, for example using the @functools.lru_cache decorator:from functools import lru_cachedef combinations_with_gap(k, max, gap): Return the number of ways of choosing k of the numbers from 1 to max such that no two numbers are closer than gap. @lru_cache(maxsize=None) def c(length, min): if length == k: return 1 return sum(c(length + 1, n + gap) for n in range(min, max + 1)) return c(0, 1)This version takes just a couple of milliseconds:>>> timeit(lambda:combinations_with_gap(6, 50, 3), number=1)0.0017554410151205957
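On the question's last point, a closed-form formula does exist: sorting the chosen values and substituting y_i = x_i - (i-1)*(gap-1) turns "consecutive picks at least gap apart" into an ordinary combination, so the count is C(max - (k-1)*(gap-1), k); for k=6, max=50, gap=3 that is C(40, 6) = 3838380. A quick sanity check of the formula against a brute-force count (standard library only):

from itertools import combinations
from math import comb

def brute(k, n, gap):
    return sum(1 for c in combinations(range(1, n + 1), k)
               if all(b - a >= gap for a, b in zip(c, c[1:])))

def formula(k, n, gap):
    return comb(n - (k - 1) * (gap - 1), k)

# A small case agrees with the brute-force count...
assert brute(3, 12, 3) == formula(3, 12, 3)
# ...and the full problem comes out directly:
print(formula(6, 50, 3))   # 3838380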
_unix.57827
Zsh in Emacs edit mode comes with the default key binding ALT + Backspace to delete a word on the right side of the cursor and ALT + D to delete a word on the left side. I would like to add the latter function to ALT + DEL additionally.I tried to use the terminfo database to set the escape sequence for the key combination for every $TERM correctly. In man terminfo I read about kDC3 being the Capname which I probably need to use for ALT + DEL.I added the following line to my ~/.zshrc:bindkey -e `tput kDC3` kill-wordThis works nicely when I connect to my machine directly through SSH ($TERM is xterm). But when I start Zsh inside a Tmux-session ($TERM is screen) I get the following error message:tput: unknown terminfo capability 'kDC3'Could that really mean that it's impossible to bind anything to ALT + DEL in Tmux? Or am I just doing something wrong? Maybe kDC3 is not the correct sequence?I'm running Debian Wheezy Beta 4 x86_64.
Tmux Terminfo problem with Zsh key bindings
debian;terminal;zsh;keyboard shortcuts;tmux
The first problem is that your terminfo entry for screen does not define a kDC3 capability; this is probably typical. You can either add this capability to your own custom screen entry, or you can hard code the sequences in your bindkey commands.Adding the capabilities may help other programs know about the keys, but it decentralizes your configuration (it would be easy to forget about this customization when you manually replicate your configuration to a new machine or user account). You can extract the appropriate entries with infocmp and build a new entry with tic:{ infocmp -xT screen ; infocmp -x1T xterm | grep -E '^\tkDC[3-8]?=' ; } >/tmp/stic -x /tmp/sIf you run tic as a user that has write access to your terminfo directory (e.g. /usr/share/terminfo), then the new entry will be placed there (probably overwriting the original entry); otherwise, it will be placed under ~/.terminfo (or TERMINFO, if you have that environment variable set).For completeness, you may want to use (UP|DN|RIT|LFT|PRV|NXT|HOM|END|IC|DC) instead of DC in the grep pattern to capture the modified versions of Up, Down, Right, Left, PageUp, PageDown, Home, End, Insert, and Delete.If you dislike the configuration decentralization caused by customizing your terminfo entry, then you can hard code the value instead. To make it a bit better, you can check for kDC3 first:bindkey -e ${$(tput kDC3 2>/dev/null):-'\e[3;3~'} kill-wordTo restrict this hard coding to just screen-based TERM values:altdel=$(tput kDC3 2>/dev/null)[[ -z $altdel && $TERM == screen(|-*) ]] && altdel='\e[3;3~'[[ -n $altdel ]] && bindkey -e $altdel kill-wordunset altdelThis will work as long as your terminal emulator (stack) ends up generating the xterm-style sequence for the modified key.Once to have a binding, you will still need to turn on the xterm-keys option in tmux so that it will generate the xterm-style sequences for keys passed into its panes. E.g. in your ~/.tmux.conf:set-option -wg xterm-keys on
_ai.3295
I implemented Actor-Critic with N-step TD prediction to learn to play 2048 (link to the game: http://2048game.com/). For the environment I don't use this 2048 implementation; I use a simple one without any graphical interface, just pure matrices. The input for the neural network is the log2 of the game board.

The structure of my network is:
1. Input layer
2. Hidden layer with 16 units
3. Softmax layer with 4 units (up, down, left, right) for the actor
4. Linear regression for the critic

The hidden layer is shared between the third and fourth layers. The reward in the original game is the value of the merged cells. For example, if two fours merge then the reward is eight. My reward function is almost the same, except I take the log2 of it. I tried these parameters and I also tweaked the learning rate and the gamma, but I couldn't achieve any good result. Could you recommend what I should change?
Reinforcement learning for 2048
neural networks;reinforcement learning;game ai
null
_softwareengineering.100528
Another 2 days of reading and watching demos and here we go. For my enterprise LoB Silverlight app I'm going to use:
1. Prism for UI aspects and modularity.
2. MVVM pattern (using Prism)
3. ??? to bring data over and validations...
4. Entity Framework for data access
5. SQL Server for data

OK, the main dilemma is #3. If I don't use any framework then I will have to figure out how to do all the CRUD stuff myself. I can do RESTful WCF, I can do SOAP. All that == MANUAL coding.

I can do RIA Services. I kind of see what it does and it is a nice direct match with my data layer, BUT it is not that great if there will be a lot of business logic. Where would I put it? In my ViewModel? Another question is how those services are maintained. Once I have generated them, must I maintain them by hand if the data changes?

I also found CSLA, which seems to be nice on one hand but receives lots of critique. CSLA will allow me to write business logic and shape objects as I need, and then I can pass them through the ViewModel and all is well.

Something tells me that RIA Services will be much quicker to write. Also, I like the fact that I don't have to include extra dependencies. There have been no blogs or mentions of RIA Services since 2010. Is it going under the table? Not widely accepted? Not scaling well for big apps? I'm trying to decide which one I need to bet on: CSLA or RIA Services. Or?
New Silverlight app. MVVM. RIA Services vs CSLA
design;design patterns;silverlight;patterns and practices
I have recently been working on some line-of-business Silverlight projects. For one we used straight WCF and did all the CRUD, state tracking, entity relationship, etc. ourselves. For the second project we are using RIA Services and had it manage all of that stuff for us.However, we did NOT have RIA Services directly generate services based on our ORM model. We had a layer of interfaces and dumb objects in between. So we had a distinct data tier, business logic tier, and service tier. RIA Services was only involved at the service tier.If you are going to use RIA Services, I recommend limiting how much of the layers of your application you let it influence. It does force you down certain paths as far as design is concerned, so the more you contain it the more flexibility you will have. This advice is probably sound for CSLA as well. If you are going to bet on a particular framework, hedge your bet as much as possible.In summary, RIA Services will definitely save you time, but it does limit your flexibility a bit compared with raw WCF. And it has some kinks and weaker areas that still need improvement.I haven't worked with CSLA, but I have seen Rocky Lhotka give talks on it several times so I know the basics. It seems like a solid framework. However, the main disadvantage it will have compared with RIA Services is that it isn't directly from Microsoft, so CSLA won't mesh as cohesively with other MS stuff as much as RIA Services will. Also, in the long run I'm guessing it will be easier to find people that know RIA Services versus people that know CSLA.If it helps, I gave a talk on RIA Services you can find the presentation and some sample code on my blog.http://rationalgeek.com/blog/post/WCF-RIA-Services-Silverlight-Guild.aspx
_softwareengineering.160027
Let's say I have artifacts mylibrary-5.2.jar and mylibrary-5.3.jar representing the 5.2 and 5.3 versions of a library that our project creates and publishes for one of our other projects. Does Artifactory support having multiple versions of each of these artifacts to represent the different builds that were performed during a release to construct this artifact? For example, to produce the final version of the 5.2 release of mylibrary aka the artifact: mylibrary-5.2.jar, we went through 3 builds to get to a version that passed our integration environment's automated tests and our user acceptance tests. So there were three separate builds that produced three separate artifacts for release 5.2. We want to be able to retain and potentially recall these different build's artifact at a later date (for testing, etc).In order to do this, which of the following options would work?Capture the artifacts as separate Artifacts, i.e.build-5.2-b1.jar (build 1's artifact), build-5.2-b2.jar (build 2'sartifact), build-5.2-b3.jar (build 3's artifact), and build-5.2.jar(the final production release; which matches build 3)Capture a SINGLE artifact named build-5.2.jar which hasVERSIONS of the artifact which capture builds 1 through 3 and whichcan be recalled later, by version number. Some other option we have not considered, but should
How to capture different build versions of the same production release artifact version in Artifactory?
version control
The usual way to do this is create each build with a separate version number. For example, 5.2.RC1, 5.2.RC2, and so on (RC is release candidate). Then make a final version that's 5.2.RELEASE.I believe (no evidence) that version 5.2.RC1 is considered newer than 5.2, which is why companies that do this tend to go with 5.2.RELEASE as the final version.You might be able to tell Artifactory to not remove snapshots when the release version is deployed. Since each snapshot is written to the repository with a unique datestamp, this will give you the history you want.You definitely do not want to combine different versions into a single JAR. That will just make your life more difficult trying to use that JAR.And really, I think you'll find the benefit of keeping each release candidate to be pretty low. Unless, of course, you're in some highly regulated or litigated field, where you must keep each version to satisfy the regulators/lawyers.
_codereview.50965
The number, 1406357289, is a 0 to 9 pandigital number because it is made up of each of the digits 0 to 9 in some order, but it also has a rather interesting sub-string divisibility property.Let d1 be the 1st digit, d2 be the 2nd digit, and so on. In this way, we note the following:d2d3d4=406 is divisible by 2d3d4d5=063 is divisible by 3d4d5d6=635 is divisible by 5d5d6d7=357 is divisible by 7d6d7d8=572 is divisible by 11d7d8d9=728 is divisible by 13d8d9d10=289 is divisible by 17Find the sum of all 0 to 9 pandigital numbers with this property.Project Euler 43from itertools import permutationsfrom primes import primes_uptofrom collections import Counterfrom timeit import default_timer as timerstart = timer()def follows_property(n): divisors = primes_upto(17) for k in range(7): if int(n[k:(k+3)]) % divisors[k] != 0: return False return Trueans = 0digits = Counter(range(10))start = timer()for combo in permutations(range(10), 9): num = ''.join([str(x) for x in list(combo)]) if follows_property(num): missing = int(list((digits - Counter(sorted([int(k) for k in str(num)]))).elements())[0]) num = int(num) ans += int(%d%d % (missing, num))elapsed_time = (timer() - start) * 1000 # s --> msprint Found %d in %r ms. % (ans, elapsed_time)
Speeding up Project Euler 43 - sub-string divisibility
python;optimization;performance;strings;programming challenge
You can make a few quick improvements without altering the algorithm significantly:Remove one of the redundant calls to timer().Store the list of primes instead of calculating it for every call to follows_property.Convert the digits to strings in the list passed to permutations so you can simplify the calculation of num.Run through all permutations instead of 9-tuples and remove the Counter and missing parts.These are minor changes, but they clean up the code a bit. I also renamed ans to sum to clarify what it holds. They also cut the running time by more than half.from itertools import permutationsfrom primes import primes_uptofrom collections import Counterfrom timeit import default_timer as timerdivisors = primes_upto(17)def follows_property(n): for k in range(7): if int(n[k+1:(k+4)]) % divisors[k] != 0: return False return Truesum = 0start = timer()for combo in permutations([str(x) for x in range(10)]): num = ''.join(combo) if follows_property(num): sum += int(num)elapsed_time = (timer() - start) * 1000 # s --> msprint(Found %d in %r ms. % (sum, elapsed_time))
_unix.171388
A simple search in nvi on text such as:the quick red fox jumped 1 foot over the lazy 28 pound dogusing the following search/[[:digit:]]behaves like/[[:alnum:]]That is, it finds every character when repeated. For that matter all of the bracket expressions I tried behaved as alnum. However/[0-9]worked as expected just finding 1, 2, and 8. I've been using nvi for some time but there's a yawning chasm in my knowledge here. Help is appreciated.
Character class bug in nvi: [[:digit:]] is interpreted like [[:alnum:]]
vi;nvi
null
_codereview.44135
I want to refactor the following code because I don't feel comfortable about using assignment inside comparison operator. It looks like pretty idiomatic C, but do you think this is a good practice in Java?private void demoA(BufferedReader reader) throws IOException { String line = null; while ((line = reader.readLine()) != null) { doSomething(line); }}Here is an alternative.private void demoB(BufferedReader reader) throws IOException { String line = reader.readLine(); while (line != null) { doSomething(line); line = reader.readLine(); }}UPDATE: I've stumbled across a similar question asked couple years ago. It seems that opinions on whether it's OK or not are divided. However, both Guava and Commons IO provide alternative solutions for this issue. If I had any of these libs in the current project, I'd probably use them instead.
Is it OK to use while ((line = r.readLine()) != null) construct?
java;stream
Assignment inside a condition is ok in this case, as the assignment is surrounded by an extra pair of parentheses the comparison is obviously != null, there is no chance that we wanted to type line == reader.readLine().However, a for loop might actually be more elegant here:for (String line = reader.readLine(); line != null; line = reader.readLine()) { doSomething(line);}Alternatively, we could do this which also restricts the scope of line as with the for-loop, and additionally eliminates unnecessary repetition:while (true) { final String line = reader.readLine(); if (line == null) break; doSomething(line);}I like this solution most because it doesn't mutate any variables.
_webmaster.103539
For a project I need to fetch the file size of a GitHub release file using CURL without actually downloading the entire file. $url = 'https://github.com/atom/atom/releases/download/v1.10.2/AtomSetup.exe'$ch = curl_init($url);curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);curl_setopt($ch, CURLOPT_HEADER, 1);curl_setopt($ch, CURLOPT_NOBODY, 1);curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);$data = curl_exec($ch);$size = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);curl_close($ch);echo $size;echo $data;As I have figured out CURLOPT_NOBODY allows me to just get the size of the file (from it's header) without actually downloading the entire file. If I remove line I do get the filesize but that also downloads the entire file, and that's something I'm trying to avoid. CURLOPT_NOBODY also replaces GET request with HEAD and I guess that's the part of the problem.I have also tried setting up CURLOPT_USERAGENT, CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR but that didn't work out.Any help is warm welcome!Here is the complete CURL output:HTTP/1.1 302 FoundServer: GitHub.comDate: Mon, 06 Feb 2017 16:19:08 GMTContent-Type: text/html; charset=utf-8Status: 302 FoundCache-Control: no-cacheVary: X-PJAXLocation: https://github-cloud.s3.amazonaws.com/releases/3228505/5b5e9204-7507-11e6-8019-f5a3bc356747.exe?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA X-UA-Compatible: IE=Edge,chrome=1Set-Cookie: _gh_sess=eyJzZXNzaW9uX2lkIjoiNWFjZWNhZDEzMjFmMmFkNmFjOGMxOTIwNzRlNGQzNTEiLCJzcHlfcmVwbyI6ImF0b20vYXRvbSIsInNweV9yZXBvX2F0IjoxNDg2Mzk3OTQ4fQ%3D%3D--2 X-Request-Id: 4d192195dc74c4f6a056710d3fd30be9X-Runtime: 0.120950Content-Security-Policy: default-src 'none'; base-uri 'self'; connect-src 'self' uploads.github.com status.github.com collector.githubapp.com api.github.com www Strict-Transport-Security: max-age=31536000; includeSubdomains; preloadPublic-Key-Pins: max-age=5184000; pin-sha256=WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18=; pin-sha256=RRM1dGqnDFsCJXBTHky16vi1obOlCgFFn/yOhI/y+ho=; pin-sha2 X-Content-Type-Options: nosniffX-Frame-Options: denyX-XSS-Protection: 1; mode=blockVary: Accept-EncodingX-Served-By: 97b025644def4c59b05efb255b209cdaX-GitHub-Request-Id: CF4C:3E44:58EC2B1:8D98E31:5898A1FBHTTP/1.1 403 Forbiddenx-amz-request-id: 962EAB609235815Dx-amz-id-2: 1uccfAjq9MwgFo5BUGLhFkCjCNhkQKpVnAU9s6AF0pNy0fPqnEw6Se+UOa1RESRxyAmtoZXKImw=Content-Type: application/xmlTransfer-Encoding: chunkedDate: Mon, 06 Feb 2017 16:19:07 GMTServer: AmazonS3
How to get a github release filesize using CURL without downloading a file?
php;github
null
_webapps.25364
I log in to http://webchat.freenode.net/ regularly. But in most channels, the amount of chatting is lesser than the notifications of users entering/leaving. Is there a command I can use that either hides this data or gives it a different color so I can ignore it?
Web IRC - how do I filter out the noise of people leaving and entering a room?
irc
Menu (top left corner) > Options > Hide Joins/Parts/Quits. You can also try some desktop IRC client.
_webapps.59665
I would like to know how can I take Gospel music out of my stations. I use Pandora a lot while I workout and although I like Gospel, I don't want to hear it during my workout. I play Pandora at home sometimes when i have small gatherings and on one station it will go from My Goodies to We Fall Down, almost inappropriately. I know it can be done I'm just not sure how to do it. Can someone help me out?
How can I exclude a particular genre of music from playlists on Pandora?
pandora;pandora playlist
null
_codereview.33017
I find myself using this bit of code very often when I am retrieving the results from a CursorSearchItem searchItem = new SearchItem();searchItem.setId(cursor.getInt(cursor.getColumnIndex(COLUMN_NAME_ID)));searchItem.setOrigin(cursor.getString(cursor.getColumnIndex(COLUMN_NAME_ORIGIN)));searchItem.setDestination(cursor.getString(cursor.getColumnIndex(COLUMN_NAME_DESTINATION)));searchItem.setTimeStamp(cursor.getLong(cursor.getColumnIndex(COLUMN_NAME_TIMESTAMP)));Specifically this could be when I am using a CursorAdapter for a ListView and in a DAO object.What would be a useful design pattern to use in this case?My first instinct is to have a singleton class which does this. Are there any problems in taking this route?
Reusing Code With Database Cursors
java;android
null
_unix.311795
Whenever I try to use the variable declared inside my SSH session it gives me blank output. Here is the code which I am trying to execute:

ssh -T host <<\HERE
export usage1=$(df -h |grep /nas/infa|sed s/%//g| awk '{printf("%d\n",$4)}');
echo $usage1
HERE
echo $usage1

I am able to get the desired output inside the SSH session, but when calling the same variable outside the SSH it gives me blank.
I want to use the variable declared inside an SSH session to be used locally in my shell script
shell script;ssh;scripting;variable
To get the value from the remotely executed command into a variable in your local environment, it's the same as getting the value of a locally executed command into your local environment, e.g.,

export usage1=$(ssh -T host <<\HERE
df -h|grep /nas/infa|sed s/%//g|awk '{printf("%d\n",$4)}'
HERE
)
echo $usage1
_softwareengineering.135240
I am starting a research project and need to nail down a programming language and 3D graphics API where I will be creating an environment in the field of molecular cell biology where I will be simulating a large range of experiments in silico. This will be an ongoing project which will keep expanding and growing in size. The experiments will be done in discrete time and will encompass a complex physics engine. Pretty graphics are not of importance nor is is audio. I run Ubuntu and this is the environment I will be developing in.The pairs I have investigated thus far are:java : flexible,scalable,lib/dependency organization (maven),oojogl: full control of graphicsjMonkeyEngine: full game library, includes jBullet, maybe a bit overkill for what I need.c++ : High speed, low level, ooopengl:full control of graphicsmaya:built in physics engine, infrastructure already in place, proprietary, not free matlab : Heavily math based, but difficult to maintain and scalesimulink : proprietary, not freeI am least familiar with Maya but it was suggested to me by a friend as a good approach. When would Maya be a good solution and would it be a good selection for this type of case? Just as thinking jMonkey would be overkill this is kinda my feeling on Maya.Are there any other pairings I should investigate? Also has anyone had any experience modifying an existing physics library and the flexibility or complexity of doing so (ie Bullet, JBullet, jinngine)? Unsure at this point if I should start from scratch or try and modify/expand an existing one. Any thoughts, feedback, input or any other suggestions would be greatly appreciated.Thanks
Best approach to selecting programming languages and 3D graphics API for simulating physics experiments
java;c++;3d;opengl;simulation
null
_softwareengineering.129123
I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high level applications (in that order). The current project I'm working on is writing an assembler using a high level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process.That said, it got me thinking. Using a high level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time?And a correlated question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set, and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to cheat while writing my Hack assembler in 2012, and use some preexisting high level language to help me out.
Were the first assemblers written in machine code?
assembly;low level
for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine codeNot necessarily. Of course the very first version v0.00 of the assembler must have been written in machine code, but it would not be sufficiently powerful to be called an assembler. It would not support even half the features of a real assembler, but it would be sufficient to write the next version of itself. Then you could re-write v0.00 in the subset of the assembly language, call it v0.01, use it to build the next feature set of your assembler v0.02, then use v0.02 to build v0.03, and so on, until you get to v1.00. As the result, only the first version will be in machine code; the first released version will be in the assembly language.I have bootstrapped development of a template language compiler using this trick. My initial version was using printf statements, but the first version that I put to use in my company was using the very template processor that it was processing. The bootstrapping phase lasted less than four hours: as soon as my processor could produce barely useful output, I re-wrote it in its own language, compiled, and threw away the non-templated version.
_unix.113893
I tried to run the following:$ vlc -I dummy v4l2:///dev/video0 --video-filter scene --no-audio --scene-path webcam.png --scene-prefix image_prefix --scene-format png vlc://quit --run-time=1 VLC media player 2.0.7 Twoflower (revision 2.0.6-54-g7dd7e4d) [0x1f4a1c8] dummy interface: using the dummy interface module... [0x7fc19c001238] v4l2 demux error: VIDIOC_STREAMON failed libv4l2: error setting pixformat: Device or resource busy libv4l2: error setting pixformat: Device or resource busy libv4l2: error setting pixformat: Device or resource busy libv4l2: error setting pixformat: Device or resource busy libv4l2: error setting pixformat: Device or resource busy libv4l2: error setting pixformat: Device or resource busy libv4l2: error setting pixformat: Device or resource busy libv4l2: error setting pixformat: Device or resource busy libv4l2: error setting pixformat: Device or resource busy libv4l2: error setting pixformat: Device or resource busy [0x7fc19c007f18] v4l2 access error: cannot set input 0: Device or resource busy [0x7fc19c007f18] v4l2 access error: cannot set input 0: Device or resource busy [0x7fc1a4000b28] main input error: open of `v4l2:///dev/video0' failed [0x7fc1a4000b28] main input error: Your input can't be opened [0x7fc1a4000b28] main input error: VLC is unable to open the MRL 'v4l2:///dev/video0'. Check the log for details. [0x7fc19c007cc8] idummy demux: command `quit' So I'm assuming that there is a program currently accessing my webcam, which is cumbersome since its light is off and lsof | grep /dev/video returns nothing. Is there another, proper way to check what processes are currently using my webcam? Or is the problem of an entirely different nature?
How do I find out which process is using my V4L2 webcam?
devices;camera;vlc;v4l
null
_codereview.77241
I have this small generic path search library. It's not perfect at all, so I need some comments as to have a chance to improve it.com.stackexchange.codereview.graph.model:AbstractHeuristicFunction.java:package com.stackexchange.codereview.graph.model;public interface AbstractHeuristicFunction<T extends AbstractNode<T>> { public void setTarget(final T target); public void setLayout(final PlaneLayout layout); public double h(final T node);}AbstractNode.java:package com.stackexchange.codereview.graph.model;public abstract class AbstractNode<T extends AbstractNode<T>> implements Iterable<T> { protected final String id; protected AbstractNode(final String id) { if (id == null) { throw new IllegalArgumentException(The ID string is null.); } this.id = id; } public abstract boolean connectTo(final T node); public abstract boolean disconnectFrom(final T node); public abstract boolean isConnectedTo(final T node); public abstract Iterable<T> parents(); @Override public int hashCode() { return id.hashCode(); } @Override public boolean equals(final Object o) { if (!(o instanceof AbstractNode)) { return false; } return (((AbstractNode<T>) o).id.equals(this.id)); }}AbstractWeightFunction.java:package com.stackexchange.codereview.graph.model;import java.util.HashMap;import java.util.Map;public abstract class AbstractWeightFunction<T extends AbstractNode<T>> { protected final Map<T, Map<T, Double>> map; protected AbstractWeightFunction() { this.map = new HashMap<>(); } public abstract Double put(final T node1, final T node2, final double weight); public abstract double get(final T node1, final T node2);}AbstractPathFinder.java:package com.stackexchange.codereview.graph.model;import java.util.ArrayList;import java.util.Collections;import java.util.List;import java.util.Map;public abstract class AbstractPathFinder<T extends AbstractNode<T>> { public abstract List<T> search(final T source, final T target); protected List<T> constructPath(final T middleNode, final Map<T, T> parentMapA, final Map<T, T> parentMapB) { final List<T> path = new ArrayList<>(); T current = middleNode; while (current != null) { path.add(current); current = parentMapA.get(current); } Collections.<T>reverse(path); if (parentMapB != null) { current = parentMapB.get(middleNode); while (current != null) { path.add(current); current = parentMapB.get(current); } } return path; } protected List<T> constructPath(final T target, final Map<T, T> parentMap) { return constructPath(target, parentMap, null); }}PlaneLayout.java:package com.stackexchange.codereview.graph.model;import java.awt.geom.Point2D;import java.util.HashMap;import java.util.Map;public class PlaneLayout<T extends AbstractNode<T>> { private final Map<T, Point2D.Double> map; public PlaneLayout() { this.map = new HashMap<>(); } public Point2D.Double put(final T node, final Point2D.Double location) { return map.put(node, location); } public Point2D.Double get(final T node) { return map.get(node); }}com.stackexchange.codereview.graph.model.support:AStarPathFinder.java:package com.stackexchange.codereview.graph.model.support;import static com.stackexchange.codereview.graph.Utils.checkNotNull;import com.stackexchange.codereview.graph.model.AbstractHeuristicFunction;import com.stackexchange.codereview.graph.model.AbstractNode;import com.stackexchange.codereview.graph.model.AbstractWeightFunction;import com.stackexchange.codereview.graph.model.AbstractPathFinder;import java.util.Collections;import java.util.Comparator;import java.util.HashMap;import java.util.HashSet;import 
java.util.List;import java.util.Map;import java.util.PriorityQueue;import java.util.Set;public class AStarPathFinder<T extends AbstractNode<T>>extends AbstractPathFinder<T> { private AbstractHeuristicFunction<T> heuristicFunction; private AbstractWeightFunction<T> weightFunction; private final Map<T, T> PARENTS; private final Map<T, Double> DISTANCE; private final Set<T> CLOSED; private PriorityQueue<T> OPEN; public AStarPathFinder() { this.PARENTS = new HashMap<>(); this.DISTANCE = new HashMap<>(); this.CLOSED = new HashSet<>(); } @Override public List<T> search(T source, T target) { checkNotNull(heuristicFunction, Heuristic function is null.); checkNotNull(weightFunction, Weight function is null.); clearState(); heuristicFunction.setTarget(target); OPEN.add(source); PARENTS.put(source, null); DISTANCE.put(source, 0.0); while (OPEN.size() > 0) { final T current = OPEN.poll(); if (current.equals(target)) { return constructPath(target, PARENTS); } CLOSED.add(current); for (final T child : current) { if (CLOSED.contains(child)) { continue; } final double w = g(current) + w(current, child); if (!PARENTS.containsKey(child)) { PARENTS.put(child, current); DISTANCE.put(child, w); // DISTANCE updated, implicitly used by OPEN.add. OPEN.add(child); } else if (w < g(child)) { PARENTS.put(child, current); DISTANCE.put(child, w); // Reinsert as to decrease the priority. OPEN.remove(child); OPEN.add(child); } } } // Empty list denotes that target is not reachable from source. return Collections.<T>emptyList(); } public AStarPathFinder<T> setWeightFunction(final AbstractWeightFunction<T> function) { this.weightFunction = function; return this; } public AStarPathFinder<T> setHeuristicFunction(final AbstractHeuristicFunction<T> function) { this.heuristicFunction = function; this.OPEN = new PriorityQueue<>( new FValueComparator(DISTANCE, function)); return this; } private double h(final T node) { return heuristicFunction.h(node); } private double w(final T tail, final T head) { return weightFunction.get(tail, head); } private double g(final T node) { return DISTANCE.get(node); } private void clearState() { PARENTS.clear(); DISTANCE.clear(); CLOSED.clear(); OPEN.clear(); } private class FValueComparator implements Comparator<T> { private final Map<T, Double> DISTANCE; private final AbstractHeuristicFunction<T> function; FValueComparator(final Map<T, Double> DISTANCE, final AbstractHeuristicFunction<T> function) { this.DISTANCE = DISTANCE; this.function = function; } @Override public int compare(final T o1, final T o2) { final double f1 = DISTANCE.get(o1) + function.h(o1); final double f2 = DISTANCE.get(o2) + function.h(o2); return Double.compare(f1, f2); } }}DijkstraHeuristicFunction.java:package com.stackexchange.codereview.graph.model.support;import com.stackexchange.codereview.graph.model.AbstractHeuristicFunction;import com.stackexchange.codereview.graph.model.AbstractNode;import com.stackexchange.codereview.graph.model.PlaneLayout;public class DijkstraHeuristicFunction<T extends AbstractNode<T>> implements AbstractHeuristicFunction<T> { @Override public double h(T node) { return 0.0; } @Override public void setTarget(T target) { } @Override public void setLayout(PlaneLayout layout) { }}DirectedGraphNode.java:package com.stackexchange.codereview.graph.model.support;import com.stackexchange.codereview.graph.model.AbstractNode;import java.util.Iterator;import java.util.LinkedHashSet;import java.util.Set;public class DirectedGraphNode extends AbstractNode<DirectedGraphNode> { private final 
Set<DirectedGraphNode> in; private final Set<DirectedGraphNode> out; public DirectedGraphNode(final String id) { super(id); // LinkedHashSet iterates way faster than HashSet. this.in = new LinkedHashSet<>(); this.out = new LinkedHashSet<>(); } @Override public boolean connectTo(DirectedGraphNode node) { if (out.contains(node)) { return false; } out.add(node); node.in.add(this); return true; } @Override public boolean disconnectFrom(DirectedGraphNode node) { if (!out.contains(node)) { return false; } out.remove(node); node.in.remove(this); return true; } @Override public boolean isConnectedTo(DirectedGraphNode node) { return out.contains(node); } @Override public Iterable<DirectedGraphNode> parents() { return new Iterable<DirectedGraphNode>() { @Override public Iterator<DirectedGraphNode> iterator() { return new IteratorProxy<>(in.iterator()); } }; } @Override public Iterator<DirectedGraphNode> iterator() { return new IteratorProxy<>(out.iterator()); } @Override public String toString() { return [DirectedGraphNode + id + ]; }}DirectedGraphWeightFunction.java:package com.stackexchange.codereview.graph.model.support;import com.stackexchange.codereview.graph.model.AbstractWeightFunction;import java.util.HashMap;public class DirectedGraphWeightFunction extends AbstractWeightFunction<DirectedGraphNode> { public DirectedGraphWeightFunction() { super(); } @Override public Double put(final DirectedGraphNode node1, final DirectedGraphNode node2, final double weight) { if (!map.containsKey(node1)) { map.put(node1, new HashMap<>()); } final Double old = map.get(node1).get(node2); map.get(node1).put(node2, weight); return old; } @Override public double get(final DirectedGraphNode node1, final DirectedGraphNode node2) { return map.get(node1).get(node2); }}IteratorProxy.java:package com.stackexchange.codereview.graph.model.support;import java.util.Iterator;public class IteratorProxy<T> implements Iterator<T> { private final Iterator<T> iterator; protected IteratorProxy(final Iterator<T> iterator) { this.iterator = iterator; } @Override public boolean hasNext() { return iterator.hasNext(); } @Override public T next() { return iterator.next(); }}PlaneHeuristicFunction.java:package com.stackexchange.codereview.graph.model.support;import com.stackexchange.codereview.graph.model.AbstractHeuristicFunction;import com.stackexchange.codereview.graph.model.AbstractNode;import com.stackexchange.codereview.graph.model.PlaneLayout;import java.awt.geom.Point2D;public class PlaneHeuristicFunction<T extends AbstractNode<T>> implements AbstractHeuristicFunction<T> { private T target; private PlaneLayout<T> layout; private Point2D.Double targetLocation; public PlaneHeuristicFunction(final PlaneLayout<T> layout, final T target) { this.layout = layout; this.targetLocation = layout.get(target); } @Override public void setLayout(PlaneLayout layout) { this.layout = layout; this.targetLocation = layout.get(target); } @Override public void setTarget(T target) { this.target = target; this.targetLocation = layout.get(target); } @Override public double h(final T node) { return targetLocation.distance(layout.get(node)); }}com.stackexchange.codereview.graph:Utils.java:package com.stackexchange.codereview.graph;import com.stackexchange.codereview.graph.model.PlaneLayout;import com.stackexchange.codereview.graph.model.support.DirectedGraphNode;import com.stackexchange.codereview.graph.model.support.DirectedGraphWeightFunction;import java.awt.geom.Point2D;import java.util.ArrayList;import java.util.List;import java.util.Random;public 
class Utils { public static class Triple<F, S, T> { private final F first; private final S second; private final T third; public Triple(final F first, final S second, final T third) { this.first = first; this.second = second; this.third = third; } public F first() { return first; } public S second() { return second; } public T third() { return third; } } public static Triple<List<DirectedGraphNode>, DirectedGraphWeightFunction, PlaneLayout> createRandomDigraph(final int nodeAmount, float edgeLoadFactor, final double width, final double height, final double maxDistance, double weightFactor, final Random rnd) { final List<DirectedGraphNode> graph = new ArrayList<>(nodeAmount); final PlaneLayout layout = new PlaneLayout(); final DirectedGraphWeightFunction weightFunction = new DirectedGraphWeightFunction(); for (int i = 0; i < nodeAmount; ++i) { final DirectedGraphNode node = new DirectedGraphNode( + i); layout.put(node, new Point2D.Double(width * rnd.nextDouble(), height * rnd.nextDouble())); graph.add(node); } weightFactor = Math.max(weightFactor, 1.05); edgeLoadFactor = Math.min(edgeLoadFactor, 0.8f); int edges = (int)(edgeLoadFactor * nodeAmount * nodeAmount); while (edges > 0) { final DirectedGraphNode tail = choose(graph, rnd); final DirectedGraphNode head = choose(graph, rnd); final Point2D.Double tailPoint = layout.get(tail); final Point2D.Double headPoint = layout.get(head); final double distance = tailPoint.distance(headPoint); if (distance <= maxDistance) { tail.connectTo(head); weightFunction.put(tail, head, weightFactor * distance); --edges; } } return new Triple<>(graph, weightFunction, layout); } public static <E> E choose(final List<E> list, final Random rnd) { if (list.isEmpty()) { return null; } return list.get(rnd.nextInt(list.size())); } public static void checkNotNull(final Object reference, final String message) { if (reference == null) { throw new NullPointerException(message); } } public static <E> boolean listsAreSame(final List<E> list1, final List<E> list2) { if (list1.size() != list2.size()) { return false; } for (int i = 0; i < list1.size(); ++i) { if (!list1.get(i).equals(list2.get(i))) { return false; } } return true; }}Demo.java:package com.stackexchange.codereview.graph;import com.stackexchange.codereview.graph.Utils.Triple;import static com.stackexchange.codereview.graph.Utils.choose;import static com.stackexchange.codereview.graph.Utils.listsAreSame;import com.stackexchange.codereview.graph.model.PlaneLayout;import com.stackexchange.codereview.graph.model.support.AStarPathFinder;import com.stackexchange.codereview.graph.model.support.DijkstraHeuristicFunction;import com.stackexchange.codereview.graph.model.support.DirectedGraphNode;import com.stackexchange.codereview.graph.model.support.DirectedGraphWeightFunction;import com.stackexchange.codereview.graph.model.support.PlaneHeuristicFunction;import java.util.List;import java.util.Random;public class Demo { public static final int GRAPH_SIZE = 100000; public static final float EDGE_LOAD_FACTOR = 4.0f / GRAPH_SIZE; public static final double WIDTH = 2000.0; public static final double HEIGHT = 1000.0; public static final double MAX_DISTANCE = 100.0; public static final double WEIGHT_FACTOR = 1.1; public static void main(final String... 
args) { final long seed = System.currentTimeMillis(); System.out.println(Seed: + seed); final Random rnd = new Random(seed); Triple<List<DirectedGraphNode>, DirectedGraphWeightFunction, PlaneLayout> data = Utils.createRandomDigraph(GRAPH_SIZE, EDGE_LOAD_FACTOR, WIDTH, HEIGHT, MAX_DISTANCE, WEIGHT_FACTOR, rnd); final DirectedGraphNode source = choose(data.first(), rnd); final DirectedGraphNode target = choose(data.first(), rnd); System.out.println(Source: + source); System.out.println(Target: + target); final AStarPathFinder<DirectedGraphNode> finder = new AStarPathFinder<>() .setHeuristicFunction( new PlaneHeuristicFunction<>(data.third(), target)) .setWeightFunction(data.second()); long ta = System.currentTimeMillis(); final List<DirectedGraphNode> path1 = finder.search(source, target); long tb = System.currentTimeMillis(); System.out.println(A* in + (tb - ta) + ms.); for (final DirectedGraphNode node : path1) { System.out.println(node); } System.out.println(); finder.setHeuristicFunction(new DijkstraHeuristicFunction<>()); ta = System.currentTimeMillis(); final List<DirectedGraphNode> path2 = finder.search(source, target); tb = System.currentTimeMillis(); System.out.println(Dijkstra's algorithm in + (tb - ta) + ms.); for (final DirectedGraphNode node : path2) { System.out.println(node); } System.out.println(); System.out.println(Paths are same: + listsAreSame(path1, path2)); }}
Small generic path search framework in Java
java;search;graph;pathfinding;framework
Utils
Statically importing these methods is questionable - it makes it harder to read code without consulting the list of imports.
Triple
Don't do this. Use a real class that contains the data you need. Maybe DirectedGraph would be a good name.
choose()
Should be named better - selectRandomNode(), perhaps.
AbstractHeuristicFunction
The name 'Abstract' should be reserved for use only by classes modified with the abstract keyword. By convention, Java interfaces have no indicator that they are interfaces.
AbstractNode
Methods not intended to be extended by children should be explicitly declared final to prevent such extension. The id instance variable should be private, unless you really intend for subtypes to be able to manually change it after the instance has been constructed.
AStarPathFinder
Don't allow objects to be built in an invalid state. If this class needs heuristic and weight functions, require them in the constructor/static factory method/builder.
Documentation
You don't have any, which is less than optimal. It's great when you can hover over a method in your IDE and see exactly how it's supposed to work. At the very least, all public methods should be documented, so clients using your library can easily understand how your code works.
General Design
I'm an API designer, so I tend to look at things from that perspective. I think that an interface called Graph should be the main entry point into your system. DirectedGraph should be an implementation of that interface. You can take out a lot of the generics noise if you do that. Likewise, add a GraphNode interface that DirectedGraphNode implements. Many existing methods would belong to Graph, such as finder.search(). You could either pass in the relevant arguments or use a fluent API. Weights should be a property of the Graph, since different implementations behave differently. You should be able to share a lot of implementation behind the scenes when you add an undirected graph, since that's just a special case of a directed graph (both directions always have the same weight). You should even be able to modify the createRandomGraph() method to take an enum argument specifying the graph type (GraphType.DIRECTED or GraphType.UNDIRECTED).
Those changes would make your Demo class look something like:
public static void main(final String... args) {
    final long seed = System.currentTimeMillis();
    System.out.println("Seed: " + seed);
    final Random rnd = new Random(seed);
    final Graph diGraph = Utils.createRandomDigraph(GRAPH_SIZE, EDGE_LOAD_FACTOR, WIDTH,
                                                    HEIGHT, MAX_DISTANCE, WEIGHT_FACTOR, rnd);
    final GraphNode source = diGraph.selectRandomNode();
    final GraphNode target = diGraph.selectRandomNode();
    System.out.println("Source: " + source);
    System.out.println("Target: " + target);
    final Heuristic aStarHeuristic = new AStarHeuristic();
    long ta = System.currentTimeMillis();
    final List<GraphNode> path1 = diGraph.findPath(source, target, aStarHeuristic);
    /* OR
     * diGraph.findPathFrom(source).to(target).usingHeuristic(aStar);
     */
    long tb = System.currentTimeMillis();
    System.out.println("A* in " + (tb - ta) + " ms.");
    for (final GraphNode node : path1) {
        System.out.println(node);
    }
    // ... etc
You should also really consider using the Builder pattern to create graph instances. You've pushed past the suggested number of input parameters to a method, which makes it harder to read/understand code written against the API, and also makes it harder to remember parameter order. In order to do that, you'd need reasonable defaults for many of the arguments, because you need to assume that clients may not set anything they don't have to. You could solve that problem with a fluent API chain instead of a builder, but that's more work for you as a developer. In a real API I'd say go for it, but if this is just a personal project it probably isn't worth it unless you want to play with fluent APIs. There are more specific issues that could be raised, but I think that's enough for a start. Many of them would change if you switch your design around.
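To make the Graph / GraphNode suggestion above concrete, here is a minimal sketch of what such an entry point could look like. The names and signatures below are my own assumptions for illustration, not part of the reviewed library:
import java.util.List;

// Hypothetical interfaces sketching the suggested design.
interface GraphNode {
    String id();                                   // stable identifier used for equals/hashCode
    Iterable<GraphNode> neighbours();              // nodes reachable over outgoing edges
}

interface Heuristic {
    double estimate(GraphNode from, GraphNode to); // returning 0.0 turns A* into Dijkstra
}

interface Graph {
    GraphNode selectRandomNode();                  // convenience for demos and tests
    double weight(GraphNode tail, GraphNode head); // weights are a property of the graph
    List<GraphNode> findPath(GraphNode source, GraphNode target, Heuristic heuristic);
}
A directed and an undirected implementation could then share most of the search code, with the undirected variant simply reporting the same weight for both edge directions.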
_cs.59565
I am studying for an exam and noticed that a lot of the questions follow the idea of "changing the ISA". From my understanding, the ISA dictates the structure and format of instructions, so changing instructions or the format (32 to 64 bit) would change the ISA. However, I ran into a question I am not sure how to answer: if we remove the branch delay slots, will the ISA be changed? My guess would be no, because we're not changing any instructions (right?), but I would like to get an answer from someone more knowledgeable.
Would removing the branch delay slots change the instruction set architecture?
computer architecture;cpu
The instruction set architecture (ISA) is the contract between the hardware designer and the software designer. Anything that changes the contract changes the ISA. The question you have to answer is: given every possible program written with this particular ISA, does any of them behave differently (give a different answer) if we remove the branch delay slots?
_cstheory.11182
Let's extend the Turing machine so that it can read from a random number generator (in addition to an infinite tape to read and write). Certainly the TM with randomness can do whatever a classical TM can do, but what about the converse? One can argue that the classical TM will always produce the same result given the same input, while the TM with randomness can behave randomly, so it seems able to do more. But then random-valued functions are not really what we call computable. I am aware of randomized algorithms, BPP and so on, but is there an extension of computability that deals with these kinds of questions?
Is a Turing machine with a random number generator more powerful?
computability;turing machines;randomness
See https://mathoverflow.net/questions/58060/can-randomness-add-computability
_softwareengineering.155697
I want to know if the structure used for an ASP.NET website I'm working on follows a design pattern for its web pages. If it is a design pattern, what is it called? The web pages have the following structure:
UserDetails page (UserDetails.aspx) - includes the UserDetailsController.ascx user control.
UserDetailsController.ascx includes sub user controls such as UserAccountDetails.ascx and UserLoginDetails.ascx, etc.
Each sub user control contains a small amount of code/logic; the 'controller' user controls that host these sub user controls (i.e. UserDetailsController.ascx) appear to call the business rules code and pass the data to the sub user controls.
Is this a design pattern? What is it called?
Is the structure used for these web pages a design pattern?
c#;design patterns;asp.net
Short answer: it looks like a variation of a Master-child design.
Long answer: there are different standards for how UI patterns are defined. To express them, I have combined a listing that might be helpful to expand on this topic:
UI Patterns
UI Design Patterns Introduction
40+ Helpful Resources On User Interface Design Patterns
MSDN - Design and Implementation Guidelines for Web Clients
ASP.NET Design Patterns
_softwareengineering.278144
I have a requirement where I have a list of entities and, for each entity, the users it can be assigned to:
E1 can be distributed by U1 or U2
E2 must be distributed by U5
E3 can be distributed by U2 or U3 or U4
I have around 50K such entities, and for each entity there might be one or more users. In the case of one user it's clear: the entity will be assigned to that user only. In the case of multiple users, it can be assigned to any one of them. We want to distribute the entities so that each user gets an equal amount, with only minimal possible/unavoidable skew. Also, each user might already possess some entities (U1 has 2K and U2 has 3K entities already), so the distribution should take this fact into account as well.
EDIT 1
We have already tried a solution of going sequentially and assigning one entity at a time according to the allocation of users at that point in time, but that produces skewed results, because users who have less allocation earlier get more allocation later, or vice versa. For example:
E1 to E25 must be handled by any of U1 & U2
E26 to E50 must be handled by any of U2 & U3
If we go sequentially, in the end U1 gets 12 (from E1-E25), U2 gets 19 (13 from E1-E25 & 6 from E26-E50) and U3 gets 19 (from E26-E50). So all 50 are allocated, fine, but note the skewed result.
EDIT 2
Why do we have different users per entity? There are multiple products to be distributed. Some users handle multiple products and some users handle a single product, but all the users still need to be load balanced.
equal distribution within given set of users
algorithms;distribution
This brings back memories of an Operations Research class I had in college a very long time ago. You might want to do some googling on the Assignment problem and the Generalized Assignment problem for some background info and more rigorous solutions.
Having said that, I'd do the following:
1. Put all the entities that can be processed by the same set of users into a bucket. Bucket 1 might contain all the entities that can be processed by U1 or U2. Bucket 2 might contain all the entities that can be processed by U2 or U3. Bucket 3 (not in your examples) might contain all the entities that can only be processed by U4.
2. Process all the single-user buckets (Bucket 3). There are no choices to be made here, and you may as well get this out of the way first. You mentioned that 80% of your entities may fall into this category.
3. At this point, stop looking at buckets and entities, and process users. Find the user with the least number of entities assigned and have them pick one entity from the fullest bucket they are eligible for (the bucket with the largest number of entities).
4. Repeat step 3 until all entities are assigned.
This will definitely require testing - I reserve the right to be wrong :-)
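For what it's worth, here is a rough Java sketch of those four steps. Everything in it (the Entity record, the preload map of existing per-user counts, the method names) is my own assumption for illustration, not code from the question, and it is meant as a starting point rather than a tuned implementation:
import java.util.*;

public class BucketAssigner {

    // An entity is just an id plus the set of users allowed to handle it.
    public record Entity(String id, Set<String> eligibleUsers) {}

    public static Map<String, List<String>> assign(List<Entity> entities,
                                                   Map<String, Integer> preload) {
        Map<String, List<String>> result = new HashMap<>();
        Map<String, Integer> load = new HashMap<>(preload); // entities each user already holds

        // Step 1: bucket entities by their exact set of eligible users.
        Map<Set<String>, Deque<Entity>> buckets = new HashMap<>();
        for (Entity e : entities) {
            buckets.computeIfAbsent(e.eligibleUsers(), k -> new ArrayDeque<>()).add(e);
            for (String u : e.eligibleUsers()) load.putIfAbsent(u, 0);
        }

        // Step 2: single-user buckets first - there is no choice to make.
        buckets.forEach((eligible, queue) -> {
            if (eligible.size() == 1) {
                String u = eligible.iterator().next();
                while (!queue.isEmpty()) give(queue.poll(), u, result, load);
            }
        });

        // Steps 3 and 4: repeatedly let the least-loaded user that can still take
        // something draw one entity from the fullest bucket it is eligible for.
        List<String> users = new ArrayList<>(load.keySet());
        boolean progress = true;
        while (progress) {
            progress = false;
            users.sort(Comparator.comparingInt(load::get)); // least-loaded first
            for (String u : users) {
                Deque<Entity> best = null;
                for (Map.Entry<Set<String>, Deque<Entity>> b : buckets.entrySet()) {
                    if (b.getKey().contains(u) && !b.getValue().isEmpty()
                            && (best == null || b.getValue().size() > best.size())) {
                        best = b.getValue();
                    }
                }
                if (best != null) {
                    give(best.poll(), u, result, load);
                    progress = true;
                    break; // re-check loads before the next pick
                }
            }
        }
        return result;
    }

    private static void give(Entity e, String user,
                             Map<String, List<String>> result, Map<String, Integer> load) {
        result.computeIfAbsent(user, k -> new ArrayList<>()).add(e.id());
        load.merge(user, 1, Integer::sum);
    }
}
On the EDIT 1 example (E1-E25 shared by U1/U2, E26-E50 shared by U2/U3, no preload) this kind of greedy pick tends toward roughly 17/16/17 instead of 12/19/19, because U2's growing load steers later picks toward U1 and U3.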
_unix.349098
Package software-center not found on Ubuntu 14.04 LTS.I tried to Install it via terminal, But ended up with below error :ravip@LP204:~$ sudo apt-get install software-center[sudo] password for ravip: Reading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: software-center : Depends: software-center-aptdaemon-plugins but it is not going to be installed Depends: python-gi but it is not going to be installed Depends: python-gi-cairo but it is not going to be installed Depends: python-aptdaemon (>= 0.40) but it is not going to be installed Depends: python-aptdaemon.gtk3widgets but it is not going to be installed Depends: oneconf (>= 0.2.6) but it is not going to be installed Recommends: software-properties-gtk but it is not going to be installed Recommends: sessioninstaller but it is not going to be installedE: Unable to correct problems, you have held broken packages.All Depends & Recommends Packages are already installed & at newest version.How can I resolve ?
Ubuntu software-center : Unable to correct problems, you have held broken packages
ubuntu;apt;software installation;software updates
null
_unix.283700
Running RasPBX based on Debian Jessie - I tried to change a few .conf files, but they always reverted following reboot. I initially thought this was dhcp messing with things, but it turns out nothing survives reboot. Even a .txt file I created in the home directory gets deleted.tl;dr - SSH in, change stuff, reboot, all changes reverted.Output of mount:/dev/mmcblk0p2 on / type ext4 (rw,noatime,data=ordered)devtmpfs on /dev type devtmpfs (rw,relatime,size=469688k,nr_inodes=117422,mode=755)sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)proc on /proc type proc (rw,relatime)tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)debugfs on /sys/kernel/debug type debugfs (rw,relatime)mqueue on /dev/mqueue type mqueue (rw,relatime)configfs on /sys/kernel/config type configfs (rw,relatime)/dev/mmcblk0p1 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)Output of dmesg | grep -E 'mmc|ext' :)[ 0.000000] Kernel command line: 8250.nr_uarts=1 dma.dmachans=0x7f35 bcm2708_fb.fbwidth=656 bcm2708_fb.fbheight=416 bcm2709.boardrev=0xa02082 bcm2709.serial=0xb59dde09 smsc95xx.macaddr=B8:27:EB:9D:DE:09 bcm2708_fb.fbswap=1 bcm2709.uart_clock=48000000 vc_mem.mem_base=0x3dc00000 vc_mem.mem_size=0x3f000000 dwc_otg.lpm_enable=0 console=ttyS0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait .text : 0x80008000 - 0x807945f0 (7730 kB)[ 0.052137] CPU: Virtualization extensions available.[ 2.418638] mmc0: sdhost-bcm2835 loaded - DMA enabled (>1)[ 2.451228] mmc-bcm2835 3f300000.mmc: mmc_debug:0 mmc_debug2:0[ 2.462049] mmc-bcm2835 3f300000.mmc: DMA channel allocated[ 2.489744] mmc0: host does not support reading read-only switch, assuming write-enable[ 2.508346] mmc0: new high speed SDHC card at address 59b4[ 2.654165] Waiting for root device /dev/mmcblk0p2...[ 2.654367] mmcblk0: mmc0:59b4 00000 7.35 GiB[ 2.655651] mmcblk0: p1 p2[ 2.674572] mmc1: queuing unknown CIS tuple 0x80 (2 bytes)[ 2.676113] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)[ 2.677657] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)[ 2.680430] mmc1: queuing unknown CIS tuple 0x80 (7 bytes)[ 2.742278] EXT4-fs (mmcblk0p2): INFO: recovery required on readonly filesystem[ 2.755364] EXT4-fs (mmcblk0p2): write access will be enabled during recovery[ 2.770294] mmc1: new high speed SDIO card at address 0001[ 2.932862] EXT4-fs (mmcblk0p2): orphan cleanup on readonly fs[ 2.945051] EXT4-fs (mmcblk0p2): 2 orphan 
inodes deleted[ 2.955001] EXT4-fs (mmcblk0p2): recovery complete[ 2.971534] EXT4-fs (mmcblk0p2): mounted filesystem with ordered data mode. Opts: (null)[ 2.987156] VFS: Mounted root (ext4 filesystem) readonly on device 179:2.[ 4.532519] systemd[1]: Expecting device dev-mmcblk0p1.device...[ 6.365651] EXT4-fs (mmcblk0p2): re-mounted. Opts: (null)[ 6.721847] FAT-fs (mmcblk0p1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.[ 7.736280] Adding 102396k swap on /var/swap. Priority:-1 extents:7 across:307200k SSFSRefuses to run fsck:fsck from util-linux 2.25.2e2fsck 1.42.12 (29-Aug-2014)/dev/mmcblk0p2 is mounted.e2fsck: Cannot continue, aborting.Also, refuses to unmount /dev/mmcblk0p2 - claiming target is busy.Tried:shutdown -F -r nowResulting in:Code should not be reached 'Unhandled option' at ../src/systemctl systemctl.c:6316, function shutdown_parse_argv(). Aborting.Aborted
Nothing survives reboot in Debian Jessie
debian;reboot
null
_webapps.105330
Where can I see or export a list of Google Places with their IDs? From Google My Business, I only get a text export, with no Google ID.
Enumerate Google Place IDs
google my business
null
_webmaster.71564
I have a website with a good amount of traffic, but I must shut it down. I have another website that is not very well ranked on Google. Is there something I could do to help my second website with SEO, such as a redirect or DNS change?
Website with traffic must shut down, should I redirect?
seo;domains;dns
null
_codereview.152658
Goal:I am attempting to create a script that would automatically establish a PSSession to a Windows server for implicit remoting.Problem:Export-PSSession : Proxy creation has been skipped for the '%' command, because Windows PowerShell could not verify the safety of the command name.At Z:Somewhere\aScript.ps1:12 char:3+ Export-PSSession -Session $ServerPS -OutputModule 'First Module' - ...+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidData: (:) [Export-PSSession], InvalidOperationException + FullyQualifiedErrorId : ErrorSkippedUnsafeCommandName,Microsoft.PowerShell.Commands.ExportPSSessionCommandSide notes:A large number of users will use this script to establish PSSessions daily, I am trying to find the most efficient way to get all the users to access these modules - Amodule, Bmodule and Cmodule. Additionally, the script still works despite the above mentioned error.Here is a sample of my script:$Server = New-PSSession -ComputerName ServerPS -Authentication KerberosSet-Alias -Name go -Value Get-CustomMods -Description 'Gets the modules'function Get-CustomMods{ # Import first module Write-Verbose -Message 'Importing First module' Invoke-command { import-module 'Amodule.ps1' } -session $ServerPS Export-PSSession -Session $Server -OutputModule 'Amodule' -Force -AllowClobber # Import second module Write-Verbose -Message 'Importing Second Module' Invoke-command { import-module 'Bmodule.ps1'} -session $ServerPS Export-PSSession -Session $Server -OutputModule 'Bmodule' -Force -AllowClobber # Import third module Write-Verbose -Message 'Importing Third Module' Invoke-command { import-module 'Cmodule.ps1' } -session $ServerPS Export-PSSession -Session $Server -OutputModule 'Cmodule' -Force -AllowClobber}Question:Is there a more effective way to structure/write this script given my goal and problem?
Creating a script to automate implicit remoting
powershell
I wish I had more to offer but I wanted to add some pointers about the code I do see.
Code Repetition
If you find yourself repeating the same code over and over again, you should be asking if there is another way.
Another function
You could easily wrap the block
# Import first module
Write-Verbose -Message 'Importing First module'
Invoke-command { import-module 'Amodule.ps1' } -session $ServerPS
Export-PSSession -Session $Server -OutputModule 'Amodule' -Force -AllowClobber
into its own function. That way, if you need to make some changes, they will be more centralized.
function Export-SessionModule{
    param(
        $session,
        $moduleName
    )
    # Import named module
    Write-Verbose -Message "Importing $moduleName module"
    Invoke-command { import-module "$moduleName.ps1" } -session $session
    Export-PSSession -Session $session -OutputModule $moduleName -Force -AllowClobber
}
Splatting
Note that this is meant to be a simple example. Things like parameter typing might be advisable but were omitted for brevity. If functions are overkill for this little script of yours, something else to consider would be splatting your parameters that are the same across cmdlet calls.
$exportPSSessionParams = @{
    Session = $session
    Force = $true
    AllowClobber = $true
}
# .....
Export-PSSession -OutputModule 'Amodule' @exportPSSessionParams
# .....
Export-PSSession -OutputModule 'Bmodule' @exportPSSessionParams
# .....
Export-PSSession -OutputModule 'Cmodule' @exportPSSessionParams
Variable Names
You follow the verb-noun convention for your functions, which is good. Your session variable name is a little misleading. You call it $server; I usually see that representing a string with a server name. Since it is a session object it would be clearer if it was named as such - $session at a minimum, or even $serverSession if you prefer. This could clash with your $sessionPS, which is more reason to make them clear.
Function Declaration
You are supposed to declare your functions before you use them. That being said, you would typically find functions at the beginning of scripts. While the order you have things declared in will not be a problem (since it is working), I wanted to be sure that was understood. If you were to call go right after you created the alias, it would fail.
Your Problem
Like I said, I don't have much to offer here. I would like to point out that I think you scrubbed your code incorrectly, since the code in your error does not match your code block. Have you checked on SO for that message already? There are a few different causes, although I didn't find an exact match. Admittedly I do not work with modules much. I would debug this by seeing if it is the same module and if perhaps how it is being called is causing some problems. Perhaps it is being run more than once in a session? (AllowClobber should have fixed that though.)
_webmaster.39426
What can be done through XFrames, as compared to iframe in HTML? Is there any new feature other than making it easier to bookmark with XFrames as compared to an IFrame?
What can be done through XFrames, as compared to IFrame in html?
html;iframe;learning
XFrames has never been implemented in any version of XHTML. It was proposed in a separate working draft by the XHTML2 working group while working on XHTML 2.0. The XHTML 2.0 working group itself was closed in 2010. The next version of XHTML is XHTML5, which is part of the spec being developed by the HTML5 working group. XFrames is not part of this spec. HTML5 and XHTML5 support iframe as a standard element only. The original frame and frameset elements are supported only as obsolete elements, which will trigger warnings in conformance checkers. No part of the XFrames spec is mentioned.
_softwareengineering.305464
I am working on a design, but keep hitting a roadblock. I have a particular class (ModelDef) that is essentially the owner of a complex node tree built by parsing an XML schema (think DOM). I want to follow good design principles (SOLID), and ensure that the resulting system is easily testable. I have every intention of using DI to pass dependencies into the constructor of ModelDef (so that these can easily be swapped out, if need be, during testing).
What I'm struggling with, though, is the creation of the node tree. This tree is going to be made up entirely of simple value objects which will not need to be independently tested. (However, I may still pass an Abstract Factory into ModelDef to assist with the creation of these objects.) But I keep reading that a constructor should not do any real work (e.g. Flaw: Constructor does Real Work). This makes perfect sense to me if "real work" means constructing heavyweight dependent objects that one might later want to stub out for testing. (Those should be passed in via DI.) But what about lightweight value objects such as this node tree? The tree has to be created somewhere, right? Why not via the constructor of ModelDef (using, say, a buildNodeTree() method)?
I don't really want to create the node tree outside of ModelDef and then pass it in (via constructor DI), because creating the node tree by parsing the schema requires a significant amount of complex code -- code that needs to be thoroughly tested. I don't want to relegate it to glue code (which should be relatively trivial, and will likely not be directly tested).
I have thought of putting the code to create the node tree in a separate builder object, but hesitate to call it a builder, because it doesn't really match the Builder Pattern (which seems to be more concerned with eliminating telescoping constructors). But even if I called it something different (e.g. NodeTreeConstructor), it still feels like a bit of a hack just to avoid having the ModelDef constructor build the node tree. It has to be built somewhere; why not in the object that's going to own it?
Legitimate real work in a constructor?
java;design;design patterns;dependency injection;constructors
And, besides what Ross Patterson suggested, consider this position, which is the exact opposite:
Take maxims such as "Thou Shalt Not Do Any Real Work In Thy Constructors" with a grain of salt.
A constructor is, really, nothing but a static method. So, structurally, there is really not much difference between:
a) a simple constructor and a bunch of complex static factory methods, and
b) a simple constructor and a bunch of more complex constructors.
A considerable part of the negative sentiment towards doing any real work in constructors comes from a certain period of the history of C++ when there was debate as to precisely what state the object will be left in if an exception is thrown within the constructor, and whether the destructor should be invoked in such an event. That part of the history of C++ is over, and the issue has been settled, while in languages like Java there never was any issue of this kind to begin with.
My opinion is that if you simply avoid using new in the constructor (as your intention to employ Dependency Injection indicates), you should be fine. I laugh at statements like "conditional or looping logic in a constructor is a warning sign of a flaw".
Besides all that, personally, I would take the XML parsing logic out of the constructor, not because it is evil to have complex logic in a constructor, but because it is good to follow the separation of concerns principle. So, I would move the XML parsing logic into some separate class altogether, not into some static methods that belong to your ModelDef class.
Amendment
I suppose that if you have a method outside of ModelDef which creates a ModelDef from XML, you will need to instantiate some dynamic temporary tree data structure, populate it by parsing your XML, and then create your new ModelDef passing that structure as a constructor parameter. So, that could perhaps be thought of as an application of the Builder pattern. There is a very close analogy between what you want to do and the String & StringBuilder pair. However, I have found this Q&A which seems to disagree, for reasons which are not clear to me: Stackoverflow - StringBuilder and Builder Pattern. So, to avoid a lengthy debate over here as to whether StringBuilder does or does not implement the builder pattern, I would say feel free to be inspired by how StringBuilder works in coming up with a solution that suits your needs, and postpone calling it an application of the Builder pattern until that little detail has been settled.
See this brand new question: Programmers SE: Is StringBuilder an application of the Builder Design Pattern?
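To illustrate the separation-of-concerns suggestion, here is a minimal Java sketch. Only ModelDef comes from the question; the parser class, the method names and the placeholder Node type are my own assumptions:
// The constructor only stores the already-built tree; no parsing happens here.
final class ModelDef {
    private final Node root;
    ModelDef(final Node root) { this.root = root; }
    Node root() { return root; }
}

// All the complex, thoroughly testable XML work lives in its own class.
final class ModelDefParser {
    ModelDef parse(final java.io.InputStream schema) {
        final Node tree = buildNodeTree(schema); // dynamic temporary structure, as in the amendment
        return new ModelDef(tree);
    }
    private Node buildNodeTree(final java.io.InputStream schema) {
        // ... walk the schema and assemble the value-object tree ...
        return new Node("root");                 // placeholder
    }
}

// Stand-in for the simple value objects making up the tree.
record Node(String name) {}
The glue code then stays trivial (new ModelDefParser().parse(stream)), while the parsing logic can be unit-tested on its own.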
_unix.266238
In Linux I have files whose filenames start with a date YYYYMMDD:
20160201_001.pdf
20160110_002.pdf
20150201_003.pdf
20140201_004.pdf
I want to tar the files older than the following date (not using mtime, but the date in the filename): date +'%Y%m' -d '4 months ago' (201511). Basically I want to do:
files=($(find . -name filename < date +'%Y%m' -d '4 months ago'))
tar cvfz backup.tar.gz ${files[@]}
The expected result of the files being tarred:
20150201_003.pdf
20140201_004.pdf
How can I do that?
Linux tar files less than 3 months ago using date in filename
bash
With that date format, you could use string ordering in awk, such as:
files=( $(ls | awk -v d=$(date -d '4 months ago' +%Y%m%d_999.pdf) '$1<d {print;}') )
tar cvfz backup.tar.gz ${files[@]}
The awk program reads input lines and applies its rules to them. In this case it is first invoked with the variable d set to the clipping date expanded into a file name. Then, for each line, it compares the first word ($1) with the clipping date (d) with respect to alphabetical order, and where the input sorts before the clip it prints the line (otherwise not).
To clip month-wise, change %d to 00 to exclude the month of four months ago, or to 99 to include the month of four months ago.
_unix.383445
I've installed Ubuntu server 16.04 (64bit) on a Dell fx160 thin client (has an atom 230 processor). This device has two sticks of ram in it, 2 GB each. When I execute lshw I see that it correctly recognises that the two sticks have 2 GB each. When I run free however, only 3 GB (3079672 kB) of memory is reported as being available.Removing either of these sticks causes free to report exactly 2 GB, but when inserted together only 3 GB remains.I searched around a lot, but unfortunately I remain clueless as to what can cause this and how I can solve this.Has anyone had a similar problem in the past?
not all RAM recognised (on 64bit CPU)
ubuntu;memory;ram;64bit
null
_webmaster.65270
I have an issue with Google not being able to properly crawl my site. I have read other questions where people have had the same issue. I've tried to follow their solution of using this in my robots.txt file:
User-agent: *
Disallow:
Sitemap: http://www.sonjalimone.com/sitemap_index.xml
I have waited over 24 hours for Google to recrawl my site, so I must have something wrong in the robots.txt file. It is a WordPress site, if that makes any difference, though I don't see why it would.
Does anyone know what else might cause this issue, or is there something wrong with the above?
Google is not crawling and indexing my site after updating my robots.txt file
google;google search console;robots.txt
Google will fetch the robots.txt file itself from your site every 24 hours. If you make changes to your robots.txt file, you must wait a day to ensure that Googlebot picks up your changes.After it has the correct robots.txt file, Googlebot will start crawling and indexing your entire site properly. As a general rule, I expect to see changes to the documents that Google indexes in about two weeks. If you have a large site, the deeper pages may take as much as a month or two to get recrawled.Use the fetch as Google feature from Crawl -> Fetch as Google in Google Webmaster Tools to ensure that Googlebot is able to download the pages that you expect. You can also use the Blocked URLs tool under Crawl -> Blocked URLs in Google Webmaster Tools to ensure that Google is seeing the correct version of your robots.txt file and that it can crawl any URL that you specify in that tool.
_unix.81309
I'm having a strange case of deadlock, where the two processes launched by cron are defunct, but cron does not pick the return code and exit. I don't have access to the root user.myuser@myserver:~) ps -ef | grep 30163 11:29AM3701 28964 29950 0 11:30 pts/13 00:00:00 grep 30163root 30163 6622 0 11:00 ? 00:00:00 /usr/sbin/cron3701 30199 30163 0 11:00 ? 00:00:00 [monitor_daemon] <defunct>3701 30598 30163 0 11:00 ? 00:00:00 [sendmail] <defunct>myuser@myserver:~)Is there a known reason why we would end up in such a situation?How, without having access to the root user, can I get rid of those three processes that consume memory?I'm using the following kernel/distribution:Linux myserver 2.6.32.23-0.3-default #1 SMP 2010-10-07 14:57:45 +0200 x86_64 x86_64 x86_64 GNU/LinuxLSB_VERSION=core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64SUSE Linux Enterprise Server 11 (x86_64)VERSION = 11PATCHLEVEL = 1
Deadlock in a crontab between cron and its child defunct processes
process;cron;suse
The last SLES11 SP1 kernel when EoL came (2012-11-08) was 2.6.32.59-0.7. Kernel 2.6.32.23-0.3.1 is from 2010-10-08, so you are most probably hitting an unfixed OS bug. Wake up your root admin and tell him to get his system in shape. The currently supported SLES11 is SP2, with kernel 3.0.80...
As for the second part of the question: you can only get rid of these processes as their owner (root).
_softwareengineering.210339
Are there any design patterns (or best practices) for implementing a geographically distributed system (mostly a database)?
Description: There is a network of warehouses and a central office. I want every warehouse to replicate its data to the central office, and the central office to replicate just the portion of data related to that warehouse (when it's modified). I can call this a filtered replication. Our database here is SQL Server 2008 R2. Should I go with another database? How about NoSQL databases?
This is a .NET based solution. So far I have learned about Web Synchronization for Merge Replication and I am investigating it, but I have not yet learned how to implement filtered replication. I am not sure how NoSQL fits an e-commerce problem (I think I would need a combination of NoSQL + RDBMS if I went that way), but I am investigating RavenDB and MongoDB.
Any insight would help a lot; thanks.
Geographically Distributed (Data & App) Architecture
design patterns;architecture;database;distribution;data replication
null
_ai.3548
I'm currently pursuing computer science engineering, so I would like to know where to start and what mathematics is needed to jump in.
Getting started with Artificial intelligence
ai community
null
_codereview.117322
Like other posters, I'm currently working on recreating Google's Homepage for The Odin Project.I'm new to HTML and CSS but I'm eager to learn and have been looking around for an answer, testing different code, and pacing back & forth - I'm still stuck.I'm having an issue with positioning things on my project. I've read through W3School's documentation on CSS but the method in which I used to position my buttons Google Search & I'm feeling lucky seems like a in-this-case-solution that may not work with all browsers or especially responsive design.This is the area I'm trying to duplicate:I was able to center the logo and the search bar with:{margin-left: auto;margin-right: auto;display: block;}I was able to center the buttons with:#googleSearch {display: inline-block; margin-left: 520px;margin-right: auto;}#feelingLucky {display: inline-block;}All of my code is below but my questions is: is there a better way to position the buttons and if so, what? Playing with the margin-left until it looks right seems to me like the equivalent of using a bunch of line breaks instead of changing the margin or padding. li {font-family: arial,sans-serif;font-size: 13px; list-style: none; display: inline-block;}nav {text-align: right; padding-right: 160px; word-spacing: 10px;}#userName {opacity: .55;}a, a:visited {color: black; text-decoration: none;}a:hover {text-decoration: underline;}img, #searchBox {display: block;margin-left: auto;margin-right: auto;}img {margin-top: 200px}#googleSearch {display: inline-block; margin-left: 520px;margin-right: auto;}#feelingLucky {display: inline-block;}<!DOCTYPE HTML><html lang=en><head><link rel=stylesheet href=index.css><meta charset=UTF-8><title>Google</title><link rel=shortcut icon type=image/x-icon href=http://www.google.com/favicon.ico> </head><body><header><nav> <ul> <li><span id=userName>Jarod</span></li> <li><a href=https://accounts.google.com/ServiceLogin?passive=1209600&continue=https%3A%2F%2Faccounts.google.com%2FManageAccount&followup=https%3A%2F%2Faccounts.google.com%2FManageAccount>Gmail</a></li> <li><a href=https://www.google.com/imghp?hl=en&tab=wi&ei=yeCeVp3uLcyMmQH3mJ8w&ved=0EKouCBYoAQ>Images</a></li> </ul> </nav> </header><img src=https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png alt=Logo> <form action=# method=get name=searchForm> <input id=searchBox type=text name=searchBox><br> <div id=googleSearch> <input id=two type=submit value=Google Search> <input id=feelingLucky type=submit value=I'm Feeling Lucky> </form> </body><footer></footer></html>Is there a better way to position my buttons?
Recreating Google homepage for learning purposes
beginner;html;css
null
_softwareengineering.114681
I am learning functional programming with Haskell, and I try to grasp concepts by first understanding why I need them. I would like to know the goal of arrows in functional programming languages. What problem do they solve? I checked http://en.wikibooks.org/wiki/Haskell/Understanding_arrows and http://www.cse.chalmers.se/~rjmh/afp-arrows.pdf. All I understand is that they are used to describe graphs of computations, and that they allow easier point-free style coding.
The article assumes that point-free style is generally easier to understand and to write. This seems quite subjective to me. In another article (http://en.wikibooks.org/wiki/Haskell/StephensArrowTutorial#Hangman:_Main_program), a hangman game is implemented, but I cannot see how arrows make this implementation natural.
I could find a lot of papers describing the concept, but nothing about the motivation. What am I missing?
what is the purpose of arrows?
functional programming;haskell
I realize I'm coming late to the party, but you've had two theoretical answers here, and I wanted to provide a practical alternative to chew over. I'm coming at this as a relative Haskell noob who nonetheless has been recently force-marched through the subject of Arrows for a project I'm currently working on. First, you can productively solve most problems in Haskell without reaching for Arrows. Some notable Haskellers genuinely do not like and do not use them (see here, here, and here for more on this). So if you're saying to yourself Hey, I don't need these, understand that you may genuinely be correct.What I found most frustrating about Arrows when I first learned them was how the tutorials on the subject inevitably reached for the analogy of circuitry. If you look at Arrow code -- the sugared variety, at least -- it resembles nothing so much as a Hardware Defnition Language. Your inputs line up on the right, your outputs on the left, and if you fail to wire them all up properly they simply fail to fire. I thought to myself: Really? Is this where we've ended up? Have we created a language so completely high-level that it once again consists of copper wires and solder?The correct answer to this, as far as I've been able to determine, is: Actually, yes. The killer use case right now for Arrows is FRP (think Yampa, games, music, and reactive systems in general). The problem facing FRP is largely the same problem facing all other synchronous messaging systems: how to wire a continuous stream of inputs into a continuous stream of outputs without dropping relevant information or springing leaks. You can model the streams as lists -- several recent FRP systems use this approach -- but when you have a lot of inputs lists become almost impossible to manage. You need to insulate yourself from the current. What Arrows allow in FRP systems is the composition of functions into a network while at the same time entirely abstracting away any reference at all to the underlying values being passed by those functions. If you're new to FP, this can be confusing at first, and then mind-blowing when you've absorbed the implications of it. You've only recently absorbed the idea that functions can be abstracted, and how to understand a list like [(*), (+), (-)] as being of type [(a -> a -> a)]. With Arrows, you can push the abstraction one layer further. This additional ability to abstract carries with it its own dangers. For one thing, it can push GHC into corner cases where it doesn't know what to make of your type assumptions. You'll have to be prepared to think at the type level -- this is an excellent opportunity to learn about kinds and RankNTypes and other such topics. There are also a number of examples of what I'd call Stupid Arrow Stunts where the coder reaches for some Arrow combinator just because he or she wants to show off a neat trick with tuples. (Here's my own trivial contribution to the madness.) Feel free to ignore such hot-dogging when you come across it in the wild. NOTE: As I mentioned above, I'm a relative noob. If I've promulgated any misconceptions above, please feel free to correct me.
_datascience.15791
Can somebody explain to me, in simple language, how a multiple instance ranking algorithm works? What is the ranking function? How is it expressed mathematically?
Multiple Instance Ranking Algorithm
image classification
null
_unix.260645
I am connected to a machine via SSH and I often need to listen to WAV files on that machine. I usually open another terminal window and do ssh host 'cat /path/to/sound.wav' | aplay, but that's tedious since I have to enter my password every time (I cannot use public-key authentication) and the file paths are long.
What I would like to do is start magic-command on my local computer, and then, whenever I need to listen to a remote file, run play-locally sound.wav from the remote shell and hear the sound from my speakers. Is this possible?
Playing remote audio files over SSH
ssh;audio;remote;streaming;wav
null
_codereview.12632
I have been using a slightly modified Hamming distance algorithm for approximate string matching of patterns and am wondering if there is something better out there. With t being the length of the text and p being the length of the pattern, the worst case is roughly O(t*p), which, from looking at other fuzzy string matching approaches, seems to be the norm.
final int mismatches = 1;
final String text = "bubbles";
final String pattern = "bu";
for(int iter = 0; iter < text.length() - pattern.length() + 1; iter++) {
    int missed = 0;
    int ator = 0;
    do {
        if(text.charAt(iter + ator) != pattern.charAt(ator)) {
            missed++;
        }
    } while(++ator < pattern.length() && missed <= mismatches);
    if(missed <= mismatches) {
        System.out.println("Index: " + iter + " Pattern: " + text.substring(iter, iter + pattern.length()));
    }
}
The output is indexes 0 "bu", 2 "bb", and 3 "bl", the last two being mismatches within the tolerance of 1.
Pattern Matching with Mismatch
java;strings
null
_unix.137827
I'm using multiple BBB's (Rev C), communicating with them from my Mac (OSX 10.9.3) over USB using the HoRNDIS drivers. The BBB's are running Debian, and so I want to manually assign them all different static IP's. However, I can't get the IP to be anything but 192.168.7.2. Changing the /etc/network/interfaces file to have an ip of 192.168.7.10 does nothing:# This file describes the network interfaces available on your system# and how to activate them. For more information, see interfaces(5).# The loopback network interfaceauto loiface lo inet loopback# The primary network interface#auto eth0#iface eth0 inet dhcp# Example to keep MAC address between reboots#hwaddress ether DE:AD:BE:EF:CA:FEauto eth0iface eth0 inet staticaddress 192.168.2.2netmask 255.255.255.0network 192.168.2.0broadcast 192.168.2.255gateway 192.168.2.1# WiFi Example#auto wlan0#iface wlan0 inet dhcp# wpa-ssid essid# wpa-psk password# Ethernet/RNDIS gadget (g_ether)# ... or on host side, usbnet and random hwaddr# Note on some boards, usb0 is automaticly setup with an init script# in that case, to completely disable remove file [run_boot-scripts] from the boot partitioniface usb0 inet static address 192.168.7.10 netmask 255.255.255.0 network 192.168.7.0 gateway 192.168.7.1Indeed, there was a file in the boot partition that I also changed, with no result:#!/bin/bash# Update /etc/network/interfaces to add virtual Ethernet portcat >>/etc/network/interfaces <<EOFiface usb0 inet static address 192.168.7.10 netmask 255.255.255.0 network 192.168.7.0 gateway 192.168.7.1EOF# Add terminal to virtual serial portcat >/etc/init/gadget-serial.conf <<EOFstart on stopped rc RUNLEVEL=[2345]stop on runlevel [!2345]respawnexec /sbin/getty 115200 ttyGS0EOF# Write script to start gadget drivercat >/usr/sbin/g-multi-load.sh <<'EOF'#!/bin/bashif [ `lsmod | grep g_multi` != ]; then exit 0; fimac_addr=/proc/device-tree/ocp/ethernet@4a100000/slave@4a100300/mac-addresseeprom=/sys/bus/i2c/devices/0-0050/eepromDEV_ADDR=$(perl -e 'print join(:,unpack((H2)*,<>))' ${mac_addr})VERSION=$(perl -e '@x=unpack(A12A4,<>); print $x[1]' ${eeprom})SERIAL_NUMBER=$(perl -e '@x=unpack(A16A12,<>); print $x[1]' ${eeprom})ISBLACK=$(perl -e '@x=unpack(A20A4,<>); print $x[1]' ${eeprom})BLACK=if [ ${ISBLACK} = BBBK ] ; then BLACK=Blackfiif [ ${ISBLACK} = BNLT ] ; then BLACK=Blackfimodprobe g_multi file=/dev/mmcblk0p1 cdrom=0 stall=0 removable=1 nofua=1 iSerialNumber=${SERIAL_NUMBER} iManufacturer=Circuitco iProduct=BeagleBone${BLACK} host_addr=${DEV_ADDR}# Enable the network interfacesleep 1ifup usb0EOFchmod +x /usr/sbin/g-multi-load.sh# Add script to rc.localperl -i -pe 's!^exit 0!/usr/sbin/g-multi-load.sh\nexit 0!' /etc/rc.local# Install DHCP serversudo apt-get -y updatesudo apt-get -y install isc-dhcp-server# Configure DHCP servercat >/etc/ltsp/dhcp.conf <<EOFddns-update-style none;subnet 192.168.7.0 netmask 255.255.255.252 { range 192.168.7.1 192.168.7.1;}EOFperl -i -pe 's/INTERFACES=.*/INTERFACES=usb0/' /etc/default/isc-dhcp-server# Start up services/usr/sbin/g-multi-load.shservice isc-dhcp-server start
Static IP on BBB won't change with network/interfaces file
debian;arm
There is a third file you need to change for the BeagleBone Black: /opt/scripts/boot/am335x_evm.sh. I found it here: http://ewong.me/changing-usb0-ip-address-on-the-beaglebone-black
_codereview.52594
My subclass extends the Formatter class, and I wonder if there is a more elegant way to rewrite my code.
Parent:
class Formatter(object):
    def resp_in_hash(self, resp):
        ...
        return rtn_hash
Child:
from formatter import Formatter
# -----------------------------------------------------------------
class Request(Formatter):
    def req(self, url_path, is_no_cache=True):
        ...
    def resp_in_hash(self, url_path):
        resp, content = self.req(url_path)
        return super(Request, self).resp_in_hash(content)
More elegant way to call a parent method with the same name
python
null