How to survive the heat in regions with very high temperature?
You may want to look into buying clothing specifically marketed as keeping you cool. I regularly ride my bike 15 km in 30 °C heat. In a white cotton T-shirt, sweat runs down my arms and I have trouble gripping the handlebars. In a long-sleeved "performance" shirt I feel much cooler (even in a dark blue one) and sweat far less. I also drink less water in those shirts. Example: http://www.mec.ca/product/5032-035/mec-rhythm-long-sleeve-womens - I have no affiliation, though I own three of that shirt. A long-sleeved loose shirt, even if it's not a special cooling shirt, may well be cooler than bare skin and will protect against sunburn. You may also be glad of it when you go inside and the air conditioning is on stun.
I'm resident in Canada and occasionally travel to Dubai to visit my parents. I am in Dubai right now and it was 44 degrees Celsius this afternoon. Note that I am used to wearing shorts even when it is below 0 Celsius in Canada. Here is what I have for you:

Clothing

Absolutely avoid long-sleeved clothing
Wear shorts instead of jeans or other long pants as much as possible
Try to keep to light-coloured clothes
Some find it helpful to use a parasol if they go for long walks

Diet

Carry fluids (preferably water) with you wherever you go
Don't drink anything that has been left in a car that was exposed to the heat
Las Vegas isn't humid but, in places that are both hot and humid, I find my nose getting constantly blocked, which makes it harder to breathe. To avoid aggravating the situation, I avoid going to places that are filled with people who smoke (it is okay to smoke indoors in bars in Dubai)
Heavy meals make me sleepy and lethargic and sap any energy that wasn't already drained. I'd avoid the buffets :)

Exercise

You may consider yourself to be physically fit, but remember that your body is not acclimatised to this weather. Know your limits and listen to your body
If you must exercise outdoors, do it incrementally. That will give your body a better chance of adapting
For any significant amount of time exposed to sunlight, you should use sunscreen
Difference between Login Shell and Non-Login Shell?
I'll elaborate on the great answer by Gilles, combined with Timothy's method for checking the login shell type. If you like to see things for yourself, try the snippets and scenarios below. Checking whether a shell is (non-)interactive <code>if tty -s; then echo 'This is interactive shell.'; else echo 'This is non-interactive shell.'; fi </code> Checking whether a shell is (non-)login If the output of <code>echo $0</code> starts with <code>-</code>, it's a login shell (<code>echo $0</code> output example: <code>-bash</code>). Otherwise it's a non-login shell (<code>echo $0</code> output example: <code>bash</code>). <code>if echo $0 | grep -e ^\- 2>&1>/dev/null; then echo "This is login shell."; else echo "This is non-login shell."; fi; </code> Let's combine the two above to get both pieces of information at once: <code>THIS_SHELL_INTERACTIVE_TYPE='non-interactive'; THIS_SHELL_LOGIN_TYPE='non-login'; if tty -s; then THIS_SHELL_INTERACTIVE_TYPE='interactive'; fi; if echo $0 | grep -e ^\- 2>&1>/dev/null; then THIS_SHELL_LOGIN_TYPE='login'; fi; echo "$THIS_SHELL_INTERACTIVE_TYPE/$THIS_SHELL_LOGIN_TYPE" </code> Scenarios: Typical SSH session without special options <code>ssh ubuntu@34.247.105.87
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-1083-aws x86_64)
ubuntu@ip-172-31-0-70:~$ THIS_SHELL_INTERACTIVE_TYPE='non-interactive';
ubuntu@ip-172-31-0-70:~$ THIS_SHELL_LOGIN_TYPE='non-login';
ubuntu@ip-172-31-0-70:~$ if tty -s; then THIS_SHELL_INTERACTIVE_TYPE='interactive'; fi;
ubuntu@ip-172-31-0-70:~$ if echo $0 | grep -e ^\- 2>&1>/dev/null; then THIS_SHELL_LOGIN_TYPE='login'; fi;
ubuntu@ip-172-31-0-70:~$ echo "$THIS_SHELL_INTERACTIVE_TYPE/$THIS_SHELL_LOGIN_TYPE"
interactive/login
</code> Running a script or executing explicitly via a new shell <code>ubuntu@ip-172-31-0-70:~$ bash -c 'THIS_SHELL_INTERACTIVE_TYPE='non-interactive'; THIS_SHELL_LOGIN_TYPE='non-login'; if tty -s; then THIS_SHELL_INTERACTIVE_TYPE='interactive'; fi; if echo $0 | grep -e ^\- 2>&1>/dev/null; then THIS_SHELL_LOGIN_TYPE='login'; fi; echo "$THIS_SHELL_INTERACTIVE_TYPE/$THIS_SHELL_LOGIN_TYPE"'
interactive/non-login
</code> Running a local script remotely <code>ssh ubuntu@34.247.105.87 < checkmy.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-1083-aws x86_64)
non-interactive/login
</code> Running a command over ssh remotely <code>ssh ubuntu@34.247.105.87 'THIS_SHELL_INTERACTIVE_TYPE='non-interactive'; THIS_SHELL_LOGIN_TYPE='non-login'; if tty -s; then THIS_SHELL_INTERACTIVE_TYPE='interactive'; fi; if echo $0 | grep -e ^\- 2>&1>/dev/null; then THIS_SHELL_LOGIN_TYPE='login'; fi; echo "$THIS_SHELL_INTERACTIVE_TYPE/$THIS_SHELL_LOGIN_TYPE"'
non-interactive/non-login
</code> Running a command over ssh remotely with the <code>-t</code> switch You can explicitly request an interactive shell when you want to run a command remotely via ssh by using the <code>-t</code> switch. <code>ssh ubuntu@34.247.105.87 -t 'THIS_SHELL_INTERACTIVE_TYPE='non-interactive'; THIS_SHELL_LOGIN_TYPE='non-login'; if tty -s; then THIS_SHELL_INTERACTIVE_TYPE='interactive'; fi; if echo $0 | grep -e ^\- 2>&1>/dev/null; then THIS_SHELL_LOGIN_TYPE='login'; fi; echo "$THIS_SHELL_INTERACTIVE_TYPE/$THIS_SHELL_LOGIN_TYPE"'
interactive/non-login
</code> Note: for more on why running a command remotely does not give a <code>login shell</code>, see here.
A shell started in a new terminal in a GUI would be an interactive non-login shell. It would source your .bashrc, but not your .profile, for example.
Sort and count number of occurrences of lines <sep> I have an <code>Apache</code> logfile, <code>access.log</code>; how do I count the number of occurrences of each line in that file?
<code>[your command] | sort | uniq -c | sort -nr </code> The accepted answer is almost complete; you might want to add an extra <code>sort -nr</code> at the end to sort the results with the lines that occur most often first. uniq options: <code>-c, --count
       prefix lines by the number of occurrences
</code> sort options: <code>-n, --numeric-sort
       compare according to string numerical value
-r, --reverse
       reverse the result of comparisons
</code> In the particular case where the lines you are sorting are numbers, you need to use <code>sort -gr</code> instead of <code>sort -nr</code>; see comment
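For instance, assuming a combined-format access log where the request path is field 7 (that field number is an assumption about your log format), counting the most-requested paths could look like: <code>awk '{print $7}' access.log | sort | uniq -c | sort -nr | head
</code>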
You can use an associative array in awk and then, optionally, sort: <code>$ awk ' { tot[$0]++ } END { for (i in tot) print tot[i],i } ' access.log | sort </code> output: <code>1 c.php
1 d.php
2 b.php
3 a.php
</code>
linux: How can I view all UUIDs for all available disks on my system?
To only get the <code>UUID</code> of a specific disk device (for example to be used in a script) you can use: <code>sudo blkid -s UUID -o value /dev/sdXY </code> where <code>/dev/sdXY</code> is the name of the device.
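For example, to capture the value in a script variable and build a hypothetical <code>/etc/fstab</code> line from it (device, mount point and options here are placeholders): <code>uuid=$(sudo blkid -s UUID -o value /dev/sdXY)
echo "UUID=$uuid  /mnt/data  ext4  defaults  0 2"
</code>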
This works for me: <code>ls -la /dev/disk/by-uuid </code> If you want to check what type the partition is, use: <code>df -Th </code> and it will show you if you have ext3 or ext2. Today it helped me because there was a formatted ext2 partition and I thought it was ext3, which was causing the mount to fail.
Is there a command to list all open displays on a machine?
If you want the X connection forwarded over SSH, you need to enable it on both the server side and the client side. (Depending on the distribution, it may be enabled or disabled by default.) On the server side, make sure that you have <code>X11Forwarding yes</code> in <code>/etc/sshd_config</code> (or <code>/etc/ssh/sshd_config</code> or wherever the configuration file is). On the client side, pass the <code>-X</code> option to the <code>ssh</code> command, or put <code>ForwardX11</code> in your <code>~/.ssh/config</code>. If you run <code>ssh -X localhost</code>, you should see that <code>$DISPLAY</code> is (probably) <code>localhost:10.0</code>. Contrast with <code>:0.0</code>, which is the value when you're not connected over SSH. (The <code>.0</code> part may be omitted; it's a screen number, but multiple screens are rarely used.) There are two forms of X displays that you're likely to ever encounter: Local displays, with nothing before the <code>:</code>. TCP displays, with a hostname before the <code>:</code>. With <code>ssh -X localhost</code>, you can access the X server through both displays, but the applications will use a different method: <code>:NUMBER</code> accesses the server via local sockets and shared memory, whereas <code>HOSTNAME:NUMBER</code> accesses the server over TCP, which is slower and disables some extensions. Note that you need a form of authorization to access an X server, called a cookie and normally stored behind the scenes in the file <code>~/.Xauthority</code>. If you're using ssh to access a different user account, or if your distribution puts the cookies in a different file, you may find that <code>DISPLAY=:0</code> doesn't work within the SSH session (but <code>ssh -X</code> will, if it's enabled in the server; you never need to mess with <code>XAUTHORITY</code> when doing <code>ssh -X</code>). If that's a problem, you need to set the <code>XAUTHORITY</code> environment variable or obtain the other user's cookies. To answer your actual question: Local displays correspond to a socket in <code>/tmp/.X11-unix</code>. <code>(cd /tmp/.X11-unix && for x in X*; do echo ":${x#X}"; done) </code> Remote displays correspond to open TCP ports above 6000; accessing display number N on machine M is done by connecting to TCP port 6000+N on machine M. From machine M itself: <code>netstat -lnt | awk ' sub(/.*:/,"",$4) && $4 >= 6000 && $4 < 6100 { print ($1 == "tcp6" ? "ip6-localhost:" : "localhost:") ($4 - 6000) }' </code> (The rest of this bullet point is of academic interest only.) From another machine, you can use <code>nmap -p 6000-6099 host_name</code> to probe open TCP ports in the usual range. It's rare nowadays to have X servers listening on a TCP socket, especially outside the loopback interface. Strictly speaking, another application could be using a port in the range usually used by X servers. You can tell whether an X server is listening by checking which program has the port open. <code>lsof -i -n | awk '$9 ~ /:60[0-9][0-9]$/ {print}' </code> If that shows something ambiguous like <code>sshd</code>, there's no way to know for sure whether it's an X server or a coincidence.
The display is the first argument to <code>Xorg</code>. You can run <code>ps</code> and then grep <code>Xorg</code> out. <code>[braga@coleman teste_geom]$ ps aux | grep Xorg
root      1584  5.3  1.0 156628 41708 tty1   Rs+  Jul22  22:56 /usr/bin/Xorg :0 -background none -verbose -auth /var/run/gdm/auth-for-gdm-a3kSKB/database -nolisten tcp vt1
braga     9110  0.0  0.0 109104   804 pts/1  S+   00:26   0:00 grep --color=auto Xorg
</code> You can then <code>awk</code> this into whatever format you need.
List of available services <sep> Is there any command that would show all the available services in my wheezy Debian based OS?
On Debian jessie try: <code>service --status-all</code>. It is in the <code>sysvinit-utils</code> package.
Wheezy uses SysV init, and all the services are controlled with special shell scripts in <code>/etc/init.d</code>, so <code>ls /etc/init.d</code> will list them. These files also contain a description of the service at the top, and the directory contains a <code>README</code>. Some but not all of them have a <code>.sh</code> suffix; you should leave that off when using, e.g., <code>update-rc.d</code>.
How to troubleshoot DNS with systemd-resolved?
Also very helpful for troubleshooting is: <code>journalctl -u systemd-resolved -f </code> There you can see what <code>systemd-resolved</code> is really doing. In my case it was not contacting the DNS servers that were reported via <code>systemd-resolve --status</code> at all. If it's doing weird things like that, then sometimes a restart via <code>sudo systemctl restart systemd-resolved</code> is a good idea. EDIT: In order to get more information from <code>resolved</code> you need to put <code>[Service]
Environment=SYSTEMD_LOG_LEVEL=debug
</code> into the <code>override.conf</code> of <code>systemd-resolved</code> via <code>sudo systemctl edit systemd-resolved </code> Restart to take effect: <code>sudo systemctl restart systemd-resolved </code> EDIT 2: Don't forget to revert this afterwards, as @bmaupin and @Aminovic have helpfully pointed out in the comments. <code>sudo systemctl revert systemd-resolved
sudo systemctl restart systemd-resolved
</code>
Use <code>resolvectl status</code> (<code>systemd-resolve --status</code> when using systemd version earlier than 239) to show your global and per-link DNS settings.
How can I increase the number of inodes in an ext4 filesystem?
With 3.2 million inodes, you can have 3.2 million files and directories, total (but multiple hardlinks to a file only use one inode). Yes, it can be set when creating a filesystem on the partition. The options <code>-T usage-type</code>, <code>-N number-of-inodes</code>, or <code>-i bytes-per-inode</code> can all set the number of inodes. I generally use <code>-i</code>, after comparing the output of <code>du -s</code> and <code>find | wc -l</code> for a similar collection of files and allowing for some slack. No, it can't be changed in-place on an existing filesystem. However: If you're running LVM or the filesystem is on a SAN's LUN (either directly on the LUN, or as the last partition on the LUN), or you have empty space on the disk after the partition, you can grow the partition and then use <code>resize2fs</code> to expand the filesystem. This adds more inodes in proportion to the added space, roughly. If you want to avoid running out of inodes before space assuming that future files on average have about the same size, set a high enough reserved block percentage using <code>tune2fs -m</code>. If you have enough space and can take the filesystem offline, then take it offline, create a new filesystem with more inodes, and copy all the files over. If just a subset of the files are using a lot of the inodes and you have enough free space, create a filesystem on a loop device backed by a file on the filesystem, create a filesystem with more inodes (and maybe smaller blocks as well) on it, and move the offending directories into it. That's probably a performance hit and a maintenance hassle, but it is an alternative. And of course, if you can delete a lot of unneeded files, that should help too.
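For illustration, a minimal sketch of the creation-time options mentioned above (the device name and the numbers are placeholders, not recommendations): <code># one inode per 16 KiB of data, for filesystems holding many small files
mkfs.ext4 -i 16384 /dev/sdXY
# or request an absolute number of inodes instead
mkfs.ext4 -N 5000000 /dev/sdXY
# inspect the result
tune2fs -l /dev/sdXY | grep -i inode
</code>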
As another workaround I could suggest considering packing huge collections of files into an uncompressed(!) <code>tar</code> archive, and then using <code>archivemount</code> to mount it as a filesystem. A tar archive is better for sharing than a filesystem image and provides similar performance when backing up to a cloud or another storage. If the collection is supposed to be read-only, <code>squashfs</code> may be an option, but it requires certain options enabled in the kernel, and <code>xz</code> compression is available for tar as well with the same performance.
Why do reboot and poweroff require root privileges?
Warning: by the end of this answer you'll probably know more about linux than you wanted to Why <code>reboot</code> and <code>poweroff</code> require root privileges GNU/Linux operating systems are multi-user, as were its UNIX predecessors. The system is a shared resource, and multiple users can use it simultaneously. In the past this usually happened on computer terminals connected to a minicomputer or a mainframe. The popular PDP-11 minicomputer. A bit large, by today's standards :) In modern days, this can happen either remotely over the network (usually via SSH), on thin clients or on a multiseat configuration, where there are several local users with hardware attached to the same computer. A multi-seat configuration. Photo by Tiago Vignatti In practice, there can be hundreds or thousands of users using the same computer simultaneously. It wouldn't make much sense if any user could power off the computer, and prevent everyone else from using it. What security risk is posed by not requiring this to have root privileges? On a multi-user system, this prevents what is effectively a denial-of-service attack The GUI provides a way for any user to shut off or restart, so why do the terminal commands need to be run as root? Many Linux distributions do not provide a GUI. The desktop Linux distributions that do are usually oriented to a single user pattern, so it makes sense to allow this from the GUI. Possible reasons why the commands still require root privileges: Most users of a desktop-oriented distro will use the GUI, not the command line, so it's not worth the trouble Consistency with accepted UNIX conventions (Arguably misguided) security, as it prevents naive programs or scripts from powering off the system How is the GUI able to present shutdown without root privileges? The actual mechanism will vary depending on the specific desktop manager (GUI). Generally speaking, there are several mechanisms available for this type of task: Running the GUI itself as root (hopefully that shouldn't happen on any proper implementation...) setuid sudo with NOPASSWD Communicating the command to another process that has those privileges, usually done with D-Bus. On popular GUIs, this is usually managed by polkit. In summary Linux is used in very diverse environments - from mainframes, servers and desktops to supercomputers, mobile phones, and microwave ovens. It's hard to keep everyone happy all the time! :)
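To make the sudo-with-NOPASSWD mechanism concrete, here is a minimal hypothetical sketch; the group name is invented for the example, and the binary paths may differ per distribution: <code># /etc/sudoers.d/power  (hypothetical example; edit with visudo)
%power ALL=(root) NOPASSWD: /sbin/poweroff, /sbin/reboot
</code> Members of that group could then run <code>sudo poweroff</code> without being prompted for a password.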
Shutdown (of any kind) affects all users, up to and including killing their processes. This is not something that you would normally want J. Random User to be able to do, simply because they are logged in. Normally, only authorised operators should be allowed to reboot, and in some cases, those with physical access - many Linux systems can be shut down from a power button on the case. I know this, because I have accidentally done so! Nowadays, I normally leave the button disconnected when assembling a system...
getopt, getopts or manual parsing - what to use when I want to support both short and long options?
If it has to be portable to a range of Unices, you'd have to stick to POSIX sh. And AFAIU there you have no choice but to roll argument handling by hand.
There's this getopts_long written as a POSIX shell function that you may embed inside your script. Note that the Linux <code>getopt</code> (from <code>util-linux</code>) works correctly when not in traditional mode and supports long options, but is probably not an option for you if you need to be portable to other Unices. Recent versions of ksh93 (<code>getopts</code>) and zsh (<code>zparseopts</code>) have built-in support for parsing long options, which might be an option for you as those are available for most Unices (though often not installed by default). Another option would be to use <code>perl</code> and its <code>Getopt::Long</code> module, both of which should be available on most Unices nowadays, either by writing the whole script in <code>perl</code> or by calling perl just to parse the options and feed the extracted information to the shell. Something like: <code>parsed_ops=$(
  perl -MGetopt::Long -le '
    @options = ( "foo=s", "bar", "neg!" );
    Getopt::Long::Configure "bundling";
    $q="'\''";
    GetOptions(@options) or exit 1;
    for (map /(\w+)/, @options) {
      eval "\$o=\$opt_$_";
      $o =~ s/$q/$q\\$q$q/g;
      print "opt_$_=$q$o$q"
    }' -- "$@"
) || exit
eval "$parsed_ops"
# and then use $opt_foo, $opt_bar...
</code> See <code>perldoc Getopt::Long</code> for what it can do and how it differs from other option parsers.
Why put things other than /home to a separate partition?
Minimizing loss: if <code>/usr</code> is on a separate partition, a damaged <code>/usr</code> does not mean that you cannot recover <code>/etc</code>. Security: <code>/</code> cannot always be read-only (<code>/root</code> may need to be rw, etc.) but <code>/usr</code> can. It can be used to make as much of the system read-only as possible. Using different filesystems: I may want to use a different filesystem for <code>/tmp</code> (not reliable, but fast for many files) and <code>/home</code> (which has to be reliable). Similarly, <code>/var</code> contains data while <code>/usr</code> does not, so <code>/usr</code>'s stability can be sacrificed somewhat, though not as far as <code>/tmp</code>'s. Duration of fsck: smaller partitions mean that checking one is faster. Filling up of partitions has already been mentioned, although quotas are another method of dealing with that.
A separate <code>/usr</code> can be useful if you have several machines sharing the same OS. They can share a single central <code>/usr</code> instead of duplicating it on every system. <code>/usr</code> can be mounted read-only. <code>/var</code> and <code>/tmp</code> can be filled up by user programs or daemons. Therefore it can be safe to have these in separate partitions that would prevent <code>/</code>, the root partition, to be 100% full, and would hit your system badly. To avoid having two distinct partitions for these, it is not uncommon to see <code>/tmp</code> being a symlink to <code>/var/tmp</code>.
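As a rough sketch of what such a layout could look like in <code>/etc/fstab</code> (devices, filesystems and options here are illustrative assumptions only): <code>/dev/sda2  /     ext4  defaults         0 1
/dev/sda3  /usr  ext4  ro,nodev         0 2
/dev/sda4  /var  ext4  defaults,nosuid  0 2
/dev/sda5  /tmp  ext4  nosuid,nodev     0 2
</code>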
What file systems on Linux store the creation time?
Several file systems store the file creation time, although there is no standard name for this field:

ufs2: <code>st_birthtime</code>
zfs: <code>crtime</code>
ext4: <code>crtime</code>
btrfs: <code>otime</code>
jfs: <code>di_otime</code>
The ext4 file system does store the creation time. <code>stat -c %W myfile</code> can show it to you.
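If <code>stat -c %W</code> prints 0 on your system, the crtime can still be read from the inode with debugfs on ext4; a sketch, assuming the file lives on the hypothetical device <code>/dev/sdXY</code> (needs root): <code>sudo debugfs -R "stat <$(stat -c %i myfile)>" /dev/sdXY | grep crtime
</code>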
How to close ports in Linux?
To "close" the port you can use <code>iptables</code> <code>sudo iptables -A INPUT -p tcp --dport 23 -m state --state NEW,ESTABLISHED -j DROP </code>
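To verify the rule took effect, and to remove it again later, something along these lines should work (the rule number is whatever the listing shows on your system): <code>sudo iptables -L INPUT -n --line-numbers   # list INPUT rules with positions
sudo iptables -D INPUT 1                   # delete the rule at position 1
</code>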
A Linux system has a so-called loopback interface, which is for internal communication. Its hostname is <code>localhost</code> and its IP address is <code>127.0.0.1</code>. When you run <code>nmap</code> on <code>localhost</code>, you actually run the port scan on the virtual loopback interface. <code>192.168.1.1</code> is the IP address of your physical (most likely <code>eth0</code>) interface. So you've run <code>nmap</code> on two different network interfaces; this is why there's a difference in the open ports. They are both true. If you have TCP port 23 open, it is likely that you have a <code>telnet</code> server running (which is not a good thing due to its lack of encryption) or you have some kind of trojan horse on your machine.
How to get the tty in which bash is running?
If you want to be more efficient, then yes, you're right that <code>ps</code> can filter to just the process in question (and it will be more correct, not running the risk of picking up commands that happen to have your process number in their names). Not only that, but it can be told not to generate the header (option <code>h</code>), eliminating the <code>tail</code> process, and to display only the <code>TTY</code> field (option <code>o tty</code>), eliminating the <code>awk</code> process. So here's your reduced command: <code>ps hotty $$ </code>
Other ways to do it: <code>readlink /dev/fd/0        # or 1 or 2
readlink /proc/self/fd/0  # or 1 or 2
readlink -f /dev/stdin    # or stdout or stderr; -f to resolve recursively
# etc.
</code> ( If you're in a shell whose stdin, stdout and stderr are not connected to its controlling terminal, you can get a file descriptor to the controlling terminal by opening <code>/dev/tty</code>: <code>( { readlink /dev/fd/0; } </dev/tty; ) </dev/null >output 2>&1 </code> ) Or with <code>ps</code>: <code>ps h -o tty -p $$   # no header (h); print tty column; for pid $$ </code>
Generic HTTP server that just dumps POST requests?
I was looking for this myself as well and ran into the Node.js http-echo-server: <code>npm install http-echo-server -g
PORT=8081 http-echo-server
</code> It accepts all requests and echoes the full request, including headers, to the command line.
Simple core command-line tools like <code>nc</code> and <code>socat</code> seem not to be able to handle the specific HTTP stuff going on (chunks, transfer encodings, etc.). As a result this may produce unexpected behaviour compared to talking to a real web server. So, my first thought is to share the quickest way I know of setting up a tiny web server and making it just do what you want: dump all output. The shortest I could come up with using Python Tornado: <code>#!/usr/bin/env python
import tornado.ioloop
import tornado.web
import pprint

class MyDumpHandler(tornado.web.RequestHandler):
    def post(self):
        pprint.pprint(self.request)
        pprint.pprint(self.request.body)

if __name__ == "__main__":
    tornado.web.Application([(r"/.*", MyDumpHandler),]).listen(8080)
    tornado.ioloop.IOLoop.instance().start()
</code> Replace the <code>pprint</code> line to output only the specific fields you need, for example <code>self.request.body</code> or <code>self.request.headers</code>. In the example above it listens on port 8080, on all interfaces. Alternatives to this are plenty: web.py, Bottle, etc. (I'm quite Python-oriented, sorry.) If you don't like its way of outputting, just run it anyway and try <code>tcpdump</code> like this: <code>tcpdump -i lo 'tcp[32:4] = 0x484f535420' </code> to see a real raw dump of all HTTP POST requests. Alternatively, just run Wireshark.
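Once the server is running, you can poke it with curl from another terminal, for example: <code>curl -X POST -d 'hello=world' http://localhost:8080/any/path
</code>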
What is /etc/mtab in Linux?
<code>% file /etc/mtab
/etc/mtab: symbolic link to ../proc/self/mounts
% file /proc/mounts
/proc/mounts: symbolic link to self/mounts
%
</code> <code>/etc/mtab</code> is a compatibility mechanism. Decades ago, Unix did not have a system call for reading the existing mount information. Instead, programs that mounted filesystems were expected to coöperatively and voluntarily maintain a table in <code>/etc/mtab</code> of what was mounted where. For obvious reasons, this was not an ideal mechanism. Linux gained the notion of a "procfs", and one of the things that it gained was a kernel-maintained version of this table, in the form of a <code>mounts</code> pseudo-regular file. The "system call" to read the mount information out of the kernel became an open-read-close sequence against that file, followed by parsing the result from human-readable to machine-readable form (something that has some subtle catches, as you can see from the bug reports from just over a fortnight ago). <code>/etc/mtab</code> thus has popularly become a symbolic link to <code>/proc/mounts</code>, allowing programs that had hardwired that name to keep reading a mount table from that file, a table that the programs which mount and unmount filesystems no longer have to do anything explicitly to keep up to date. (Some of them still will, though, if <code>/etc/mtab</code> turns out to be a writable regular file. And there are a few corner cases where the normalized information in <code>mounts</code> that lacks all non-kernel stuff is not quite what is needed; although they do not outweigh the general problems with <code>/etc/mtab</code>.) Each process can nowadays have its own individual view of what is mounted, and there are as a consequence now individual <code>mounts</code> files for each process in the procfs, each process's own table being accessible to it via the <code>self</code> symbolic link as <code>self/mounts</code>, and <code>/proc/mounts</code> is also now a compatibility mechanism. (Interestingly, neither per-process <code>mounts</code> nor the format of <code>mounts</code> are documented in the current Linux doco, although the similar <code>mountinfo</code> pseudo-regular file is.) SunOS/Solaris has a similar mechanism. The <code>/etc/mnttab</code> file is actually a single-file filesystem, and in addition to reading the table, via an open file descriptor to that file, with the <code>read()</code> system call, one can watch for mount point changes with <code>poll()</code> and obtain various further pieces of information with <code>ioctl()</code>. In HP-UX, <code>/etc/mnttab</code> is likewise the name of the file, but as of version 11 it was still a regular file whose contents were coöperatively maintained by the system utility programs. AIX does not export a human-readable text table that programs have to parse, and there is no equivalent file. The BSDs, similarly, have fully-fledged system calls, <code>getfsstat()</code> on FreeBSD and OpenBSD, for programs to obtain the mount table from the kernel in machine-readable form without marshalling it through a human-readable intermediate form. Further reading Zygmunt Krynicki (2019-03-16). \r in path confuses mount units. #12018. systemd issues. Zbigniew Jędrzejewski-Szmek (2019-04-04). [df] incorrect parsing of <code>/proc/self/mountinfo</code> with \r in mount path. #35137. GNU coreutils bugs. <code>/proc/mounts</code>. Documentation/filesystems/proc.txt. Linux 5.1. Jonathan de Boyne Pollard (2019-02-28). Re: what is the purpose of <code>fstab-decode</code>.
Bug #567071. Debian bugs. <code>getfsstat()</code>. FreeBSD System Calls Manual. 2016-12-27.
According to <code>man mount</code>: <blockquote> The programs mount and umount traditionally maintained a list of currently mounted filesystems in the file /etc/mtab. This real mtab file is still supported, but on current Linux systems it is better to make it a symlink to /proc/mounts instead, because a regular mtab file maintained in userspace cannot reliably work with namespaces, containers and other advanced Linux features. </blockquote> On mounting without recording in <code>/etc/mtab</code>: <blockquote> -n, --no-mtab Mount without writing in /etc/mtab. This is necessary for example when /etc is on a read-only filesystem. </blockquote> Many more nuances are given in the manual page.
How to prefix any output in a bash script?
You can redirect stderr/stdout to a process substitution that adds the prefix of choice. For example, this script: <code>#! /bin/bash
exec > >(trap "" INT TERM; sed 's/^/foo: /')
exec 2> >(trap "" INT TERM; sed 's/^/foo: (stderr) /' >&2)
echo foo
echo bar >&2
date
</code> Produces this output: <code>foo: foo
foo: (stderr) bar
foo: Fri Apr 27 20:04:34 IST 2018
</code> The first two lines redirect stdout and stderr respectively to sed commands that add <code>foo: </code> and <code>foo: (stderr)</code> to the input. The calls to the shell built-in command <code>trap</code> make sure that the subshell does not exit when terminating the script with <code>Ctrl+C</code> or by sending the <code>SIGTERM</code> signal using <code>kill $pid</code>. This ensures that your shell won't forcefully terminate your script when the <code>stdout</code> file descriptor disappears because <code>sed</code> received the termination signal as well and exited. Effectively you can still use exit traps in your main script, and <code>sed</code> will still be running to process any output generated while running your exit traps. The subshell should still exit after your main script ends, so the <code>sed</code> process won't be left running forever.
You could pipe the output through something that rewrites each line: <code>some long running stuff | sed -e 's/^/Some said: /;' </code> Also check 24337. Or just direct separate outputs to separate files/screen(1) tabs/tabs in your terminal/...
How can I ping multiple IP addresses at the same time?
I would suggest using GNU Parallel: <code>parallel -u ping ::: host1 host2 host3 </code> The output will be interleaved.
<code>oping host1 host2 host3 </code> Description: <blockquote> oping uses ICMP packages (better known as "ping packets") to test the reachability of network hosts. It supports pinging multiple hosts in parallel using IPv4 and/or IPv6 transparently. This package contains two command line applications: "oping" is a replacement for tools like ping(1), ping6(1) and fping(1). "noping" is an ncurses-based tool which displays statistics while pinging and highlights aberrant round-trip times. </blockquote>
Is it possible in bash to start reading a file from an arbitrary byte offset?
It sounds like you want: <code>tail -c +1048576 </code> or whatever number of bytes you want to skip. The plus sign tells tail to measure from the start of the file instead of the end. If you're using the GNU version of tail you can write that as: <code>tail -c +1M </code> To get a fixed number of bytes after the cut, instead of all the rest of the file, just pipe it through head: <code>tail -c +1048576 | head -c 1024 </code>
<code>for (( block = 0; block < 16; block += 1 ))
do
    echo $block;
    dd if=INPUTFILE skip=$((block*512))MB bs=64 count=1 status=noxfer 2> /dev/null | \
        head -n 1
done
</code> which ... creates no temporary split files, skips block * 512 MB of data on each run, reads 64 bytes from that position, and limits the output to the first line of those 64 bytes. You might want to adjust 64 to whatever you think you need.
How do I extract with tar to a different directory?
From <code>man tar</code>: <code>-C directory
   In c and r mode, this changes the directory before adding
   the following files.  In x mode, change directories after
   opening the archive but before extracting entries from the
   archive.
</code> i.e., <code>tar xC /foo/bar -f /tmp/foo.tar.gz</code> should do the job. (on FreeBSD, but GNU tar is basically the same in this respect, see "Changing the Working Directory" in its manual)
If you want to extract a tar archive elsewhere, just cd to the destination directory and untar it there: <code>mkdir -p foo/bar
cd foo/bar
tar xzvf /tmp/foo.tar.gz
</code> The command you've used would search for the file <code>foo/bar</code> in the archive and extract it.
Why do most Linux distros default to GNOME?
So, a complete answer to your question involves a bit of history. This is covered well in the book <code>REBEL CODE</code> by Glyn Moody, Chapter 15, <code>Trolls Versus Gnomes</code>. It is an interesting story. Back in the mid-1990s Matthias Ettrich became interested in Linux. (Matthias is also well known for starting the LyX project). He was concerned about usability issues, as in ordinary people being able to use Linux, which back then was mostly for highly technical types, hackers and so forth. He happened to come across the Qt toolkit, created by Trolltech. This toolkit was proprietary, but apparently Matthias did not consider that to be a sufficiently important drawback. He belonged to what one might term the 'pragmatic' wing of the free software community. At around that time he started the KDE project based on the Qt toolkit. If you look at the original announcement (courtesy of Wikipedia's KDE page), you will see that Matthias referred to the Kool Desktop Environment. You don't hear about Kool any more. :-) I guess everyone is too embarrassed by it. Anyway, what one might term the 'purist' wing of the free software community, notably including one Richard Stallman and his Free Software Foundation, were alarmed by this turn of events. So the competing GNOME project was started, whose original leader was Miguel De Icaza, who happens to be on this site. Miguel was right in the middle of all this, so he'd be the ideal person for a history lesson. The new GNOME project used a toolkit called GTK (GIMP Tool Kit) which had been created for the GIMP by Kimball and Mattis around the same time (the GIMP project was started around 1995). Then Trolltech started feeling the pressure, and switched to the Q Public License (QPL) in 1998, and finally added the GPL as an alternative in 2000. By then GNOME had a lot of momentum, and the world had two free desktop projects instead of one. Now, Red Hat, who then as now was one of the market leaders, was and is as concerned about software freedom as the FSF, though I gather for different reasons. So, they stuck to shipping GNOME. Debian, of course, also went with GNOME. (This was in the days before Ubuntu, which was first released in 2004). So even today, Debian and Ubuntu default to GNOME. Some other distributions chose to go with KDE, notably SUSE. I remember switching from Red Hat 5.2 (I think) to SUSE 6.4 in August 1999, and being blown away by the beauty of KDE 1. And SUSE is more closely identified with KDE, and Red Hat is more closely identified with GNOME, even today.
There was a lot of uncertainty regarding licensing of the Qt library (on which KDE is built) back when most distros were choosing between KDE and GNOME. That isn't a problem any more, but by the time it was cleared up most distros had already chosen, and this is the sort of thing that they aren't comfortable switching up without a really good reason.
executing if-statement from command prompt <sep> In bash I can do the following: <code>if [ -f /tmp/test.txt ]; then echo "true"; fi </code> However, if I add <code>sudo</code> in front, it doesn't work anymore: <code>sudo if [ -f /tmp/test.txt ]; then echo "true"; fi
-bash: syntax error near unexpected token `then'
</code> How can I make it work?
<code>sudo</code> executes its argument using <code>exec</code>, not via a shell interpreter. Therefore, it is limited to actual binary programs and cannot use shell functions, aliases, or builtins (<code>if</code> is a builtin). Note that the <code>-i</code> and <code>-s</code> options can be used to execute the given commands in a login or non-login shell, respectively (or just the shell, interactively; note that you'll have to escape the semicolons or quote the command). <code>$ sudo if [ -n x ]; then echo y; fi
-bash: syntax error near unexpected token `then'
$ sudo if [ -n x ]\; then echo y\; fi
sudo: if: command not found
$ sudo -i if [ -n x ]\; then echo y\; fi
y
$ sudo -s 'if [ -n x ]; then echo y; fi'
y
</code>
Try calling the line as a string argument through the shell. <code>sudo /bin/sh -c 'if [ -f /tmp/test.txt ]; then echo "true"; fi' </code>
Dynamic variables in systemd service unit files <sep> Is there a way to dynamically assign environment variables in a systemd service unit file?
If you are careful you can incorporate a small bash script sequence as your exec command in the instance service file. E.g. <code>ExecStart=/bin/bash -c 'v=%i; USE_GPU=$${v%:*} exec /bin/mycommand' </code> The <code>$$</code> in the string will become a single <code>$</code> in the result passed to bash, but more importantly will stop <code>${...}</code> from being interpolated by systemd. (Earlier versions of systemd did not document the use of <code>$$</code>, so I don't know if it was supported then).
No built-in way. You need to do these things before your service starts. One way would be putting it into an environment file. <code>[Service]
# Note you need to escape the percentage sign
ExecStartPre=/bin/sh -c "my_awesome_parser %%i > /run/gpu_service_%i"
EnvironmentFile=/run/gpu_service_%i
ExecStart=...
</code>
Abbreviated current directory in shell prompt?
I like <code>PROMPT_DIRTRIM</code> in bash... <code>export PROMPT_DIRTRIM=2 </code> will change your example prompt to... <code>rfkrocktk@work-laptop ../com/tkassembled/ $ </code> It works for me.
Try this: <code>PROMPT_COMMAND='PS1X=$(perl -pl0 -e "s|^${HOME}|~|;s|([^/])[^/]*/|$""1/|g" <<<${PWD})' </code> or, pure bash: <code>PROMPT_COMMAND='PS1X=$(p="${PWD#${HOME}}"; [ "${PWD}" != "${p}" ] && printf "~";IFS=/; for q in ${p:1}; do printf /${q:0:1}; done; printf "${q:1}")' </code> then <code>PS1='\u@\h ${PS1X} $ ' </code> produces (notice the <code>~</code> for <code>${HOME}</code>): <code>rfkrocktk@work-laptop ~/D/P/W/m/s/m/j/c/tkassembled $ </code> I improved my answer thanks to @enzotib's
How to locate a Chrome tab from its process PID?
Press Shift+Esc to bring up Chrome's task manager. Locate the line corresponding to the PID you want (click on the Process ID column header to sort by PID). Double-click the line to bring the tab to the foreground.
In the Chromium address bar, type <code>about:memory</code>. It will show the process ID and memory consumption of each tab.
Where is Unix Time / Official Time Measured?
UNIX time is measured on your computer, running UNIX. This answer is going to expect you to know what Coordinated Universal Time (UTC), International Atomic Time (TAI), and the SI second are. Explaining them is well beyond the scope of Unix and Linux Stack Exchange. This is not the Physics or Astronomy Stack Exchanges. The hardware Your computer contains various oscillators that drive clocks and timers. Exactly what it has varies from computer to computer depending on its architecture. But usually, and in very general terms: There is a programmable interval timer (PIT) somewhere that can be programmed to count a given number of oscillations and trigger an interrupt to the central processing unit. There is a cycle counter on the central processor that simply counts 1 for each instruction cycle that is executed. The theory of operation, in very broad terms The operating system kernel makes use of the PIT to generate ticks. It sets up the PIT to free-run, counting the right number of oscillations for a time interval of, say, one hundredth of a second, generating an interrupt, and then automatically resetting the count to go again. There are variations on this, but in essence this causes a tick interrupt to be raised with a fixed frequency. In software, the kernel increments a counter every tick. It knows the tick frequency, because it programmed the PIT in the first place. So it knows how many ticks make up a second. It can use this to know when to increment a counter that counts seconds. This latter is the kernel's idea of "UNIX Time". It does, indeed, simply count upwards at the rate of one per SI second if left to its own devices. Four things complicate this, which I am going to present in very general terms. Hardware isn't perfect. A PIT whose data sheet says that it has an oscillator frequency of N Hertz might instead have a frequency of (say) N.00002 Hertz, with the obvious consequences. This scheme interoperates very poorly with power management, because the CPU is waking up hundreds of times per second to do little more than increment a number in a variable. So some operating systems have what are known as "tickless" designs. Instead of making the PIT send an interrupt for every tick, the kernel works out (from the low level scheduler) how many ticks are going to go by with no thread quanta running out, and programs the PIT to count for that many ticks into the future before issuing a tick interrupt. It knows that it then has to record the passage of N ticks at the next tick interrupt, instead of 1 tick. Application software has the ability to change the kernel's current time. It can step the value or it can slew the value. Slewing involves adjusting the number of ticks that have to go by to increment the seconds counter. So the seconds counter does not necessarily count at the rate of one per SI second anyway, even assuming perfect oscillators. Stepping involves simply writing a new number in the seconds counter, which isn't usually going to happen until 1 SI second since the last second ticked over. Modern kernels not only count seconds but also count nanoseconds. But it is ridiculous and often outright unfeasible to have a once-per-nanosecond tick interrupt. This is where things like the cycle counter come into play. The kernel remembers the cycle counter value at each second (or at each tick) and can work out, from the current value of the counter when something wants to know the time in nanoseconds, how many nanoseconds must have elapsed since the last second (or tick).
Again, though, power and thermal management plays havoc with this as the instruction cycle frequency can change, so kernels do things like rely on additional hardware like (say) a High Precision Event Timer (HPET). The C language and POSIX The Standard library of the C language describes time in terms of an opaque type, <code>time_t</code>, a structure type <code>tm</code> with various specified fields, and various library functions like <code>time()</code>, <code>mktime()</code>, and <code>localtime()</code>. In brief: the C language itself merely guarantees that <code>time_t</code> is one of the available numeric data types and that the only reliable way to calculate time differences is the <code>difftime()</code> function. It is the POSIX standard that provides the stricter guarantees that <code>time_t</code> is in fact one of the integer types and that it counts seconds since the Epoch. It is also the POSIX standard that specifies the <code>timespec</code> structure type. The <code>time()</code> function is sometimes described as a system call. In fact, it hasn't been the underlying system call on many systems for quite a long time, nowadays. On FreeBSD, for example, the underlying system call is <code>clock_gettime()</code>, which has various "clocks" available that measure in seconds or seconds+nanoseconds in various ways. It is this system call by which applications software reads UNIX Time from the kernel. (A matching <code>clock_settime()</code> system call allows them to step it and an <code>adjtime()</code> system call allows them to slew it.) Many people wave the POSIX standard around with very definite and exact claims about what it prescribes. Such people have, more often than not, not actually read the POSIX standard. As its rationale sets out, the idea of counting "seconds since the Epoch", which is the phrase that the standard uses, intentionally doesn't specify that POSIX seconds are the same length as SI seconds, nor that the result of <code>gmtime()</code> is "necessarily UTC, despite its appearance". The POSIX standard is intentionally loose enough so that it allows for (say) a UNIX system where the administrator goes and manually fixes up leap second adjustments by re-setting the clock the week after they happen. Indeed, the rationale points out that it's intentionally loose enough to accommodate systems where the clock has been deliberately set wrong to some time other than the current UTC time. UTC and TAI The interpretation of UNIX Time obtained from the kernel is up to library routines running in applications. POSIX specifies an identity between the kernel's time and a "broken down time" in a <code>struct tm</code>. But, as Daniel J. Bernstein once pointed out, the 1997 edition of the standard got this identity embarrassingly wrong, messing up the Gregorian Calendar's leap year rule (something that schoolchildren learn) so that the calculation was in error from the year 2100 onwards. "More honour'd in the breach than the observance" is a phrase that comes readily to mind. And indeed it is. Several systems nowadays base this interpretation upon library routines written by Arthur David Olson, that consult the infamous "Olson timezone database", usually encoded in database files under <code>/usr/share/zoneinfo/</code>. The Olson system had two modes: The kernel's "seconds since the Epoch" is considered to count UTC seconds since 1970-01-01 00:00:00 UTC, except for leap seconds. This uses the <code>posix/</code> set of Olson timezone database files. 
All days have 86400 kernel seconds and there are never 61 seconds in a minute, but they aren't always the length of an SI second and the kernel clock needs slewing or stepping when leap seconds occur. The kernel's "seconds since the Epoch" is considered to count TAI seconds since 1970-01-01 00:00:10 TAI. This uses the <code>right/</code> set of Olson timezone database files. Kernel seconds are 1 SI second long and the kernel clock never needs slewing or stepping to adjust for leap seconds, but broken down times can have values such as 23:59:60 and days are not always 86400 kernel seconds long. M. Bernstein wrote several tools, including his <code>daemontools</code> toolset, that required <code>right/</code> because they simply added 10 to <code>time_t</code> to get TAI seconds since 1970-01-01 00:00:00 TAI. He documented this in the manual page. This requirement was (perhaps unknowingly) inherited by toolsets such as <code>daemontools-encore</code> and <code>runit</code> and by Felix von Leitner's <code>libowfat</code>. Use Bernstein <code>multilog</code>, Guenter <code>multilog</code>, or Pape <code>svlogd</code> with an Olson <code>posix/</code> configuration, for example, and all of the TAI64N timestamps will be (at the time of writing this) 26 seconds behind the actual TAI second count since 1970-01-01 00:00:10 TAI. Laurent Bercot and I addressed this in s6 and nosh, albeit that we took different approaches. M. Bercot's <code>tai_from_sysclock()</code> relies on a compile-time flag. nosh tools that deal in TAI64N look at the <code>TZ</code> and <code>TZDIR</code> environment variables to auto-detect <code>posix/</code> and <code>right/</code> if they can. Interestingly, FreeBSD documents <code>time2posix()</code> and <code>posix2time()</code> functions that allow the equivalent of the Olson <code>right/</code> mode with <code>time_t</code> as TAI seconds. They are not apparently enabled, however. Once again UNIX time is measured on your computer running UNIX, by oscillators contained in your computer's hardware. It doesn't use SI seconds; it isn't UTC even though it may superficially resemble it; and it intentionally permits your clock to be wrong. Further reading Daniel J. Bernstein. UTC, TAI, and UNIX time. cr.yp.to. Daniel J. Bernstein. <code>tai64nlocal</code>. daemontools. cr.yp.to. "seconds since the epoch". Single UNIX Specification Version 2. 1997. The Open Group. "Seconds Since the Epoch (Rationale)". Base Specifications Issue 6. 2004. IEEE 1003.1. The Open Group. David Madore (2010-12-17). The Unix leap second mess. <code>time2posix</code>. FreeBSD 10.3 manual. § 3. https://physics.stackexchange.com/questions/45739/ https://astronomy.stackexchange.com/questions/11840/
The adjustments to the clock are co-ordinated by the IERS. They schedule the insertion of a leap second into the time stream as required. From The NTP Timescale and Leap Seconds <blockquote> The International Earth Rotation Service (IERS) at the Paris Observatory uses astronomical observations provided by USNO and other observatories to determine the UT1 (navigator's) timescale corrected for irregular variations in Earth rotation. </blockquote> To the best of my knowledge 23:59:60 (Leap Second) and 00:00:00 the next day are considered the same second in Unix Time.
Meaning of [ "${1:0:1}" = '-' ] <sep> I have the following script to launch a MySQL process: <code>if [ "${1:0:1}" = '-' ]; then set -- mysqld_safe "$@" fi if [ "$1" = 'mysqld_safe' ]; then DATADIR="/var/lib/mysql" ... </code> What does 1:0:1 mean in this context?
It's a test for a <code>-</code> dashed argument option, apparently. It's a little strange, really. It uses a non-standard <code>bash</code> expansion in an attempt to extract the first and only the first character from <code>$1</code>. The <code>0</code> is the head character index and the <code>1</code> is string length. In a <code>[</code> <code>test</code> like that it might also be: <code>[ " -${1#?}" = " $1" ] </code> Neither comparison is particularly suited to <code>test</code> though, as it interprets <code>-</code> dashed arguments as well - which is why I use the leading space there. The best way to do this kind of thing - and the way it is usually done - is : <code>case $1 in -*) mysqld_safe "$@"; esac </code>
This is going to take a substring of <code>$1</code> from the 0th to the 1st character. So you're going to get the first character and only the first character of the string. From the <code>bash</code> 3.2 man page: <blockquote> <code>${parameter:offset}
${parameter:offset:length}
      Substring Expansion.  Expands to up to length characters of
      parameter starting at the character specified by offset.  If
      length is omitted, expands to the substring of parameter
      starting at the character specified by offset.  length and
      offset are arithmetic expressions (see ARITHMETIC EVALUATION
      below).  length must evaluate to a number greater than or
      equal to zero.  If offset evaluates to a number less than
      zero, the value is used as an offset from the end of the
      value of parameter.  If parameter is @, the result is length
      positional parameters beginning at offset.  If parameter is
      an array name indexed by @ or *, the result is the length
      members of the array beginning with ${parameter[offset]}.  A
      negative offset is taken relative to one greater than the
      maximum index of the specified array.  Note that a negative
      offset must be separated from the colon by at least one
      space to avoid being confused with the :- expansion.
      Substring indexing is zero-based unless the positional
      parameters are used, in which case the indexing starts at 1.
</code> </blockquote>
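A quick interactive demonstration, here with hypothetical positional parameters set via <code>set --</code>: <code>$ set -- --verbose foo
$ echo "${1:0:1}"
-
$ if [ "${1:0:1}" = '-' ]; then echo "first arg is an option"; fi
first arg is an option
</code>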
Which users are allowed to log in via SSH by default?
Paradeepchhetri isn't exactly correct. Debian's unmodified <code>sshd_config</code> has the following: <code>PubkeyAuthentication yes
PermitEmptyPasswords no
UsePAM yes
</code> Thus, login via ssh would only work for users that have a populated password field in <code>/etc/shadow</code> or an ssh key in <code>~/.ssh/authorized_keys</code>. Note that the default value for <code>PubkeyAuthentication</code> is <code>yes</code> and for <code>PermitEmptyPasswords</code> is <code>no</code>, so even if you remove them the behavior will be the same. In the question example, <code>www-data</code> by default won't be allowed to log in since Debian's installer neither assigns a password nor creates a key for <code>www-data</code>. <code>pam_access</code>, <code>AllowUsers</code> and <code>AllowGroups</code> in <code>sshd_config</code> can be used for finer control if that's needed. In Debian it's strongly encouraged to <code>UsePAM</code>.
By default, login is allowed for all users on Debian. You can change this by editing the <code>/etc/ssh/sshd_config</code> file to allow only certain users to log in, as mentioned in the man page of sshd_config. <blockquote> <code>AllowUsers</code> This keyword can be followed by a list of user name patterns, separated by spaces. If specified, login is allowed only for user names that match one of the patterns. Only user names are valid; a numerical user ID is not recognized. By default, login is allowed for all users. If the pattern takes the form <code>USER@HOST</code> then <code>USER</code> and <code>HOST</code> are separately checked, restricting logins to particular users from particular hosts. The allow/deny directives are processed in the following order: <code>DenyUsers</code>, <code>AllowUsers</code>, <code>DenyGroups</code>, and finally <code>AllowGroups</code>. </blockquote>
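For example, a hypothetical fragment (the user names and host pattern are placeholders): <code>AllowUsers alice bob@192.0.2.*
DenyUsers www-data
</code> Remember to reload the SSH daemon afterwards, e.g. <code>sudo systemctl reload ssh</code> (the service may be called ssh or sshd depending on the distribution).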
What is the NSFS filesystem?
As described in the kernel commit log linked to by jiliagre above, the <code>nsfs</code> filesystem is a virtual filesystem making Linux-kernel namespaces available. It is separate from the <code>/proc</code> "proc" filesystem, where some process directory entries reference inodes in the <code>nsfs</code> filesystem in order to show which namespaces a certain process (or thread) is currently using. The <code>nsfs</code> doesn't get listed in <code>/proc/filesystems</code> (while <code>proc</code> does), so it cannot be explicitly mounted. <code>mount -t nsfs ./namespaces</code> fails with "unknown filesystem type". This is because <code>nsfs</code> is tightly interwoven with the <code>proc</code> filesystem. The filesystem type <code>nsfs</code> only becomes visible via <code>/proc/$PID/mountinfo</code> when bind-mounting an existing(!) namespace filesystem link to another target. As Stephen Kitt rightly suggests above, this is to keep namespaces existing even if no process is using them anymore. For example, create a new user namespace with a new network namespace, then bind-mount it, then exit: the namespace still exists, but <code>lsns</code> won't find it, since it's not listed in <code>/proc/$PID/ns</code> anymore, but exists as a (bind) mount point. <code># bind mount only needs an inode, not necessarily a directory ;)
touch mynetns
# create new network namespace, show its id and then bind-mount it, so it
# is kept existing after the unshare'd bash has terminated.
# output: net:[##########]
NS=$(sudo unshare -n bash -c "readlink /proc/self/ns/net && mount --bind /proc/self/ns/net mynetns") && echo $NS
# notice how lsns cannot see this namespace anymore: no match!
lsns -t net | grep ${NS:5:-1} || echo "lsns: no match for net:[${NS:5:-1}]"
# however, findmnt does locate it on the nsfs...
findmnt -t nsfs | grep ${NS:5:-1} || echo "no match for net:[${NS:5:-1}]"
# output: /home/.../mynetns nsfs[net:[##########]] nsfs rw
# let the namespace go...
echo "unbinding + releasing network namespace"
sudo umount mynetns
findmnt -t nsfs | grep ${NS:5:-1} || echo "findmnt: no match for net:[${NS:5:-1}]"
# clean up
rm mynetns
</code> Output should be similar to this one: <code>net:[4026532992]
lsns: no match for net:[4026532992]
/home/.../mynetns nsfs[net:[4026532992]] nsfs rw
unbinding + releasing network namespace
findmnt: no match for net:[4026532992]
</code> Please note that it is not possible to create namespaces via the nsfs filesystem, only via the syscalls <code>clone()</code> (<code>CLONE_NEW...</code>) and <code>unshare()</code>. The <code>nsfs</code> only reflects the current kernel status w.r.t. namespaces, but it cannot create or destroy them. Namespaces automatically get destroyed whenever there isn't any reference to them left: no processes (so no <code>/proc/$PID/ns/...</code>) AND no bind-mounts either, as we've explored in the above example.
That's the "Name Space File System", used by the <code>setns</code> system call and, as its source code shows, Name Space related ioctl's (e.g. <code>NS_GET_USERNS</code>, <code>NS_GET_OWNER_UID</code>...) <code>NSFS</code> pseudo-files entries used to be provided by the <code>/proc</code> file system until Linux 3.19. Here is the commit of this change. See Stephen Kitt's comment about a possible explanation about this files presence.
What was the reason of the non-preemptivity of older Linux kernels?
In the context of the Linux kernel, when people talk about preemption they often refer to the kernel's ability to interrupt itself: essentially, to switch tasks while running kernel code. Allowing this to happen is quite complex, which is probably the main reason it took a long time for the kernel to be made preemptible. At first most kernel code couldn't be interrupted anyway, since it was protected by the big kernel lock. That lock was progressively eliminated from more and more kernel code, allowing multiple simultaneous calls to the kernel in parallel (which became more important as SMP systems became more common). But that still didn't make the kernel itself preemptible; that took more development still, culminating in the <code>PREEMPT_RT</code> patch set which was eventually merged in the mainline kernel (and was capable of preempting the BKL anyway). Nowadays the kernel can be configured to be more or less preemptible, depending on the throughput and latency characteristics you're after; see the related kernel configuration for details. As you can see from the explanations in the kernel configuration, preemption affects throughput and latency, not concurrency. On single-CPU systems, preemption is still useful because it allows events to be processed with shorter reaction times; however, it also results in lower throughput (since the kernel spends time switching tasks). Preemption allows any given CPU, in a single or multiple CPU system, to switch to another task more rapidly. The limiting factor on multi-CPU systems isn't preemption, it's locks, big or otherwise: any time code takes a lock, it means that another CPU can't start performing the same action.
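As an aside, if you want to see which preemption model your own kernel was built with, a quick sketch (assuming your distribution ships the kernel config in <code>/boot</code>; paths vary):
<code>
# Which preemption model was this kernel built with?
grep -E 'CONFIG_PREEMPT(_NONE|_VOLUNTARY|_RT)?=' /boot/config-"$(uname -r)"

# Many kernels also advertise it in the version banner:
uname -v    # look for "PREEMPT" or "PREEMPT_DYNAMIC"
</code>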
Preemptive kernel only means that there is no Big Kernel Lock. Linux had preemptive multi-tasking (i.e. user code was preemptible) from its first moment (as far as I know, the very first Linux 0.0.1 uploaded by Linus to the funet ftp server was already preemptively multitasking). If you executed, for example, multiple compression or compilation processes, they were executed in parallel from the first moment. Contrast this with the widely used Win31 of the time: on Win31, if a task got the CPU from the "kernel", by default it was its responsibility to determine when to give control back to the OS (or to other tasks). If a process had no special support for this feature (which required additional programming work), then while it was executing, all other tasks were suspended. Even most of the basic apps integrated into Win31 worked that way. Preemptive multitasking means that tasks have no way of holding on to the CPU for as long as they want. Instead, if their time slot expires, the kernel takes the CPU away from them. Thus, in preemptive operating systems, a badly written or badly functioning process can't freeze the OS or prevent other processes from running. Linux was always preemptive for user space processes. The Big Kernel Lock means that in some cases, inside kernel space, there could still be locks preventing other processes from running the protected code. For example, you could not mount multiple filesystems concurrently - if you gave multiple mount commands, they were still executed consecutively, because mounting things required taking the Big Kernel Lock. Making the kernel preemptive required eliminating this big kernel lock, i.e. making mounting and any other tasks able to run concurrently. It was a big job. Historically, this was made really urgent by the increasing support for SMP (multi-CPU support). At first, there were boards with genuinely multiple CPUs. Later, multiple CPUs ("cores") were integrated into a single chip; today truly multi-CPU mainboards are rare (they are typically found in costly server systems), and truly single-core systems (a single CPU with a single core) are rare as well. Thus, the answer to your question isn't "what was the reason for non-preemptivity", because it was always preemptive. The real question is what made preemptive kernel execution really necessary. The answer: the increasing prevalence of many-CPU, many-core systems.
256 colour prompt in Zsh <sep> How can I set my prompt to be colourized in 256 colours?
<code>export PS1='%F{214}%K{123}%m%k%f' </code> From <code>man zshmisc</code>: <code> %F (%f) Start (stop) using a different foreground colour, if supported by the terminal. The colour may be specified two ways: either as a numeric argument, as normal, or by a sequence in braces following the %F, for example %F{red}. In the latter case the values allowed are as described for the fg zle_highlight attribute; see Character Highlighting in zshzle(1). This means that numeric colours are allowed in the second format also. %K (%k) Start (stop) using a different bacKground colour. The syntax is identical to that for %F and %f. </code> To try it out without changing your prompt, you can print the same escapes directly: <code>$> print -P '%F{214}%K{123}%m%k%f' </code>
First, ensure that your terminal supports 256 colors, which I suppose you already have. Second, use a <code>PS1</code> variable with the correct code, for example: <code>export PS1='%{^[[01;38;05;214;48;05;123m%}%m%{^[[0m%} ' </code> This will give you a prompt with the host name in bold, with a foreground color of 214 and a background color of 123. Note that the <code>^[</code> is "entered" by typing Ctrl+v and Ctrl+[. See this excellent article "That 256 Color Thing" for the whole list of attributes.
Preview PDF as image in ranger <sep> How can I preview PDFs as images in ranger?
Ranger has supported this (disabled by default) since v1.9.0 (see commit <code>ab8fd9e</code>). To enable it, update your <code>scope.sh</code> to the latest version. Note that this will overwrite your previewing configuration file: <code>ranger --copy-config=scope </code> Then find and uncomment the following in <code>~/.config/ranger/scope.sh</code>: <code># application/pdf) # pdftoppm -f 1 -l 1 \ # -scale-to-x 1920 \ # -scale-to-y -1 \ # -singlefile \ # -jpeg -tiffcompression jpeg \ # -- "${FILE_PATH}" "${IMAGE_CACHE_PATH%.*}" \ # && exit 6 || exit 1;; </code>
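If previews still fail after enabling this, it may help to verify that the converter works on its own. A hedged sketch (<code>sample.pdf</code> is a placeholder; <code>pdftoppm</code> ships with poppler-utils):
<code>
# Render page 1 of a PDF to a JPEG, the same way scope.sh does:
pdftoppm -f 1 -l 1 -scale-to-x 1920 -scale-to-y -1 -singlefile \
    -jpeg -- sample.pdf /tmp/preview   # writes /tmp/preview.jpg
</code>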
This works in <code>ranger-stable 1.8.1</code>: <code>pdf) try pdftoppm -jpeg -singlefile "$path" "${cached//.jpg}" && exit 6 || exit 1;; </code> I also had to create <code>~/.cache/ranger</code> on my system manually.
in Bash, how to not include extra arguments in an alias?
When you define an alias, the alias text is substituted for the command name, and any arguments you typed are appended after it. This means that when you run <code>ftp abc.com</code>, what is actually executed is <code>echo do not use ftp. Use sftp instead abc.com </code> A simple solution is to use a function instead of an alias: <code>ftp(){ echo 'do not use ftp. Use sftp instead'; } </code> Alternatively, you could use <code>printf</code> as suggested by Costas: <code>alias ftp="printf 'do not use ftp. Use sftp instead\n'" </code>
<code>alias ftp='echo do not use ftp. Use sftp instead. # ' </code>
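The trailing <code>#</code> is what makes this work: the shell appends any arguments after the expanded alias text, where they land behind the comment character and are ignored. A quick illustrative session:
<code>
$ alias ftp='echo do not use ftp. Use sftp instead. # '
$ ftp abc.com    # expands to: echo do not use ftp. Use sftp instead. # abc.com
do not use ftp. Use sftp instead.
</code>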
awk/sed/perl one liner + how to print only the properties lines from json file <sep> how to print only the properties lines from json file example of json file <code>{ "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "items" : [ { "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "tag" : "version1527250007610", "type" : "kafka-env", "version" : 8, "Config" : { "cluster_name" : "HDP", "stack_id" : "HDP-2.6" }, "properties" : { "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n.
<code>jq</code> is the right tool for processing JSON data: <code>jq '.items[].properties | to_entries[] | "\(.key) : \(.value)"' input.json </code> The output: <code>"content : \n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi" "is_supported_kafka_ranger : true" "kafka_log_dir : /var/log/kafka" "kafka_pid_dir : /var/run/kafka" "kafka_user : kafka" "kafka_user_nofile_limit : 128000" "kafka_user_nproc_limit : 65536" </code> If it's really required that each key and value be double-quoted, use the following modification: <code>jq -r '.items[].properties | to_entries[] | "\"\(.key)\" : \"\(.value | gsub("\n";"\\n"))\","' input.json </code> The output: <code>"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi", "is_supported_kafka_ranger" : "true", "kafka_log_dir" : "/var/log/kafka", "kafka_pid_dir" : "/var/run/kafka", "kafka_user" : "kafka", "kafka_user_nofile_limit" : "128000", "kafka_user_nproc_limit" : "65536", </code>
Please, please don't get into the habit of parsing structured data with unstructured tools. If you're parsing XML, JSON, YAML etc., use a specific parser, at least to convert the structured data into a more appropriate form for AWK, <code>sed</code>, <code>grep</code> etc. In this case, <code>gron</code> would help greatly: <code>$ gron yourfile | grep -F .properties. json.items[0].properties.content = "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=/usr/lib/ccache:/home/steve/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi"; json.items[0].properties.is_supported_kafka_ranger = "true"; json.items[0].properties.kafka_log_dir = "/var/log/kafka"; json.items[0].properties.kafka_pid_dir = "/var/run/kafka"; json.items[0].properties.kafka_user = "kafka"; json.items[0].properties.kafka_user_nofile_limit = "128000"; json.items[0].properties.kafka_user_nproc_limit = "65536"; </code> (You can post-process this with <code>| cut -d. -f4- | gron --ungron</code> to get something very close to your desired output, albeit still as valid JSON.) <code>jq</code> is also appropriate.
How to run a command block in the main shell?
Depends what you mean by "as a whole". If you only mean sending several commands to the shell, and making sure the shell doesn't start running them until you've entered them all, then you can just do: <code>cmd1; cmd2 </code> Or cmd1 Ctrl+V Ctrl+J cmd2 (or enable bracketed-paste (<code>bind 'set enable-bracketed-paste on'</code>) and paste the commands from a terminal that supports bracketed paste). Or: <code>{ cmd1 cmd2 } </code> To have them on several lines. If you want to group them so they share the same stdin or stdout for instance, you could use: <code>{ cmd1; cmd2; } < in > out </code> Or <code>eval 'cmd1; cmd2' < in > out </code> If you want them to run with their own variable and option scope, as <code>bash</code> doesn't have the equivalent of <code>zsh</code> anonymous functions, you'd need to define a temporary function: <code>f() { local var; var=foo; bar;}; f </code>
Instead of <code>( something )</code>, which launches <code>something</code> in a subshell, use <code>{ something ; }</code>, which launches <code>something</code> in the current shell. You need spaces after the <code>{</code>, and should also have a <code>;</code> (or a newline) before the <code>}</code>. Ex: <code>$ { echo "hello $BASHPID";sleep 5;echo "hello again $BASHPID" ; } hello 3536 hello again 3536 </code> Please note however that if you launch some complex commands (or piped commands), those will be in a subshell most of the time anyway. And the "portable" way to get your current shell's pid is <code>$$</code>. So I'd instead write your test as: <code>{ echo "hello $$"; sleep 5 ; echo "hello again $$" ; } </code> (the sleep is not really useful anyway here)
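The practical difference between the two groupings is easy to demonstrate (assuming <code>x</code> is not already set): a variable assigned in a subshell vanishes, while one assigned in a brace group survives:
<code>
( x=subshell ); echo "${x:-unset}"   # prints "unset": the subshell had its own copy
{ x=group ; };  echo "${x:-unset}"   # prints "group": this ran in the current shell
</code>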
find pipe to less - why only correct lines are left when I press "up" key?
<code>find</code> with no action applies its default <code>-print</code> action, which outputs the full file name to standard output. Errors go to standard error. The pipe operator only redirects standard output; so only correct file names are sent to <code>less</code>, everything else goes to standard error, which is your terminal. <code>less</code> also writes to your terminal, so you'll initially see both file names and errors on your screen; but when you scroll up in <code>less</code> (or invoke any other action which causes it to update the screen), the errors will be overwritten by <code>less</code>'s updates, since <code>less</code> is only aware of the input it's seen from <code>find</code>'s standard output. To page through the complete output in <code>less</code>, you need to redirect standard error too: <code>find / -name foo 2>&1 | less </code> To completely ignore errors, redirect them to the bit bucket instead: <code>find / -name foo 2>/dev/null | less </code>
This has nothing to do with <code>less</code> itself. It's just that there are two output streams: standard output (<code>stdout</code>) and standard error (<code>stderr</code>). Error messages, as you would expect, go to <code>stderr</code>, while regular output goes to <code>stdout</code>. The pipe, by default, only captures <code>stdout</code> and ignores <code>stderr</code>. Since find's errors are in <code>stderr</code>, these are not sent to <code>less</code> and this is why it looks like <code>less</code> is filtering out the errors.
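You can see the two streams independently by discarding one at a time:
<code>
find / -name foo  >/dev/null   # stdout discarded: only the errors remain
find / -name foo 2>/dev/null   # stderr discarded: only the matches remain
</code>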
Is there a design pattern for dropdown lists in iOS?
Even though Apple recommended (and surprisingly still recommends) pickers for dropdowns, not even they use them anymore. Spoilers: In both these cases, the "logic" would dictate using a dropdown + picker. Apple chose a much better solution for their own apps. Still, a whole screen to pick between "Female" or "Male" (pardon the binary example) seems exaggerated. Personally, I believe the best option is to design what visually looks like a dropdown element that, when tapped, opens an Action Sheet. The reasoning is that an action sheet is better than a picker because: 1) Action Sheets do not require scrolling to read and/or choose options that are not highlighted; 2) Action Sheets dim the background, providing clear affordance that by clicking outside the action sheet other elements will not be activated (whereas a picker makes the user unsure of where to tap in order to close the picker without accidentally tapping something else); 3) Action Sheets have "Cancel" buttons; 4) Action Sheet items are 44 points high, have margins between buttons and can list more options using more of the screen. Action Sheets are also better than the full-screen approach mentioned beforehand, because they don't take the user to another screen, thus making the flow more... fluid; and they're better than a custom-built alternative because they're native and consequently "more future-proof". Brad Frost would probably correct me, saying they're actually "future-friendly". By the way, Luke Wroblewski has an excellent article on why dropdowns should be the UI of last resort, and these 4 very excellent videos going into detail on when and why one element works better than another (and with research to back it up): Luke Wroblewski Part 1 - Conversions@Google 2014 Luke Wroblewski Part 2 - Conversions@Google 2014 Luke Wroblewski - Mobile Design Essentials Part 1 - Conversions@Google 2015 Luke Wroblewski - Mobile Design Essentials Part 2 - Conversions@Google 2015 Put your headphones on and dive in. The videos are worth it.
iOS makes it much easier to use "Pickers". These may work depending on what you need the "Dropdown" to do. See new link https://developer.apple.com/ios/human-interface-guidelines/controls/pickers/
Should date pickers for meetings offer the ability to schedule a meeting for 0 minutes?
Should a calculator allow you to add zero to a number? My answer: yes. What's the use case? My answer: it doesn't need a use case. You don't remove a capability that "falls out naturally" and requires no effort to provide, just because you can't think why anyone would want to do that.
Any sort of open-house event might be well-served by this approach: people can accept and have a reminder added without it blocking out their calendar. The event may also have an end criterion that's not defined in terms of time. For example: Title: Birthday Cake Location: My desk Time 13:00 tomorrow Duration: 0 minutes Extra info: Come when you like, but when it's gone, it's gone. To force a minimum meeting time is an unnecessary restriction on your users without helping them, and begs the question "what is the shortest meeting permissible?" If you set minimum 5 minutes, the next management fad will be a 1-minute meeting, standing up at your desk shouting what you achieved today.
What is the color associated with lukewarm?
Beige Windows 10 has a color temperature meter for Night light. Using that as a starting point, I extracted the main colors from the gradient, inverted them to extrapolate the 'cool' half of the gradient, and worked out a five-point scale. On this 5-point scale, <code>Lukewarm</code> is <code>#FFE1A5</code>. Of course this will change if you change the <code>Warm</code> color. The <code>Cold</code> color is a complementary color to <code>Warm</code> (<code>#FF6000 <---> #009FFF</code>, computed using ColorHexa). You can use this jsFiddle to tune the colors to your satisfaction.
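Incidentally, at least for this fully saturated pair, the <code>Warm</code>/<code>Cold</code> complement works out to a plain per-channel inversion, so you can derive one from the other without a lookup tool; a small sketch:
<code>
c=FF6000                                  # the Warm colour
printf '#%06X\n' $(( 0xFFFFFF - 0x$c ))   # prints #009FFF, the Cold colour
</code>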
There isn't really a color associated with lukewarm. As the diagram in xiota's answer shows, the color association humans have (red = warm, blue = cold) are even the opposite to what you'd expect from a physics point of view. Since 'lukewarm' is the neutral option, you could go with a neutral color: gray. Do make the button's style different from disabled buttons, if you have any. Since 'lukewarm' is the middle option, you could go with the color in between red and blue: purple. Since 'lukewarm' is presented in the middle, users will know it's the neutral option in between warm and cold regardless of the color of the button. Of course, its color shouldn't lean too much to one of the other options; I wouldn't choose another shade of blue or red.
Responsive Breakpoints?
The current trend is to design breakpoints with content in mind. At some width the content will appear either too squished (or stretched out) and that's when a breakpoint should be used to rearrange things, even if it doesn't correspond to a common device width. The content should look well laid out at any device width (within reason, no need to fill 3000px wide). Some designs will be best served by a breakpoint at 611px, and some better with a breakpoint at 795px. It's simply too hard to keep up with all the screen sizes of the new devices coming out. Make it work at any reasonable width and you'll sleep better. That said, test on as many devices as you can to make sure what theoretically looks good at any width really does look good and function well on actual devices in real use cases.
As far as I am aware there aren't any industry standards. From experience they seem to differ slightly depending on who you talk to. Smashing Magazine recently did a good article about logical breakpoints for responsive design. Understand what resolutions the potential users of the system will be using; this should help inform your breakpoints. My point of view: It's unlikely you can cover all scenarios, so design for those that offer a usable solution for each segment of your users. Remember to consider the future. If the site is going to be around in the future, consider how device resolutions will change and either take those into account while designing, or provide a system that is adaptable. Using a responsive grid system, like that of Foundation 4 (as mentioned by Courtney), along with breakpoints can be a nice solution but it really depends on your specific requirements. Regarding responsive frameworks, personally I've used Foundation and Bootstrap. I found them both useful, but it really does depend on exactly what you are trying to achieve as to whether this is the correct approach for your project.
How to display a "friendly" deadline without weekends?
If "Day" means something other than "Calendar Day", be specific. A user reading "3 days ago" will assume Friday if they are reading on Monday, and I find it hard to believe that people will know that "3 days ago" on Monday actually refers to the previous Wednesday. You could use specific term such as "working day" or "business day" if you choose to exclude weekends from your calculations. It may help to also annotate with the specific day name (if less than a week) or date (for longer durations). Examples (Assume today is Tuesday 18th July 2017) <blockquote> 1 business day ago (Monday) 2 business days ago (Friday) 5 business days ago (11th July) </blockquote> Or <blockquote> 1 working day ago (Monday) 2 working days ago (Friday) 5 working days ago (11th July) </blockquote>
Is this for a system where "working days" can change (like this is a piece of software for use by various client businesses), or are working days always M-F? In either case, you could change the assumption to be that it's counting working days. If that's made known to the users, then you can do the math appropriately and not have to worry about it. So if the business is closed on Sat-Sun, you just count how many weekdays have passed. On Monday, "1 day ago" means Friday. The catch here is that the user has to do a bit of thinking to figure out what "3 days ago" means if they're seeing it on a Monday. To solve this (assuming today is Monday, July 17, 2017), here's an option: 1 Day Ago (Fri) 3 Days Ago (Wed) 10 Days Ago (6/26)
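If you go this route, the weekday-skipping arithmetic is straightforward. A rough sketch, assuming GNU <code>date</code> and ignoring public holidays:
<code>
# Resolve "N business days ago" to a calendar date, counting only Mon-Fri.
business_days_ago() {
  local n=$1 d=0 dow
  while (( n > 0 )); do
    (( d++ ))
    dow=$(date -d "$d days ago" +%u)   # 1 = Monday ... 7 = Sunday
    (( dow <= 5 )) && (( n-- ))
  done
  date -d "$d days ago" '+%A, %Y-%m-%d'
}

business_days_ago 3   # run on a Monday, this prints the previous Wednesday
</code>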
Progress - When should I go from 100% to 0%, or vice versa?
TLDR: Never, unless you are not making any progress (losing progress) A progress-bar in UX has one simple job: to show progress to the user. Progress is a forward motion/movement. This is especially true from a psychological point of view. From the Cambridge dictionary: <blockquote> Progress - movement to an improved or more developed state, or to a forward position. https://dictionary.cambridge.org/dictionary/english/progress </blockquote> In the context of a progress-bar this means that the bar needs to have a forward motion, or in other words a left-to-right filling motion. Reading direction: the direction in which a progress bar fills should, in my eyes, logically depend on the natural reading direction for a specific language. For RTL languages the bar should fill from right to left. However, strangely enough, most RTL interfaces I've seen do not reverse the progress-bar. I'm not sure why this is the case; do people simply forget the progress-bar? Or is my theory based on nonsense? Don't make progress go backwards Quite a few progress-bars have a forward filling motion even though stuff is being removed. Take for example the file removal dialog in Windows: The progress bar here could have been reversed, but that would communicate a completely different message to the user: "The disk is getting emptier". But in a forward filling motion it communicates the correct message: "The progress of removing the files". An easier example is emptying a bucket of water. You can show the progress in two ways: How much water did we already drain from the bucket? (from 0 to 100) How much water is left in the bucket? (from 100 to 0) From a psychological perspective the forward motion from 0 to 100 is preferable. If you look at progress-bars on different crowdfunding websites (take Kickstarter for example) they always seem to fill up from 0 to some number (and from left to right) but can easily be reverted: <code>500$ of 2500$ funded [===== ] </code> These websites could however have used a different type: <code>2000$ of 2500$ needed [=================== ] </code> The first type fills from left to right, the second model empties from right to left. The first progress bar is used for a reason: it shows the progress in a forward motion. Backward progress is counter-intuitive. When can you have backwards progress? Almost never, unless you are for some reason losing progress. A good example of a backwards going bar is the health-bar in a game (as mentioned in the comments). A health bar always depletes as you lose life. Notice however that a health bar is, for a good reason, not called a "progress bar". A health bar does not necessarily communicate "progress" to the user; instead it communicates a certain "status" to the user. If the bar drops it's perceived as something negative (especially if the bar cannot go up again). Conclusion Progress is forward motion; backward movement is counter-intuitive and, from a psychological point of view, probably not the best choice. Forward motion is positive, backward motion is negative. Because of this I suggest you use a filling type progress-bar. Note: I can't currently find a good source to back this answer up with. But I do have an interesting paper that researched different types of forward going progress (linear progress vs accelerating progress etc.): Rethinking the Progress Bar
You might want to go out to the warehouse and show the people there what the app currently looks like and what other ideas you have. That way, your end user can give you insight as to what they would find the easiest to use. Something that looks great in a mock-up/design process might not be as well received by the 'real world' users. For example, I helped in the development of an in-store ordering app to help shop workers place online orders for customers. The design was sound when trying it out in the office but when we trialed in the store, the users said it would be awkward due to the fact they move around the shop a lot, don't have to time to read the long text or understand the graphics etc. In a nutshell user research will help you out here.
How valuable are user journeys?
I'm with @Michael Lai, and think they're a great tool to have in your bag for the right case. What they can help you with: Understanding the client As a UI/UX designer you often come into new environments, sometimes very complex ones. User journeys can help you understand processes that might be way out of your field of knowledge, by allowing your client to map out the processes involved in their business. Determining priorities They can be incredibly helpful in the process of determining the importance of certain elements in your design. So where it might not show up at all during most of the user journey, it will feature prominently at a later stage. Example: "Subscribe to our newsletter" might be a footer element throughout your site, but in the middle of your screen after a user has just made a purchase, when they feel best about your product. Identifying obstacles User journeys can help you identify obstacles, especially when doing mapping journeys for currently existing sites/applications. For example, you might find out that after sending a message, the user ends up on a page with nothing but a "thank you" message, instead of going back to his inbox. Streamlining Visually mapping out all the major user journeys, you might find that there are some common routes, hubs that could be avoided or maybe even an important journey that's far too complicated. For example, on forums, moderators might have to look up a user's post history quite often. Adding an icon next to each username that links to an overview of this user's posts, saves them from going through that user's profile page all the time. Scale and stages Mapping out all the major journeys can also give you a good idea of the scope of a project, and allow you to divide a project into separate stages. A few more use-cases: Sales tool: Properly visualized you can use a user journey to sell a project to a big client. Test case base: Thoroughly mapping your user's journeys also makes for a solid starting point for writing test cases later on. I found this article a good read as well.
As with many of the UX deliverables, it depends on what you believe the best way to communicate the information to your audience is. This is also a debate going around about personas and how useful they are to the design process. In my opinion, I generally try to create some sort of 'user journey' as a way of working out all the different things that a user might come across in different usage scenarios, and where they might encounter pleasant or unpleasant experiences. I then try to make sure that in the interactions we don't let them hit too many unpleasant experiences in a row, or if they do then we need some way to make those experience less unpleasant. As I said before, it is just one of the many tools you can use, and when you come across different clients, projects or UX teams you might find it useful again. I think you should think of UX assets as something that both you and your clients can benefit from. So don't do it just because they ask for it if you also find it useful for yourself. Often we get too caught up in the wireframes and UI designs that we don't get the chance to step back and look at the bigger picture, and user journeys are great for understanding the entire flow of the user interactions. If you create user journeys for the different personas and use cases, you might come up with some great insights on the user experiences that are important and you should focus on. The last thing I can think of is that it allows everyone on the team to share an understanding of the product because it provides a perfect complement to personas. There are also many training and marketing assets that can be developed from a well designed user journey map.
How to move photos from Google Drive to Google Photos?
Login to Google Drive. Click on the Settings button and select Settings. Make sure "Create a Google Photos folder - Automatically put your Google Photos into a folder in My Drive" is checked. Find and select the image you want to move. Click on the Actions button, and click on Move To... Select Google Photos and click on the Move button. Your photo has now been moved from Google Drive to Google Photos. Side note: If you want to go the opposite direction (Photos to Drive), then login to Drive, select Google Photos on the left side, select the picture(s) you want to move, click the action button, choose move, and select the Drive folder to move to.
The only way I found, which is super stupid and as anti-user-friendly as they come (I guess that's the new Google), is to: 1) download the files (it will take a while, will create a big ZIP file and might crash the Google Drive app); 2) unzip them, because while Google Drive gives you a ZIP, Google Photos doesn't accept ZIP (output of an app is not accepted in basically the same app, awesome); 3) upload to Google Photos, and hope that it will actually upload and not crash in the middle; 4) since you will then end up with the photos in both places, delete one copy so it doesn't count twice against your quota. Not a good user experience. But maybe there is a better way?
How NOT to bold text in WhatsApp &amp; still use ** around text?
You could send it as code using three back-ticks, similar to formatting code here on StackExchange: *word* = word ```*word*``` = <code>*word*</code> (Note: it will use a monospace font).
Excerpt from the official WhatsApp FAQ: <blockquote> Note: There is no option to disable this feature. </blockquote> See https://www.whatsapp.com/faq/en/general/26000002
Can I Prevent Enumeration of Usernames?
A simple solution I use in a <code>.htaccess</code>: <code>RewriteCond %{REQUEST_URI} !^/wp-admin [NC] RewriteCond %{QUERY_STRING} author=\d RewriteRule ^ - [L,R=403] </code> It is similar to @jptsetme's answer, but it works even when the query string is <code>/?dummy&author=5</code>, and the search pattern for <code>RewriteRule</code> is very fast: You often see a capture <code>([0-9]*)</code> in regular expressions for this. But there is no need to waste memory for the capture when you don't use the captured expression, and a match for the first character is enough, because you don't want to accept <code>author=1b</code>. Update 20.04.2017 I'm seeing more "broken" requests from people who are even too stupid to run a simple scan. The requested URLs look like this: <code>/?author={num:2} </code> So you could extend the rule above to: <code>RewriteCond %{REQUEST_URI} !^/wp-admin [NC] RewriteCond %{QUERY_STRING} ^author=\d+ [NC,OR] RewriteCond %{QUERY_STRING} ^author=\{num RewriteRule ^ - [L,R=403] </code>
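After adding the rules you can verify them from the outside (with <code>example.com</code> standing in for your own site):
<code>
# Expect 403 once the rewrite rules are active:
curl -s -o /dev/null -w '%{http_code}\n' 'https://example.com/?author=1'
</code>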
You can't. The WPScan tool is an automated utility that takes advantage of WordPress' friendly URLs to determine usernames. It will loop through the first 10 possible IDs for authors and check the <code>Location</code> header on the HTTP response to find a username. Using <code>http://mysite.url</code> for example ... WPScan will check <code>http://mysite.url/?author=1</code>. If your site is using pretty permalinks, it will return a 301 redirect with a <code>Location</code> header of <code>http://mysite.url/author/username</code>. If your site isn't using pretty permalinks, it will return a status of 200 (OK) instead, so WPScan will check the feed for the string "posts by username" and extract the username. What you can do First of all, just because someone can guess your username, doesn't mean your site is insecure. And there is really no way you can prevent someone from parsing your site in such a way. However ... If you are really concerned about this, I would recommend doing two things: Turn off pretty permalinks. This will force WPScan and similar tools to parse the content of your site for usernames rather than relying on the URL. Force users to set a different nickname. In the absence of a username in the URL, scanning tools will search for "posts by username" in the feed/post content instead. If you aren't putting usernames out there, then they can't be nabbed. Another alternative is to change your author permalink rewrites. There are several ways you can do this, and you can probably find a few on this site as well.
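To see exactly what the scanners see, you can reproduce the check by hand. A sketch using the placeholder site from above:
<code>
# With pretty permalinks, the Location header leaks /author/username:
for i in 1 2 3 4 5; do
  curl -s -I "http://mysite.url/?author=$i" | grep -i '^location:'
done
</code>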
Should I report a leak of confidential HR information?
<blockquote> I lose nothing if I don't report it, and I might lose something if I do </blockquote> You may have something to lose if you don't report it but later someone else does. If there's an audit following the report, your name may come up in a list of people who have downloaded the file. As a result, there may be questions as to what you did with the file when you downloaded it, and why you didn't report it. Of course, it's not that serious unless there's proof that you have used data from that file, and if you only touched the file once, you can make up something like "I accidentally clicked on the wrong file and removed it without reading when I realised my mistake". But why make up a lie when you can do something that's expected of you, that is, report it right away?
Personally, I would report it. Think about it this way: if it was YOUR data, what would you want to happen if somebody knew that YOUR data had been leaked?
Complaints from (junior) developers against solution architects: how can we show the benefits of our work and improve relationships?
No direct or indirect contact between architects and developers - this has to change, there is no way around it. If there's not even email contact, there's not going to be any trust between the two teams. Solution architects need to also be (senior) developers, and all developers need to be involved in the architecture, to the limit of their level of knowledge (obviously, juniors less than seniors, but still at least somewhat). There's again no way around that. If the architects do not do any coding themselves, their solutions/architecture will over time become more and more divorced from reality. Heck, even if their architecture is perfect, if there is no contact there will still be doubts and resentment; and sooner or later the architecture will stop being perfect. If the developers don't do any architecture, they won't know how their work fits into the grand scheme of things, and also their growth will be stunted. From these points it's clear that the company policy has to change if you want the problem to be solved. The question is if you can persuade those who make decisions to change the policy. This is often very hard, sometimes impossible. You'll know, better than anyone here, if it's possible in the company where you work.
I think the environment you describe is somewhat old-fashioned and is not aging well. Much of the reason that software engineering gets a bad rap is because the incentives are not aligned in a way that promotes good design, implementation, and maintenance practices, leading to low-quality software. Divorcing developers from solution ownership is exactly the kind of thing which promotes complacency, malaise, and bit rot. Writing code is not, in fact the "fun part" of being a software engineer. Solving problems is the fun part, and writing the code should be mostly mechanical, except for the most junior developers who are still learning the finer points of their programming language. I dare say that Google, Amazon, Facebook, Apple, and Netflix (the FAANG club) are based on the West Coast of the USA because this region strongly tends towards a more modern approach to software engineering, which is to drive ownership of solutions as far down the org chart as possible. Amazon, in particular, is famous for its "two-pizza teams". It might seem that it's a statement on team size, but it is not. The 2PT literally owns all the code it writes, and rarely has more than one or two senior developers. Architects do not dictate the design of new projects in detail. Rather, they coordinate design across teams, collaborating with engineers to come up with solutions created iteratively, rather than dictated in a more traditional waterfall-style approach. Or, they work on framework projects to be consumed by other teams at the teams' discretion. Junior engineers learn how to design software very quickly, because they are expected to do so, at various scales, from day 1. Which is also why they get pager duty and woken up at 3 AM if there's an outage: they own the design, they own the code, they own the bugs, they own the fix. The feedback loop is tight, and the results are indisputable. For software which is hosted by the client rather than the provider, this feedback loop does not need to be so tight. Low-stakes software allows much greater freedom in the process to do less-efficient things. But creating solutions at the top and pushing them down the entire org chart is causing push-back for a very simple reason: it's a bad process. Your junior engineers are new and inexperienced, yes. But they also have the benefit of hindsight, of learning from instructors and an environment that has a wealth of experience that simply didn't exist when you were their age. You would do well to take their feedback to heart and push your company to adopt more modern engineering practices, starting with a more collaborative design process. This will require quite a bit of humility from you, especially if you find out that your juniors are much smarter than you give them credit for. But an easy way to start is to just invite some of them to design brainstorming meetings. Invite them from all levels of developers. Present them the requirements you gathered, and ask: "How should we solve this?" Let them freewheel a bit, challenge them with insightful questions to show flaws in their proposed solutions, and see if you can't get them to craft the same design you would have on your own...or better yet, a superior design. Farm out parts of the specification to them, acting as the editor and reviewer, but also let them review each other. The entire process should be very educational, and any half-competent management chain would see that you are improving the skill set of the entire team on a level that simply didn't occur before. 
On the other hand, by removing them from the loop, the entire middle management chain will also fight you tooth and nail, because you are eroding their clout and value add. They are the "people who take things from this desk and walk down the hall and put them on that desk." Taking away this perceived value-add will be politically threatening to them, whether it adds value to the company or not. So, this process is political suicide if you don't already have the political capital to fend off all challengers. But if you can convince your boss that it's a good idea, and ask them to support you while you demonstrate the superior results, then it should be feasible to pull off anyway. If your boss is also politically weak, then it could simply get you run out of the company or relegated to "truck maintenance", as an old boss of mine used to call it. Just giving you fair warning that tipping the apple cart like this is not without dangers, no matter how beneficial it may be to your company's bottom line. [EDIT] I feel that I ended on an overly pessimistic note, so allow me to add a strategy for making this succeed, no matter how much political capital you have amassed at your company. The key, of course, is to make allies of the people who most benefit, and to include the ones who are most threatened. So explain to the more junior engineers (including the seniors who are right below you) that this experiment is a politically risky move, and that they need to voice their support for it in meetings with their managers. If you must, hold design meetings first, without consulting any other managers, so as to leverage the element of surprise. Better to ask forgiveness than permission. If you run the meetings well, it will energize the coder base and make them excited about change. This will incentivize them to go to bat for you. You should also scout out the managers that are more forward-thinking and who might make natural allies in this process. Invite them to the design meetings too, and make it clear that it's a round table with no hierarchy, and everyone is an equally valued contributor. Ideally, the managers themselves have more engineering experience (and if they don't, because they are pure people managers selected for non-technical skills, that is a whole other ball of wax to contend with, and outside the scope of this question) and can also add valuable insights and guidance, demonstrating their technical chops and helping to firm up their technical authority. When other architects or interested parties come to you and ask what you're doing and why you are doing it, just calmly and politely explain that you think you will get better fidelity on the results if the engineers who build the system participate in its design. It will also allow you to be more concise with the design docs and specs, saving everyone time and money and allowing you to tighten up some deadlines. And if you hold a few successful meetings, you can challenge them to go talk to the engineers themselves, get their perspective, and ask if there is any change in morale. Ideally, the results will speak for themselves, and loudly. If you succeed in making this first step, then you can introduce modern Agile practices as mentioned in other answers, and hopefully, your management org will eventually wise up and maybe bring in a coach to help the company get up to speed on more modern practices. It doesn't matter if you follow Scrum or Kanban, Kaizen or some other system. 
You just need to design small, start building early, and iterate heavily. It sounds like your junior engineers will push the ball very, very hard on their own if you manage to just get it rolling. Good luck!
How do I explain that I am in the office to work, not to have fun?
I'm going to play devil's advocate here and say you should do it. At least sometimes. Your coworkers are people, not automatons you're transacting workplace business with. You may be able to get away with such impersonal-ness in e.g. a minor retail transaction but your coworkers will reasonably expect that you socialize more than the bare minimum required to uphold the social contract/standards of professional courtesy. Doesn't mean you have to be "BFFs", does mean you'll have to make some small talk about $LOCAL_PROFESSIONAL_SPORTSBALL_TEAM. Do you ever need anything from any of these people? Might one of them wind up your boss one day? Is one of them your boss right now? Do you have peer reviews? Nobody ever lost anything by greasing the popularity wheel. You may not enjoy it (there are probably other parts of your job you don't enjoy much either) but it's worth doing. You also probably aren't getting away from it unless you become a contractor: I've never worked in a professional setting (midwest US, YMMV) where I did not have to participate in team building activities.
Say "No, I prefer to work". If you are asking how to do this without getting funny looks, there is no way. Also, if this is actual company culture (as opposed to your colleagues goofing of) prepare to be terminated when an excuse presents itself, for being a bad fit for the company. E.g. my employer sponsors a lot of social activity; I am free not to participate, but I am not free to go home instead, because my employer does not want me to go home, but to socialize. The idea (correct or not) is that this will make us work more efficiently as a team. If yours is a case of "company culture" then presumably your employers have an idea want they want their company values to be, and if you are basically saying "I reject your puny values and substitute my own" this might not get down too well. You might want to try a minimum of participation, at least to try if against all odds you enjoy yourself. Having said all the above, I still understand how you feel about this.
How can I ask a coworker about their salary package?
Ask him. <blockquote> What would you consider it reasonable for me to be offered if I was promoted to ....? </blockquote> That way he can choose to tell you what he is getting if he wishes to; even if he does not tell you what he is getting now, he may tell you what he got when he first got promoted. Also it may be better to prove you can do the new job well before asking for a lot more money; your market value will not take the new job into account until you have been in it long enough for another employer to believe in your new skills.
Here is how you diplomatically determine your salary band in relation to others without anyone knowing the exact figures (unless you watch their faces). It requires at least 3 people (A, B, C). Each of you adds a random value to your current salary. A writes down their total and hands the paper to B. B adds their total and gives the new total to C. C adds their total and gives the new total to A. A removes the random amount they originally added and passes the new total to B. B does the same and hands it to C. C removes their random amount, then divides the value by 3. You now have the average salary between the three of you. That said, there is more to a person's role than their salary. So even if s/he is in the role you are moving into, you also need to factor in their experience so far, other areas they work in that you won't be doing, etc. So it is rare that salaries will equate 1:1.
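To sanity-check the arithmetic of the passing-the-paper protocol above, here is a toy run with made-up numbers (in real life each person keeps their salary and random offset secret, of course):
<code>
salary_a=52000 rand_a=13777
salary_b=61000 rand_b=4021
salary_c=47000 rand_c=90210

total=$(( salary_a + rand_a ))           # A hands the running total to B
total=$(( total + salary_b + rand_b ))   # B hands it to C
total=$(( total + salary_c + rand_c ))   # C hands it back to A

total=$(( total - rand_a ))              # A removes their random offset
total=$(( total - rand_b ))              # B removes theirs
total=$(( total - rand_c ))              # C removes theirs and divides
echo "average: $(( total / 3 ))"         # prints: average: 53333
</code>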
How much can I get away with making management happy but irritating co-workers?
Friction with co-workers is often a "straw that broke the camel's back" situation where each individual friction point is not that big of a deal, but the number of friction points does become a bigger deal. Your individual points can all be argued to not be reason enough to socially exclude you, but combining them does create the image of a coworker who is less than pleasant to work with, which is statistically going to lead to social exclusions. I'll provide feedback and alternate interpretations of the points you raised, taking the liberty of a differing point of view. I'm sorry if the reinterpretation of words below comes across as blunt at times, but I think you need to see the other side of the coin, because you're currently working with a go-getter attitude with little regard for (or genuine obliviousness to) the effects of your actions. 1 <blockquote> I am not really a team first kind of player. </blockquote> This is a red flag statement. It's perfectly fine to be a good solo dev, but it's less desirable to openly present yourself like that. It suggests that you immediately dismiss your team members in favor of yourself, which doesn't help to build trust and cooperation. <blockquote> I am not one to consult </blockquote> This again is a matter of phrasing. If you don't need advice or guidance and aren't struggling with your tasks, that's perfectly fine. However, your stance on others consulting with you is unclear. If you are similarly averse to it, that becomes a strong friction point. <blockquote> not one to wait for consensus, etc. If I have an idea, I just throw it right up to decision making level. </blockquote> If you bypass the team, you effectively dismiss your coworkers' possible contributions. What you're doing isn't against the rules but it does bypass some social checks, and that will hinder your interpersonal relationships. Think of it this way: when I communicate with my landlord (e.g. about a broken pipe), I am allowed to let this be done through legal channels right from the get-go. I'm not doing anything wrong. However, I would be massively changing the tone of communication with the landlord compared to if I talked to them first to see if they were already willing to fix the broken pipe. Me going to a lawyer without first talking to the landlord is like you talking to management without consulting the team. It completely omits casual conversation and dramatically impacts the tone of the work environment, which puts people on edge and is not going to foster a great interpersonal relationship. 2 <blockquote> In those 8 months, I am the only person to never miss a sprint goal. </blockquote> There is an underlying tone of arrogance in your description of your role and your achievements. This statement is a more concrete manifestation. Why are you even tracking this? Why would it matter? Should others feel bad for having missed a sprint goal? Either you're saying it doesn't matter, and then this statement is irrelevant, or it does matter, and then you've got a really competitive view on how to interact with your coworkers, which is at the heart of them not liking you. On top of that, you list yourself as a junior developer. Most commonly, you will be getting simpler tasks than medior/senior developers, which can massively skew the odds of you hitting your sprint goals.
To showcase my point: If you put me, Michael Schumacher, Lewis Hamilton and Sebastian Vettel (world class Formula 1 drivers - in case you don't know them) in a room, I am the only one in the room with a 100% winrate on racing. Providing context, I only partook in one race when I was 14, but I did win, so I clearly rank better than Michael and Sebastian, right? Even if we ignore the statistical juggling that this statement entails, this is again one of those clear "I am better than my coworkers" statements that does nothing but harm your interpersonal relationships with them. 3 <blockquote> When I say that something will get done, I get it done, even if it means working a few extra hours (up to 10 a week). Others don't work those extra hours, so they often have to report that their tasks are not complete. The project owner very publicly complains about the other developers when this happens. </blockquote> I can definitely see where the friction is coming from. You are driving up the standard of an employee's workload. Rather than just advancing yourself, it is actively casting your coworkers in a bad light. I don't know your personal life or that of your coworkers, but taking a statistically informed stab in the dark: junior developers are young and more often don't yet have a family/children of their own, which makes them much more flexible to perform overtime compared to someone older with more life commitments such as a family and/or children. Don't get me wrong, I'm not saying that you and your coworkers should conspire to keep the workload artificially low. That would be wrong too. But trying to race ahead and then having it publicly blow back on the others will of course do you no favors in your interactions with them. This isn't your fault - your manager is the one generating the negative feedback towards the others who don't do the overtime, but the social blowback from your coworkers is naturally directed at you instead of the manager. 4 <blockquote> In particular, I did something which meant that a company wide email (~800 in the company) was sent out praising me. </blockquote> By itself, this is not an issue. Deserved praise is deserved, regardless of position. <blockquote> I am learning to toot my own horn </blockquote> However, this is an issue. Based on your question, it is clear that you are focused on personal achievement, which you express by comparing yourself to the team, which you bypass on every occasion where it benefits you personally. On top of that, you tamper with the workload curve by taking on (presumably unpaid) overtime to achieve your sprint goals, which not everyone has the freedom to do; and you are clearly aware that this is generating negative feedback on others from management. You're currently working with a go-getter attitude with little regard for (or genuine obliviousness to) the effects of your actions. If that's how you want to approach your career, that's okay. Some people have made great strides using this sort of "dog eat dog" approach to their career. But it's not going to be making you any friends. You can't both distance yourself from people and then wonder why people are distant from you. It's one or the other. <blockquote> I am not necessarily competitive, but rather just extremely sensitive to incentives. </blockquote> What exactly do you think competitiveness is, if not a behavioral response to the incentive of winning? <blockquote> I don't have a need to be well liked. I don't need infantry level allies.
I am just wondering what risks might exist for me going forward. </blockquote>

What happens when you no longer have the free time or energy to take on overtime, and a new junior dev does? Will you be happy being one of the "lazy" devs who should follow the junior's example? What happens when you run into a conflict with your employer, possibly over something completely unrelated? Will anyone have your back or vouch for you, after you've hung them out to dry? What happens when you are assigned tasks that, through no fault of your own, end up not meeting their sprint goal? Will you still argue your "results over effort" opinion when your result isn't perfect either?

None of this is a guarantee. You might always have the time and energy to keep this up, you might never get a task you can't complete, and you might always have a great relationship with management. But the odds are low.

<blockquote> Our individual Velocity score seems to be how we are judged. A team member who spent their sprint unblocking people but not completing their own work would hear about it from the project owner. </blockquote>

The saving grace here is that your work attitude seems to be a product of the environment you find yourself in. Management seems dead set on pitting employees against each other in a misguided "competition breeds results" attitude. I hope you can see the blatant issue in negatively reviewing someone who spends their time helping their coworkers. Clearly, management's metrics are not accurate measures of contribution. If you want to play the game and reap the rewards, I can't tell you that you shouldn't. However, if you decide to succeed by walking over everyone else, which is what you're currently doing, then you're going to reap the consequences of eroding your team's trust and cooperative spirit.
Some companies use 360 reviews, where your end-of-year review (and any salary increase or bonus) is based largely on how your peers and underlings view you, not just your management. If your company introduces that, you might have shot yourself in the foot. Additionally, you might find it very difficult to achieve a pay rise via your friends in management: management are incentivised to pay you as little as possible. A much easier way of achieving a significant pay rise is by moving to a different company, and the easiest way to do that is by having connections at other companies. That is, a long-term strategy is to have all your peers love working with you, so that when they land amazing jobs elsewhere they are on the phone offering to walk you into a great job. It sounds like your colleagues might prefer to blacklist you rather than phone you, so you may find that once they have all moved on to bigger and better things, you are the one left behind working overtime.
What is LinkedIn etiquette before an interview?
In my opinion, it's not just accepted practice, but best practice. Whenever I've noticed candidates looking at my LinkedIn profile, I was actually positively impressed, as it shows preparation and insight. What is not cool is sending your interviewer a connection request before the interview. Don't do that.
It is a fair assumption that everyone's LinkedIn profile is public information, and that all sorts of people could be looking at it at any point in time. If I noticed that someone I was interviewing had looked me up, I would not give it much thought; I really do not see why doing this would impact you negatively in any way. Personally, I have done it in the past to check whether we had worked with any of the same people, so that I could inquire about the company from someone I knew. After you work in the same industry or area for a while, you know a lot of the people involved, and this can help you quickly gauge whether you want to work at a given company.
How do I professionally tell the interviewer that I am extremely nervous to do what he's asking me to do?
What the interviewer did was very inappropriate, and I think it could be beneficial for future candidates if you mention it to your point of contact at the company. Asperger's can be considered a disability, and pointedly bringing it up in the interview is very questionable in my opinion, as it might make it look like he was discriminating against you, whether he actually was or not. Furthermore, since you did not volunteer that information, it is none of his business, and not at all polite to bring up. Part of the interviewer's job is also to impress on you that this is a company you'd like to work for, and in that he has failed spectacularly, by doing something I consider very impolite at the very least.
Interviews are two-way - they give you an insight into how the company works. There's a pretty good chance that this kind of presentation is not unusual for this company; otherwise, the interviewer would have to justify having a whole team listen to a candidate. There's also a pretty good chance you'll be expected to face this situation again in the job itself. Given your discomfort at doing this once, are you sure you want a job where you could be expected to do it on a regular basis?
How much should I be doing as the Junior Developer?
Senior engineers usually have a lot of "off book" work that isn't very visible - even less visible if you are working remotely and can't see the foot traffic to their desk. You see a tiny sliver of it when you get stuck and ask for help, but you don't see the half hour of research he does before getting back to you, or the three other people who required a similar level of help, or the reviews, or the design work, or getting the build green so everyone isn't blocked, and often a smattering of administrative work. In other words, a senior engineer's productivity is frequently measured by keeping team members unstuck more than by his or her individual output. It's not uncommon for me to go an entire day without having any time at all for my "official" coding task. I remember having similar feelings about credit, especially when the senior engineer is handling a lot of the communication. Credit doesn't work the same way as in school: in larger contexts, it often goes to the team, but trust me, the people closest to you know what you are contributing. As for support, I also don't see it as that unusual that you are primarily getting it from other team members. What would be a problem is if you were asking your senior engineer for specific help and not getting it. If you feel there is a specific question or task he should be handling, be proactive and ask. If you feel you are still not getting what you need, bring it up with your manager. Or the guy could be a flake. That's certainly a possibility, but I think it's more likely there are things going on that you don't yet have the perspective to see.
There are lots of things that could be happening. Here are some to consider:

Maybe your estimation of what is more difficult is just straight-up wrong

Maybe the task itself is 'simple' but you don't yet have the necessary business knowledge of all the other systems/processes it has to feed into

Maybe the dev has lots of other important work that needs doing, so they're doing less important work on your specific project but much more demanding work elsewhere

Maybe they are trying to actively develop you. Imagine they start you off with the easy stuff and, as long as you keep handling the work, gradually give you more and more complex/difficult tasks to develop your skills and experience. At a certain point, you'll have conquered all the easy tasks on any project, so they'll have to give you the harder/hardest ones in order to keep you progressing, leaving them with just the simpler ones

I would check in with them on whether you're doing a good job, and as long as they say that you are, keep focusing on yourself, do your best work and welcome any opportunity to advance the level of what you do.
Are one-line email responses considered disrespectful?
Again, this varies GREATLY from culture to culture. In a German email, the subject may contain vital information that is not repeated in the body. In India, the email could include personal information. In the USA, brevity is often considered polite. In Japan, if you screw up the honorifics, you will be in for trouble. From the UK, you may get an email ending with "Cheers!" Learn what is polite for the culture you are addressing. What is considered rude in one culture may be polite in a second and overly formal for a third.
Shorter (hah!) version: in the US, longgg emails are, in fact, supercilious. The example email given by the OP would literally be seen as rude in the US - it would appear that you were trying to be a smart-arse, or otherwise supercilious. Cultural norms are strange things; it's professional to simply be aware of the prevailing cultural norm. Your speculation is correct: in the US context, your example format is basically "wrong".

In the US, emails are just like texts or a chat room

In (say) France or Germany, they are (often) more like letters

So in the US:

It's totally OK to completely forget about the "formatting" of "letters" which those of us old enough learned in school. The "greeting part", "signature part" and so on - often just forget about it, as in a text

One-word replies (notably "Understood") are totally OK

Even addressing a senior boss, you can still keep it extremely brief. (Perhaps just adding a bare "Thanks Sir, Fattie" at the end.)

Definitely forget forever your "I hope ..." introductory sentence :)

You can treat it as if it were a chat room

In total contrast, dealing (in general) with France or Germany, you can write "actual letters". Regarding Asia (say, China or India), things are too fast-paced for me to form an opinion either way! Regarding Japan, it's inscrutable.

Hence, to answer your questions:

<blockquote> Do I have valid reasons to consider these kind of emails rude </blockquote>

You are totally incorrect. They are not rude.

<blockquote> However, I find ... [your cultural expectation] </blockquote>

You're in a different country. "Your" convention is rude, surprisingly!

Cordialement, Fattie
Can employees prohibit their employer from calling unnecessary ambulance services?
I think communication and mutual understanding are key here. I don't know of any place where you could legally forbid your employer from calling an ambulance. On the contrary, the employer is bound to care for your physical and mental wellbeing, a fact that is often ignored. You should have a serious talk with your manager and the people working near you most of the time. If there are dedicated first-aid workers in your company, invite them as well. Explain to them what a typical seizure looks like, how they should react to prevent injuries to you and themselves, and how long to wait before calling an ambulance. Be aware that telling them to never call an ambulance will most likely not work. It would help them immensely if you told them how often you usually get seizures and how long they last. For an untrained person, seeing someone have a seizure is very shocking: the first instinct (and what every first-aid training tells you) is to call the emergency number. If you're aware that a seizure is imminent, try saying "no ambulance" or something like that right before it starts.
Whether this would be legal largely depends on your locale. That said, even if it were legal for you to provide some sort of waiver or agreement saying not to call an ambulance, as an employer I'd be deeply uncomfortable with such an arrangement, and I expect many others would be too. You say that it would "most likely be unnecessary" - which implies that there is a chance it would be necessary. I wouldn't like to be in the position of explaining to anyone - lawyers, employer's liability insurers, and least of all an employee's grieving relatives - why we didn't call an ambulance when they were seizing.
How should I handle my workplace being messy?
This is a very common problem in the newspaper publishing industry (I've been in that industry for 30+ years), and there's no easy resolution. Often senior, and valuable, employees become hoarders of sorts, hanging on to older technologies which enable them to retrieve information, and sometimes avoiding new technologies in the process. There is a perception that "if I keep everything, I'll always be able to find what I need." Even though I believe you are correct that being surrounded by garbage (and most of it is garbage, never to be looked at or used again) degrades the workplace, this is an area to tread lightly. I'd suggest using your job skills to improve your "cred" with both the hoarder and your bosses. You're going to need a lot of workplace respect to change your entrenched co-worker's habits and environment. Once you've built up points in your favor, try picking really small battles, and be nice about it. It's all about the people you work with gaining confidence in you and your judgement. Sad to say, but I've made more progress in this area by waiting for people to pass on than I ever have while they were still in the workplace.
If you don't want to rock the boat yet on pushing for the pile to be moved, removed, or tidied up, could you change the way your desk faces so you're not looking at it? Or otherwise find a way to screen your desk off from the mess? Different people have very different tolerances for mess and different preferences regarding ideal work spaces. A giant pile of junk in front of my desk wouldn't faze me in the least, unless I was worried about it toppling onto me or something. Many people can just tune out irrelevant details like a mess behind their desk; others absolutely cannot. As a new and junior worker in the office, you want to be careful not to push your preferences too hard while still trying to create a productive work space for yourself.
Walking out of interviews - any significant problems down the line?
Like a lot of things in life, it's not what you do but how you do it. The next time it is your turn to speak in the interview, raise your concern. If their answer doesn't satisfy you, thank them for their time and tell them that this doesn't feel like the right opportunity for you. They will either try to persuade you to stay or say something like "Sorry to hear that, I'll walk you back to the front and we'll collect your badge and validate your parking." Keep in mind that many business communities are relatively small, and the story of any dramatic maneuver has a high probability of being shared within the industry.
Cutting an interview short when you realize that the position just isn't for you is acceptable. How you cut it short is where the lasting impact comes from. Walking out on a positive note, saying something like

<blockquote> I realize that this position just wouldn't be a good fit for me, so let's wrap it up here. The company seems nice, so I'd definitely apply again, just not for this role. Thanks for your consideration. </blockquote>

shouldn't poison the well. However, silently getting up and leaving probably would.
'Thanks' for a raise?
I don't know if it's exactly impolite, but it's certainly not ideal. You say you're working hard on improving your soft skills, but responding to praise or recognition gracefully is a soft skill you still need to work on. A better response that still makes your point that you deserved it would be something like: "Thank you! I've been working very hard to be useful to the company and improve my skills, and it's gratifying to have my efforts recognized. I hope to continue to improve and have a rewarding career here. And thank you for (putting me on such an exciting project) or (supporting my growth) or (something your manager has done that's helpful)."
Was this a verbal response or a written (e.g. email) response? If it was written, it could easily be interpreted as sarcastic; it looks a bit that way to me, as words on a screen. Assuming it was verbal, whether it was "too cold" or "impolite" comes down entirely to how it was delivered. If someone said that to me with a handshake and a smile of genuine pleasure, I would absolutely take it as a very polite expression of thanks. If it was said in a flat tone with an expressionless face, I would take it as sarcasm. As for your second question, perhaps say "thank you" for the raise and also say how pleased you are that things are working out well for both yourself and the company. Make it a mutual thing: tell them that you are happy you're able to do a good job for your employer, and also happy that you are being recognized for doing so.