linux - How to resume interrupted download automatically in curl?
I'm working with curl on Linux. I'm downloading part of a file from an FTP server (using the -r option), but my connection is unreliable and keeps getting interrupted. I want to write a script that resumes the download when I'm connected again.
curl -L -O your_url This will download the file. Now let's say your connection is interrupted; curl -L -O -C - your_url This will continue downloading from the last byte downloaded. From the man page: Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out.
https://stackoverflow.com//questions/19728930/how-to-resume-interrupted-download-automatically-in-curl
c++ - How to Add Linux Executable Files to .gitignore?
How do you add Linux executable files to .gitignore without giving them an explicit extension and without placing them in a specific directory (such as /bin)? Most are named the same as the C file from which they were compiled, without the .c extension.
Can you ignore everything except source code files? For example: * !*.c !Makefile
https://stackoverflow.com//questions/8237645/how-to-add-linux-executable-files-to-gitignore
linux - List of Java processes
How can I list all Java processes in bash? I need a command-line solution. I know there is the command ps, but I don't know what parameters I need to use.
try: ps aux | grep java and see how you get on
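If the plain grep also matches the grep process itself, a bracket trick or pgrep avoids that. A minimal sketch, assuming a typical Linux box with the procps tools installed:
ps aux | grep '[j]ava'   # the [j] stops grep from matching its own command line
pgrep -fl java           # prints the PID and command line of every matching process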
https://stackoverflow.com//questions/6283167/list-of-java-processes
linux - "Couldn't find a file descriptor referring to the console" on Ubuntu bash on Windows
I have a problem with Bash on Ubuntu on Windows. If I type "open (filename)" on Mac terminal, it opens the file with the right program but if I try to use it on Windows bash, it says: "Couldn't find a file descriptor referring to the console".
Instead of open you can use xdg-open, which does the same thing independently of the application type, i.e. PDF, image, etc. It will open a new virtual terminal (I have tried this on Linux). Example: xdg-open ~/Pictures/Wallpapers/myPic.jpg xdg-open ~/Docs/holidays.pdf
https://stackoverflow.com//questions/42463929/couldnt-find-a-file-descriptor-referring-to-the-console-on-ubuntu-bash-on-win
How do I find all the files that were created today in Unix/Linux?
How do I find all the files that were created only today, and not in the last 24-hour period, in Unix/Linux?
On my Fedora 10 system, with findutils-4.4.0-1.fc10.i386: find <path> -daystart -ctime 0 -print The -daystart flag tells it to calculate from the start of today instead of from 24 hours ago. Note however that this will actually list files created or modified in the last day. find has no options that look at the true creation date of the file.
https://stackoverflow.com//questions/801095/how-do-i-find-all-the-files-that-were-created-today-in-unix-linux
linux - Using iconv to convert from UTF-16LE to UTF-8
I am trying to convert some log files from a Microsoft SQL server, but the files are encoded using UTF-16LE and iconv does not seem to be able to convert them.
I forgot the -o switch! The final command is: iconv -f UTF-16LE -t UTF-8 <filename> -o <new-filename>
https://stackoverflow.com//questions/17287713/using-iconv-to-convert-from-utf-16le-to-utf-8
tcp - Simulate delayed and dropped packets on Linux
I would like to simulate packet delay and loss for UDP and TCP on Linux to measure the performance of an application. Is there a simple way to do this?
netem leverages functionality already built into Linux and userspace utilities to simulate networks. This is actually what Mark's answer refers to, by a different name. The examples on their homepage already show how you can achieve what you've asked for: Examples Emulating wide area network delays This is the simplest example; it just adds a fixed amount of delay to all packets going out of the local Ethernet. # tc qdisc add dev eth0 root netem delay 100ms Now a simple ping test to a host on the local network should show an increase of 100 milliseconds. The delay is limited by the clock resolution of the kernel (Hz). On most 2.4 systems, the system clock runs at 100 Hz, which allows delays in increments of 10 ms. On 2.6, the value is a configuration parameter from 1000 to 100 Hz. Later examples just change parameters without reloading the qdisc. Real wide area networks show variability, so it is possible to add random variation. # tc qdisc change dev eth0 root netem delay 100ms 10ms This causes the added delay to be 100 ± 10 ms. Network delay variation isn't purely random, so to emulate that there is a correlation value as well. # tc qdisc change dev eth0 root netem delay 100ms 10ms 25% This causes the added delay to be 100 ± 10 ms with the next random element depending 25% on the last one. This isn't true statistical correlation, but an approximation. Delay distribution Typically, the delay in a network is not uniform. It is more common to use something like a normal distribution to describe the variation in delay. The netem discipline can take a table to specify a non-uniform distribution. # tc qdisc change dev eth0 root netem delay 100ms 20ms distribution normal The actual tables (normal, pareto, paretonormal) are generated as part of the iproute2 compilation and placed in /usr/lib/tc; so it is possible with some effort to make your own distribution based on experimental data. Packet loss Random packet loss is specified in the 'tc' command in percent. The smallest possible non-zero value is 2^-32 = 0.0000000232%. # tc qdisc change dev eth0 root netem loss 0.1% This causes 1/10th of a percent of packets (i.e. 1 out of 1000) to be randomly dropped. An optional correlation may also be added. This causes the random number generator to be less random and can be used to emulate packet burst losses. # tc qdisc change dev eth0 root netem loss 0.3% 25% This will cause 0.3% of packets to be lost, and each successive probability depends by a quarter on the last one: Prob(n) = 0.25 × Prob(n-1) + 0.75 × Random. Note that you should use tc qdisc add if you have no rules for that interface, or tc qdisc change if you already have rules for that interface. Attempting to use tc qdisc change on an interface with no rules will give the error RTNETLINK answers: No such file or directory.
https://stackoverflow.com//questions/614795/simulate-delayed-and-dropped-packets-on-linux
linux - How to get terminal's Character Encoding
I changed my gnome-terminal's character encoding to "GBK" (by default it is UTF-8), but how can I get this value (the character encoding) on my Linux system?
The terminal uses environment variables to determine which character set to use, therefore you can determine it by looking at those variables: echo $LC_CTYPE or echo $LANG
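If you want a quick overview of every locale-related variable at once, the locale command prints them all; a minimal sketch:
locale        # lists LANG, LC_CTYPE, LC_ALL, ... as the shell currently sees them
echo $LANG    # e.g. en_US.UTF-8, or zh_CN.GBK after changing the encoding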
https://stackoverflow.com//questions/5306153/how-to-get-terminals-character-encoding
linux - Docker not running on Ubuntu WSL due to error cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Using WSL2 You simply have to activate and use WSL2; I had to install Ubuntu 20.04, as 18.04 wasn't connecting with Docker Desktop. In the Windows shell: To check the WSL mode, run wsl -l -v To upgrade your existing Linux distro to v2, run: wsl --set-version (distro name) 2 WSL Integration will be enabled on your default WSL distribution. To change your default WSL distro, run wsl --set-default <distro name> Then in Docker you have to use the WSL2 engine and access it from your default WSL2 distribution. Based on this article: A Linux Dev Environment on Windows with WSL 2, Docker Desktop And the Docker docs: Docker Desktop WSL 2 backend Below is valid only for WSL1 It seems that docker cannot run inside WSL. What they propose is to connect WSL to your Docker Desktop running in Windows: Setting Up Docker for Windows and WSL In the Docker forums they also refer to that solution: Cannot connect to the docker daemon
https://stackoverflow.com//questions/61592709/docker-not-running-on-ubuntu-wsl-due-to-error-cannot-connect-to-the-docker-daemo
linux - docker networking namespace not visible in ip netns list
When I create a new docker container like with
That's because docker is not creating the required symlink: # (as root) pid=$(docker inspect -f '{{.State.Pid}}' ${container_id}) mkdir -p /var/run/netns/ ln -sfT /proc/$pid/ns/net /var/run/netns/$container_id Then, the container's netns namespace can be examined with ip netns ${container_id}, e.g.: # e.g. show stats about eth0 inside the container ip netns exec "${container_id}" ip -s link show eth0
https://stackoverflow.com//questions/31265993/docker-networking-namespace-not-visible-in-ip-netns-list
linux - Split one file into multiple files based on delimiter
I have one file with -| as the delimiter after each section. I need to create separate files for each section using Unix.
A one-liner, no programming (except the regexp etc.): csplit --digits=2 --quiet --prefix=outfile infile "/-|/+1" "{*}" Tested on: csplit (GNU coreutils) 8.30 Notes about usage on Apple Mac: "For OS X users, note that the version of csplit that comes with the OS doesn't work. You'll want the version in coreutils (installable via Homebrew), which is called gcsplit." — @Danial "Just to add, you can get the version for OS X to work (at least with High Sierra). You just need to tweak the args a bit: csplit -k -f=outfile infile "/-\|/+1" "{3}". Features that don't seem to work are the "{*}"; I had to be specific on the number of separators, and needed to add -k to avoid it deleting all outfiles if it can't find a final separator. Also if you want --digits, you need to use -n instead." — @Pebbl
https://stackoverflow.com//questions/11313852/split-one-file-into-multiple-files-based-on-delimiter
linux - EC2 ssh Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
I got this permission denied problem when I tried to ssh to my EC2 host. I tried the existing solution chmod 600 "My.pem" but it still didn't work. Here is my debug information:
I resolved this issue on my CentOS machine by using the command: ssh -i <Your.pem> ec2-user@<YourServerIP> It was about the username, which was ec2-user in my case. Referenced from: Amazon troubleshooting documentation
https://stackoverflow.com//questions/33991816/ec2-ssh-permission-denied-publickey-gssapi-keyex-gssapi-with-mic
c - How to solve "ptrace operation not permitted" when trying to attach GDB to a process?
I'm trying to attach a program with gdb but it returns:
If you are using Docker, you will probably need these options: docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined If you are using Podman, you will probably need its --cap-add option too: podman run --cap-add=SYS_PTRACE
https://stackoverflow.com//questions/19215177/how-to-solve-ptrace-operation-not-permitted-when-trying-to-attach-gdb-to-a-pro
linux - How to shutdown a Spring Boot Application in a correct way?
In the Spring Boot documentation, they say that 'Each SpringApplication will register a shutdown hook with the JVM to ensure that the ApplicationContext is closed gracefully on exit.'
If you are using the actuator module, you can shut down the application via JMX or HTTP if the endpoint is enabled. Add to application.properties (Spring Boot 2.0 and newer): management.endpoint.shutdown.enabled=true (the endpoint must also be exposed over the web, e.g. management.endpoints.web.exposure.include=shutdown). The following URL will then be available: /actuator/shutdown - allows the application to be gracefully shut down (not enabled by default). Depending on how an endpoint is exposed, the sensitive parameter may be used as a security hint. For example, sensitive endpoints will require a username/password when they are accessed over HTTP (or are simply disabled if web security is not enabled). From the Spring Boot documentation
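Once the endpoint is enabled and exposed, triggering the shutdown from a shell is a single HTTP POST. A minimal sketch, assuming the application is running locally on the default port 8080:
curl -X POST http://localhost:8080/actuator/shutdown   # typically replies with a short JSON "shutting down" message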
https://stackoverflow.com//questions/26547532/how-to-shutdown-a-spring-boot-application-in-a-correct-way
linux - Fast Concatenation of Multiple GZip Files
I have a list of gzip files:
With gzip files, you can simply concatenate the files together, like so: cat file1.gz file2.gz file3.gz > allfiles.gz Per the gzip RFC, A gzip file consists of a series of "members" (compressed data sets). [...] The members simply appear one after another in the file, with no additional information before, between, or after them. Note that this is not exactly the same as building a single gzip file of the concatenated data; among other things, all of the original filenames are preserved. However, gunzip seems to handle it as equivalent to a concatenation. Since existing tools generally ignore the filename headers for the additional members, it's not easily possible to extract individual files from the result. If you want this to be possible, build a ZIP file instead. ZIP and GZIP both use the DEFLATE algorithm for the actual compression (ZIP supports some other compression algorithms as well as an option - method 8 is the one that corresponds to GZIP's compression); the difference is in the metadata format. Since the metadata is uncompressed, it's simple enough to strip off the gzip headers and tack on ZIP file headers and a central directory record instead. Refer to the gzip format specification and the ZIP format specification.
https://stackoverflow.com//questions/8005114/fast-concatenation-of-multiple-gzip-files
linux - How to remove non UTF-8 characters from text file
I have a bunch of Arabic, English, Russian files which are encoded in utf-8. Trying to process these files using a Perl script, I get this error:
This command: iconv -f utf-8 -t utf-8 -c file.txt will clean up your UTF-8 file, skipping all the invalid characters. -f is the source format -t the target format -c skips any invalid sequence
https://stackoverflow.com//questions/12999651/how-to-remove-non-utf-8-characters-from-text-file
Kill Python interpreter in Linux from the terminal
I want to kill the Python interpreter. The intention is that all the Python files that are running at this moment will stop (without any information about these files). Obviously the processes should be closed.
pkill -9 python should kill any running python process.
https://stackoverflow.com//questions/18428750/kill-python-interpeter-in-linux-from-the-terminal
linux - how to detect invalid utf8 unicode/binary in a text file
I need to detect corrupted text files where there are invalid (non-ASCII) UTF-8, Unicode, or binary characters.
Assuming you have your locale set to UTF-8 (see locale output), this works well to recognize invalid UTF-8 sequences: grep -axv '.*' file.txt Explanation (from the grep man page): -a, --text: treats the file as text; essentially prevents grep from aborting once it finds an invalid byte sequence (i.e. not UTF-8) -v, --invert-match: inverts the output, showing lines not matched -x '.*' (--line-regexp): means to match a complete line consisting of any UTF-8 characters. Hence, the output will be the lines that contain an invalid non-UTF-8 byte sequence (because of the inverted match, -v)
https://stackoverflow.com//questions/29465612/how-to-detect-invalid-utf8-unicode-binary-in-a-text-file
Pip is not working for Python 3.10 on Ubuntu
I am new to using Ubuntu and Linux in general. I just attempted to update Python by using sudo apt-get install python3.10. When I run python3.10 -m pip install <library name> I always receive the following error:
This is likely caused by a too old system pip version. Install the latest with: curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10 and test result python3.10 -m pip --version e.g. pip 22.2.2 from <home>/.local/lib/python3.10/site-packages/pip (python 3.10) and then test upgrade python3.10 -m pip install --upgrade pip e.g. Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: pip in <home>/.local/lib/python3.10/site-packages (22.2.2)
https://stackoverflow.com//questions/69503329/pip-is-not-working-for-python-3-10-on-ubuntu
linux - shell-init: error retrieving current directory: getcwd -- The usual fixes do not work
I have a simple script:
I believe the error is not related to the script at all. The issue is: the directory you are in when you try to run the script does not exist anymore. For example, you have two terminals; you cd somedir/ in the first one, then mv somedir/ somewhere_else/ in the second one, then try to run anything in the first terminal - you'll receive this error message. Please note you'll get this error even if you re-create a directory with the same name, because the new directory will have a different inode index. At least this was the case for me.
https://stackoverflow.com//questions/29396928/shell-init-error-retrieving-current-directory-getcwd-the-usual-fixes-do-not
linux - How do I uninstall a program installed with the Appimage Launcher?
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Since an AppImage is not "installed", you don't need to "uninstall" it. Just delete the AppImage file and the application is gone. Additionally, you may want to remove the menu entry by deleting the desktop file from $HOME/.local/share/applications/. Files and directories with names starting with a full stop (dot) (.example) are hidden - you might need to make hidden files visible. You can probably find the option somewhere in the settings of the file manager you use; in many file managers you can toggle it with Ctrl+H.
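As a concrete illustration, the cleanup is just two rm commands (the file names here are hypothetical; adjust them to wherever you stored the AppImage and whatever the desktop file is actually called):
rm ~/Applications/MyApp.AppImage                 # delete the application itself
rm ~/.local/share/applications/myapp.desktop     # remove the leftover menu entry, if one was created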
https://stackoverflow.com//questions/43680226/how-do-i-uninstall-a-program-installed-with-the-appimage-launcher
linux - How do I fix the Rust error "linker 'cc' not found" for Debian on Windows 10?
I'm running Debian on Windows 10 (Windows Subsystem for Linux) and installed Rust using the command:
The Linux Rust installer doesn't check for a compiler toolchain, but seems to assume that you've already got a C linker installed! The best solution is to install the tried-and-true gcc toolchain. sudo apt install build-essential If you need to target another architecture, install the appropriate toolchain and target the compilation as follows: rustc --target=my_target_architecture -C linker=target_toolchain_linker my_rustfile.rs
https://stackoverflow.com//questions/52445961/how-do-i-fix-the-rust-error-linker-cc-not-found-for-debian-on-windows-10
linux - How to run vi on docker container?
I have installed docker on my host virtual machine. Now I want to create a file using vi.
Log in to the container with the following command: docker exec -it <container> bash Then run the following commands: apt-get update apt-get install vim
https://stackoverflow.com//questions/31515863/how-to-run-vi-on-docker-container
linux - what does "bash: no job control in this shell" mean?
I think it's related to the parent process creating a new subprocess that does not have a tty. Can anyone explain the detail under the hood? i.e. the related working model of bash, process creation, etc?
You may need to enable job control:
#!/bin/bash
set -m
https://stackoverflow.com//questions/11821378/what-does-bashno-job-control-in-this-shell-mean
c - What are the differences between (and reasons to choose) tcmalloc/jemalloc and memory pools?
tcmalloc/jemalloc are improved memory allocators, and memory pools are also used for better memory allocation. So what are the differences between them, and how do I choose between them in my application?
It depends upon the requirements of your program. If your program has more dynamic memory allocations, then you need to choose a memory allocator, from the available allocators, which would generate the most optimal performance out of your program. For good memory management you need to meet the following requirements at minimum: Check if your system has enough memory to process data. Are you able to allocate from the available memory? Returning the used memory / deallocated memory to the pool (program or operating system). The ability of a good memory manager can be tested on the basis of (at the bare minimum) its efficiency in retrieving / allocating and returning / deallocating memory. (There are many more conditions like cache locality, managing overhead, VM environments, small or large environments, threaded environments, etc.) With respect to tcmalloc and jemalloc, there are many people who have done comparisons. With reference to one of the comparisons: http://ithare.com/testing-memory-allocators-ptmalloc2-tcmalloc-hoard-jemalloc-while-trying-to-simulate-real-world-loads/ tcmalloc scores over all others in terms of CPU cycles per allocation if the number of threads is low. jemalloc is very close to tcmalloc but better than ptmalloc (the standard glibc implementation). In terms of memory overhead jemalloc is the best, seconded by ptmalloc, followed by tcmalloc. Overall it can be said that jemalloc scores over the others. You can also read more about jemalloc here: https://www.facebook.com/notes/facebook-engineering/scalable-memory-allocation-using-jemalloc/480222803919 I have just quoted from tests done and published by other people and have not tested it myself. I hope this could be a good starting point for you, which you can use to test and select the most optimal allocator for your application.
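One practical way to try an alternative allocator without recompiling is to preload it at run time. A minimal sketch; the library paths here are assumptions that vary by distribution, and ./my_program is a placeholder:
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so ./my_program    # run once with tcmalloc
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so ./my_program    # run again with jemalloc and compare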
https://stackoverflow.com//questions/9866145/what-are-the-differences-between-and-reasons-to-choose-tcmalloc-jemalloc-and-m
python - docker.errors.DockerException: Error while fetching server API version
I want to install this module but there is something wrong when I try the step docker-compose build ...
Are you sure Docker is running on your system? You can get that error when Compose is not able to connect to Docker via the Docker socket (if no other way of connecting is defined). If you are running on Linux, usually you can run systemctl status docker to check whether the Docker daemon is running, and systemctl start docker to start it. It would also help to know what OS and Docker version you are using.
https://stackoverflow.com//questions/64952238/docker-errors-dockerexception-error-while-fetching-server-api-version
math - How do I divide in the Linux console?
I have two variables and I want to find the value of one divided by the other. What commands should I use to do this?
In the bash shell, surround arithmetic expressions with $(( ... )) $ echo $(( 7 / 3 )) 2 Although I think you are limited to integers.
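Applied to two variables, and with bc as a fallback when you need a fractional result (the variable values are just an example):
a=7; b=3
echo $(( a / b ))              # prints 2 (integer division)
echo "scale=3; $a / $b" | bc   # prints 2.333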
https://stackoverflow.com//questions/1088098/how-do-i-divide-in-the-linux-console
c++ - How do you find what version of libstdc++ library is installed on your linux machine?
I found the following command: strings /usr/lib/libstdc++.so.6 | grep GLIBC from here. It seems to work but this is an ad-hoc/heuristic method.
To find which library is being used you could run $ /sbin/ldconfig -p | grep stdc++ libstdc++.so.6 (libc6) => /usr/lib/libstdc++.so.6 The list of compatible versions for libstdc++ version 3.4.0 and above is provided by $ strings /usr/lib/libstdc++.so.6 | grep LIBCXX GLIBCXX_3.4 GLIBCXX_3.4.1 GLIBCXX_3.4.2 ... For earlier versions the symbol GLIBCPP is defined. The date stamp of the library is defined in a macro __GLIBCXX__ or __GLIBCPP__ depending on the version: // libdatestamp.cxx #include <cstdio> int main(int argc, char* argv[]){ #ifdef __GLIBCPP__ std::printf("GLIBCPP: %d\n",__GLIBCPP__); #endif #ifdef __GLIBCXX__ std::printf("GLIBCXX: %d\n",__GLIBCXX__); #endif return 0; } $ g++ libdatestamp.cxx -o libdatestamp $ ./libdatestamp GLIBCXX: 20101208 The table of datestamps of libstdc++ versions is listed in the documentation:
https://stackoverflow.com//questions/10354636/how-do-you-find-what-version-of-libstdc-library-is-installed-on-your-linux-mac
logging - View a log file in Linux dynamically
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
tail -f yourlog.csv Newly appended lines will continuously show.
https://stackoverflow.com//questions/2099149/view-a-log-file-in-linux-dynamically
How can I identify the request queue for a linux block device
I am working on this driver that connects the hard disk over the network. There is a bug that if I enable two or more hard disks on the computer, only the first one gets the partitions looked over and identified. The result is, if I have 1 partition on hda and 1 partition on hdb, as soon as I connect hda there is a partition that can be mounted. So hda1 gets a blkid xyz123 as soon as it mounts. But when I go ahead and mount hdb1 it also comes up with the same blkid and in fact, the driver is reading it from hda, not hdb.
Queue = blk_init_queue(sbd_request, &Device.lock);
https://stackoverflow.com//questions/6785651/how-can-i-identify-the-request-queue-for-a-linux-block-device
linux - Is Mac OS X a POSIX OS?
What is it that makes an OS a POSIX system? All versions of Linux are POSIX, right? What about Mac OS X?
Is Mac OS X a POSIX OS? Yes. POSIX is a group of standards that determine a portable API for Unix-like operating systems. Mac OS X is Unix-based (and has been certified as such), and in accordance with this is POSIX compliant. POSIX guarantees that certain system calls will be available. Essentially, Mac satisfies the API required to be POSIX compliant, which makes it a POSIX OS. Not all versions of Linux are POSIX-compliant. Kernel versions prior to 2.6 were not compliant, and today Linux isn't officially POSIX-compliant because they haven't gone out of their way to get certified (which will likely never happen). Regardless, Linux can be treated as a POSIX system for almost all intents and purposes.
https://stackoverflow.com//questions/5785516/is-mac-os-x-a-posix-os
linux - How to perform grep operation on all files in a directory?
I'm working with XenServer, and I want to perform a command on each file in a directory, grepping some stuff out of the output of the command and appending it to a file.
In Linux, I normally use this command to recursively grep for a particular text within a directory: grep -rni "string" * where r = recursive i.e, search subdirectories within the current directory n = to print the line numbers to stdout i = case insensitive search
https://stackoverflow.com//questions/15286947/how-to-perform-grep-operation-on-all-files-in-a-directory
linux - 64 bit ntohl() in C++?
The man pages for htonl() seem to suggest that you can only use it for up to 32 bit values. (In reality, ntohl() is defined for unsigned long, which on my platform is 32 bits. I suppose if the unsigned long were 8 bytes, it would work for 64 bit ints).
Documentation: man htobe64 on Linux (glibc >= 2.9) or FreeBSD. Unfortunately OpenBSD, FreeBSD and glibc (Linux) did not quite work together smoothly to create one (non-kernel-API) libc standard for this, during an attempt in 2009. Currently, this short bit of preprocessor code: #if defined(__linux__) # include <endian.h> #elif defined(__FreeBSD__) || defined(__NetBSD__) # include <sys/endian.h> #elif defined(__OpenBSD__) # include <sys/types.h> # define be16toh(x) betoh16(x) # define be32toh(x) betoh32(x) # define be64toh(x) betoh64(x) #endif (tested on Linux and OpenBSD) should hide the differences. It gives you the Linux/FreeBSD-style macros on those 4 platforms. Use example: #include <stdint.h> // For 'uint64_t' uint64_t host_int = 123; uint64_t big_endian; big_endian = htobe64( host_int ); host_int = be64toh( big_endian ); It's the most "standard C library"-ish approach available at the moment.
https://stackoverflow.com//questions/809902/64-bit-ntohl-in-c
c++ - Easily measure elapsed time
I am trying to use time() to measure various points of my program.
// C++11 style:
#include <chrono>
#include <iostream>

std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
// ... put the code you want to measure here ...
std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count() << "[µs]" << std::endl;
std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count() << "[ns]" << std::endl;
https://stackoverflow.com//questions/2808398/easily-measure-elapsed-time
Randomly shuffling lines in Linux / Bash
I have some files in Linux, for example two, and I need to shuffle their lines together into one file.
You should use the shuf command =) cat file1 file2 | shuf Or with Perl: cat file1 file2 | perl -MList::Util=shuffle -wne 'print shuffle <>;'
https://stackoverflow.com//questions/17578873/randomly-shuffling-lines-in-linux-bash
linux - How to convert Windows end of line in Unix end of line (CR/LF to LF)
I'm a Java developer and I'm using Ubuntu to develop. The project was created in Windows with Eclipse and it's using the Windows-1252 encoding.
There should be a program called dos2unix that will fix line endings for you. If it's not already on your Linux box, it should be available via the package manager.
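A minimal sketch, assuming a Debian/Ubuntu system and a hypothetical file name:
sudo apt-get install dos2unix
dos2unix MyClass.java    # rewrites the file in place, converting CR/LF line endings to LF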
https://stackoverflow.com//questions/3891076/how-to-convert-windows-end-of-line-in-unix-end-of-line-cr-lf-to-lf
linux - Get Docker Container Names
This command gives me a list of running container IDs:
docker ps --format "{{.Names}}"
https://stackoverflow.com//questions/31887258/get-docker-container-names
linux - shell init issue when click tab, what's wrong with getcwd?
Once I press Tab in bash, the error message appears. What's wrong?
This usually occurs when your current directory does not exist anymore. Most likely, from another terminal you removed that directory (from within a script or whatever). To get rid of this, in case your current directory was recreated in the meantime, just cd to another (existing) directory and then cd back; the simplest would be: cd; cd -.
https://stackoverflow.com//questions/12338233/shell-init-issue-when-click-tab-whats-wrong-with-getcwd
linux - C: Run a System Command and Get Output?
I want to run a command in Linux and get the text it outputs, but I do not want this text printed to the screen. Is there a more elegant way than making a temporary file?
You want the "popen" function. Here's an example of running the command "ls /etc" and outputting it to the console. #include <stdio.h> #include <stdlib.h> int main( int argc, char *argv[] ) { FILE *fp; char path[1035]; /* Open the command for reading. */ fp = popen("/bin/ls /etc/", "r"); if (fp == NULL) { printf("Failed to run command\n" ); exit(1); } /* Read the output a line at a time - output it. */ while (fgets(path, sizeof(path), fp) != NULL) { printf("%s", path); } /* close */ pclose(fp); return 0; }
https://stackoverflow.com//questions/646241/c-run-a-system-command-and-get-output
centos - How do I download a file from the internet to my linux server with Bash
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Using wget wget -O /tmp/myfile 'http://www.google.com/logo.jpg' or curl: curl -o /tmp/myfile 'http://www.google.com/logo.jpg'
https://stackoverflow.com//questions/14300794/how-do-i-download-a-file-from-the-internet-to-my-linux-server-with-bash
node.js - How can I update NodeJS and NPM to their latest versions?
I installed NPM for access to additional Node.js Modules.
Use: npm update -g npm See the docs for the update command: npm update [-g] [<pkg>...] This command will update all the packages listed to the latest version (specified by the tag config), respecting semver. Additionally, see the documentation on Node.js and NPM installation and Upgrading NPM. The following original answer is from the old FAQ that no longer exists, but should work for Linux and Mac: How do I update npm? npm install -g npm Please note that this command will remove your current version of npm. Make sure to use sudo npm install -g npm if on a Mac. You can also update all outdated local packages by doing npm update without any arguments, or global packages by doing npm update -g. Occasionally, the version of npm will progress such that the current version cannot be properly installed with the version that you have installed already. (Consider, if there is ever a bug in the update command.) In those cases, you can do this: curl https://www.npmjs.com/install.sh | sh To update Node.js itself, I recommend you use nvm, the Node Version Manager.
https://stackoverflow.com//questions/6237295/how-can-i-update-nodejs-and-npm-to-their-latest-versions
linux - Skipping acquire of configured file 'main/binary-i386/Packages'
Good afternoon, please tell me what I'm doing wrong. I just installed Ubuntu Linux on my computer and still don't understand much about it. I tried to install PostgreSQL and pgAdmin, following this video tutorial: https://www.youtube.com/watch?v=Vdzb7JTPnGk I get this error.
You must change the line in /etc/apt/sources.list.d/pgdg.list to: deb [arch=amd64] http://apt.postgresql.org/pub/repos/apt/ focal-pgdg main
https://stackoverflow.com//questions/61523447/skipping-acquire-of-configured-file-main-binary-i386-packages
c - How to make a daemon process
I am trying to understand how I can make my program a daemon. Some things I came across say that, in general, a program performs the following steps to become a daemon:
If you are looking for a clean approach, please consider using the standard API: int daemon(int nochdir, int noclose);. The man page is pretty simple and self-explanatory. A well-tested API far outweighs our own implementation in terms of portability and stability.
https://stackoverflow.com//questions/5384168/how-to-make-a-daemon-process
environment variables - How does /usr/bin/env work in a Linux shebang line?
I know the shebang line looks like this:
env is the name of a Unix program. If you read the manual (man env) you can see that one way to use it is env COMMAND, where in your case, COMMAND is python3. According to the manual, this will Set each NAME to VALUE in the environment and run COMMAND. Running env alone will show you what NAMEs and VALUEs are set: $ env TERM=xterm-256color SHELL=/bin/bash PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin … Therefore, /usr/bin/env python3 is an instruction to set the PATH (as well as all the other NAME+VALUE pairs), and then run python3, using the first directory in the PATH that contains the python3 executable.
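To see the PATH lookup in practice, here is a small sketch (hello.py is a hypothetical script created just for the test):
cat > hello.py <<'EOF'
#!/usr/bin/env python3
print("hello")
EOF
chmod +x hello.py
./hello.py           # runs whichever python3 is found first on your PATH
command -v python3   # shows which interpreter env will pick up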
https://stackoverflow.com//questions/43793040/how-does-usr-bin-env-work-in-a-linux-shebang-line
c - unix domain socket VS named pipes?
After looking at a Unix domain socket, I thought it was a named pipe. I looked at named pipes and didn't see much of a difference. I saw they were initialized differently, but that's the only thing I noticed. Both use the C write/read functions and work alike, AFAIK.
UNIX-domain sockets are generally more flexible than named pipes. Some of their advantages are: You can use them for more than two processes communicating (eg. a server process with potentially multiple client processes connecting); They are bidirectional; They support passing kernel-verified UID / GID credentials between processes; They support passing file descriptors between processes; They support packet and sequenced packet modes. To use many of these features, you need to use the send() / recv() family of system calls rather than write() / read().
https://stackoverflow.com//questions/9475442/unix-domain-socket-vs-named-pipes
linux - sudo: docker-compose: command not found
I am trying to run docker-compose using sudo.
On Ubuntu 16.04, here's how I fixed this issue (refer to the Docker Compose documentation): sudo curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose sudo chmod +x /usr/local/bin/docker-compose After you do the curl command, it'll put docker-compose into /usr/local/bin, which is not on the PATH. To fix it, create a symbolic link: sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose And now if you do docker-compose --version you'll see that docker-compose is now on the PATH.
https://stackoverflow.com//questions/38775954/sudo-docker-compose-command-not-found
linux - Automatically enter SSH password with script
I need to create a script that automatically inputs a password to OpenSSH ssh client.
First you need to install sshpass. Ubuntu/Debian: apt-get install sshpass Fedora/CentOS: yum install sshpass Arch: pacman -S sshpass Example: sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no YOUR_USERNAME@SOME_SITE.COM Custom port example: sshpass -p "YOUR_PASSWORD" ssh -p 2400 -o StrictHostKeyChecking=no YOUR_USERNAME@SOME_SITE.COM (ssh takes the port via its -p flag, not a :port suffix). Notes: sshpass can also read a password from a file when the -f flag is passed. Using -f prevents the password from being visible if the ps command is executed. The file that the password is stored in should have secure permissions.
https://stackoverflow.com//questions/12202587/automatically-enter-ssh-password-with-script
linux - How do I find out what inotify watches have been registered?
I have my inotify watch limit set to 1024 (I think the default is 128?). Despite that, yeoman, Guard and Dropbox constantly fail, and tell me to up my inotify limit. Before doing so, I'd like to know what's consuming all my watches (I have very few files in my Dropbox).
Oct 31 2022 update While my script below works fine as it is, Michael Sartain implemented a native executable that is much faster, along with additional functionality not present in my script (below). Worth checking out if you can spend a few seconds compiling it! I have also added contributed some PRs to align the functionality, so it should be pretty 1:1, just faster. Upvote his answer on the Unix Stackexchange. Original answer with script I already answered this in the same thread on Unix Stackexchange as was mentioned by @cincodenada, but thought I could repost my ready-made answer here, seeing that no one really has something that works: I have a premade script, inotify-consumers, that lists the top offenders for you: INOTIFY INSTANCES WATCHES PER COUNT PROCESS PID USER COMMAND ------------------------------------------------------------ 21270 1 11076 my-user /snap/intellij-idea-ultimate/357/bin/fsnotifier 201 6 1 root /sbin/init splash 115 5 1510 my-user /lib/systemd/systemd --user 85 1 3600 my-user /usr/libexec/xdg-desktop-portal-gtk 77 1 2580 my-user /usr/libexec/gsd-xsettings 35 1 2475 my-user /usr/libexec/gvfsd-trash --spawner :1.5 /org/gtk/gvfs/exec_spaw/0 32 1 570 root /lib/systemd/systemd-udevd 26 1 2665 my-user /snap/snap-store/558/usr/bin/snap-store --gapplication-service 18 2 1176 root /usr/libexec/polkitd --no-debug 14 1 1858 my-user /usr/bin/gnome-shell 13 1 3641 root /usr/libexec/fwupd/fwupd ... 21983 WATCHES TOTAL COUNT INotify instances per user (e.g. limits specified by fs.inotify.max_user_instances): INSTANCES USER ----------- ------------------ 41 my-user 23 root 1 whoopsie 1 systemd-ti+ ... Here you quickly see why the default limit of 8K watchers is too little on a development machine, as just WebStorm instance quickly maxes this when encountering a node_modules folder with thousands of folders. Add a webpack watcher to guarantee problems ... Even though it was much faster than the other alternatives when I made it initially, Simon Matter added some speed enhancements for heavily loaded Big Iron Linux (hundreds of cores) that sped it up immensely, taking it down from ten minutes (!) to 15 seconds on his monster rig. Later on, Brian Dowling contributed instance count per process, at the expense of relatively higher runtime. This is insignificant on normal machines with a runtime of about one second, but if you have Big Iron, you might want the earlier version with about 1/10 the amount of system time :) How to use inotify-consumers --help 😊 To get it on your machine, just copy the contents of the script and put it somewhere in your $PATH, like /usr/local/bin. Alternatively, if you trust this stranger on the net, you can avoid copying it and pipe it into bash over http: $ curl -s https://raw.githubusercontent.com/fatso83/dotfiles/master/utils/scripts/inotify-consumers | bash INOTIFY WATCHER COUNT PID USER COMMAND -------------------------------------- 3044 3933 myuser node /usr/local/bin/tsserver 2965 3941 myuser /usr/local/bin/node /home/myuser/.config/coc/extensions/node_modules/coc-tsserver/bin/tsserverForkStart /hom... 6990 WATCHES TOTAL COUNT How does it work? 
For reference, the main content of the script is simply this (inspired by this answer) find /proc/*/fd \ -lname anon_inode:inotify \ -printf '%hinfo/%f\n' 2>/dev/null \ \ | xargs grep -c '^inotify' \ | sort -n -t: -k2 -r Changing the limits In case you are wondering how to increase the limits $ inotify-consumers --limits Current limits ------------- fs.inotify.max_user_instances = 128 fs.inotify.max_user_watches = 524288 Changing settings permanently ----------------------------- echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf sudo sysctl -p # re-read config
https://stackoverflow.com//questions/13758877/how-do-i-find-out-what-inotify-watches-have-been-registered
linux - fork: retry: Resource temporarily unavailable
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
This is commonly caused by running out of file descriptors. There is the system's total file descriptor limit; what do you get from the command: sysctl fs.file-nr This returns counts of file descriptors: <in_use> <unused_but_allocated> <maximum> To find out what a user's file descriptor limit is, run the commands: sudo su - <username> ulimit -Hn To find out how many file descriptors are in use by a user, run the command: sudo lsof -u <username> 2>/dev/null | wc -l So now if you are having a system file descriptor limit issue, you will need to edit your /etc/sysctl.conf file and add, or modify if it already exists, a line with fs.file-max, setting it to a value large enough to deal with the number of file descriptors you need, and reboot. fs.file-max = 204708
https://stackoverflow.com//questions/12079087/fork-retry-resource-temporarily-unavailable
linux - Split files using tar, gz, zip, or bzip2
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
You can use the split command with the -b option: split -b 1024m file.tar.gz It can be reassembled on a Windows machine using @Joshua's answer: copy /b file1 + file2 + file3 + file4 filetogether Edit: As @Charlie stated in the comment below, you might want to set a prefix explicitly because it will use x otherwise, which can be confusing. split -b 1024m "file.tar.gz" "file.tar.gz.part-" // Creates files: file.tar.gz.part-aa, file.tar.gz.part-ab, file.tar.gz.part-ac, ... Edit: Editing the post because the question is closed and the most effective solution is very close to the content of this answer: # create archives $ tar cz my_large_file_1 my_large_file_2 | split -b 1024MiB - myfiles_split.tgz_ # uncompress $ cat myfiles_split.tgz_* | tar xz This solution avoids the need to use an intermediate large file when (de)compressing. Use the tar -C option to use a different directory for the resulting files. By the way, if the archive consists of only a single file, tar can be avoided and only gzip used: # create archives $ gzip -c my_large_file | split -b 1024MiB - myfile_split.gz_ # uncompress $ cat myfile_split.gz_* | gunzip -c > my_large_file For Windows you can download ported versions of the same commands or use Cygwin.
https://stackoverflow.com//questions/1120095/split-files-using-tar-gz-zip-or-bzip2
c++ - How to fix: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.15' not found
So I'm now desperate to find a fix for this. I'm compiling a shared library (.so) on Ubuntu 32-bit (I have tried doing it under Debian and Ubuntu 64-bit, but neither worked either).
Link statically to libstdc++ with -static-libstdc++ gcc option.
https://stackoverflow.com//questions/19386651/how-to-fix-usr-lib-libstdc-so-6-version-glibcxx-3-4-15-not-found
shell - Fast Linux file count for a large number of files
I'm trying to figure out the best way to find the number of files in a particular directory when there are a very large number of files (more than 100,000).
By default ls sorts the names, which can take a while if there are a lot of them. Also there will be no output until all of the names are read and sorted. Use the ls -f option to turn off sorting. ls -f | wc -l Note: This will also enable -a, so ., .., and other files starting with . will be counted.
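If you want to count only regular files (excluding directories and the . and .. entries), or need to recurse into subdirectories, find is an alternative worth considering; a minimal sketch:
find . -maxdepth 1 -type f | wc -l   # regular files in this directory only
find . -type f | wc -l               # recursive count including subdirectories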
https://stackoverflow.com//questions/1427032/fast-linux-file-count-for-a-large-number-of-files
Why is creating a new process more expensive on Windows than Linux?
I've heard that creating a new process on a Windows box is more expensive than on Linux. Is this true? Can somebody explain the technical reasons for why it's more expensive and provide any historical reasons for the design decisions behind those reasons?
mweerden: NT has been designed for multi-user from day one, so this is not really a reason. However, you are right that process creation plays a less important role on NT than on Unix as NT, in contrast to Unix, favors multithreading over multiprocessing. Rob, it is true that fork is relatively cheap when COW is used, but as a matter of fact, fork is mostly followed by an exec. And an exec has to load all images as well. Discussing the performance of fork therefore is only part of the truth. When discussing the speed of process creation, it is probably a good idea to distinguish between NT and Windows/Win32. As far as NT (i.e. the kernel itself) goes, I do not think process creation (NtCreateProcess) and thread creation (NtCreateThread) is significantly slower than on the average Unix. There might be a little bit more going on, but I do not see the primary reason for the performance difference here. If you look at Win32, however, you'll notice that it adds quite a bit of overhead to process creation. For one, it requires the CSRSS to be notified about process creation, which involves LPC. It requires at least kernel32 to be loaded additionally, and it has to perform a number of additional bookkeeping work items before the process is considered to be a full-fledged Win32 process. And let's not forget about all the additional overhead imposed by parsing manifests, checking if the image requires a compatibility shim, checking whether software restriction policies apply, yada yada. That said, I see the overall slowdown in the sum of all those little things that have to be done in addition to the raw creation of a process, VA space, and initial thread. But as said in the beginning -- due to the favoring of multithreading over multitasking, the only software that is seriously affected by this additional expense is poorly ported Unix software. Although this situation changes when software like Chrome and IE8 suddenly rediscover the benefits of multiprocessing and begin to frequently start up and tear down processes...
https://stackoverflow.com//questions/47845/why-is-creating-a-new-process-more-expensive-on-windows-than-linux
c++ - How can I get the IP address of a (Linux) machine?
This question is almost the same as the previously asked "How can I get the IP Address of a local computer?" question. However, I need to find the IP address(es) of a Linux machine.
I found the ioctl solution problematic on OS X (which is POSIX compliant so should be similar to Linux). However getifaddrs() will let you do the same thing easily; it works fine for me on OS X 10.5 and should be the same below. I've done a quick example below which will print all of the machine's IPv4 addresses (you should also check that getifaddrs was successful, i.e. returns 0). I've updated it to show IPv6 addresses too.
#include <stdio.h>
#include <sys/types.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <string.h>
#include <arpa/inet.h>

int main (int argc, const char * argv[]) {
    struct ifaddrs * ifAddrStruct=NULL;
    struct ifaddrs * ifa=NULL;
    void * tmpAddrPtr=NULL;

    getifaddrs(&ifAddrStruct);

    for (ifa = ifAddrStruct; ifa != NULL; ifa = ifa->ifa_next) {
        if (!ifa->ifa_addr) {
            continue;
        }
        if (ifa->ifa_addr->sa_family == AF_INET) { // check it is IP4
            // is a valid IP4 Address
            tmpAddrPtr=&((struct sockaddr_in *)ifa->ifa_addr)->sin_addr;
            char addressBuffer[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, tmpAddrPtr, addressBuffer, INET_ADDRSTRLEN);
            printf("%s IP Address %s\n", ifa->ifa_name, addressBuffer);
        } else if (ifa->ifa_addr->sa_family == AF_INET6) { // check it is IP6
            // is a valid IP6 Address
            tmpAddrPtr=&((struct sockaddr_in6 *)ifa->ifa_addr)->sin6_addr;
            char addressBuffer[INET6_ADDRSTRLEN];
            inet_ntop(AF_INET6, tmpAddrPtr, addressBuffer, INET6_ADDRSTRLEN);
            printf("%s IP Address %s\n", ifa->ifa_name, addressBuffer);
        }
    }
    if (ifAddrStruct!=NULL) freeifaddrs(ifAddrStruct);
    return 0;
}
https://stackoverflow.com//questions/212528/how-can-i-get-the-ip-address-of-a-linux-machine
linux - Explain the effects of export LANG, LC_CTYPE, and LC_ALL
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
I'll explain with detail: export LANG=ru_RU.UTF-8 That is a shell command that will export an environment variable named LANG with the given value ru_RU.UTF-8. That instructs internationalized programs to use the Russian language (ru), variant from Russia (RU), and the UTF-8 encoding for console output. Generally this single line is enough. This other one: export LC_CTYPE=ru_RU.UTF-8 Does a similar thing, but it tells the program not to change the language, but only the CTYPE to Russian. If a program can change a text to uppercase, then it will use the Russian rules to do so, even though the text itself may be in English. It is worth saying that mixing LANG and LC_CTYPE can give unexpected results, because few people do that, so it is quite untested, unless maybe: export LANG=ru_RU.UTF-8 export LC_CTYPE=C That will make the program output in Russian, but the CTYPE standard old C style. The last line, LC_ALL is a last resort override, that will make the program ignore all the other LC_* variables and use this. I think that you should never write it in a profile line, but use it to run a program in a given language. For example, if you want to write a bug report, and you don't want any kind of localized output, and you don't know which LC_* variables are set: LC_ALL=C program About changing the language of all your programs or only the console, that depends on where you put these lines. I put mine in ~/.bashrc so they don't apply to the GUI, only to the bash consoles.
https://stackoverflow.com//questions/30479607/explain-the-effects-of-export-lang-lc-ctype-and-lc-all
networking - Increasing the maximum number of TCP/IP connections in Linux
I am programming a server and it seems like my number of connections is being limited since my bandwidth isn't being saturated even when I've set the number of connections to "unlimited".
The maximum number of connections is impacted by certain limits on both the client & server sides, albeit a little differently. On the client side: Increase the ephemeral port range, and decrease the tcp_fin_timeout To find out the default values: sysctl net.ipv4.ip_local_port_range sysctl net.ipv4.tcp_fin_timeout The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. The fin_timeout defines the minimum time these sockets will stay in TIME_WAIT state (unusable after being used once). Usual system defaults are: net.ipv4.ip_local_port_range = 32768 61000 net.ipv4.tcp_fin_timeout = 60 This basically means your system cannot consistently guarantee more than (61000 - 32768) / 60 = 470 sockets per second. If you are not happy with that, you could begin with increasing the port_range. Setting the range to 15000 61000 is pretty common these days. You could further increase the availability by decreasing the fin_timeout. If you do both, you should see over 1500 outbound connections per second, more readily. To change the values: sysctl net.ipv4.ip_local_port_range="15000 61000" sysctl net.ipv4.tcp_fin_timeout=30 The above should not be interpreted as the factors impacting system capability for making outbound connections per second. But rather these factors affect the system's ability to handle concurrent connections in a sustainable manner for large periods of "activity." Default sysctl values on a typical Linux box for tcp_tw_recycle & tcp_tw_reuse would be net.ipv4.tcp_tw_recycle=0 net.ipv4.tcp_tw_reuse=0 These do not allow a connection from a "used" socket (in wait state) and force the sockets to last the complete time_wait cycle. I recommend setting: sysctl net.ipv4.tcp_tw_recycle=1 sysctl net.ipv4.tcp_tw_reuse=1 This allows fast cycling of sockets in time_wait state and re-using them. But before you do this change make sure that this does not conflict with the protocols that you would use for the application that needs these sockets. Make sure to read the post "Coping with the TCP TIME-WAIT" from Vincent Bernat to understand the implications. The net.ipv4.tcp_tw_recycle option is quite problematic for public-facing servers as it won't handle connections from two different computers behind the same NAT device, which is a problem hard to detect and waiting to bite you. Note that net.ipv4.tcp_tw_recycle has been removed from Linux 4.12. On the server side: The net.core.somaxconn value has an important role. It limits the maximum number of requests queued to a listen socket. If you are sure of your server application's capability, bump it up from the default of 128 to something like 1024. Now you can take advantage of this increase by modifying the listen backlog variable in your application's listen call, to an equal or higher integer. sysctl net.core.somaxconn=1024 The txqueuelen parameter of your Ethernet cards also has a role to play. Default values are 1000, so bump them up to 5000 or even more if your system can handle it. ifconfig eth0 txqueuelen 5000 echo "/sbin/ifconfig eth0 txqueuelen 5000" >> /etc/rc.local Similarly bump up the values for net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024 respectively. sysctl net.core.netdev_max_backlog=2000 sysctl net.ipv4.tcp_max_syn_backlog=2048 Now remember to start both your client and server side applications by increasing the FD ulimits, in the shell.
Besides the above, one more popular technique used by programmers is to reduce the number of TCP write calls. My own preference is to use a buffer into which I push the data I wish to send to the client, and then at appropriate points I write the buffered data out to the actual socket. This technique lets me use large data packets, reduces fragmentation, and lowers my CPU utilization both in user land and at the kernel level.
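A minimal sketch of that buffering idea (my own illustration, not the answerer's actual code; the buffer size is arbitrary, and the demo writes to stdout only so it is runnable as-is — in the real case fd would be the connected TCP socket):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define OUTBUF_SIZE 8192

struct outbuf {
    int    fd;                  /* any fd; in practice, a connected TCP socket */
    size_t used;
    char   data[OUTBUF_SIZE];
};

/* Push everything queued so far out in one write() call
   (looping only if the kernel accepts a short write). */
static int outbuf_flush(struct outbuf *b)
{
    size_t off = 0;
    while (off < b->used) {
        ssize_t k = write(b->fd, b->data + off, b->used - off);
        if (k < 0) return -1;
        off += (size_t)k;
    }
    b->used = 0;
    return 0;
}

/* Queue data; only hit the fd when the buffer fills up. */
static int outbuf_write(struct outbuf *b, const void *p, size_t n)
{
    const char *src = p;
    while (n > 0) {
        if (b->used == OUTBUF_SIZE && outbuf_flush(b) < 0) return -1;
        size_t chunk = OUTBUF_SIZE - b->used;
        if (chunk > n) chunk = n;
        memcpy(b->data + b->used, src, chunk);
        b->used += chunk;
        src     += chunk;
        n       -= chunk;
    }
    return 0;
}

int main(void)
{
    struct outbuf b = { .fd = STDOUT_FILENO, .used = 0 };  /* stdout for the demo */
    for (int i = 0; i < 1000; i++)
        outbuf_write(&b, "small piece\n", 12);   /* many small logical sends */
    return outbuf_flush(&b);                     /* only a few real writes */
}

The caller queues many small pieces with outbuf_write() and calls outbuf_flush() explicitly at a natural message boundary, turning many small write/send calls into a few large ones.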
https://stackoverflow.com//questions/410616/increasing-the-maximum-number-of-tcp-ip-connections-in-linux
How to get the current time in milliseconds from C in Linux?
How do I get the current time on Linux in milliseconds?
This can be achieved using the POSIX clock_gettime function. In the current version of POSIX, gettimeofday is marked obsolete. This means it may be removed from a future version of the specification. Application writers are encouraged to use the clock_gettime function instead of gettimeofday. Here is an example of how to use clock_gettime:

#define _POSIX_C_SOURCE 200809L

#include <inttypes.h>
#include <math.h>
#include <stdio.h>
#include <time.h>

void print_current_time_with_ms(void)
{
    long            ms;   // Milliseconds
    time_t          s;    // Seconds
    struct timespec spec;

    clock_gettime(CLOCK_REALTIME, &spec);

    s  = spec.tv_sec;
    ms = round(spec.tv_nsec / 1.0e6); // Convert nanoseconds to milliseconds
    if (ms > 999) {
        s++;
        ms = 0;
    }

    printf("Current time: %"PRIdMAX".%03ld seconds since the Epoch\n",
           (intmax_t)s, ms);
}

If your goal is to measure elapsed time, and your system supports the "monotonic clock" option, then you should consider using CLOCK_MONOTONIC instead of CLOCK_REALTIME.
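As a small follow-on sketch (not part of the quoted answer), measuring elapsed time in milliseconds with CLOCK_MONOTONIC might look like this (on very old glibc you may also need to link with -lrt):

#define _POSIX_C_SOURCE 200809L

#include <stdio.h>
#include <time.h>

static long long elapsed_ms(struct timespec start, struct timespec end)
{
    return (long long)(end.tv_sec - start.tv_sec) * 1000
         + (end.tv_nsec - start.tv_nsec) / 1000000;
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the work you want to time ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("took %lld ms\n", elapsed_ms(start, end));
    return 0;
}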
https://stackoverflow.com//questions/3756323/how-to-get-the-current-time-in-milliseconds-from-c-in-linux
linux - sed command with -i option (in-place editing) works fine on Ubuntu but not Mac
I know nothing about sed but need this command (which works fine on Ubuntu) to work on Mac OS X:
Ubuntu ships with GNU sed, where the suffix for the -i option is optional. OS X ships with BSD sed, where the suffix is mandatory. Try sed -i '' (i.e. pass an explicit empty suffix) instead.
https://stackoverflow.com//questions/16745988/sed-command-with-i-option-in-place-editing-works-fine-on-ubuntu-but-not-mac
c - How to make a daemon process
I am trying to understand how I can make my program a daemon. From what I have come across, in general a program performs the following steps to become a daemon:
If you are looking for a clean approach, please consider using the widely available library call int daemon(int nochdir, int noclose);. The man page is pretty simple and self-explanatory. A well-tested API far outweighs our own implementation in terms of portability and stability.
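A minimal sketch of that approach (the daemon name, log message and sleep interval are made-up placeholders):

#define _DEFAULT_SOURCE        /* glibc: make daemon() visible */

#include <stdio.h>
#include <stdlib.h>
#include <syslog.h>
#include <unistd.h>

int main(void)
{
    /* daemon(nochdir, noclose): 0, 0 means chdir to "/" and redirect
       stdin/stdout/stderr to /dev/null. */
    if (daemon(0, 0) == -1) {
        perror("daemon");
        return EXIT_FAILURE;
    }

    openlog("mydaemon", LOG_PID, LOG_DAEMON);
    for (;;) {                 /* the actual service loop */
        syslog(LOG_INFO, "still alive");
        sleep(60);
    }
}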
https://stackoverflow.com//questions/5384168/how-to-make-a-daemon-process
linux - How to update-alternatives to Python 3 without breaking apt?
The other day I decided that I wanted the command python to default to firing up python3 instead of python2.
Per Debian policy, python refers to Python 2 and python3 refers to Python 3. Don't try to change this system-wide or you are in for the sort of trouble you already discovered.

Virtual environments allow you to run an isolated Python installation with whatever version of Python and whatever libraries you need without messing with the system Python install. With recent Python 3, venv is part of the standard library; with older versions, you might need to install python3-venv or a similar package.

$HOME~$ python --version
Python 2.7.11
$HOME~$ python3 -m venv myenv
... stuff happens ...
$HOME~$ . ./myenv/bin/activate
(myenv) $HOME~$ type python   # "type" is preferred over which; see POSIX
python is /home/you/myenv/bin/python
(myenv) $HOME~$ python --version
Python 3.5.1

A common practice is to have a separate environment for each project you work on, anyway; but if you want this to look like it's effectively system-wide for your own login, you could add the activation stanza to your .profile or similar.
https://stackoverflow.com//questions/43062608/how-to-update-alternatives-to-python-3-without-breaking-apt
Git status ignore line endings / identical files / windows & linux environment / dropbox / meld
How do I make
Try setting the core.autocrlf value like this: git config --global core.autocrlf true
https://stackoverflow.com//questions/20496084/git-status-ignore-line-endings-identical-files-windows-linux-environment
linux - Fast way to get image dimensions (not filesize)
I'm looking for a fast way to get the height and width of an image in pixels. It should handle at least JPG, PNG and TIFF, but the more the better. I emphasize fast because my images are quite big (up to 250 MB) and it takes soooo long to get the size with ImageMagick's identify because it obviously reads the images as a whole first.
The file command prints the dimensions for several image formats (e.g. PNG, GIF, JPEG; recent versions also PPM, WEBP), and only reads the header. The identify command (from ImageMagick) prints lots of image information for a wide variety of images. It seems to restrain itself to reading the header portion (see comments). It also uses a unified output format, which file sadly lacks. exiv2 gives you dimensions for many formats, including JPEG, TIFF, PNG, GIF and WEBP, even if no EXIF header is present. It is unclear whether it reads the whole data for that, though. See the manpage of exiv2 for all supported image formats. head -n1 will give you the dimensions for the PPM and PGM formats. For formats popular on the web, both exiv2 and identify will do the job. Depending on the use case, you may need to write your own script that combines/parses the outputs of several tools.
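If none of those tools fit, reading the header yourself is also an option. A hedged C example for PNG only (the width and height sit big-endian at byte offsets 16 and 20, inside the IHDR chunk); other formats would need their own header parsing, and a robust version would also verify the PNG signature:

#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.png\n", argv[0]);
        return 1;
    }

    unsigned char hdr[24];             /* 8-byte signature + start of IHDR */
    FILE *f = fopen(argv[1], "rb");
    if (!f || fread(hdr, 1, sizeof hdr, f) != sizeof hdr) {
        perror("read");
        return 1;
    }
    fclose(f);

    uint32_t w = ((uint32_t)hdr[16] << 24) | ((uint32_t)hdr[17] << 16)
               | ((uint32_t)hdr[18] << 8)  |  (uint32_t)hdr[19];
    uint32_t h = ((uint32_t)hdr[20] << 24) | ((uint32_t)hdr[21] << 16)
               | ((uint32_t)hdr[22] << 8)  |  (uint32_t)hdr[23];
    printf("%u x %u\n", (unsigned)w, (unsigned)h);
    return 0;
}

This reads only 24 bytes, so it stays fast no matter how large the file is.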
https://stackoverflow.com//questions/4670013/fast-way-to-get-image-dimensions-not-filesize
linux - How to remove non UTF-8 characters from text file
I have a bunch of Arabic, English, Russian files which are encoded in utf-8. Trying to process these files using a Perl script, I get this error:
This command:

iconv -f utf-8 -t utf-8 -c file.txt

will clean up your UTF-8 file, skipping all the invalid characters.

-f is the source format
-t is the target format
-c skips any invalid sequences
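For completeness, a rough C sketch of the same idea using the iconv(3) library. This is not from the answer: the file name is a placeholder, the whole input is read into memory to keep the sketch short, and skipping one byte on EILSEQ/EINVAL is what approximates the -c flag; the cleaned data goes to stdout.

#include <errno.h>
#include <iconv.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *in = fopen("file.txt", "rb");          /* placeholder file name */
    if (!in) { perror("fopen"); return 1; }
    fseek(in, 0, SEEK_END);
    long size = ftell(in);
    fseek(in, 0, SEEK_SET);

    char *buf = malloc((size_t)size);
    char *out = malloc((size_t)size);            /* UTF-8 -> UTF-8 never grows */
    if (!buf || !out || fread(buf, 1, (size_t)size, in) != (size_t)size) {
        perror("read");
        return 1;
    }
    fclose(in);

    iconv_t cd = iconv_open("UTF-8", "UTF-8");   /* -t utf-8, -f utf-8 */
    if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }

    char *ip = buf, *op = out;
    size_t ileft = (size_t)size, oleft = (size_t)size;
    while (ileft > 0) {
        if (iconv(cd, &ip, &ileft, &op, &oleft) == (size_t)-1) {
            if (errno == EILSEQ || errno == EINVAL) {
                ip++;                            /* drop the bad byte, like -c */
                ileft--;
            } else {
                perror("iconv");
                break;
            }
        }
    }
    fwrite(out, 1, (size_t)(op - out), stdout);
    iconv_close(cd);
    return 0;
}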
https://stackoverflow.com//questions/12999651/how-to-remove-non-utf-8-characters-from-text-file
java - How do I run a spring boot executable jar in a Production environment?
Spring Boot's preferred deployment method is via an executable jar file which contains Tomcat inside.
Please note that since Spring Boot 1.3.0.M1, you are able to build fully executable jars using Maven and Gradle.

For Maven, just include the following in your pom.xml:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <executable>true</executable>
    </configuration>
</plugin>

For Gradle, add the following snippet to your build.gradle:

springBoot {
    executable = true
}

The fully executable jar contains an extra script at the front of the file, which allows you to just symlink your Spring Boot jar to init.d or use a systemd script.

init.d example:

$ ln -s /var/yourapp/yourapp.jar /etc/init.d/yourapp

This allows you to start, stop and restart your application like:

$ /etc/init.d/yourapp start|stop|restart

Or use a systemd script:

[Unit]
Description=yourapp
After=syslog.target

[Service]
ExecStart=/var/yourapp/yourapp.jar
User=yourapp
WorkingDirectory=/var/yourapp
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target

More information at the following links: Installation as an init.d service, Installation as a systemd service.
https://stackoverflow.com//questions/22886083/how-do-i-run-a-spring-boot-executable-jar-in-a-production-environment
linux - Freeing up a TCP/IP port?
As the others have said, you'll have to kill all processes that are listening on that port. The easiest way to do that would be to use the fuser(1) command. For example, to see all of the processes listening for HTTP requests on port 80 (run as root or use sudo): # fuser 80/tcp If you want to kill them, then just add the -k option.
https://stackoverflow.com//questions/750604/freeing-up-a-tcp-ip-port