You create the public key and private key on your laptop.
You use ssh-copy-id to upload your public key to the server while your private key remains on the laptop. Then, when you ssh, the keys will magically do their thing and let you in without a password.
You don't have to worry about "storing stuff" on the server, because this is not like storing data on the server - it's a built-in function of SSH, and the server will automatically put the key in the right place (as long as it's configured to allow it, which it almost certainly is).
Your IT dept could probably help you if you asked them "can you help me set up SSH public key authentication on Server X", but you can also do it yourself.
There are many tutorials out there - search for "ssh passwordless login", "ssh public key", etc.
https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2
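If it helps, the whole flow is basically just two commands. A minimal sketch (the hostname and username are placeholders):

```
ssh-keygen -t ed25519              # creates ~/.ssh/id_ed25519 (private) and id_ed25519.pub (public)
ssh-copy-id you@server.example.com # appends the public key to the server's authorized_keys
ssh you@server.example.com         # should now log you in without a password prompt
```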
If you're not using rsync, then you're probably looking for pv - a program that will display progress as part of a pipe.
Ah, it stands for "pipe viewer".
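For example (the filenames here are just placeholders), you drop pv into the pipe wherever you want a progress bar:

```
pv big-archive.tar.gz | tar -xzf -            # extract with a progress bar
pv /path/to/source.img > /path/to/dest.img    # plain copy, but with throughput and ETA
```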
As DaftPump said, use ImageMagick. Here is how to do what you want.
Once installed, just run `convert dragon.gif -resize 50% half_dragon.gif`. Change the filenames first, of course.
You can even nest it inside a `for file in *; do ...; done` loop if you want batch processing.
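For example, a rough batch version of the same call (this assumes the GIFs are in the current directory; the half_ prefix is just my choice):

```
for file in *.gif; do
    convert "$file" -resize 50% "half_$file"
done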
The man pages are just a reference. They are really good if you actually know the language. If you need to learn the language just read the actual manual for gawk: https://www.gnu.org/software/gawk/manual/gawk.html
Yeah, the GNU coreutils include all kinds of little programs that work well in shell scripts. It is kind of like a standard library for the Bash programming language.
You can see the full list of utility programs and their documentation here:
http://www.gnu.org/software/coreutils/manual/html_node/index.html
You may just discover a few more commands you didn't know about that may come in handy for shell scripting.
So first, my suggestion is to stop what you're doing, delete all the bowtie2 stuff you've installed, install Homebrew on your Mac, and then run `brew install bowtie2`. No point in doing all this stuff manually when Homebrew exists and your bowtie program is available on it.
That said...
For Mac, the "standard" is to put program dependencies in a directory within /usr/local, with most people going with /usr/local/opt (since that's the equivalent practice on Linux) and dumping the directories there to keep things organised. If you're using something like Homebrew or MacPorts, those package managers do the same, creating a /usr/local/Homebrew or /usr/local/macports directory to store all the package dependencies.
Then just symlink the bowtie2 executable to /usr/local/bin
    cd /usr/local
    sudo mkdir opt
    sudo chmod 0775 opt
    sudo chown $(id -u):admin opt
    mv bin/bowtie2 opt
    ln -s /usr/local/opt/bowtie2/bowtie2 bin/bowtie
Now you can run the program with bowtie
If you're doing that in a corporate environment, you might want to use the good tool for the job instead: https://www.nagios.org/ https://www.zabbix.com/
If you insist on doing it by hand, see: top(1), uptime(1), free(1), df(1).
And I hope you mean CSV, not Excel format.
In your first version you are running the first instance and then telling it to run the second after the first one quits (I don't know mdk3, so I don't know when that will happen.)
In your second version you are starting the first one into the background and then starting the second. So this works, but it makes the first instance a background process which has to be managed separately as you have seen.
You most likely will have to send these processes to the background and manage them with job control in order for things to work how you want. Here is a straightforward guide on how to manage that: https://www.digitalocean.com/community/tutorials/how-to-use-bash-s-job-control-to-manage-foreground-and-background-processes
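As a minimal sketch of the idea (the sleep commands are just stand-ins for your two mdk3 invocations, since I don't know its arguments):

```
sleep 30 &               # first instance, backgrounded
pid1=$!
sleep 45 &               # second instance, backgrounded
pid2=$!
wait "$pid1" "$pid2"     # block here until both have exited
echo "both finished"
```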
Hope that helps.
> In a tutorial on good bash scripting which I, as a bash beginner, found a bit opaque, the following line was mentioned:
That does not teach you good practices, it teaches bugs. The only thing it gets right is that you should quote properly. Try the BashGuide instead.
Looks like you're running into the same problem as this StackOverflow question. TL;DR: Make sure any adjustments to your PATH are appended to it, not prepended, so that `head` still refers to the "first lines of text" program and not the "make HTTP HEAD request" program.
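In .bashrc terms, the difference is just where the new directory lands (the directory name is made up):

```
export PATH="$HOME/mytools/bin:$PATH"   # prepended: a "head" in mytools/bin would shadow coreutils' head
export PATH="$PATH:$HOME/mytools/bin"   # appended: coreutils' head stays first in the lookup order
```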
I was looking for the same thing recently. I noticed that "Learning the bash Shell" covers this topic in Appendix C. "Loadable Built-Ins". They mention that example code can be found in the bash tarball under "bash-4.4.18/examples/loadables". Hope that helps.
Interesting.
When I needed it, I just used this: https://github.com/oblique/create_ap . (I currently see that it is unmaintained, but there is a maintained fork.) The only thing I had to do every time was this:

    echo "Disconnecting from current connection"
    ssid=$(nmcli con show --active | tail -n1 | awk '{print $1}')
    echo "Disconnecting from $ssid"
    nmcli con down id "$ssid"

    echo "Creating new ap"
    sudo create_ap wlp3s0 wlp3s0 SSID_NAME PASSWORD

    # When you are done, exit from create_ap and the next line gets executed
    echo "Stopping interface on wlp3s0"
    sudo create_ap --stop wlp3s0
Regarding
>I have an
>
>alias hotspot='hotspot.sh'
>
>so that makes even quicker to type.
You don't need .sh after the name of the script. When your shell executes a script, it just checks which interpreter it should use (the shebang on the first line of your script). The file extension doesn't do anything and, if anything, just fills your aliases with more junk.
The issue is with how you are using the variable. The syntax would be:

    $ m='/'
    $ df | awk -v mt="$m" '$6==mt{print $4}'
    2385268

Depending on your df version, you can also use:

    $ free=$(df --output=avail / | tail -n1)
    $ echo "$free"
    2385268

This allows a simple way to build an array for different mounts:
f=($(df --output=avail /mnt1 /mnt5 /mntx | tail -n +2))
See also: `for` is a horrendous term to search for in the man page.
This is a great example of how the Texinfo documentation makes things easier. You can go directly to the documentation for `for` using `info bash for`.
Do you have netcat or a telnet client?
With telnet: http://www.the-art-of-web.com/system/telnet-http11/
With nc: https://stackoverflow.com/questions/642707/scripting-an-http-header-request-with-netcat
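For example, a quick HEAD request with nc looks roughly like this (example.com is obviously a placeholder host):

```
printf 'HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80
```

With telnet you'd run `telnet example.com 80` and type the same request lines by hand.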
"git push" is only a one command for me...?
Anyway, are you perhaps reinventing the wheel? Are you trying to avoid having git ask you for a username/password during push?
Either set up your git/SSH keys properly, or use something like the built-in credential store or credential cache...
https://git-scm.com/docs/git-credential-store
I think there is even some setup for .netrc?
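For instance, either of these (pick one):

```
git config --global credential.helper 'cache --timeout=3600'   # keep credentials in memory for an hour
git config --global credential.helper store                    # or save them (plaintext) in ~/.git-credentials
```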
I used udev to change monitor configuration upon docking and undocking (using USB). This was on Linux Mint nadia.
You would probably want a very general "hook", because different USB drives have different chips/manufacturer etc.
To play a sound you will need additional hardware, AFAIK there is no system beep on the pi.
See e.g. http://superuser.com/questions/305723/using-udev-rules-to-run-a-script-on-usb-insertion
In short, UDEV detects insertion, and runs your script.
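A very general rule might look something like this (the rule file name and script paths are placeholders, and this is an untested sketch):

```
# /etc/udev/rules.d/99-dock.rules
ACTION=="add", SUBSYSTEM=="usb", RUN+="/usr/local/bin/docked.sh"
ACTION=="remove", SUBSYSTEM=="usb", RUN+="/usr/local/bin/undocked.sh"
```

Then reload the rules with `udevadm control --reload-rules`.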
Man if that works for you then of course it makes sense.
Also, fzf is your friend. It finds files recursively as you type the name, then you can select a file and it gets printed out. You can use it like this, for example: `nano $(fzf)` - then you type the part of the filename you remember, select it, and that's it. I use it to open my PDFs, since you can give it an initial query (I don't recall the flag, I'm not on my PC now), so if you give it something like `.pdf$` it will query for all files ending with pdf.
It's maybe confusing to describe how it works, but once you try it you'll get it right away.
Also it's a very hackable program, you can do a lot of crazy things with it.
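If memory serves, the initial-query flag is -q/--query, so the PDF trick would be something like:

```
nano "$(fzf -q '.pdf$')"    # start fzf with ".pdf$" already typed in as the query
```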
> I've already read that grep is limited to a certain line length,
Not the GNU version. The manual specifically says that "it has no limits on input line length other than available memory".
GNU/awk has a FIELDWIDTHS option that lets you specify field splitting with an array of column numbers.
www.gnu.org/software/gawk/manual/gawk.html#Fixed-width-data
Edit: Usage Note: Field splitting takes place before your user code gets to see the line. So (like FS) setting FIELDWIDTHS in the line action is too late. It needs to be set in a BEGIN { .. } block.
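A quick illustration (the widths and the file name are made up):

```
# Treat each line as a 10-char name, a 4-char code and an 8-char amount, then print the code field
gawk 'BEGIN { FIELDWIDTHS = "10 4 8" } { print $2 }' fixed-width.txt
```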
A variable declared with `declare -n name` is a nameref: `name` acts as a reference to whatever variable its value names.
If you need to distinguish between no arguments and an empty string, you should check `$#` to see how many arguments were passed. Technically there is a difference between unset and empty, but in most cases they behave the same (see https://stackoverflow.com/a/12263914).
You might need to use `[ ... ]` if you needed your code to run in other sh-compatible shells, but if you only need to support bash, then always use `[[ ... ]]` and `(( ... ))`. For `[[ ... ]]`, the `-eq`, `-ne`, `-lt`, etc. operators are the only ones meant to work with integers. The others (`==`, `<`, etc.) are for working with strings (and it turns out that `>=` doesn't actually exist). However, you should just always use `(( ... ))` when working with integers.
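Putting that together, a small sketch (the option name is arbitrary):

```
if (( $# == 0 )); then            # arithmetic test: how many args did we get?
    echo "no arguments given" >&2
    exit 1
fi

if [[ $1 == --help ]]; then       # string comparison belongs in [[ ... ]]
    echo "usage: myscript [args...]"
fi

if (( $# > 3 )); then             # integer comparison belongs in (( ... ))
    echo "more than three arguments"
fi
```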
Try this in .xinitrc:
xpdf $WHATEVER &
exec sleep infinity
Which should fix your problem of exiting. If that fails, try replacing the "sleep infinity" with something else from here; probably cat.
You can use tar in place, which is a little complex, or rsync on the local system instead of a local/remote pair.
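For the local rsync variant, something like this (paths are placeholders; --info=progress2 needs a reasonably recent rsync, otherwise fall back to --progress):

```
rsync -ah --info=progress2 /path/to/source/ /path/to/destination/
```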
> It already does some file inspection, and mime-type and other file attributes are real too!
But they aren't! The MIME type of a file is not an attribute of the file. `file` just looks at the file content for a bit, guesses what it looks like, and prints that. Nothing more. And this can be a fragile and even insecure process.
Files can have extended attributes (see attr(5)), and I suppose it would be possible to store the MIME type there, but I'm not aware of any application doing that.
The thing with pgrep/pkill is that it'll return more than one pid if there are multiple processes that match what you're looking for. You can add the `-f` option (I'll let you look that up in `man pgrep` yourself) to reduce it down a bit, but you should be careful & make sure you match only one process. Alternatively (and safer too really), make your script drop pid files (as stated by /u/mfvo) within the application so you know exactly which one you're killing. Just make sure no one can overwrite the pid files.
That said, if you want to test pkill & what it'd actually kill, just run the same command with pgrep instead. That'll output the matching pids.
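For example (the script name is hypothetical; -a/--list-full is the procps pgrep, BSD variants differ):

```
pgrep -af 'my_script.sh'   # preview: prints PIDs plus the full command line that matched
pkill -f 'my_script.sh'    # only run this once you're sure the match is unique
```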
You already have your answer & I admit, this post isn't exactly helping with that particular problem, however...
> I don't currently have access to a linux machine right now...
I encourage you to download & install VirtualBox. It's free, multi-platform & VMs are a great resource for tasks like this.
Add .rss after the URL, e.g. https://www.reddit.com/r/investing/comments/9smwrx/i_read_the_news_so_you_dont_have_to_market_news/.rss . Use that as the basis of what you want to do. https://newsboat.org/ might be an option to handle it.
http://www.gnu.org/software/coreutils/faq/coreutils-faq.html#Value-too-large-for-defined-data-type
That link would seem to support the large file issue. You could recompile find maybe? I'd say the real solution is upgrading. Debian 3.1 stopped getting security updates almost ten years ago.
First google result for "bash autocomplete git"
https://git-scm.com/book/en/v1/Git-Basics-Tips-and-Tricks
Related, here's a section of my .bashrc
    ################################################################################
    # Programmable Completion (Tab Completion)

    # Enable programmable completion features (you don't need to enable
    # this if it's already enabled in /etc/bash.bashrc and /etc/profile
    # sources /etc/bash.bashrc).
    if ! shopt -oq posix; then
        if [ -f /usr/share/bash-completion/bash_completion ]; then
            . /usr/share/bash-completion/bash_completion
        elif [ -f /etc/bash_completion ]; then
            . /etc/bash_completion
        fi
    fi

    # Fix 'cd' tab completion
    complete -d cd

    # SSH auto-completion based on ~/.ssh/config.
    if [[ -e ~/.ssh/config ]]; then
        complete -o "default" -W "$(grep "^Host" ~/.ssh/config | grep -v "[?*]" | cut -d " " -f2- | tr ' ' '\n')" scp sftp ssh
    fi

    # SSH auto-completion based on ~/.ssh/known_hosts.
    if [[ -e ~/.ssh/known_hosts ]]; then
        complete -o "default" -W "$(cut -f 1 -d ' ' ~/.ssh/known_hosts | sed -e s/,.*//g | uniq | grep -v "\[" | tr ' ' '\n')" scp sftp ssh
    fi
Here's a cleaner approach, which you can build on:
Excerpt from `man 5 os-release`:
> The file /etc/os-release takes precedence over /usr/lib/os-release. Applications should check for the former, and exclusively use its data if it exists, and only fall back to /usr/lib/os-release if it is missing.
(full text: https://man7.org/linux/man-pages/man5/os-release.5.html)
So we can check both locations.
os-release is deliberately written in such a way that we can source it, and use the variables (like `$ID`) in our current environment (the script).
When parsing `$ID`, `case` makes more sense than multiple `elif`s.
Also, when scripting, use `apt-get` instead of `apt`. It's a more stable choice for scripts.
    #!/bin/bash

    if [ -e /etc/os-release ]; then
        . /etc/os-release
    elif [ -e /usr/lib/os-release ]; then
        . /usr/lib/os-release
    else
        echo 'os-release file not found' >&2
        exit 1
    fi

    case $ID in
        centos) pkmgr=yum ;;
        debian|ubuntu) pkmgr=apt-get ;;
        '') echo 'os-release: OS ID not specified' >&2 ;;
        *) echo "$ID: OS not supported" >&2 ;;
    esac
The script above should achieve what you're trying to do. But IMO, consider whether it's more appropriate to test for an available package manager, rather than for the distro.
Here is a quick example for that approach:
    for pkmgr in apt-get yum; do
        command -v "$pkmgr" >/dev/null && break
    done

    if ! command -v "$pkmgr" >/dev/null; then
        echo 'No supported package manager found' >&2
        exit 1
    fi
echo "$pkmgr is the package manager detected"
Finally, if you're doing OS detection in bash, have a read of the neofetch bash script:
https://github.com/dylanaraps/neofetch/blob/master/neofetch
If this is a personal project, and something you only want to be able to run for yourself on Linux, then the /etc/os-release file is written in such a way that you can source it and use the definitions in the file as variable names.
You can try this in your terminal, first try running:
echo "$NAME"
echo "$LOGO"
echo "$BUILD_ID"
Nothing will print for any of these. However if you then run:
source /etc/os-release
Then run those same echo commands again:
echo "$NAME"
echo "$LOGO"
echo "$BUILD_ID"
You'll see it now prints the information in the file. This means that in your script you can just source the file, and use the variables as though you had defined them yourself.
If you want this to be more portable and used by others, you'll have to look at how it's done in, for example, `neofetch` or the POSIX `pfetch`.
Thanks for the feedback!
The client ID is actually Livestreamer's client ID I found here https://github.com/chrippa/livestreamer/issues/1478
Added info to select the quality, it should be more obvious now.
What, ultimately, are you wanting to do? To manually scroll through the output?
To directly answer your question, you may like to investigate the stty command e.g. stty -ixoff and -ixon, there might be something for you down that google path.
If you want to manually scroll through the output, though, why not do what everybody else does and pipe it to `more` or `less`?
If it was a one-off script, you could do that by adding this to the very start of the script:
(
and this to the very end:
) | less
You may also like to read these:
http://unix.stackexchange.com/questions/40242/scroll-inside-screen-or-pause-output
http://serverfault.com/questions/31845/is-there-a-way-to-configure-bash-to-always-page-output
There are alternative (but Docker-compatible) container engines in the works that are designed to be much more secure. I'm not sure how feature-complete they are, and they definitely don't have the mindshare that Docker does, but they exist.
For example, podman lets you run containers as a normal user, so the traditional user-ID-based file restrictions apply to any containers you launch. rkt does something similar too.
Alternatively, this is POSIX sed and will remove multiple spaces after the headings as well.
sed '/^#/{s/#*/& /;s/# */# /}'
I am currently reading the "Linux Command Line and Shell Scripting Bible" (https://www.amazon.de/dp/1119700914/ref=cm_sw_r_cp_apa_glt_i_GXC6NTMKQJCWXYWT1NXC) and I can really recommend it.
Very well written and understandable.
arr=('MMM' 'AXP' 'T' 'BA' 'CAT' 'CVX' 'CSCO' 'DD' 'XOM' 'GE' 'GS' 'HD' 'INTC' 'IBM' 'JNJ' 'JPM' 'MCD' 'MRK' 'MSFT' 'NKE' 'PFE' 'PG' 'KO' 'TRV' 'UTX' 'UNH' 'VZ' 'V' 'WMT' 'DIS')
stringOne="http://finance.yahoo.com/q/hp?s="
for i in ${arr[@]}
do
stringTwo=${stringOne}${i}
wget ${stringTwo}
done
Still getting the same "Syntax error: "(" unexpected". It seems to really dislike the left parenthesis.
arr=(MMM AXP T BA CAT CVX CSCO DD XOM GE GS HD INTC IBM JNJ JPM MCD MRK MSFT NKE PFE PG KO TRV UTX UNH VZ V WMT DIS)
stringOne="http://finance.yahoo.com/q/hp?s="
for i in ${arr[@]}
do
stringTwo=$stringOne$i
wget stringTwo
done
Here is my updated code, still throwing up the same error.
Did you put quotes around each entry into the array, as in
arr=("MMM" "AXP" "T" .... etc.?
Or, only to the string variables, such as
stringOne="http://finance.yahoo.com/q/hp?s=" stringTwo="$stringOne$i"
?
use four spaces instead of quotation to get blockcode:
arr=( MMM AXP T BA CAT CVX CSCO DD XOM GE GS HD INTC IBM JNJ JPM MCD MRK MSFT NKE PFE PG KO TRV UTX UNH VZ V WMT DIS )
for i in ${arr[@]}
do
stringOne=http://finance.yahoo.com/q/hp?s=
stringTwo=$stringOne$i
wget stringTwo
done
and if I run that I get a different error - please edit your post so we can see what you're actually running.
Looks like you might do well to set up a post-merge
githook. It should do exactly what you want - that is, run some command after pulling remote changes.
https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
If not, comparing program output isn't difficult:
for gd in ~/github/ares ~/github/dhewm3; do
cd "$gd"
gitsays="$( git pull )"
if [ "$gitsays" != "Already up to date" ]; then
eval $(basename $PWD)-update
fi
done
You might look into the outputs of `git fetch` or `git status` if "git pull" doesn't do it for you.
I know this is /r/bash - but since you mentioned this is on Windows.
If you have Git installed on the machine, this same process can be done pretty easily via PowerShell (or even Batch) if you wanted to take the Bash part of it out of the picture.
Here is a PowerShell one-liner that will do basically the same thing:
if (!($null -eq (cmd.exe /c 'git status -s'))) {git add -A;git commit -m "Auto-sync: $(get-date -f MM-dd-yyyy_HH:mm)";git pull --rebase;git push}
Can also be adapted into a PS Script or even batch script if needed.
Anyway - just a thought.
The most correct way to detect button presses is with events. Here's a library that appears to support them, but this might be more than you're prepared to get into right now:
https://sourceforge.net/p/raspberry-gpio-python/wiki/Inputs/
VPS = server. Your computer = client.
Try the following to get your public IP; if you can't hit a public IP, you can't access the internet.
curl ifconfig.me
Most home firewalls will close all outgoing ports by default; so you might want to open one (in this example 1234). There's a similar explanation on this page.
I think what you have is a server you have no access to, and you want to push a key to it? If that's the case, running commands on it is likely to be difficult; a basic webpage is probably the best route (perhaps GitHub? Or, as you've said, Pastebin).
Why doesn't this work if I type it out in the terminal line by line and hit enter
(using a mac os terminal)
```
read email
http="https://haveibeenpwned.com/api/v2/breachedaccount/$email"
curl -s -A "pwnedornot" $http > pwnd.txt
cat pwnd.txt
```
I'm assuming it's something to do with not using backticks when concatenating email to the end of that http string.
I want to start making functional stuff so help us out pls <3
Thanks for going the extra mile with explaining parameters. Just curious if there are any caveats in using `-P$(nproc)`... could there be a chance of locking up other processes if I'm running this script simultaneously with other stuff?
EDIT: I think `-i` is deprecated: http://superuser.com/a/132284
OP, I highly recommend investing in something like this:
https://www.amazon.com/bash-Cookbook-Solutions-Examples-Users/dp/1491975334
Also search for " O'reilly bash " and look up the GNU reference for bash
Are you looking for something other than a spreadsheet program?
I usually use gnumeric as my open-source spreadsheet program when I'm not being picky about formatting and formula compatibility with Excel (which, in a CSV, you wouldn't be). When I do care about those things, I use LibreOffice Calc, but I'd hardly call that "small".
gnumeric also comes with the command-line program `ssconvert` for converting among different spreadsheet formats, including generating LaTeX or PDF output.
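Typical usage is just input and output file names, with the formats inferred from the extensions (file names made up):

```
ssconvert report.csv report.xlsx   # CSV -> Excel
ssconvert report.csv report.pdf    # CSV -> PDF
```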
Unfortunately, there are no longer official Windows builds of gnumeric, though I hear it's possible to build yourself under msys2 (and probably wsl?)
If you like learning with books, the bash scripting bible takes you from 0 to advanced in small, digestible, and practical steps. Hands-down the best bash resource I've ever come across.
Knowing at least vim is almost essential these days, although I got by without it for years.
Further reading - bash cookbook:
https://www.amazon.com/bash-Cookbook-Solutions-Examples-Cookbooks/dp/0596526784
You have to reload the init file. You can do that either by logging out and back in, restarting, or, if you're lucky (I can't get this to work for me...), pressing C-x, letting go, then C-r. (Try C-x C-r without letting go of Control if that doesn't work.)
http://superuser.com/questions/241187/how-do-i-reload-inputrc
I believe it's the base class for streams. The following links may be helpful:
http://www.cplusplus.com/reference/ios/ios/ https://stackoverflow.com/questions/26993086/what-the-point-of-using-stdios-basebinary
>Generally, the Linux distribution that you don't identify won't allow the latest version of an application to be installed if that would cause a library or driver conflict.
No, that's not the problem - sorry if I explained it badly. The program works.
The thing is that instead of:
wget https://protonmail.com/download/protonmail-bridge-1.2.6-1.x86_64.rpm
sudo dnf -y install protonmail-bridge-1.2.6-1.x86_64.rpm
I would prefer something like:
PMB=$(this is what I need)
wget https://protonmail.com/download/protonmail-bridge-${PMB}.x86_64.rpm
sudo dnf -y install protonmail-bridge-${PMB}.x86_64.rpm
I'm able to do that with other stuff, but not with this one.
Good on you for trying to write it yourself!
I found it worked out of the box when installing using the instructions from the fzf repository.
https://github.com/junegunn/fzf#key-bindings-for-command-line
As others have said, you want `ssh-keygen` and `ssh-copy-id`^*.
ssh-keygen — authentication key generation, management and conversion
ssh-copy-id — use locally available keys to authorise logins on a remote machine
Here is a decent walk through the use of ssh keys. https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2
^* Nobody had suggested ssh-copy-id yet.
Here's a link for remote execution via ssh: http://www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/
searching for "color" in the article revealed:
> ;; Add color to terminal
> ;; (this is all commented out as I use Mac Terminal Profiles)
> ;; from http://osxdaily.com/2012/02/21/add-color-to-the-terminal-in-mac-os-x/
> ;; ------------------------------------------------------------
> ;; export CLICOLOR=1
> ;; export LSCOLORS=ExFxBxDxCxegedabagacad
(edit: replaced the # for ;; as comments or they were formatted as headers)
The osxdaily article goes into details about the color setup.
BUT since it's mainly for the ls command, there might be something else.
Also, the author seems to use nano, but I would not go as far as saying it's a text editor color scheme.
On top of that, the color scheme we see in the article is text (those are not screenshots), so there might be some specific formatting.
Why not send an email to the author directly?
I think I was a little unclear above: under normal circumstances there is no way to modify a parent shell's environment from within a child process, and subshells and executed scripts are child processes. `eval` doesn't do anything special that would violate that; it simply takes an expression, performs parameter expansion, and then runs it as a command.
I guess you could write a loop to read each line of your script and eval it, but that doesn't buy you anything over just calling `source fooscript`, since you (a) have to write the read loop and (b) still have to call it somehow, probably with an alias or function.
I like /u/Schreq's solution below, that'll do what you want without aliases.
Personally I'd probably use an alias or a function to avoid jumping through all of these extra hoops.
Edit: If I were doing this I'd write it as a function, drop it in `~/.dotfiles/bash/prep_env.sh`, and source `~/.dotfiles/bash/*` from my bashrc. That keeps it as a separate script that's easy to copy from machine to machine and also avoids all of the weird workarounds.
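Roughly like this (the variable names and paths inside the function are placeholders):

```
# ~/.dotfiles/bash/prep_env.sh
prep_env() {
    export PROJECT_ROOT=~/work/myproject
    export PATH="$PROJECT_ROOT/bin:$PATH"
    cd "$PROJECT_ROOT" || return
}
```

and in ~/.bashrc:

```
for f in ~/.dotfiles/bash/*; do
    [ -r "$f" ] && . "$f"
done
```

Then you just type `prep_env` in any shell and it runs in that shell, so the environment changes stick.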
Cygwin maps your actual Windows paths to UNIX-like paths.
https://cygwin.com/cygwin-ug-net/using.html#using-pathnames
There's no actual Windows path associated with // (which is really just the same as /, FYI), so you can't start Windows apps there.
From what I recall, `dash` comes closest to "POSIX, the whole POSIX, and nothing but the POSIX", while being "readily available to the general public" by some definition (read: at least Ubuntu/Debian). It makes some concessions to the Debian scripts policy, but I think it's the closest you can get to a practical POSIX script testbed.
Numbers don't "turn into" scientific numbers. That is an artefact of the printf default format when you display them. If you allow them more precision, you will get more digits.
Awk stores all numbers as IEEE 64-bit (double) precision. Typically, the default output format shows you no more than six significant digits.
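You can see this by changing OFMT (awk's default output format for numbers is "%.6g"):

```
awk 'BEGIN { x = 1234567.891; print x; OFMT = "%.12g"; print x }'
# 1.23457e+06
# 1234567.891
```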
GNU/awk does have an inbuilt high-precision library (it is a compile-time option, so check your gawk has it). Take a look at:
https://www.gnu.org/software/gawk/manual/gawk.html#MPFR-features
You can also create extension packs for VSCode as explained here. Then, you can install sets of extensions as needed with
$ code --install-extension <ext>
Yes: replace `&&` with `;`.
Seriously now, if you want to automate a marginally more complicated task...
If you just want an easy interactive experience, use an interactive program. IIRC nnn has excellent renaming features.
/u/where_Am_I77, others have already covered the stuff I would have, so I'll share instead the general stuff that is far more important than a thousand bash tricks: `case` the same values you check at the global level, just to do something for puppet but not master. my_start_script_1.1
You're welcome, I hope it helps. You can use "&&" to start a new process if the previous one finished successfully (exit status 0); ";" will run the new process regardless of exit status; "||" will run if the exit status is non-zero, which usually (but not always ;)) indicates something went wrong. See https://stackoverflow.com/questions/4510640/command-line-what-is-the-purpose-of
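A few throwaway examples (somecommand is just a placeholder):

```
mkdir -p /tmp/demo && cd /tmp/demo                   # cd runs only if mkdir succeeded
somecommand ; echo "finished, one way or another"    # echo runs regardless of the exit status
grep -q root /etc/passwd || echo "no root entry"     # echo runs only if grep failed
```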
I want to prevent the copy if the md5sum of the source file matches that of any of the files in the target directory. I don't plan on storing anything else in that directory. I really find it unlikely that I'll ever need to go back more than 2 backups; having 5 is just for extra security. My usage patterns of the application whose data I'm backing up also mean that it will very rarely match any of the files except the most recent. I suppose I could try to check only the most recent file, but that sounds harder.
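Something along these lines is what I have in mind (paths are placeholders, and I haven't tested this):

```
src=/path/to/current_data.db
backup_dir=/path/to/backups

sum=$(md5sum "$src" | awk '{print $1}')
if md5sum "$backup_dir"/* 2>/dev/null | awk '{print $1}' | grep -qxF "$sum"; then
    echo "identical backup already present, skipping copy"
else
    cp "$src" "$backup_dir/$(date +%F_%H%M%S)_$(basename "$src")"
fi
```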
And you caught me. I didn't write the "dance" as you put it myself, I copied it from here (I wrote everything else though).
Hacker Rank has a good path for shell
https://www.hackerrank.com/domains/shell
Not focused on automation, but on common shell commands like awk, sed, sort, paste... the last exercise is a joke (I guess)
Actually, I just stumbled onto HackerRank. The Linux Shell challenges are very similar to CMD Challenge, and so far I like their built-in editor way more.
I'm not that great at bash, but I didn't like tldp either. I've been using this to get the basics: http://www.tutorialspoint.com/unix/ Is it any good?
I haven't been using the man pages, like I probably should. Especially since I prefer more reference-like guides. Navigating between man pages just feels so awkward for me. Vim help is better but that also feels weird, like searching topics on multiple pages. Any tips for using man pages in general?
I was thinking of trying to get more comfortable with http://zsh.sourceforge.net/Guide/zshguide.html since I use zsh and I'd like to learn tips for both.
I can't follow the Japanese content, but for Bash, the simplest approach for you would be to use "sed". Like this:
newstring=$(echo "$oldstring" | sed 's/find this/replace with this/')
This is just a way to get the basic idea across, it's not actually how you would want to use "sed" in your own script.
"sed" can do regular expressions, which may prove useful in the more complex replacement tasks you have.
You know, since you already have JavaScript code that works, why not write something using Node.js to take advantage of this fact? Node.js does everything in JavaScript. This would be faster and more productive than trying to convert the code to a Bash version.
A quick google search turned up the following which looks similar to what you appear to be wanting:
http://www.webupd8.org/2009/05/ubuntu-embed-terminal-into-you-desktop.html
A perhaps simpler way to achieve something similar would be to use the i3 window manager and set all applications except the terminal to "floating". i3 always places floating windows above tiling windows. I cannot vouch for how well that would work though as I have not tried it.
It's a bit more than you asked for, but this may help:
nvim "$(whereis "my-script" | sed 's/^[^:]*: //' | tr ' ' '\n' | fzf -1)"
You may need to install fzf. If `whereis my-script` results in a single finding, nvim opens my-script directly; otherwise you'll get an interactive menu to choose from.
It's actually been around for a while but only resumed active development last year. GNU has a lot of userspace tools that were all designed with the Unix philosophy in mind, which has been mostly forgotten these days with monolithic systems (systemd) taking over many areas.
It is unfortunate that so many of the recent generations know absolutely nothing about all but the most popular of these tools, most of which have a CLI that was designed specifically for Bash scripting.
This is literally the first result when searching for "exit code 2" and it has the answer: http://tldp.org/LDP/abs/html/exitcodes.html
Did you verify that the code runs locally first? Something seems to be wrong with it, and it looks messy so you need to clean it if you want better help. Use markdown to make your code readable: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#code
That's great to hear! Actually it is true that if the blog has adjusted the way you use bash, then programming in the Oil language will be natural :) The Oil language is very much based on my (effective) use/abuse of bash.
Two more examples I want to elaborate on are the `$0` Dispatch Pattern and the PPPT Pattern:
http://www.oilshell.org/blog/2021/07/blog-backlog-1.html#shell-programming-patterns
Here's a concrete example of why it's powerful: you can recursively use shell with xargs, rather than being constrained to xargs' weird mini-language:
https://lobste.rs/s/wlqveb/xargs_considered_harmful#c_7cax3s (needs to be a blog post)
I'd do this completely differently. There's a tool called `dateseq` that can do part of this for you, which you might want to investigate.
Otherwise, assuming you have GNU `date`, you could write yourself a function along the lines of this:
    # Generate a list of all days this year with random timestamps between 1 and 6am
    git::rand_commit_timestamp() {
        local year stdout_date next_year rand_hour
        for year in "${@:-$(date +'%Y')}"; do
            stdout_date="${year}-01-01"
            next_year="$(( ++year ))"
            until [[ "${stdout_date}" = "${next_year}-01-01" ]]; do
                (( rand_hour = RANDOM % 6 + 1 ))
                printf -- '%s\n' "${stdout_date} ${rand_hour}:00:00"
                stdout_date="$(date -d "${stdout_date} +1 day" +'%Y-%m-%d')"
            done
        done
    }
This churns through all the days within a sequence of years (e.g. `git::rand_commit_timestamp 2020 2022`), or without args it prints out all the dates for the current year. It loops through day by day, simply adding a day to the last printed value. This has the advantage of letting `date` deal with the number-of-days-in-a-month problem. And we throw your random hour requirement into the output as well.
Then, if you just want the days for a particular month or months, you can simply `grep` those out, e.g.
    ▓▒░$ git::rand_commit_timestamp | grep -E -- "-09-|-10-"
    2020-09-01 2:00:00
    2020-09-02 1:00:00
    2020-09-03 1:00:00
    2020-09-04 5:00:00
    2020-09-05 4:00:00
    2020-09-06 1:00:00
    2020-09-07 4:00:00
    2020-09-08 4:00:00
    2020-09-09 3:00:00
    2020-09-10 5:00:00
    ...
Adjusting that for multiple commits per day wouldn't be much extra work.
Well, the sloppy way I usually did it was just to go ahead and run `ssh-add`, and if I saw the "Error connecting to agent" message then I'd run `eval $(ssh-agent)` and try `ssh-add` again.
But then I started using keychain and stopped thinking about ssh-agent.
https://raw.githubusercontent.com/funtoo/keychain/master/img/keychain-1.png
It's a crapload of instructions at their home page, but if you already have your keys created, all that's left is to do `[yum or apt or brew] install keychain`, put `eval $(keychain --eval)` in your .bashrc file, log in again, and test it with ssh-add.
> Is there a way to combine & and >> here?
I might be misinterpreting what you want, but have you tried [tee](http://www.gnu.org/software/coreutils/manual/html_node/tee-invocation.html)?
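For example, something along these lines (the log file name is arbitrary):

```
long_running_command 2>&1 | tee -a output.log   # watch the output live while appending it to the log
```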
I think it's a quite readable font where it's not hard to tell ones and ells apart: screenshot
My browser renders this as Liberation Mono. Yours as well?
This is likely overly advanced for your use-case, but you can see how Neofetch does it as a reference
> $ v="("
> $ [ $v = ")" ] && echo t
> t # wrong!
Looks like you found a bug in zsh. I'll report it to the zsh-workers list.
By the way, this really belongs in /r/zsh instead.
edit: A patch has already been made: http://www.zsh.org/mla/workers/2015/msg03276.html
I'm a huge advocate of tmux. It's high energy to start, but entirely customizable and so bonkers powerful that it makes window management optional. I refuse to work without access to the combination of tmux and vim.
Going back to that other thing about Apple's default terminal program and "marks" not working on your OS X version: I think there's a third-party terminal program for OS X that also has the same feature. I think it was this here:
Bash is quite consistent in my experience. However I'd also suggest installing Bash via Homebrew like another commenter suggested, because macOS's version is older.
Something slightly related that I would warn you about are utilities like `sed`, `grep`, and `find`. There's a pretty significant difference in how flags work between the GNU version (usually on Linux) and the BSD version (on Mac). For example, one issue that I often run into is with `sed -i`. You can install the GNU versions via Homebrew; for `sed` the package is called `gnu-sed`. Here's more info.
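For example, the in-place edit flag differs (s/foo/bar/ is just a dummy substitution):

```
sed -i 's/foo/bar/' file.txt      # GNU sed (Linux, or Homebrew's gnu-sed as gsed)
sed -i '' 's/foo/bar/' file.txt   # BSD sed (stock macOS) wants an explicit backup suffix, '' for none
```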
Based on other comments, you're trying to install PHP Composer on macOS.
I strongly suggest you install and use the Homebrew package manager instead, then use that to install and update Composer (and a more recent PHP, for that matter). That way, you don't have to use root at all, and can keep all your installed stuff up-to-date with a single brew upgrade
command.
Edit your .nanorc, or create it, with:
nano ~/.nanorc
The line you want is `set linenumbers`, but here are a couple of useful settings I have set up.
Alt+N will toggle line numbers if you haven't set up a .nanorc
set tabsize 4
set tabstospaces
set autoindent
set smooth
set morespace
set linenumbers
Here is a nice online man page for nano
I've used Nano for nearly 20 years and I find it to be just right for me. You will find lots of 'l33t' *nix folk who say to just use Vim or Emacs, but ignore them and use whatever you feel comfortable with.
What you’re saying is true for Unix systems in general, but on Linux, when registering additional executable formats, you can also match on the file name extension instead of a magic string inside the file.
Sorry for the delay, work got me this time and I was just able to come back to this right now...
https://asciinema.org/a/JMuB1AJKbrAgFsvHNLmAinePJ
I recorded the commands that I executed and all the behaviors, as I wasn't able to just copy the terminal.
This is me trying to do the latter. You can see that my mailcap folder is full of other stuff I tried, too. There might be something wrong with the code, not sure. https://asciinema.org/a/GNLsQegSnapN2i1XkFYkeAy0k
I've always had great luck using Asciinema (and it seems to be the de facto standard). The resulting file when you save is just a text file that you can easily manipulate to edit parts out/speed things up/etc.
mp3splt is a good way to do things like this, so it's something you might want to look into in the future. It can do cool things like automatically cut a longer file into separate files based on where it detects silence. I wrote an `ffmpeg`-based bash script that worked similar to /u/Schreq's answer, but I don't really use it anymore.
Looking at your source code now, I see! Yeah it is kind of funny that you went to the trouble of making the waveform in Bash but wrote a C program to mix it.
Another strategy you could use: output MIDI to Timidity.
If you're going to go the route of using native code for mixing, I recommend you check out Sox (you can probably install it using your package manager), it has command line options for both generating wave forms and for mixing. If I were going to write a sequencer in Bash, it's the method I'd use.
To mix waves together, you need to sum them together into an array of integers before dumping it to the WAV stream. So declare an array and fill it with samples (although this might be a bit slow in Bash):
    declare -a BUFFER=()

    sum_to_buffer() {
        let FREQ="${1}"
        shift
        let PW="${SAMPLE_RATE} / ${FREQ}"
        let HALF_PW="${PW} / 2"

        # This generates a sawtooth wave, rather than a square wave.
        for (( i = 0; i < $HALF_PW; i++ )); do
            let BUFFER[$i]="((${i} * 255) / ${PW}) + BUFFER[$i]"
        done
        for (( i = $HALF_PW; i < $PW; i++ )); do
            let BUFFER[$i]="((${i} * 255) / ${PW}) + BUFFER[$i]"
        done
    }

    buffer_as_Cstring() {
        # First format a string containing the content of the buffer as a C-string:
        for i in "${BUFFER[@]}"; do
            printf '\x%X' "${i}"
        done
    }

    dump_buffer() {
        printf "$(buffer_as_Cstring)"
    }
A translate-shell wrapper that fetches the selected text and outputs its translation into a text window. I've seen some articles on how to do it, but they were all buggy, so I made a proper version, which some of my coworkers had asked for.
BTW, translate-shell by itself is a mix of bash and awk.
>nextcloud server
If you're already running a Nextcloud server, you have the resources to run a lightweight internal DHCP/DNS server like dnsmasq.
It's so lightweight that it can be run on off-the-shelf home routers (that have very little RAM and storage) using third-party firmware like DD-WRT, but it is still powerful enough to assign static IPs via DHCP (matching each node's MAC address against an allocation list) and "mask" whatever domain names you want with internal IP addresses. No messing with individual /etc/hosts files required, and it is largely a set-and-forget operation.
See the dnsmasq entry on ArchWiki for more ideas on what can be done with this one simple tool.
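To give a flavour, a minimal config might look like this (all the addresses, the MAC, and the domain are made-up values):

```
# /etc/dnsmasq.conf
dhcp-range=192.168.1.100,192.168.1.200,12h    # pool for ordinary clients
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.10      # static lease for a known MAC
address=/cloud.home.lan/192.168.1.10          # resolve an internal name to that box
```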
I have the following script in a cron on my router:
#!/bin/bash
pcheck() {
local phost=$1
local phck=$2
ping -c1 $phost && curl -fsS --retry 3 https://hc-ping.com/$phck > /dev/null
}
pcheck google.nl health-check-id
pcheck 192.168.1.1 another-health-check-id
If that curl is 15 minutes delayed I get a notification through https://healthchecks.io/