Change your Unix prompt

export PS1='\n:whoami -> ${USER}@${HOSTNAME}\n:pwd -> ${PWD}\n:'

Note the single quotes: with double quotes, ${PWD} is expanded once when PS1 is set, so the prompt would keep showing the directory you were in at that moment. Single quotes let bash re-expand the variables every time the prompt is drawn.

.bashrc
# User specific aliases and functions
### export PS1='\[\033[1m\][\u@\h \w]$ \[\033[m\] '
export PS1='\n:whoami -> ${USER}@${HOSTNAME}\n:pwd -> ${PWD}\n:'
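
After sourcing .bashrc (or opening a new shell), each prompt looks roughly like this; the user, host, and directory shown are just placeholders:

:whoami -> mruckman@x_ubuntu1804_ci
:pwd -> /home/mruckman/crontab
: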

Using wget to download all links

Wget to the rescue. It's a utility for unix/linux/etc. that goes and gets stuff from the Web and FTP servers - kind of like a browser, but without actually displaying what it downloads. And since it's one of those awesomely configurable command line programs, there is very little it can't do. So I run wget, give it the URLs to those mp3 blogs, and let it scrape all the new audio files it finds. Then I have it keep doing that on a daily basis, save everything into a big directory, and have a virtual radio station of hand-filtered new music. Neat.

Here's how I do it:

wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off -w5 -i ~/mp3blogs.txt

And here's what this all means:

-r -H -l1 -np These options tell wget to download recursively. That means it goes to a URL, downloads the page there, then follows every link it finds. The -H tells the app to span domains, meaning it should follow links that point away from the blog. And the -l1 (a lowercase L with a numeral one) means to only go one level deep; that is, don't follow links on the linked site. In other words, these options work together to ensure that you don't send wget off to download the entire Web - or at least as much as will fit on your hard drive. Rather, it will take each link from your list of blogs and download it. The -np switch stands for "no parent", which instructs wget to never follow a link up to a parent directory.

We don't, however, want all the links - just those that point to audio files we haven't yet seen. Including -A.mp3 tells wget to only download files that end with the .mp3 extension. And -N turns on timestamping, which means wget won't download something with the same name unless it's newer.

To keep things clean, we'll add -nd, which makes the app save everything it finds in one directory, rather than mirroring the directory structure of linked sites. And -erobots=off tells wget to ignore the standard robots.txt files. Normally, this would be a terrible idea, since we'd want to honor the wishes of the site owner. However, since we're only grabbing one file per site, we can safely skip these and keep our directory much cleaner. Also, along the lines of good net citizenship, we'll add -w5 to wait 5 seconds between each request so as not to pound the poor blogs.

Finally, -i ~/mp3blogs.txt is a little shortcut. Typically, I'd just add a URL to the command line with wget and start the downloading. But since I wanted to visit multiple mp3 blogs, I listed their addresses in a text file (one per line) and told wget to use that as the input.
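
To make the daily run mentioned above happen automatically, the list file and a cron entry could look something like this; the blog URLs, the ~/mp3s directory, and the schedule are all placeholders:

# ~/mp3blogs.txt - one blog URL per line (examples only)
https://mp3blog-one.example.com/
https://mp3blog-two.example.org/

# crontab entry: run the scrape every morning at 6:30, saving into ~/mp3s
30 6 * * * cd $HOME/mp3s && wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off -w5 -i $HOME/mp3blogs.txt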

rsync example

Notice that the source path has a trailing slash while the target does not; this copies the contents of folder-unique into the target folder-unique, keeping the same directory structure from folder-unique down.

You will need to tar the files or copy them to a USB drive formatted as ext3/4 to preserve the file permissions.
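
If you go the tar route, something like the following preserves ownership and permission bits; the /media/usb path and archive name are just placeholders:

# create the archive; tar records ownership and permission bits inside it
tar -czf /media/usb/folder-unique.tar.gz -C /my/source folder-unique

# on the target machine, extract as root with -p so permissions (and ownership) are restored
sudo tar -xpzf /media/usb/folder-unique.tar.gz -C /my/target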

# -v verbose, -c compare by checksum, -a archive mode (recursive; preserves perms, times, owners, symlinks); --delete removes target files that no longer exist in the source
rsync -vca --delete /my/source/folder-unique/ /my/target/folder-unique

Trying a different way to preserve permissions for git - FAIL !!!

Refer: https://bobcares.com/blog/how-to-preserve-permissions-in-rsync/

# -a archive mode, -v verbose, -z compress data during transfer; --delete as above
rsync -avz --delete /my/source/folder-unique/ /my/target/folder-unique

Secure Copy scp Syntax

Refer: https://phoenixnap.com/kb/linux-scp-command

Copy Files from Remote with Wildcard

scp x_ubuntu1804_ci:"/home/mruckman/crontab/*.zip" .

Single File

scp /your/source/file-to-copy.zip  xxx@target.server.com:/tmp/file-to-copy.zip

Single File Copied from Server

scp x_ubuntu1804_ci:/home/mruckman/sos-api-deployment-analysis/server_report.xls ~/Desktop/

Recursive Copy

scp -r user@server1:/var/www/html/ /var/www/ - note: on a RHEL server this ended up creating the target folder twice (nested); see the rsync alternative after these examples

or

scp -r user@server1:/var/www/html/ user@server2:/var/www/html/ - this remote-to-remote variant is untested
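
If the doubled target folder from the first recursive example is a problem, rsync (covered above) is more predictable about trailing slashes; a sketch using the same hypothetical hosts and paths:

# trailing slash on the source means "copy the contents of html/", not the directory itself
rsync -av user@server1:/var/www/html/ /var/www/html/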

Format XML with xmllint command

Linux / Unix Command: xmllint

xmllint - command line XML tool

xmllint [ --version | --debug | --shell | --debugent | --copy | --recover
        | --noent | --noout | --htmlout | --nowrap | --valid | --postvalid
        | --dtdvalid URL | --timing | --repeat | --insert | --compress
        | --sgml | --html | --push | --memory | --nowarning | --noblanks
        | --format | --testIO | --encode encoding | --catalogs | --nocatalogs
        | --auto | --xinclude | --loaddtd | --dtdattr | --dropdtd | --stream
        | --chkregister ] [ xmlfile ]

Example:
$ xmllint --format summary.xml > ~/Desktop/summary-format.xml
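
xmllint also reads standard input when you pass - as the filename, which is handy for a quick look without writing a file; summary.xml here is just the example file from above:

xmllint --format - < summary.xml | less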

Grep file and include lines around search result

grep -A 1 -i "Search-for-something" /var/lib/jbossas/server/halprdjbs01/log/server.log

Display N lines after match

-A is the option which prints the specified N lines after the match as shown below.

Syntax:
grep -A <N> "string" FILENAME

The following example prints the matched line, along with the 3 lines after it.

$ grep -A 3 -i "example" demo_text

Display N lines before match

-B is the option which prints the specified N lines before the match.

Syntax:
grep -B <N> "string" FILENAME

Just as -A shows the N lines after a match, -B shows the lines before it.

$ grep -B 2 "single WORD" demo_text

Display N lines around match

-C prints N lines of context around the match. Sometimes you want the matching line to appear together with the lines surrounding it; this option shows N lines on each side (before and after) of the match.

$ grep -C 2 "Example" demo_text