What can be done very easily in Linux?
Some stuff I encounter that everyone should know.
The older ones are here: old tricks
26.04.2023 19:20
Sometimes you have some variables inside a file that you want to be sourced.
For example, you have a parameters file for charts, and you only need a specific chart's parameters.
You want them sourced in the middle of a script, or anywhere else.
In that case you need to source only the output of the command that extracts these variables:
grep CHART_NAME variablesfile.txt
output
CHART_NAME=<name>
CHART_VERSION=<version>
SPECIFIC_CHART_INFO=<info>
Just put
source at the beginning of the command, inside process substitution, and you'll get them as environment variables:
source <(grep CHART_NAME variablesfile.txt)
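To see the whole flow end to end, here is a minimal sketch (the values are made up):
cat <<EOF > variablesfile.txt
CHART_NAME=mychart
UNRELATED_VAR=foo
EOF
source <(grep CHART_NAME variablesfile.txt)
echo "$CHART_NAME"   # prints: mychart, while UNRELATED_VAR stays unset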
24.02.2023 18:36
I needed to check some API of mine.
The input for this API was not my decision to make, or it would have taken a lot of time to change it.
Some of the inputs contain spaces.
So how do you run a curl command (because that's the best way to check APIs from the CLI) on a URL that contains spaces?
It's not a hard question for people who know HTML... but it's just a little tricky, so its place is on this page.
So if my server is on the local computer, open on port 8781, and I want to search for the sentence "This Is What I Search For",
Then do:
curl localhost:8781/Search/This%20Is%20What%20I%20Search%20For
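curl can also do the encoding for you. Assuming (not guaranteed by the original API) that the server accepts the term as a query parameter instead of a path segment:
curl -G --data-urlencode "q=This Is What I Search For" localhost:8781/Search
-G sends the data as a query string, and --data-urlencode percent-encodes the spaces.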
29.01.2023 - 17:32 - Sunday
I had a situation where I had a pod that was deleted pretty fast, and I had to take files from it,
and copying the directory didn't work for some reason :(
So as always,
the for loop came to the rescue!
for f in $(ls);do echo "$f:" >> aggregated;cat "$f" >> aggregated;echo endoffile >> aggregated;done
Here I am going over all the files in the directory and printing them into one file, with a specific separator, "endoffile".
The separator can obviously be whatever you want.
I copied that single file to my computer (at least I could do that...)
And then I split it back into the original files (my files were HTML; use whatever regular expression meets your needs):
for f in $(grep -E "\.html:$" aggregated | sed 's/:$//');do sed -n "/^$f:/,/^endoffile$/ p" aggregated | head -n -1 | tail -n +2 > "$f";done
If I write the needed commands before the pod is up, I just paste them, and I don't need
more than a second or two to get all the files that were in the directory.
You can use this in any situation where you need to copy a lot of files very fast.
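Another option, assuming the pod image ships tar and you still have a moment to run kubectl, is to stream the whole directory out in one command (the pod and directory names are placeholders):
kubectl exec <pod> -- tar cf - -C <dir-in-pod> . | tar xf - -C <local-dir>
The left tar archives the directory to stdout inside the pod; the right tar unpacks it locally.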
06.01.2023 - 19:30
A lot of my tricks are for saving time.
This trick is no different :)
For a task that I did, I wanted to know the capacity of the workers of 10 different clusters in OpenShift...
Seems like a tedious task.
BUT!!! it's not that tedious, and is even very very easy, if you use a command with autocompletion (Autocompletion Post) and for loops (For loops Post) together!
Let's assume you already have a command that does everything, and you just add a subcommand that connects to a cluster by number; let's call it
clusterconnectbynum
You give it a number and it connects to the cluster.
So you only need to do
poiuytrewq clusterconnectbynum <number>
When you are connected to the cluster, first you have to find all the worker nodes; let's assume their names start with
"workernode".
After you've found them, you run the right jsonpath query on each one of them to get its capacity.
So to sum it up, instead of getting into each cluster and running the command by hand, you just need to do:
for i in $(echo 1 2 3 4 5 6 7 8 9 10);do echo "$i";poiuytrewq clusterconnectbynum $i;for n in $(oc get nodes | grep "workernode" | awk '{print $1}');do echo "$n";oc get nodes $n -o jsonpath='{.status.capacity}';done;done
And again - for comes to the rescue, now as a for within a for.
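If the one-liner is hard to follow, here it is spread over lines (poiuytrewq and clusterconnectbynum are the made-up names from above):
for i in $(seq 1 10); do
  echo "cluster $i"
  poiuytrewq clusterconnectbynum "$i"   # connect to cluster number i
  for n in $(oc get nodes | grep "workernode" | awk '{print $1}'); do
    echo "$n"
    oc get nodes "$n" -o jsonpath='{.status.capacity}'
  done
done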
03.01.2023 - 17:32 - Tuesday
Very easy to do if you follow my tricks, but still something that I needed to run once to see that it works.
Here is how to open 5 terminal windows, each running its own command:
for i in $(seq 1 5);do gnome-terminal -- /bin/bash -c 'echo aaaa;read';done
02.01.2023 - 18:34 - Monday
I always wanted to have a script that writes whole files, formatted however I want, and not line by line with redirection.
Today I found the way (I actually already wrote about it once, but forgot).
cat <<EOF > file
This is going to be a file
All of this is going to be written to a file
as one file and not line by line.
In this case you don't have to write line after line with redirection as I did before
EOF
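One detail worth knowing: if you quote the delimiter, variables and command substitutions inside are written literally instead of being expanded:
cat <<'EOF' > file
$HOME is written as the literal text $HOME because the delimiter is quoted
EOF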
You can look at an example of creating a repo, with sed commands, editing, adding or removing lines with CLI commands:
link
02.01.2023 - 17:32 - Monday
Did you ever get into a situation where you wanted to grep for two lines, but one of them only by its number?
In my case I wanted to have the output of
kubectl get pods with a grep on some name, and have the headers too.
After a bit of struggling I got to this command:
kubectl get pods | awk 'NR==1 || /some_pod_name/'
In this case, you will get the first line, which contains the headers, plus the pods whose names match what you put inside the '//'.
Now think about how much more complicated it can get....
You can learn all about it in the blog post on awk.
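For example (the pod names here are hypothetical), the same idea stretches to several patterns at once:
kubectl get pods | awk 'NR==1 || /frontend/ || /backend/'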
28.12.2022 - 15:06 - Wednesday
This one needs a bit of an explanation.
What do I mean?
I had a situation where I needed to find files with certain names that contain a line that comes after another line!
And more clearly:
for f in $(find . -iname <name> -type f);do echo "$f:";sed -n '/<regexp1>/,$ p' "$f" | grep -i <regexp2>;done
• Find the specific files.
• In each one, print from the first regular expression to the end of the file.
• Then find the lines of the second regular expression - here we implement the 'only after another line' part.
• And in the end, it shows only the files that contain this pattern (a concrete sketch follows below).
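A concrete (made-up) instance: find YAML files and show image: lines that appear only after a containers: line:
for f in $(find . -iname '*.yaml' -type f);do echo "$f:";sed -n '/containers:/,$ p' "$f" | grep -i 'image:';done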
13.10.2022 - 15:06 - Thursday
This one came to me when creating my own alarm clock. I wanted to separate the hours, minutes and seconds from the same line read from an input file.
The time looks like this:
17:25:06
So I had to find a way to separate the fields and take them one by one as arguments.
The command to do that (demonstrated here with date and its year/month/day fields, but it's the same idea):
read -r year month day < <(date "+%Y:%m:%d" | awk -F':' '{print $1, $2, $3}')
This will give you 3 variables with the corresponding values for year, month and day.
You can of course use it for any command you want.
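For the time string itself there is an even shorter bash-only variant, splitting on the colons with IFS:
IFS=: read -r hours minutes seconds <<< "17:25:06"
echo "$hours $minutes $seconds"   # prints: 17 25 06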
13.10.2022 - 17:54 - Thursday
Did you ever get into a situation where you needed to back up a file many times, once every few seconds/minutes?
Who has the time or the energy to run cp <file> <file><id> each time? (<id> being some identification of this specific backup.)
You have to remember to do it and to find the easiest way to identify each backup. The most basic identifier is time.
So let's run it by hand multiple times and become a backup master! ...who only knows how to do one command :(
Instead, you can do this trick:
while true;do cp <file> backups/<file>.$(date "+%H%M%S");sleep 60;done
This command will keep creating backups at the interval you choose (60 seconds in this case), and, most usefully, each backup is identified by the time it was created.
You can run it in a new terminal, in a new pane in tmux (or any other terminal manager), or put it in the background with '&' at the end of the command.
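For example, the background version, plus stopping it later (the file name is a placeholder as above):
while true;do cp <file> backups/<file>.$(date "+%H%M%S");sleep 60;done &
kill %1   # stop the backup loop (assuming it is job %1)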
Such low effort for doing something very useful.
I have so many tricks...
The older ones are here: old tricks