Tag Archives | scripts

Exiting/Terminating Python scripts (Simple Examples)

Today, we’ll be diving into the topic of exiting/terminating Python scripts! Before we get started, you should have a basic understanding of what Python is and some basic knowledge of how it is used. You can use the IDE of your choice, but this time I’ll use Microsoft’s Windows Subsystem for Linux (WSL); for more information on enabling it on Windows 10, see Microsoft’s documentation. When Python executes a script, it runs it line by line: importing dependencies, reading function and class definitions into memory, and executing code in order, allowing for loops and calls back to those functions and classes. When the Python interpreter reaches the end of the file (EOF), it notices that it can’t read any more data from the source, whether that is the user’s input through an IDE or a file being read. To demonstrate, let’s try to get user input and interrupt the interpreter in the middle of execution!

Why does Python automatically exit a script when it’s done?

First, from the bash terminal running inside your PowerShell window, open a new file called “input.py”

nano input.py

Then paste the following into the shell by right-clicking on the PowerShell window

name=input("Don't type anything!\n")

print("Hi,",name,"!")

Now, press CTRL+X to save and exit the nano window and in your shell type:

python3 input.py
Don't type anything!

And press CTRL+D to terminate the program while it’s waiting for user input

Traceback (most recent call last):
  File "input.py", line 1, in <module>
    name=input("Don't type anything!\n")
EOFError

The EOFError exception tells us that the Python interpreter hit the end of file (EOF) condition before it finished executing the code, as the user entered no input data.

When Python reaches the EOF condition while it has also executed all of the code, it exits without raising any exceptions. This is one way Python can exit “gracefully.”
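
We can confirm that a normal exit reports success by checking the interpreter’s exit status with the same echo $? trick used later in this post; a quick sketch:

$ python3 -c "print('All done')"
All done
$ echo $?
0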

Detect script exit

If we want to tell when a Python program exits without throwing an exception, we can use the built-in Python atexit module.

The atexit module handles anything we want the program to do when it exits, and it is typically used for program cleanup before the process terminates.

To experiment with atexit, let’s modify our input.py example to print a message at the program exit. Open the input.py file again and replace the text with this

import atexit

atexit.register(print,"Program exited successfully!")

name=input("What's your name?\n")

print("Hi,",name,"!")

Run the script, type your name, and when you hit Enter you should get:

What's your name?
Example
Hi, Example !
Program exited successfully!

Notice how the exit text appears at the end of the output no matter where we place the atexit call, and how, if we replace the atexit call with a simple print(), we get the exit text where the print() call was made rather than where the code exits.

Program exited successfully!
What's your name?
Example
Hi, Example !
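
For reference, here is a minimal sketch of that variant, with the atexit registration replaced by a plain print() at the top of the file:

print("Program exited successfully!")   # runs immediately, right where the call is made

name=input("What's your name?\n")

print("Hi,",name,"!")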

Graceful exit

There are several ways to exit a Python program that don’t involve throwing an exception; the first we’re going to try is quit().

You can use the bash command echo $? to get the exit code of the Python interpreter.

python3
Python 3.8.2 (default, Apr 1 2020, 15:52:55)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> quit()
$ echo $?
0

We can also choose the exit code the interpreter returns by handing quit() an integer argument less than 256.

python3
Python 3.8.2 (default, Apr 1 2020, 15:52:55)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> quit(101)
$ echo $?
101

exit() has the same functionality, as it is an alias for quit().

python3
Python 3.8.2 (default, Apr 1 2020, 15:52:55)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit(101)
$ echo $?
101

Neither quit() nor exit() is considered good practice, as both rely on the site module, which is meant for interactive interpreters rather than programs. For our programs, we should use something like sys.exit().

python3
Python 3.8.2 (default, Apr 1 2020, 15:52:55)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.exit(101)
$ echo $?
101

Notice that we need to explicitly import a module to call sys.exit(). This might seem like it’s not an improvement, but it guarantees that the necessary module is loaded, because it’s not a safe assumption that site will be loaded at runtime. If we don’t want to import extra modules, we can do what exit(), quit(), and sys.exit() are doing behind the scenes and raise SystemExit.

python3
Python 3.8.2 (default, Apr 1 2020, 15:52:55)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> raise SystemExit(101)
$ echo $?
101

Exit with error messages

What if we get bad input from a user? Let’s look back at our input.py script, and add the ability to handle bad input from the user (CTRL+D to pass an EOF character)

nano input.py
try:
    name=input("What's your name?\n")
    print("Hi, "+name+"!")
except EOFError:
    print("EOFError: You didn't enter anything!")

$ python3 input.py
What's your name?
EOFError: You didn't enter anything!

The try statement tells Python to attempt the code inside the block and to pass any exception that occurs to the except block instead of exiting with a traceback.
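
If we want to report the problem and exit with a non-zero status in one step, sys.exit() also accepts a message; a minimal sketch of that variant (the logic mirrors input.py above):

import sys

try:
    name=input("What's your name?\n")
except EOFError:
    # A string argument is printed to stderr and the exit code is set to 1
    sys.exit("EOFError: You didn't enter anything!")

print("Hi, "+name+"!")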

Exiting without error

What if the user hands your program bad input, but you don’t want your code to print an error and instead want to do some error handling to minimize the impact on the user?

We can add a finally statement that lets us execute code after we do our error handling in the except block.

nano input.py
try:
    name=input("What's your name?\n")
    if(name==''):
        print("Name cannot be blank!")
except EOFError:
    #print("EOFError: You didn't enter anything!")
    name="Blankname"
finally:
    print("Hi, "+name+"!")

$ python3 input.py
What's your name?
Hi, Blankname!

Notice that the user would never know an EOFError occurred; this pattern can be used to fall back to default values in the event of poor input or missing arguments.

Exit and release your resources

Generally, Python automatically releases all the resources your program has acquired when it exits, but for certain processes it’s good practice to encase limited resources in a with block.

Often you’ll see this in open() calls, where failing to properly release the file could cause problems with reading or writing to the file later.

nano openfile.py
with open("testfile.txt","w") as file:
    file.write("let's write some text!\n")

$ python3 openfile.py
$ cat testfile.txt
let's write some text!

The with block automatically releases the resources acquired within it. If we want to ensure more explicitly that the file is closed at exit, we can use atexit.register() to call close().

$ nano openfile.py
import atexit

file=open("testfile.txt","w")

file.write("let's write some text!\n")

atexit.register(file.close)

If resources are called without using a with block, make sure to explicitly release them in an atexit command.

Exit after a time

If we are worried our program might never terminate normally, then we can use Python’s multiprocessing module to ensure our program will terminate.

$ nano waiting.py
import time
import sys
from multiprocessing import Process

# Read the number of seconds to wait from the first command-line argument
num=int(sys.argv[1])

def exclaim(seconds):
    time.sleep(seconds)
    print("You were very patient!")

if __name__ == '__main__':
    program = Process(target=exclaim, args=(num,))
    program.start()
    # Give the process at most 5 seconds to finish, then terminate it
    program.join(timeout=5)
    program.terminate()

$ python3 waiting.py 7
$ python3 waiting.py 0
You were very patient!

Notice how the process failed to complete when the function was told to wait for 7 seconds but completed and printed what it was supposed to when it was told to wait 0 seconds!
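
If we want the script to report what happened, we can check the child process after join(); a minimal sketch (hard-coding a 7-second sleep just for illustration):

import time
from multiprocessing import Process

def exclaim(seconds):
    time.sleep(seconds)
    print("You were very patient!")

if __name__ == '__main__':
    program = Process(target=exclaim, args=(7,))
    program.start()
    program.join(timeout=5)          # wait at most 5 seconds
    if program.is_alive():           # still running: it missed the deadline
        program.terminate()
        program.join()               # reap the terminated child
    # exitcode is 0 for a normal exit, negative if the child was killed by a signal
    print("Child exit code:", program.exitcode)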

Exiting using a return statement

If we have a section of code that we want to use to terminate the whole program, instead of letting the break statement hand control back to code outside the loop, we can return sys.exit() to exit completely.

$ nano break.py
import time
import sys

def stop(isTrue):
    for a in range(0,1):
        if isTrue:
            break
        else:
            print("You didn't want to break!")
            return sys.exit()

mybool = False

stop(mybool)

print("You used break!")

Exit in the middle of a function

If we don’t want to use a return statement, we can still call sys.exit() to close our program in one branch and provide a return value in another. Let’s use our code from break.py again.

$ nano break.py
import time
import sys

def stop(isTrue):
    for a in range(0,1):
        if isTrue:
            word="bird"
            break
        else:
            print("You didn't want to break!")
            sys.exit()
    # The loop was broken out of, so return the value set in the other branch
    return word

mybool = False

print(stop(mybool))

Exit when conditions are met

If we have a loop in our Python code and we want to make sure the code can exit if it encounters a problem, we can use a flag that it can check to terminate the program.

$ nano break.py
import time
import sys

myflag=False

def stop(val):
    global myflag
    while 1==1:
        val=val+1
        print(val)
        if val%5==0:
            myflag=True
        if val%7==0:
            myflag=True
        if myflag:
            sys.exit()

stop(1)

$ python3 break.py
2
3
4
5

Exit on keypress

If we want to hold our program open in the console until we press a key, we can use an input() call whose result we never use.

$ nano holdopen.py
input("Press enter to continue")
$ python3 holdopen.py
Press enter to continue
$

We can also press CTRL+C in the console to send Python a KeyboardInterrupt. We can even handle the KeyboardInterrupt exception the same way we’ve handled exceptions before.

$ nano wait.py
import time

try:
    i=0
    while 1==1:
        i=i+1
        print(i)
        time.sleep(1)
except KeyboardInterrupt:
    print("\nWhoops I took too long")
    raise SystemExit

$ python3 wait.py
1
2
3
^C

Whoops I took too long

Exit a multithreaded program

Exiting a multithreaded program is slightly more involved, because a simple sys.exit() called from a thread only exits that thread. The “dirty” way to do it is to use os._exit().

$ nano threads.py
import threading
import os
import sys
import time

# Read the number of seconds to wait from the first command-line argument
num=int(sys.argv[1])

def exclaim(seconds):
    time.sleep(seconds)
    os._exit(1)
    print("You were very patient!")

if __name__ == '__main__':
    program = threading.Thread(target=exclaim, args=(num,))
    program.start()
    program.join()
    print("This should print before the main thread terminates!")

$ python3 threads.py 6
$

As you can see, the program didn’t print the rest of its output before it exited. This is why os._exit() is typically reserved as a last resort; letting each thread finish and calling Thread.join() from the main thread is the preferred way to end a multithreaded program.

$ nano threads.py

import threading
import sys
import time
import atexit

# Read the number of seconds to wait from the first command-line argument
num=int(sys.argv[1])

atexit.register(print,"Threads exited successfully!")

def exclaim(seconds):
    time.sleep(seconds)
    print("You were very patient!")

if __name__ == '__main__':
    program = threading.Thread(target=exclaim, args=(num,))
    program.start()
    program.join()

$ python3 threads.py 6
You were very patient!
Threads exited successfully!

End without sys exit

sys.exit() is only one of several ways we can exit our Python programs. What sys.exit() does is raise SystemExit, so we can just as easily use any built-in Python exception or create one of our own!

$ nano myexception.py
class MyException(Exception):
    pass

try:
    raise MyException()
except MyException:
    print("The exception works!")

$ python3 myexception.py
The exception works!

We can also use os._exit() to tell the host system to kill the Python process immediately, although this skips atexit cleanup.

$ python3
Python 3.8.2 (default, Apr 1 2020, 15:52:55)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os._exit(1)
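
To see that atexit handlers are skipped, here is a minimal sketch (the file name hardexit.py is just an example):

import atexit
import os

atexit.register(print, "This will never be printed")

# os._exit() ends the process immediately: atexit handlers never run and buffers are not flushed
os._exit(1)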

Exit upon exception

If we want to exit on any exception without any handling, we can use our try-except block to execute os._exit().

Note: this will also catch any sys.exit(), quit(), exit(), or raise SystemExit calls, as they all generate a SystemExit exception.

$ python3
Python 3.8.2 (default, Apr 1 2020, 15:52:55)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> try:
...     quit()
... except:
...     os._exit(1)
...
$ echo $?
1

Exit and restart

Finally, we’ll explore what we do to exit Python and restart the program, which is useful in many cases.

$ nano restart.py
import atexit
import os

atexit.register(os.system,"python3 restart.py")

try:
    n=0
    while 1==1:
        n=n+1
        if n%5==0:
            raise SystemExit
except:
    print("Exception raised!")

$ python3 restart.py
Exception raised!
Exception raised!
...
Exception raised!
^Z
[3]+ Stopped python3 restart.py
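
Because every exiting process registers a new python3 restart.py command, the loop above keeps spawning fresh processes until you stop it (CTRL+Z here). If you would rather restart in place instead of chaining new processes, one alternative, shown here as a minimal sketch rather than the original example, is to replace the current process with os.execv() (saved as restart.py, for instance):

import os
import sys

try:
    n=0
    while True:
        n=n+1
        if n%5==0:
            raise SystemExit
except SystemExit:
    print("Restarting...")
    # Replace the current process image with a fresh run of this same script;
    # this also keeps restarting until you stop it (CTRL+Z).
    os.execv(sys.executable, [sys.executable] + sys.argv)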

I hope you find the tutorial useful. Keep coming back.

Thank you.

Expect command and how to automate shell scripts like magic

In the previous post, we talked about writing practical shell scripts, and we saw how easy it is to write one. Today we are going to talk about a tool that does magic for our shell scripts: the Expect command, or the Expect scripting language. Expect is a language for talking to interactive programs or scripts that require user interaction. It works by expecting specific output and then sending the appropriate response without any user interaction. You can think of this tool as a robot that automates your scripts.

If the Expect command is not installed on your system, you can install it using the following command:

$ apt-get install expect

Or on Red Hat based systems like CentOS:

$ yum install expect

Expect Command

Before we write an Expect script, let’s look at the core Expect commands used for interaction:

spawn        Starts a script or a program.

expect       Waits for program output.

send         Sends a reply to your program.

interact     Allows you to interact with your program.

  • The spawn command is used to start a script or a program like the shell, FTP, Telnet, SSH, SCP, and so on.
  • The send command is used to send a reply to a script or a program.
  • The Expect command waits for input.
  • The interact command hands control back to you so you can interact with the program directly.

We are going to type a shell script that asks some questions and we will make an Expect script that will answer those questions.

First, the shell script will look like this:

#!/bin/bash

echo "Hello, who are you?"

read $REPLY

echo "Can I ask you some questions?"

read $REPLY

echo "What is your favorite topic?"

read $REPLY

Now we will write the Expect scripts that will answer this automatically:

#!/usr/bin/expect -f

set timeout -1

spawn ./questions

expect "Hello, who are you?\r"

send -- "Im Adam\r"

expect "Can I ask you some questions?\r"

send -- "Sure\r"

expect "What is your favorite topic?\r"

send -- "Technology\r"

expect eof

The first line defines the expect command path which is #!/usr/bin/expect.

On the second line of code, we disable the timeout. Then start our script using spawn command.

We can use spawn to run any program we want or any other interactive script.

The remaining lines are the Expect script that interacts with our shell script.

The last line, expect eof, waits for the end-of-file marker, which signals the end of the interaction.

Now it’s showtime. Let’s run our answer bot; make sure you make it executable first.

$ chmod +x ./answerbot

$./answerbot

expect command

Cool!! All questions are answered as we expect.

If you get errors about the location of Expect command you can get the location using the which command:

$ which expect

We did not interact with our script at all; the Expect program did the job for us.

The above method can be applied to any interactive script or program. Although this Expect script was easy to write, writing an Expect script from scratch can be a little tricky for some people, so there is an easier way.

Using autoexpect

To build an Expect script automatically, you can use the autoexpect command.

autoexpect works like expect, but it builds the automation script for you. The script you want to automate is passed to autoexpect as a parameter and you answer the questions and your answers are saved in a file.

$ autoexpect ./questions

autoexpect command

A file called script.exp is generated. It contains the same code we wrote above, with some additions that we will leave aside for now.

autoexpect script

If you run the auto generated file script.exp, you will see the same answers as expected:

autoexpect script execution

Awesome!! That super easy.

Many commands produce output that changes from run to run, as with FTP programs, and an Expect script that matches it literally may fail or get stuck. To solve this problem, you can use wildcards for the changing data to make your script more flexible.

Working with Variables

The set command is used to define variables in Expect scripts like this:

set MYVAR 5

To access the variable, precede it with $, like this: $MYVAR

To define command line arguments in Expect scripts, we use the following syntax:

set MYVAR [lindex $argv 0]

Here we define a variable MYVAR which equals the first passed argument.

You can get the first and the second arguments and store them in variables like this:

set my_name [lindex $argv 0]

set my_favorite [lindex $argv 1]

Let’s add variables to our script:

#!/usr/bin/expect -f

set my_name [lindex $argv 0]

set my_favorite [lindex $argv 1]

set timeout -1

spawn ./questions

expect "Hello, who are you?\r"

send -- "Im $my_name\r"

expect "Can I ask you some questions?\r"

send -- "Sure\r"

expect "What is your favorite topic?\r"

send -- "$my_favorite\r"

expect eof

Now try to run the Expect script with some parameters to see the output:

$ ./answerbot SomeName Programming

expect command variables

Awesome!! Now our automated Expect script is more dynamic.

Conditional Tests

You can write conditional tests using braces like this:

expect {

"something" { send -- "send this\r" }

"*another" { send -- "send another\r" }

}

We are going to change our script to return different conditions, and we will change our Expect script to handle those conditions.

We are going to emulate different expects with the following script:

#!/bin/bash

let number=$RANDOM

if [ $number -gt 25000 ]; then

echo "What is your favorite topic?"

else

echo "What is your favorite movie?"

fi

read $REPLY

A random number is generated every time you run the script and based on that number, we put a condition to return different expects.

Let’s make our Expect script deal with that.

#!/usr/bin/expect -f

set timeout -1

spawn ./questions

expect {

"*topic?" { send -- "Programming\r" }

"*movie?" { send -- "Star wars\r" }

}

expect eof

expect command conditions

Very clear. If the script prints the topic question, the Expect script sends Programming, and if it prints the movie question, the Expect script sends Star Wars. Isn’t that cool?

If else Conditions

You can use if/else clauses in expect scripts like this:

#!/usr/bin/expect -f

set NUM 1

if { $NUM < 5 } {
    puts "\nSmaller than 5\n"
} elseif { $NUM > 5 } {
    puts "\nBigger than 5\n"
} else {
    puts "\nEquals 5\n"
}

if command

Note: The opening brace must be on the same line.

While Loops

While loops in expect language must use braces to contain the expression like this:

#!/usr/bin/expect -f

set NUM 0

while { $NUM <= 5 } {

puts "\nNumber is $NUM"

set NUM [ expr $NUM + 1 ]

}

puts ""

while loop

For Loops

To make a for loop in expect, three fields must be specified, like the following format:

#!/usr/bin/expect -f

for {set NUM 0} {$NUM <= 5} {incr NUM} {

puts "\nNUM = $NUM"

}

puts ""

for loop

User-defined Functions

You can define a function using proc like this:

proc myfunc { TOTAL } {

set TOTAL [expr $TOTAL + 1]

return "$TOTAL"

}

And you can use them after that.

#!/usr/bin/expect -f

proc myfunc { TOTAL } {

set TOTAL [expr $TOTAL + 1]

return "$TOTAL"

}

set NUM 0

while {$NUM <= 5} {

puts "\nNumber $NUM"

set NUM [myfunc $NUM]

}

puts ""

user-defined functions

Interact Command

Sometimes your Expect script needs sensitive information that you don’t want to store in the script or share with other users, like a password, so you want your script to take the password directly from you and then continue the automation normally.

The interact command reverts the control back to the keyboard.

When this command is executed, Expect will start reading from the keyboard.

This shell script will ask about the password as shown:

#!/bin/bash

echo "Hello, who are you?"

read $REPLY

echo "What is you password?"

read $REPLY

echo "What is your favorite topic?"

read $REPLY

Now we will write the Expect script that will prompt for the password:

#!/usr/bin/expect -f

set timeout -1

spawn ./questions

expect "Hello, who are you?\r"

send -- "Hi Im Adam\r"

expect "*password?\r"

interact ++ return

send "\r"

expect "*topic?\r"

send -- "Technology\r"

expect eof

interact command

After you type your password, type ++ and control will return from the keyboard back to the script.

Expect language is ported to many languages like C#, Java, Perl, Python, Ruby and Shell with almost the same concepts and syntax due to its simplicity and importance.
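For example, on the Python side, the third-party pexpect package implements the same idea; a minimal sketch of the questions automation above (assuming pexpect has been installed, for instance with pip):

import pexpect

# Spawn the interactive shell script and answer its questions automatically
child = pexpect.spawn("./questions")

child.expect("Hello, who are you")
child.sendline("Im Adam")

child.expect("Can I ask you some questions")
child.sendline("Sure")

child.expect("What is your favorite topic")
child.sendline("Technology")

child.expect(pexpect.EOF)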

Expect scripting language is used in quality assurance, network measurements such as echo response time, automate file transfers, updates, and many other uses.

I hope you are now supercharged with some of the most important aspects of the Expect command and the autoexpect command, and how to use them to automate your tasks in a smarter way.

Thank you.

How to write practical shell scripts

In the last post, we talked about regular expressions and saw how to use them in sed and awk for text processing; before that we discussed the Linux sed command and the awk command. During the series, we wrote small shell scripts but didn’t mix things up, so I think we should take a small step further and write a useful shell script. The scripts in this post will help you sharpen your scriptwriting skills. You can send messages to someone by phone or email, but one method, not commonly used anymore, is sending a message directly to the user’s terminal. We are going to build a bash script that will send a message to a user who is logged into the Linux system. For this simple shell script, only a few functions are required. Most of the required commands are common and have been covered in our series on shell scripting; you can review the previous posts.

Sending Messages

First, we need to know who is logged in. This can be done using the who command which retrieves all logged in users.

who

shell scripts who command

To send a message you need the username and his current terminal.

You need to know if messages are allowed or not for that user using the mesg command.

mesg

mesg command

If the result shows “is y” that means messaging is permitted. If the result shows “is n”, that means messaging is not permitted.

To check any logged user message status, use the who command with -T option.

who -T

If you see a dash (-) that means messages are turned off and if you see plus sign (+) that means messages are enabled.

To allow messages, type mesg command with the “y” option like this

mesg y

allow messages

Sure enough, it shows “is y” which means messages are permitted for this user.

Of course, we need another user to communicate with, so in my case I’m going to connect to my PC using SSH while I’m already logged in with my user, which gives us two users logged onto the system.

Let’s see how to send a message.

Write Command

The write command is used to send messages between users using the username and current terminal.

Users who are logged into a graphical environment (KDE, GNOME, Cinnamon, or any other) can’t receive messages; the recipient must be logged in on a terminal.

We will send a message to testuser user from my user likegeeks like this:

write testuser pts/1

write command

Type the write command followed by the user and the terminal and hit Enter.

When you hit Enter, you can start typing your message. After finishing the message, you can send the message by pressing the Ctrl+D key combination which is the end of file signal. I recommend you to review the post about signals and jobs.

Receive message

The receiver can recognize which user on which terminal sends the message. EOF means that the message is finished.

I think now we have all the parts to build our shell script.

Creating The Send Script

Before we create our shell script, we need to determine whether the user we want to message is currently logged on to the system; we can use the who command for that.

logged=$(who | awk -v usr=$1 '{ if (tolower($1)==tolower(usr)) { print $1; exit } }')

We get the logged in users using the who command and pipe it to awk and check if it is matching the entered user.

The final output from the awk command is stored in the variable logged.

Then we need to check the variable if it contains something or not:

if [ -z "$logged" ]; then

echo "$1 is not logged on."

echo "Exit"

exit

fi

I recommend you read the earlier post about the if statement and how to use it in bash scripts.

Check logged user

The logged variable is tested to check whether it is empty.

If it is empty, the script prints the message and terminates.

If the user is logged in, the logged variable contains the username.

Checking If The User Accepts Messages

To check if messages are allowed or not, use the who command with -T option.

check=$(who -T | grep -i -m 1 $1 | awk '{print $2}')

if [ "$check" != "+" ]; then

echo "$1 disable messaging."

echo "Exit"

exit

fi

Check message allowed

Notice that we use the who command with -T. This shows a (+) beside the username if messaging is permitted. Otherwise, it shows a (-) beside the username, if messaging is not permitted.

Finally, we check the messaging indicator and exit if it is not set to the plus sign (+).

Checking If Message Was Included

You can check if the message was included or not like this:

if [ -z $2 ]; then

echo "Message not found"

echo "Exit"

exit

fi

Getting the Current Terminal

Before we send a message, we need to get the user current terminal and store it in a variable.

terminal=$(who | grep -i -m 1 $1 | awk '{print $2}')

Then we can send the message:

echo $2 | write $logged $terminal

Now we can test the whole shell script to see how it goes:

$ ./senderscript likegeeks welcome

Let’s see the other shell window:

Send message

Good!  You can now send simple one-word messages.

Sending a Long Message

If you try to send more than one word:

$ ./senderscript likegeeks welcome to shell scripting

One word message

It didn’t work. Only the first word of the message is sent.

To fix this problem, we will use the shift command with the while loop.

shift

while [ -n "$1" ]; do

whole_message=$whole_message' '$1

shift

done

And now one more thing needs to be fixed: the variable we send must be the whole_message we just built.

echo $whole_message | write $logged $terminal

So now the whole script should be like this:
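
The original post showed the complete script as a screenshot; here is a sketch assembling the pieces above into one file (the senderscript file name and the comments are mine):

#!/bin/bash
# senderscript: send a message to a logged-in user
# Usage: ./senderscript username message words...

# Check that the user is logged on
logged=$(who | awk -v usr=$1 '{ if (tolower($1)==tolower(usr)) { print $1; exit } }')
if [ -z "$logged" ]; then
    echo "$1 is not logged on."
    echo "Exit"
    exit
fi

# Check that the user accepts messages
check=$(who -T | grep -i -m 1 $1 | awk '{print $2}')
if [ "$check" != "+" ]; then
    echo "$1 has messaging disabled."
    echo "Exit"
    exit
fi

# Check that a message was included
if [ -z "$2" ]; then
    echo "Message not found"
    echo "Exit"
    exit
fi

# Get the user's current terminal
terminal=$(who | grep -i -m 1 $1 | awk '{print $2}')

# Build the whole message from the remaining arguments
shift
while [ -n "$1" ]; do
    whole_message=$whole_message' '$1
    shift
done

echo $whole_message | write $logged $terminal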

If you try now:

$ ./senderscript likegeeks welcome to shell scripting

Complete message

Awesome!! It worked. Again, the point is not the message-sending script itself; the main goal is to review our shell scripting knowledge, use all the parts we’ve learned together, and see how they fit.

Monitoring Disk Space

Let’s build a script that reports the ten biggest directories.

If you add the -s option to the du command, it shows summarized totals.

$ du -s /var/log/

The -S option shows the total for each subdirectory separately.

$ du -S /var/log/

du command

You should use the sort command to sort the results generated by the du command to get the largest directories like this:

$ du -S /var/log/ | sort -rn

sort command

The -n option sorts numerically, and the -r option reverses the order so the biggest entries come first.

The sed = command labels each line with its line number (while 11,$D deletes everything after the tenth line), and a second sed uses N to join each number with the line that follows it:

sed '{11,$D; =}' |

sed 'N; s/\n/ /' |

Then we can clean the output using the awk command:

awk '{printf $1 ":" "\t" $2 "\t" $3 "\n"}'

Then we add a colon and a tab so it appears much better.

$ du -S /var/log/ |

sort -rn |

sed '{11,$D; =}' |

# pipe the first result for another one to clean it

sed 'N; s/\n/ /' |

# formated printing using printf

awk '{printf $1 ":" "\t" $2 "\t" $3 "\n"}'

Format output with sed and awk

Suppose we have a variable called  MY_DIRECTORIES that holds 2 folders.

MY_DIRECTORIES="/home /var/log"

We will iterate over each directory from MY_DIRECTORIES variable and get the disk usage using du command.

So the shell script will look like this:
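
The original post showed this script as a screenshot; here is a minimal sketch of what it looks like, looping over MY_DIRECTORIES and reusing the pipeline above (the echo label line and the 2>/dev/null redirection are my additions for readability):

#!/bin/bash
# Report the ten biggest subdirectories for each monitored location

MY_DIRECTORIES="/home /var/log"

for DIR in $MY_DIRECTORIES; do
    echo "The $DIR Directory:"
    du -S $DIR 2>/dev/null |
    sort -rn |
    sed '{11,$D; =}' |
    sed 'N; s/\n/ /' |
    awk '{printf $1 ":" "\t" $2 "\t" $3 "\n"}'
done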

Monitor disk usage

Good!! Both directories /home and /var/log are shown on the same report.

You can filter files, so instead of calculating the consumption of all files, you can calculate the consumption for a specific extension like *.log or whatever.

One thing I have to mention here: in production systems you can’t rely on a disk space report; you should use disk quotas instead.

The quota package is specialized for that, but here we are learning how bash scripts work.

Again, the shell scripts we’ve introduced here are for showing you how shell scripting works; there are a ton of ways to implement any task in Linux.

My post is finished! I tried to reduce the post length and make everything as simple as possible, hope you like it.

Keep coming back. Thank you.

Linux Bash Scripting Part5 – Signals and Jobs

In the previous post, we talked about input, output, and redirection in bash scripts. Today we will learn how to run and control bash scripts on a Linux system. Until now, we could run scripts only from the command line interface, but that isn’t the only way to run Linux bash scripts. This post describes the different ways to control your Linux bash scripts.

 

Linux Signals

These are the most common Linux system signals:

Num        Name                    Job

1              SIGHUP               Process hangs up.

2             SIGINT                 Process interruption.

3             SIGQUIT              Process quit or stop.

9             SIGKILL               Process termination.

15           SIGTERM             Process termination.

17           SIGSTOP              Process stopping without termination.

18           SIGTSTP              Process stopping or pausing without termination.

19           SIGCONT             Process continuation after stopping.

Your Linux bash scripts don’t control these signals, but you can program your bash script to recognize them and perform commands based on the signal that was sent.

Stop a Process

To stop a running process, you can press Ctrl+C which generates SIGINT signal to stop the current process running in the shell.

sleep 100

Ctrl+C

Linux bash scripting Signals and Jobs stop process

Pause a Process

The Ctrl+Z keys generate a SIGTSTP signal to stop any processes running in the shell, and that leaves the program in memory.

sleep 100

Ctrl+Z

pause process

The number between brackets which is (1) is the job number.

If you try to exit the shell while you have a stopped job assigned to it, bash warns you.

The ps command is used to view the stopped jobs.

ps -l

ps -l

In the S column (process state), it shows the traced (T) or stopped (S) states.

If you want to terminate a stopped job you can kill its process by using kill command.

kill processID

Trap Signals

To trap signals, you can use the trap command. If the script gets a signal defined by the trap command, it stops processing and instead the script handles the signal.

You can trap signals using the trap command like this:

#!/bin/bash

trap "echo 'Ctrl-C was trapped'" SIGINT

total=1

while [ $total -le 3 ]; do

echo "#$total"

sleep 2

total=$(($total + 1))

done

Every time you press Ctrl+C, the signal is trapped and the message is printed.

trap signal

If you press Ctrl+C, the echo statement specified in the trap command is printed instead of stopping the script. Cool, right?

Trapping The Script Exit

You can trap the shell script exit using the trap command like this:

#!/bin/bash

# Add the EXIT signal to trap it

trap "echo Goodbye..." EXIT

total=1

while [ $total -le 3 ]; do

echo "#$total"

sleep 2

total=$(($total + 1))

done

trap exit

When the bash script exits, the Goodbye message is printed as expected.

Also, if you exit the script before finishing its work, the EXIT trap will be fired.

Modifying Or Removing a Trap

You can reissue the trap command with new options like this:

#!/bin/bash

trap "echo 'Ctrl-C is trapped.'" SIGINT

total=1

while [ $total -le 3 ]; do

echo "Loop #$total"

sleep 2

total=$(($total + 1))

done

# Trap the SIGINT

trap "echo ' The trap changed'" SIGINT

total=1

while [ $total -le 3 ]; do

echo "Second Loop #$total"

sleep 1

total=$(($total + 1))

done

modify trap

Notice how the script manages the signal after changing the signal trap.

You can also remove a trap by using two dashes: trap -- SIGNAL

#!/bin/bash

trap "echo 'Ctrl-C is trapped.'" SIGINT

total=1

while [ $total -le 3 ]; do

echo "#$total"

sleep 1

total=$(($total + 1))

done

trap -- SIGINT

echo "I just removed the trap"

total=1

while [ $total -le 3 ]; do

echo "Loop #2 #$total"

sleep 2

total=$(($total + 1))

done

Notice how the script processes the signal before removing the trap and after removing the trap.

./myscript

Ctrl+C

remove trap

The first Ctrl+C was trapped and the script continues running while the second one exits the script because the trap was removed.

Running Linux Bash Scripts in Background Mode

If you look at the output of the ps command, you will see processes that are running in the background and are not tied to the terminal.

We can do the same: just place an ampersand (&) after the command.

#!/bin/bash

total=1

while [ $total -le 3 ]; do

sleep 2

total=$(($total + 1))

done

./myscript &

run in background

Once you’ve done that, the script runs in a separate background process on the system and you can see the process id between the square brackets.

When the script dies,  you will see a message on the terminal.

Notice that while the background process is running, your terminal still displays its STDOUT and STDERR messages, so if an error occurs, you will see the error message along with the normal output.

run script in background

The background process will exit if you exit your terminal session.

So what if you want to continue running even if you close the terminal?

Running Scripts without a Hang-Up

You can run your Linux bash scripts in the background process even if you exit the terminal session using the nohup command.

The nohup command blocks any SIGHUP signals. This blocks the process from exiting when you exit your terminal.

nohup ./myscript &

linux bash nohup command

After running the nohup command, you can’t see any output or error from your script. The output and error messages are sent to a file called nohup.out.

Note: if you run multiple nohup commands from the same directory, their output is appended to the same nohup.out file, so the contents get mixed together.

Viewing Jobs

To view the current jobs, you can use the jobs command.

#!/bin/bash

total=1

while [ $total -le 3 ]; do

echo "#$count"

sleep 5

total=$(($total + 1))

done

Then run it.

./myscript

Then press Ctrl+Z to stop the script.

linux bash view jobs

Run the same bash script but in the background using the ampersand symbol and redirect the output to a file just for clarification.

./myscript > outfile &

linux bash list jobs

The jobs command shows the stopped and the running jobs.

jobs -l

The -l parameter also shows the process ID.

 Restarting Stopped Jobs

The bg command is used to restart a job in background mode.

./myscript

Then press Ctrl+Z

Now it is stopped.

bg

linux bash restart job

After using bg command, it is now running in background mode.

If you have multiple stopped jobs, you can do the same by specifying the job number to the bg command.

The fg command is used to restart a job in foreground mode.

fg 1

Scheduling a Job

The Linux system provides 2 ways to run a bash script at a predefined time:

  • at command.
  • cron table.

The at command

This is the format of the command

at [-f filename] time

The at command can accept different time formats:

  • Standard time format like 10:15.
  • An AM/PM indicator like 11:15PM.
  • A specifically named time like now, midnight.

You can include a specific date, using some different date formats:

  • A standard date format, such as MMDDYY or DD.MM.YY.
  • A text date, such as June 10 or Feb 12, with or without the year.
  • Now + 25 minutes.
  • 05:15AM tomorrow.
  • 11:15 + 7 days.

We don’t want to dig deep into the at command here, so let’s keep it simple for now.

at -f ./myscript now

linux bash at command

The at command normally emails the job’s output to the user if the system has email configured; the -M parameter suppresses that email.

To list the pending jobs, use atq command:

linux bash at queue

Remove Pending Jobs

To remove a pending job, use the atrm command:

atrm 18

delete at queue

You must specify the job number to the atrm command.

Scheduling Scripts

What if you need to run a script at the same time every day or every month or so?

You can use the crontab command to schedule jobs.

To list the scheduled jobs, use the -l parameter:

crontab -l

The format for crontab is:

minute, hour, dayofmonth, month, dayofweek

So if you want to run a command daily at 10:30, type the following:

30 10 * * * command

The wildcard character (*) indicates that cron will execute the command at 10:30 every day of every month.

To run a command at 5:30 PM every Tuesday, you would use the following:

30 17 * * 2 command

The day of the week starts from 0 to 6 where Sunday=0 and Saturday=6.

To run a command at 10:00 on the beginning of every month:

00 10 1 * * command

The day of the month is from 1 to 31.

Let’s keep it simple for now and we will discuss the cron in great detail in future posts.

To edit the cron table, use the -e parameter like this:

crontab -e

Then type your command like the following:

30 10 * * * /home/likegeeks/Desktop/myscript

This will schedule our script to run at 10:30 every day.

Note: sometimes you will see an error that says Resource temporarily unavailable.

All you have to do is this:

rm -f /var/run/crond.pid

You should be a root user to do this.

Just that simple!

You can use one of the pre-configured cron script directories like:

/etc/cron.hourly

/etc/cron.daily

/etc/cron.weekly

/etc/cron.monthly

Just put your bash script file on any of these directories and it will run periodically.

Starting Scripts at Login

In previous posts, we’ve talked about startup files; I recommend you review that post. The relevant files are:

$HOME/.bash_profile

$HOME/.bash_login

$HOME/.profile

To run your scripts at login, place your code in $HOME/.bash_profile.

Starting Scripts When Opening the Shell

OK, what about running our bash script when the shell opens? Easy.

Put the command that runs your script in the .bashrc file.

And now if you open the shell window, it will execute that command.

I hope you find the post useful. keep coming back.

Thank you.

Ansible tutorial: Automate your systems

In a previous tutorial, we talked about the expect command and saw how to automate scripts in Linux using its scripting language. Today, we will take a step further in our automation techniques and talk about a tool that automates tasks more professionally and across different platforms: Ansible. We will also talk about some Ansible features such as playbooks, inventory, vault, roles, and containers. Ansible is an open source IT tool backed by Red Hat that helps with configuration management, orchestration, and task and application deployment automation.

What is Ansible?

Ansible is an open source IT tool backed by Red Hat that helps with configuration management, orchestration, and task and application deployment automation.

This tool aims to help system administrators who want to minimize recurring tasks, deploy seamlessly, and automate easily.

Similar tools to Ansible are Puppet, SaltStack, and Chef which are the main configuration management tools available on the market.

Each one of these tools has its own advantages and disadvantages, so choosing the right one can be a bit challenging, depending on which features are needed or which programming language is preferred.

Among Ansible’s advantages compared to other tools: it is a relatively new tool, built on Python, and it uses YAML templates for scripting its jobs.

YAML stands for “YAML Ain’t Markup Language”, a very easy, human-readable language. This helps new users understand it quickly.

Another advantage is that with Ansible there is no need to install an agent on the hosts, which improves communication speed; it uses both push and pull models to send commands to its Linux nodes, and the WinRM protocol for Windows nodes.

As stated above, since it is a newer tool, its disadvantages include a poor GUI and a less customizable, less mature platform compared to the other tools.

Even so, Ansible is being used more than ever, and its downloads keep increasing.

Ansible setup on Ubuntu

As we mentioned, unlike other tools, there is no need to install an agent on the hosts. Ansible requires only a master node installation, with no background process, database dependency, or always-running service, which makes it extremely light.

It is recommended to use the default package manager for Ubuntu while installing Ansible which will help to install the latest stable version.

Before starting the installation process, for the Linux package installation you have to make sure that Python 2 (version 2.6 or later) or Python 3 (version 3.5 or later) is installed.

Most Linux package managers, when asked to install Ansible, will pull in a suitable Python version and its dependencies automatically.

For a source installation, a development toolchain may be needed, like the build-essential package on Ubuntu.

We can install Ansible on Ubuntu using one of the following two methods:

The first method through the Ubuntu package manager

First, add Ansible PPA for Ubuntu using the following command:

sudo apt-add-repository ppa:ansible/ansible

Second, press Enter to confirm the key server setup. Third, update the package manager using the following command:

sudo apt update

Fourth, Ansible is ready to be installed using the next command:

sudo apt install ansible

The second method of installing Ansible is from its source:

This method is helpful for users with particular requirements, for example installing the beta or development version of Ansible. That may grant you early access to new features and future modules, but be careful: it is an unstable version that is still under development and testing. This method is also helpful if you don’t want to install Ansible through the package manager. To get the Ansible source package, you can use one of the following techniques. First, by downloading the .tar file:

wget -c https://releases.ansible.com/ansible/ansible-2.6.0rc3.tar.gz

Unarchive it

tar -xzvf ./ansible-2.6.0rc3.tar.gz

Second, through the GitHub source; we first need to install the git command:

sudo apt install -y git

Then get Ansible

git clone https://github.com/ansible/ansible.git --recursive

After downloading the Ansible source using one of the previous techniques, we can start building Ansible. As mentioned, we need Python, so we can use the following commands to make sure the Ansible requirements are met. Go to the Ansible source directory:

cd ./ansible*

Install pip using easy_install

sudo easy_install pip

Install Python requirements

sudo pip install -r ./requirements.txt

Setup the environment in order to use Ansible

source ./hacking/env-setup

If you are using GitHub Source you can update the Ansible project and its submodules as following:

git pull --rebase
git submodule update --init --recursive

So that you don’t have to repeat the previous step in every new session, make sure the environment is set up automatically by adding the next two lines to your ~/.bashrc:

echo "export ANSIBLE_HOSTS=/etc/ansible/hosts" >> ~/.bashrc
echo "source ~/ansible/hacking/env-setup" >> ~/.bashrc

Finally, the Ansible inventory can be usually found in /etc/ansible/hosts and its configuration file is usually found in /etc/ansible/ansible.cfg

Ansible master node configuration

Usually the Ansible configuration file (ansible.cfg) is located in /etc/ansible/ansible.cfg or in the home directory which belongs to the user who installed Ansible.

As soon as you have installed Ansible you can start using it with its default configuration. Next, we will be discussing the most important and useful Ansible configurations that will improve your Ansible work experience.

Starting with Ansible 2.4, the ansible-config command lets users list the enabled Ansible options with their values.

Ansible configuration file is divided into several sections but in this article, we will only focus on [defaults] general section. So, Let’s have a look on this section basic parameters.

Using your favorite text editor (Gedit, vi, nano…) you can open the ansible.cfg configuration file:

sudo nano /etc/ansible/ansible.cfg

inventory: points out to the location of the inventory that Ansible uses to know the available hosts
inventory = /etc/ansible/hosts

roles_path: points out to the location where the Ansible playbook have to search for extra roles
roles_path = /etc/ansible/roles

log_path: points out to the location where Ansible log file is stored. Permission to write in this file should be given to Ansible user.
log_path = /var/log/ansible.log

retry_files_enabled: indicates the retry feature which allows Ansible to create a .retry file anytime a playbook fails. It’s recommended leaving this option disabled unless you really want it because if it is enabled it will create multiple files which will take space.
retry_files_enabled = False

host_key_checking: this parameter is used in constantly changing environments where old host machines are deleted and new hosts take their place. It is usually used in a cloud or virtualized environment.
host_key_checking = False

forks: Indicates the number of parallel tasks that can be executed to the client host. By default, its value is 5 and this to save system resources and network bandwidth but in case you have enough resources and a good bandwidth you can increase the number.
forks = 5

remote_port: contains the port number used by SSH on the hosts
remote_port = 22

nocolor: controls whether Ansible uses colors in playbook and task output to highlight errors and successes (0 means colors are enabled).
nocolor = 0

Node Configuration for Linux client

The OpenSSH server is the only important, required tool to be installed on the client node, and by default all modern Linux distributions use SSH as the main remote access tool. So you need to check the following points carefully:

  • SSH service is always up and running.
  • SSH port which is 22 by default should be allowed in the system’s firewall.

Node Configuration for Windows client

In order to make Ansible able to remotely manage Windows host the following applications should be installed on Windows nodes:

  • PowerShell version 3.0 or higher
  • .NET version 4.0

For missing requirements, there is a ready-made PowerShell script from the Ansible project that can carry out this installation automatically; you can find it at the following link: https://github.com/jborean93/ansible-windows/blob/master/scripts/Upgrade-PowerShell.ps1

But before running the previous script, you need to change the execution policy to unrestricted by executing the following script, and you need to run it with administrator privileges:

$link = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Upgrade-PowerShell.$script = "$env:temp\Upgrade-PowerShell.ps1"
$username = "Admin"

password = "secure_password"
(New-Object -TypeName System.Net.WebClient).DownloadFile($link, $script)
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force
script -Version 5.1 -Username $username -Password $password -Verbose
Set-ExecutionPolicy -ExecutionPolicy Restricted -Force
$reg_winlogon_path = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon"
Set-ItemProperty -Path $reg_winlogon_path -Name AutoAdminLogon -Value 0
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultUserName -ErrorAction SilentlyContinue
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultPassword -ErrorAction SilentlyContinue

After the upgrade script has run, the final lines above set the execution policy back to Restricted and clean up the auto-logon registry entries.

Another important script needs to be run to configure WinRM so it is up and running and listening for Ansible commands. This script is also provided ready-made by Ansible, and you can find it at the following link: https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1

Similarly, this script needs to be run with administrator privileges and with the execution policy set to unrestricted; you can use the following piece of code:

$link = "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.$script = "$env:temp\ConfigureRemotingForAnsible.ps1"

(New-Object -TypeName System.Net.WebClient).DownloadFile($link, $script)
powershell.exe -ExecutionPolicy ByPass -File $script

If no errors are thrown, Ansible should now be able to manage this node.

YAML Basics

As we mentioned previously, YAML is a human-friendly language that can be used to manage data. Next, we will cover YAML basics and show you how to write code using YAML.

Guidelines to create a YAML file:

  • YAML uses spaces instead of tabs.
  • YAML is case sensitive
  • YAML file should be saved with the .yaml extension
  • YAML file sometimes starts with “—” and ends with “…” but it is optional.

Since YAML is used to write Ansible playbooks, let’s see how easy it is to use. Take the following example, where we need to copy a user configuration. As a one-line Ansible task, it looks like this:

- name: Copy user configuration copy: src=/home/admin/setup.conf dest=/usr/local/projects/ owner=setup group=dev mode=0677 backup=yes

But in case you are using YAML it will be like this:
- name: Copy user configuration
  copy:
    src: /home/admin/setup.conf
    dest: /usr/local/projects/
    owner: setup
    group: dev
    mode: 0677
    backup: yes

Another example is a .ini inventory file can be as following:
node0.lab.edu
[lab1servers]
node1.lab.edu
node2.lab.edu
[lab2servers]
node3.lab.edu

But in case you are using YAML it will look like this:
all:
  hosts:
    node0.lab.edu:
  children:
    lab1servers:
      hosts:
        node1.lab.edu:
        node2.lab.edu:
    lab2servers:
      hosts:
        node3.lab.edu:

From the previous two examples, you can see that YAML is an easy-to-use, human-friendly, neat, and good-looking language.

Ansible Inventory

The inventory is a .ini file that lists the IP addresses or hostnames of the client hosts. It may also contain other variables about the hosts.

In general, the file contents are organized into groups, and each group has a name written between square brackets, for example [Group1].

By default, the Ansible inventory file is found at /etc/ansible/hosts, but it is recommended to keep all your Ansible configuration files in a folder in the user’s home directory, so the user can add and modify the configuration as needed. Here is an example of opening the Ansible configuration file and setting the inventory path:

sudo nano /etc/ansible/ansible.cfg
inventory = /home/user1/ansible/hosts

Also, you can choose an Ansible inventory file when executing a command by adding the -i option:
ansible all -m ping -i ~/ansible/hosts

There are two Ansible inventory types: static and dynamic. A static inventory suits small organizations with a small to medium infrastructure, while a dynamic inventory suits large organizations with a huge number of hosts and complicated tasks, where errors can pile up quickly. If you are adding hosts whose names follow a similar pattern, you can use a counter block like the next example:

Inventory file with similar style hosts:
[servers]
node0.lec
node1.lec
node2.lec
node3.lec
node4.lec

Inventory using a counter block:
[servers]
node[0:4].lec

Ansible Playbook

An Ansible playbook is simply a systematic group of scripts that uses Ansible commands in a more organized way to install and configure systems. An Ansible playbook can perform the following tasks and delegate them to other servers:

  • Reorder multi-tier system roll-outs.
  • Applying application and systems patches.
  • Collecting data from client hosts and depending on the collected data it starts sending instant actions to servers, devices, and load balancers.

Ansible playbooks are written in YAML, which is a very simple, human-readable language compared to traditional coding languages. YAML also lets users share their code easily.

Ansible Roles

Converting Ansible playbooks into roles gives you the ability to turn a set of configuration management tasks into reusable modules with multiple configurations that can easily be shared when needed.

The structure of an Ansible role is very simple: it consists of several folders, each containing YAML files; by default there is one main.yml file, but a folder can have more than one.

Ansible Vault

Ansible Vault is an encryption tool that allows users to encrypt various variables. It produces encrypted files for storing variables, and those files can be moved to another location when needed.

Ansible Vault can encrypt any form of data found in Ansible roles and playbooks. It can also encrypt task files, in case you need to hide a variable name.
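
For example, a minimal sketch (the file names are just placeholders, assuming Ansible is already installed): encrypt a variables file, then supply the vault password when running a playbook that uses it:

ansible-vault encrypt secrets.yml
ansible-playbook site.yml --ask-vault-pass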

Ansible Container

Ansible Container is an open source tool that allows users to automate everything about their containers, from building to deployment to management. It allows better code management and lets you push containers to any cloud registry.

By default, Ansible Container is not installed as part of the Ansible installation, so you will need to set it up on a container host, and during the installation process you will need to choose a container engine to work with.

0

Linux Bash Scripting Part5 – Signals and Jobs

In the previous post, we talked about input, output, and redirection in bash scripts. Today we will learn how to run and control them on Linux system. Till now, we can run scripts only from the command line interface. This isn’t the only way to run Linux bash scripts. This post describes the different ways to control your Linux bash scripts. These are the most common Linux system signals:

 

Continue Reading →

Linux Signals

These are the most common Linux system signals:

Num        Name                    Job

1              SIGHUP               Process hangs up.

2             SIGINT                 Process Interruption.

3             SIGQUIT              Proces quit or stop.

9             SIGKILL               Process termination.

15           SIGTERM             Process termination.

17           SIGSTOP              Process stopping without termination.

18           SIGTSTP              Process stopping or pausing without termination.

19           SIGCONT             Process continuation after stopping.

Your Linux bash scripts don’t control these signals, you can program your bash script to recognize signals and perform commands based on the signal that was sent.

Stop a Process

To stop a running process, you can press Ctrl+C which generates SIGINT signal to stop the current process running in the shell.

$ sleep 100

Ctrl+C

stop process

Pause a Process

The Ctrl+Z keys generate a SIGTSTP signal to stop any processes running in the shell, and that leaves the program in memory.

$ sleep 100

Ctrl+Z

pause process

The number between brackets which is (1) is the job number.

If try to exit the shell and you have a stopped job assigned to your shell, the bash warns you if you.

The ps command is used to view the stopped jobs.

ps –l

ps -l

In the S column (process state), it shows the traced (T) or stopped (S) states.

If you want to terminate a stopped job you can kill its process by using kill command.

kill processID

Trap Signals

To trap signals, you can use the trap command. If the script receives a signal defined in a trap command, it doesn't perform the default action; instead, it runs the command you specified in the trap.

You can trap signals using the trap command like this:

#!/bin/bash

trap "echo 'Ctrl-C was trapped'" SIGINT

total=1
while [ $total -le 3 ]
do
    echo "#$total"
    sleep 2
    total=$(( $total + 1 ))
done

Every time you press Ctrl+C, the signal is trapped and the message is printed.


If you press Ctrl+C, the echo statement specified in the trap command is printed instead of stopping the script. Cool, right?

Trapping The Script Exit

You can trap the shell script exit using the trap command like this:

#!/bin/bash

# Add the EXIT signal to trap it
trap "echo Goodbye..." EXIT

total=1
while [ $total -le 3 ]
do
    echo "#$total"
    sleep 2
    total=$(( $total + 1 ))
done


When the bash script exits, the Goodbye message is printed as expected.

Also, if the script exits before finishing its work, the EXIT trap is still fired.

Modifying Or Removing a Trap

You can reissue the trap command with new options like this:

#!/bin/bash

trap "echo 'Ctrl-C is trapped.'" SIGINT

total=1
while [ $total -le 3 ]
do
    echo "Loop #$total"
    sleep 2
    total=$(( $total + 1 ))
done

# Trap the SIGINT with a new command
trap "echo 'The trap changed'" SIGINT

total=1
while [ $total -le 3 ]
do
    echo "Second Loop #$total"
    sleep 1
    total=$(( $total + 1 ))
done


Notice how the script manages the signal after changing the signal trap.

You can also remove a trap by using two dashes followed by the signal name: trap -- SIGNAL

#!/bin/bash

trap "echo 'Ctrl-C is trapped.'" SIGINT

total=1
while [ $total -le 3 ]
do
    echo "#$total"
    sleep 1
    total=$(( $total + 1 ))
done

# Remove the trap
trap -- SIGINT

echo "I just removed the trap"

total=1
while [ $total -le 3 ]
do
    echo "Loop #2 #$total"
    sleep 2
    total=$(( $total + 1 ))
done

Notice how the script processes the signal before removing the trap and after removing the trap.

$ ./myscript

Ctrl+C


The first Ctrl+C was trapped and the script continued running, while the second one exited the script because the trap had been removed.

Running Linux Bash Scripts in Background Mode

If you look at the output of the ps command, you will see processes that run in the background without being tied to the terminal.

We can do the same with our script: just place an ampersand symbol (&) after the command.

#!/bin/bash

total=1
while [ $total -le 3 ]
do
    sleep 2
    total=$(( $total + 1 ))
done

$ ./myscript &


Once you’ve done that, the script runs in a separate background process on the system; the job number is shown between square brackets, followed by the process ID.

When the script finishes, you will see a completion message on the terminal.

Notice that while the background process is running, it still uses your terminal for STDOUT and STDERR messages, so if an error occurs or the script prints output, you will see it on the terminal.
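If you don't want a background job writing to your terminal, you can redirect its output when you start it; a minimal sketch (the log file names are just placeholders):

$ ./myscript > out.log 2> err.log &

Here STDOUT goes to out.log and STDERR goes to err.log, so the terminal stays clean.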


The background process will exit if you exit your terminal session.

So what if you want the script to keep running even if you close the terminal?

Running Scripts without a Hang-Up

You can keep your Linux bash scripts running in a background process even after you exit the terminal session by using the nohup command.

The nohup command blocks any SIGHUP signals sent to the process, which prevents it from exiting when you close your terminal.

$ nohup ./myscript &


After running a command with nohup, you won't see any output or errors from your script on the terminal; the output and error messages are sent to a file called nohup.out.

Note: when you run multiple commands with nohup from the same directory, their output is appended to the same nohup.out file, so the messages end up mixed together.
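To keep the output of each script separate, you can redirect it yourself when using nohup; a small sketch (the log file name is just a placeholder):

$ nohup ./myscript > myscript.log 2>&1 &

When you redirect the output explicitly like this, nohup won't create a nohup.out file at all.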

Viewing Jobs

To view the current jobs, you can use the jobs command.

#!/bin/bash

total=1
while [ $total -le 3 ]
do
    echo "#$total"
    sleep 5
    total=$(( $total + 1 ))
done

Then run it.

$ ./myscript

Then press Ctrl+Z to stop the script.


Now run the same bash script, but in the background using the ampersand symbol, and redirect the output to a file just for clarification:

$ ./myscript > outfile &


The jobs command shows the stopped and the running jobs.

jobs -l

Use the -l parameter to also view the process ID.

Restarting Stopped Jobs

The bg command is used to restart a job in background mode.

$ ./myscript

Then press Ctrl+Z

Now it is stopped.

$ bg


After using the bg command, it is now running in background mode.

If you have multiple stopped jobs, you can do the same by specifying the job number to the bg command.

The fg command is used to restart a job in foreground mode.

$ fg 1
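Both commands also accept a job spec, so if you have several jobs you can pick the one you want; a quick sketch (the job numbers are whatever the jobs command reports):

$ bg %2
$ fg %1

The %2 resumes job number 2 in the background, and %1 brings job number 1 back to the foreground.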

Scheduling a Job

The Linux system provides two ways to run a bash script at a predefined time:

  • at command.
  • cron table.

The at command

This is the format of the command:

at [-f filename] time

The at command can accept different time formats:

  • Standard time format like 10:15.
  • An AM/PM indicator like 11:15PM.
  • A specifically named time like now, midnight.

You can also include a specific date or time offset, using several different formats (see the examples after this list):

  • A standard date format, such as MMDDYY or DD.MM.YY.
  • A text date, such as June 10 or Feb 12, with or without the year.
  • Now + 25 minutes.
  • 05:15AM tomorrow.
  • 11:15 + 7 days.
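For example, here are a few sketched invocations that combine the -f option with these time formats (myscript is the same sample script used throughout this post):

$ at -f ./myscript 11:15PM
$ at -f ./myscript now + 25 minutes
$ at -f ./myscript 05:15AM tomorrow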

We don’t want to dig deep into the at command here, so for now let’s keep it simple.

$ at -f ./myscript now


By default, the output of an at job is emailed to the user if the system has email configured. The -M parameter suppresses that email, so the output of the at job is discarded instead.

To list the pending jobs, use the atq command.


Remove Pending Jobs

To remove a pending job, use the atrm command:

$ atrm 18


You must specify the job number to the atrm command.

Scheduling Scripts

What if you need to run a script at the same time every day or every month or so?

You can use the crontab command to schedule jobs.

To list the scheduled jobs, use the -l parameter:

$ crontab -l

The format for a crontab entry is:

minute hour dayofmonth month dayofweek command

So if you want to run a command daily at 10:30, type the following:

30 10 * * * command

The wildcard character (*) indicates that cron will execute the command at 10:30 every day of every month.

To run a command at 5:30 PM every Tuesday, you would use the following:

30 17 * * 2 command

The day of the week ranges from 0 to 6, where 0 is Sunday and 6 is Saturday.

To run a command at 10:00 on the first day of every month:

00 10 1 * * command

The day of the month is from 1 to 31.

Let’s keep it simple for now and we will discuss the cron in great detail in future posts.

To edit the cron table, use the -e parameter like this:

crontab -e

Then type your command like the following:

30 10 * * * /home/likegeeks/Desktop/myscript

This will schedule our script to run at 10:30 every day.
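Because cron jobs run without a terminal attached, it's often handy to capture the script's output in a log file; a minimal sketch (the log path is just a placeholder):

30 10 * * * /home/likegeeks/Desktop/myscript >> /tmp/myscript.log 2>&1

This appends both STDOUT and STDERR to /tmp/myscript.log each time the job runs.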

Note: sometimes you may see an error that says Resource temporarily unavailable.

All you have to do is this:

$ rm -f /var/run/crond.pid

You must be the root user to do this.

Just that simple!

You can use one of the pre-configured cron script directories like:

/etc/cron.hourly

/etc/cron.daily

/etc/cron.weekly

/etc/cron.monthly

Just put your bash script file in any of these directories and it will run periodically.
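For example, a small sketch of dropping the sample script into the daily directory (the script must be executable, and on some distributions run-parts skips files that have an extension in the name):

$ sudo cp myscript /etc/cron.daily/myscript
$ sudo chmod +x /etc/cron.daily/myscript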

Starting Scripts at Login

In the previous posts, we've talked about startup files; I recommend you review those posts:

$HOME/.bash_profile

$HOME/.bash_login

$HOME/.profile

To run your scripts at login, place your code in $HOME/.bash_profile.
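For instance, a minimal sketch, assuming the same script path used earlier in this post, is to append one line to the end of the file:

# Run my script at every login
/home/likegeeks/Desktop/myscript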

Starting Scripts When Opening the Shell

OK, what about running our bash script when the shell opens? Easy.

Add your script's command to the $HOME/.bashrc file.

Now whenever you open a new shell window, it will execute that command.

I hope you find the post useful. Keep coming back.

Thank you.

likegeeks.com
