Archive | GNU/Linux Tips

Prevent SQL Injection

Before we talk about how to prevent SQL injection, we should understand the impact of an SQL injection attack, which is one of the most dangerous attacks on the web. An attacker can steal your data, or even take over the whole web server, through a single SQL injection vulnerability. I wrote this post to show you how to prevent SQL injection; if you want to learn more about SQL injection itself and its types, a quick search will give you plenty of material. The solution is to sanitize the request parameters coming from the user. Keep in mind that the approach I'll share with you is not like most solutions on the web, which go through every SQL statement and clean the request variables one by one. My solution prevents SQL injection without touching your CMS files.


Well, someone using PHP might say that this is easy: just use a function like mysql_real_escape_string or mysqli_real_escape_string.

But those functions work on a single string value; what about the request arrays, which can be multidimensional?

We need to iterate over the array items recursively.

So what we will do is prevent SQL injection for both single and multidimensional request arrays.

Solution

This code does the magic for both single and multidimensional arrays using PHP:

if (!function_exists("clean")) {

// If magic_quotes_gpc is enabled, strip the slashes it added to the request data

if (get_magic_quotes_gpc()) {

function magicquotes_Stripslashes(&$value, $key) {

$value = stripslashes($value);

}

$gpc = array(&$_COOKIE, &$_REQUEST);

array_walk_recursive($gpc, 'magicquotes_Stripslashes');

}

function clean(&$value, $key) {

//here the clean process for every array item

// on PHP 7+ the mysql extension is gone; use mysqli_real_escape_string($link, $value) instead (it needs the connection link as its first argument)

$value = mysql_real_escape_string($value);

}

}

$req = array(&$_REQUEST);

array_walk_recursive($req, 'clean');

The PHP function used to walk through the request multidimensional array is array_walk_recursive.

Just put this code at the top of your site or header file, right after connecting to the database; that file could be up.php, header.php, or something similar.

If you run this code before the connection is established, it will throw an error, because mysql_real_escape_string needs an open SQL connection.

If you are using PHP 7, you'll notice that the old MySQL extension has been removed, so to make the above code work you need to replace mysql_real_escape_string with mysqli_real_escape_string and pass it your connection link as the first argument.

One final thing I have to mention regarding your code: you should keep your SQL parameters quoted in all of your queries, like this:

mysql_query("select * from users where email='$email' order by id");

Notice how the variable $email is quoted.

Without quoting your input as in the statement above, this escaping-based prevention won't work, because an unquoted parameter can still be injected without using any quote characters.


Expect command and how to automate shell scripts like magic

In the previous post, we talked about writing practical shell scripts and saw how easy it is. Today we are going to talk about a tool that does magic for our shell scripts: the Expect command, or the Expect scripting language. Expect is a language for talking to interactive programs or scripts that require user input. It works by waiting for specific output (expecting it) and then sending the response, without any user interaction; you can think of it as a robot that automates your scripts. If the Expect command is not installed on your system, you can install it using the following command:


$ apt-get install expect

Or on Red Hat based systems like CentOS:

$ yum install expect

Expect Command

Before we write our first Expect script, let's look at the basic Expect commands used for interaction:

spawn       Starts a script or a program.

expect      Waits for output from the program.

send        Sends a reply to the program.

interact    Allows you to interact with the program.

  • The spawn command is used to start a script or a program, such as the shell, FTP, Telnet, SSH, SCP, and so on.
  • The send command is used to send a reply to a script or a program.
  • The expect command waits for output from the program.
  • The interact command hands control back to you so you can interact with the program directly.

We are going to write a shell script that asks some questions, and then an Expect script that answers those questions.

First, the shell script will look like this:

#!/bin/bash

echo "Hello, who are you?"

read REPLY

echo "Can I ask you some questions?"

read REPLY

echo "What is your favorite topic?"

read REPLY

Now we will write the Expect scripts that will answer this automatically:

#!/usr/bin/expect -f

set timeout -1

spawn ./questions

expect "Hello, who are you?\r"

send -- "Im Adam\r"

expect "Can I ask you some questions?\r"

send -- "Sure\r"

expect "What is your favorite topic?\r"

send -- "Technology\r"

expect eof

The first line defines the path of the Expect interpreter, which is #!/usr/bin/expect.

On the second line of code, we disable the timeout. Then start our script using spawn command.

We can use spawn to run any program we want or any other interactive script.

The remaining lines are the Expect script that interacts with our shell script.

The last line, expect eof, waits for the end-of-file marker, which signals the end of the interaction.

Now showtime: make the Expect script executable and run our answer bot.

$ chmod +x ./answerbot

$./answerbot

expect command

Cool!! All questions are answered as we expect.

If you get errors about the location of Expect command you can get the location using the which command:

$ which expect

We did not interact with our script at all; the Expect program did the job for us.

The above method can be applied to any interactive script or program. Although this Expect script is easy to write, writing Expect scripts by hand can be a little tricky for some people; luckily, there is a tool that writes them for you.

Using autoexpect

To build an Expect script automatically, you can use the autoexpect command.

autoexpect works like expect, but it builds the automation script for you. You pass the script you want to automate to autoexpect as a parameter, answer the questions yourself once, and your answers are saved in a file.

$ autoexpect ./questions

autoexpect command

A file called script.exp is generated; it contains the same code we wrote above, plus some additions that we will leave alone for now.

autoexpect script

If you run the auto generated file script.exp, you will see the same answers as expected:

autoexpect script execution

Awesome!! That was super easy.

Many commands produce output that changes from run to run, as is the case with FTP programs, so the Expect script may fail or get stuck. To solve this problem, you can use wildcards for the changing parts of the output to make your script more flexible.

Working with Variables

The set command is used to define variables in Expect scripts like this:

set MYVAR 5

To access the variable, precede it with $, like this: $MYVAR

To define command line arguments in Expect scripts, we use the following syntax:

set MYVAR [lindex $argv 0]

Here we define a variable MYVAR which equals the first passed argument.

You can get the first and the second arguments and store them in variables like this:

set my_name [lindex $argv 0]

set my_favorite [lindex $argv 1]

Let’s add variables to our script:

#!/usr/bin/expect -f

set my_name [lindex $argv 0]

set my_favorite [lindex $argv 1]

set timeout -1

spawn ./questions

expect "Hello, who are you?\r"

send -- "Im $my_name\r"

expect "Can I ask you some questions?\r"

send -- "Sure\r"

expect "What is your favorite topic?\r"

send -- "$my_favorite\r"

expect eof

Now try to run the Expect script with some parameters to see the output:

$ ./answerbot SomeName Programming

expect command variables

Awesome!! Now our automated Expect script is more dynamic.

Conditional Tests

You can write conditional tests using braces like this:

expect {

"something" { send -- "send this\r" }

"*another" { send -- "send another\r" }

}

We are going to change our shell script so that it can print two different questions, and we will change our Expect script to handle both conditions.

We are going to emulate the two possibilities with the following script:

#!/bin/bash

let number=$RANDOM

if [ $number -gt 25000 ]

then

echo "What is your favorite topic?"

else

echo "What is your favorite movie?"

fi

read REPLY

A random number is generated every time you run the script, and based on that number, the script prints one of the two different questions.

Let's write an Expect script that deals with both cases.

#!/usr/bin/expect -f

set timeout -1

spawn ./questions

expect {

"*topic?" { send -- "Programming\r" }

"*movie?" { send -- "Star wars\r" }

}

expect eof

expect command conditions

Very clear. If the script prints the topic question, the Expect script sends “Programming”, and if it prints the movie question, the Expect script sends “Star wars”. Isn't that cool?

If else Conditions

You can use if/else clauses in expect scripts like this:

#!/usr/bin/expect -f

set NUM 1

if { $NUM < 5 } {

puts "\nSmaller than 5\n"

} elseif { $NUM > 5 } {

puts "\nBigger than 5\n"

} else {

puts "\nEquals 5\n"

}

expect command if command

Note: The opening brace must be on the same line.

While Loops

While loops in expect language must use braces to contain the expression like this:

#!/usr/bin/expect -f

set NUM 0

while { $NUM <= 5 } {

puts "\nNumber is $NUM"

set NUM [ expr $NUM + 1 ]

}

puts ""

expect command while loop

For Loops

To make a for loop in expect, three fields must be specified, like the following format:

#!/usr/bin/expect -f

for {set NUM 0} {$NUM <= 5} {incr NUM} {

puts "\nNUM = $NUM"

}

puts ""

expect command for loop

User-defined Functions

You can define a function using proc like this:

proc myfunc { TOTAL } {

set TOTAL [expr $TOTAL + 1]

return "$TOTAL"

}

And you can use them after that.

#!/usr/bin/expect -f

proc myfunc { TOTAL } {

set TOTAL [expr $TOTAL + 1]

return "$TOTAL"

}

set NUM 0

while {$NUM <= 5} {

puts "\nNumber $NUM"

set NUM [myfunc $NUM]

}

puts ""

expect command user-defined functions

Interact Command

Sometimes your Expect script needs sensitive information, such as a password, that you don't want to hard-code or share with other users of the script, so you want the script to take that password from you interactively and then continue the automation normally.

The interact command reverts the control back to the keyboard.

When this command is executed, Expect will start reading from the keyboard.

This shell script will ask about the password as shown:

#!/bin/bash

echo "Hello, who are you?"

read REPLY

echo "What is you password?"

read REPLY

echo "What is your favorite topic?"

read REPLY

Now we will write the Expect script that will prompt for the password:

#!/usr/bin/expect -f

set timeout -1

spawn ./questions

expect "Hello, who are you?\r"

send -- "Hi Im Adam\r"

expect "*password?\r"

interact ++ return

send "\r"

expect "*topic?\r"

send -- "Technology\r"

expect eof

interact command

After you type your password, type ++ and control returns from the keyboard back to the script.

The Expect approach has been ported to many languages, such as C#, Java, Perl, Python, Ruby, and shell, with almost the same concepts and syntax, thanks to its simplicity and usefulness.

Expect is used in quality assurance, in network measurements such as echo response time, for automating file transfers and updates, and for many other tasks.

I hope you are now supercharged with some of the most important aspects of the Expect command and autoexpect, and ready to use them to automate your tasks in a smarter way.

Thank you.


How to write practical shell scripts

In the last post, we talked about regular expressions and saw how to use them with sed and awk for text processing; before that we covered the Linux sed and awk commands themselves. During the series, we wrote small shell scripts but didn't mix things up, so I think we should take a small step further and write a useful shell script. The scripts in this post will help you sharpen your script-writing skills. You can send messages to someone by phone or email, but one method, not commonly used anymore, is sending a message directly to the user's terminal. We are going to build a bash script that sends a message to a user who is logged into the Linux system. For this simple shell script, only a few commands are required; most of them are common and have been covered in our shell scripting series, so you can review the previous posts.


First, we need to know who is logged in. This can be done using the who command which retrieves all logged in users.

$ who

shell scripts who command

To send a message, you need the username and their current terminal.

You need to know if messages are allowed or not for that user using the mesg command.

$ mesg

shell scripts mesg command

If the result shows “is y” that means messaging is permitted. If the result shows “is n”, that means messaging is not permitted.

To check the message status of any logged-in user, use the who command with the -T option.

$ who -T

If you see a dash (-), messages are turned off; if you see a plus sign (+), messages are enabled.

To allow messages, type mesg command with the “y” option like this:

$ mesg y

shell scripts allow messages

Sure enough, it shows “is y” which means messages are permitted for this user.

Of course, we need another user to communicate with, so in my case I'm going to connect to my PC over SSH with a second user while staying logged in with my own user; that gives us two users logged onto the system.

Let’s see how to send a message.

Write Command

The write command is used to send messages between users using the username and current terminal.

Users who are logged into a graphical environment (KDE, GNOME, Cinnamon, or any other) can't receive messages; the recipient must be logged into a terminal.

We will send a message to testuser user from my user likegeeks like this:

$ write testuser pts/1

shell scripts write command

Type the write command followed by the user and the terminal and hit Enter.

When you hit Enter, you can start typing your message. After finishing the message, you can send it by pressing the Ctrl+D key combination, which is the end-of-file signal. I recommend reviewing the post about signals and jobs.

shell scripts receive message

The receiver can see which user on which terminal sent the message; EOF means that the message is finished.

I think now we have all the parts to build our shell script.

Creating The Send Script

Before we create our shell script, we need to determine whether the user we want to message is currently logged on to the system; this can be done using the who command.

logged=$(who | grep -i -m 1 $1 | awk '{print $1}')

We find the logged-in user with the grep command; the -m 1 option is used in case there are multiple sessions open for the same user.

If the user is not online, the grep command returns nothing.

This output is piped to the awk command, which returns only the first field (the username). The final output from awk is stored in the variable logged.

Then we need to check the variable if it contains something or not:

if [ -z $logged ]

then

echo "$1 is not logged on."

echo "Exit"

exit

fi

I recommend reading the post about the if statement and how to use it in bash scripts.

shell scripts check logged user

The logged variable is tested to check whether it is empty.

If it is empty, the script prints a message and terminates.

If the user is logged in, the logged variable contains the username.

Checking If The User Accepts Messages

To check if messages are allowed or not, use the who command with -T option.

check=$(who -T | grep -i -m 1 $1 | awk '{print $2}')

if [ "$check" != "+" ]

then

echo "$1 has messaging disabled."

echo "Exit"

exit

fi

shell script check message allowed

Notice that we use the who command with -T; it shows a (+) beside the username if messaging is permitted and a (-) if it is not.

Finally, we check the messaging indicator and exit if it is not set to the plus sign (+).

Checking If Message Was Included

You can check if the message was included or not like this:

if [ -z $2 ]

then

echo "Message not found"

echo "Exit"

exit

fi

Getting the Current Terminal

Before we send a message, we need to get the user's current terminal and store it in a variable.

terminal=$(who | grep -i -m 1 $1 | awk '{print $2}')

Then we can send the message:

echo $2 | write $logged $terminal

Now we can test the whole shell script to see how it goes:

$ ./senderscript likegeeks welcome

Let’s see the other shell window:

shell script send message

Good! You can now send a simple one-word message.

Sending a Long Message

If you try to send more than one word:

$ ./senderscript likegeeks welcome to shell scripting

shell script oneword message

It didn’t work. Only the first word of the message is sent.

To fix this problem, we will use the shift command with the while loop.

shift

while [ -n "$1" ]

do

message=$message' '$1

shift

done

And one last thing needs fixing: the write line must send the whole assembled message variable.

echo $message | write $logged $terminal

So now the whole script should be like this:

#!/bin/bash

logged=$(who | grep -i -m 1 $1 | awk '{print $1}')

if [ -z $logged ]

then

echo "$1 is not logged on."

echo "Exit"

exit

fi

check=$(who -T | grep -i -m 1 $1 | awk '{print $2}')

if [ "$check" != "+" ]

then

echo "$1 has messaging disabled."

echo "Exit"

exit

fi

if [ -z $2 ]

then

echo "Message not found"

echo "Exit"

exit

fi

terminal=$(who | grep -i -m 1 $1 | awk '{print $2}')

shift

while [ -n "$1" ]

do

message=$message' '$1

shift

done

echo $message | write $logged $terminal

If you try now:

$ ./senderscript likegeeks welcome to shell scripting

shell script complete message

Awesome!! It worked. Again, the point isn't really the message-sending script itself; the main goal is to review our shell scripting knowledge and see how all the parts we've learned work together.

Monitoring Disk Space

Let's build a script that reports the ten largest directories.

If you add the -s option to the du command, it shows a summarized total for each argument.

$ du -s /var/log/

The -S option shows the total for each directory separately, excluding the sizes of its subdirectories.

$ du -S /var/log/

shell script du command

You should use the sort command to sort the results generated by the du command to get the largest directories like this:

$ du -S /var/log/ | sort -rn

shell script sort command

The -n option sorts numerically, and the -r option reverses the order so the biggest directories come first.

To keep only the top ten entries and number them, we use two sed commands. The first one deletes everything from line 11 onward (11,$D) and prints a line number before each remaining line (=):

sed '{11,$D; =}' |

The second one joins each number line with the line that follows it; N pulls the next line into the pattern space, and the substitution replaces the newline with a space:

sed 'N; s/\n/ /' |

Then we clean up the output using the awk command, adding a colon and tabs so it looks much better:

awk '{printf $1 ":" "\t" $2 "\t" $3 "\n"}'

$ du -S /var/log/ |

sort -rn |

sed '{11,$D; =}' |

# pipe the numbered output into another sed command to join the lines

sed 'N; s/\n/ /' |

# formatted printing using printf

awk '{printf $1 ":" "\t" $2 "\t" $3 "\n"}'

shell script format output with sed and awk

Suppose we have a variable called MY_DIRECTORIES that holds two directories:

MY_DIRECTORIES="/home /var/log"

We will iterate over each directory in the MY_DIRECTORIES variable and get its disk usage using the du command.

So the shell script will look like this:

#!/bin/bash

MY_DIRECTORIES="/home /var/log"

echo "Top Ten Directories"

for DIR in $MY_DIRECTORIES

do

echo "The $DIR Directory:"

du -S $DIR 2>/dev/null |

sort -rn |

sed '{11,$D; =}' |

# pipe the numbered output into another sed command to join the lines

sed 'N; s/\n/ /' |

# formatted printing using printf

awk '{printf $1 "\t" "\t" $2 "\t" $3 "\r\n"}'

done

exit

shell script monitor disk usage

Good!! Both directories /home and /var/log are shown on the same report.

You can also filter the files: instead of calculating the consumption of all files, you can calculate it only for a specific extension such as *.log.
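For example, here is a minimal sketch (assuming GNU find and du) that totals only the *.log files under /var/log; the -c option adds a grand-total line and tail keeps just that line:

# total disk usage of *.log files only
$ find /var/log -type f -name "*.log" -print0 | du -ch --files0-from=- | tail -n 1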

One thing I have to mention here: on production systems you shouldn't rely on ad-hoc disk space reports; you should use disk quotas instead.

The quota package is specialized for that, but here we are learning how bash scripts work.

Again, the shell scripts introduced here are meant to show you how shell scripting works; there are a ton of ways to implement any task in Linux.

That's it for this post. I tried to keep it short and make everything as simple as possible; I hope you like it.

Keep coming back. Thank you.


31+ Examples for sed Linux Command in Text Manipulation

In the previous post, we talked about bash functions and how to use them directly from the command line, and we saw some other cool stuff. Today we will talk about a very useful tool for string manipulation: the sed Linux command. Sed is used to work with text files such as log files, configuration files, and other text files. In this post, we are going to focus on the sed command for text manipulation, which is a very important step in our bash scripting journey. Linux provides several tools for text processing, and sed is one of them. We will go through 31+ examples, with pictures showing the output of each one.


Unlike interactive text editors such as nano, sed is a stream editor: it edits data based on the rules you provide. You can use it like this:

$ sed options file

You are not limited to manipulating files with sed; you can apply it directly to STDIN like this:

$ echo "Welcome to LikeGeeks page" | sed 's/page/website/'

sed Linux command

The s command replaces the first pattern with the second. In this case, the word “page” was replaced with the word “website”, so the result is as shown.

The above example was a very basic example to demonstrate the tool. We can use sed Linux command to manipulate files as well.

This is our file:

sed manipulate file

$ sed 's/test/another test/' ./myfile

The results are printed to the screen immediately; you don't have to wait for the whole file to be processed.

If your file is big enough, you will see results appear before the processing is finished.

The sed command doesn't update your data in place; it only sends the changed text to STDOUT, and the file itself remains untouched. If you need to overwrite the existing content, you can check our previous post about redirections.
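As a quick sketch of that idea: redirect sed's output to a new file and then replace the original, or use GNU sed's -i option (the temporary file name below is just an example). Don't redirect straight back onto the input file, because the shell truncates it before sed reads it.

$ sed 's/test/another test/' myfile > myfile.tmp && mv myfile.tmp myfile

$ sed -i 's/test/another test/' myfile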

Using Multiple sed Linux Commands in The Command Line

To run multiple sed commands, you can use the -e option like this:

$ sed -e 's/This/That/; s/test/another test/' ./myfile

sed multiple commands

The sed commands must be separated by a semicolon, with no spaces between them.

Also, you can put each command on its own line inside a single-quoted string, like this:

$ sed -e '

> s/This/That/

> s/test/another test/' myfile

sed separate commands

The same result, no big deal.

Reading Commands From a File

You can save your sed commands in a file and use them by specifying the file using -f option.

$ cat mycommands

s/This/That/

s/test/another test/

$ sed -f mycommands myfile

sed read commands from file

Substituting Flags

Look at the following example carefully:

$ cat myfile

$ sed 's/test/another test/' myfile

sed substitute flag

The above result shows that only the first occurrence in each line is replaced. To change that behaviour, use one of the following substitution flags.

The flags are written like this:

s/pattern/replacement/flags

There are four types of substitution flags:

g: replace all occurrences.
A number: replace only that occurrence of the pattern in each line.
p: print the line when a substitution was made.
w file: write the results of the substitution to a file.

You can limit your replacement by specifying the occurrence number that should be replaced like this:

$ sed 's/test/another test/2' myfile

sed number flag

As you can see, only the second occurrence on each line was replaced.

The g flag means global, which makes the replacement for all occurrences:

$ sed 's/test/another test/g' myfile

sed global flag

The p flag prints each line where a substitution was made; combined with the -n option, sed prints only the modified lines.

$ cat myfile

$ sed -n 's/test/another test/p' myfile

sed supress lines

The w flag saves the output to a specified file:

$ sed 's/test/another test/w output' myfile

sed send output to file

The output is printed on the screen, but the matching lines are saved to the output file.

Replace Characters

Suppose that you want to search for bash shell and replace it with csh shell in the /etc/passwd file using sed, well, you can do it easily:

$ sed 's/\/bin\/bash/\/bin\/csh/' /etc/passwd

Oh!! that looks terrible.

Luckily, there is another way to achieve that. You can use the exclamation mark (!) as string delimiter like this:

$ sed 's!/bin/bash!/bin/csh!' /etc/passwd

Now it’s easier to read.

Limiting sed

The sed command processes your entire file by default. However, you can limit sed to specific lines; there are two ways:

  • A range of lines.
  • A pattern that matches a specific line.

You can type one number to limit it to a specific line:

$ sed '2s/test/another test/' myfile

sed restricted

Only line two is modified.

What about using a range of lines:

$ sed '2,3s/test/another test/' myfile

sed replace range of lines

Also, we can start from a line to the end of the file:

$ sed '2,$s/test/another test/' myfile

sed replace to the end

Or you can use a pattern like this:

$ sed '/likegeeks/s/bash/csh/' /etc/passwd

sed pattern match

Awesome!!

You can use regular expressions to write this pattern to be more generic and useful.
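For instance, here is a small sketch assuming the usual /etc/passwd layout, where the login name is the first colon-separated field; the address is a regular expression that matches any user name starting with “like”:

$ sed '/^like[a-z]*:/s!/bin/bash!/bin/csh!' /etc/passwd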

Delete Lines

To delete lines, the delete (d) flag is your friend.

The delete flag deletes the text from the stream, not the original file.

$ sed '2d' myfile

sed delete line

Here we delete the second line only from myfile.

What about deleting a range of lines?

$ sed '2,3d' myfile

sed delete multiple line

Here we delete a range of lines, the second and the third.

Another type of ranges:

$ sed '3,$d' myfile

sed delete to the end

Here we delete from the third line to the end of the file.

All these examples never modify your original file.

$ sed '/test 1/d' myfile

sed deletepattern match

Here we use a pattern; every line that matches it is deleted, which in this file is the first line.

If you need to delete a range of lines, you can use two text patterns like this:

$ sed '/second/,/fourth/d' myfile

sed delete range of lines

The lines from the first match of “second” to the first match of “fourth” are deleted.

Insert and Append Text

You can insert or append text lines using the following flags:

  • The (i) flag inserts text before a line.
  • The (a) flag appends text after a line.

$ echo "Another test" | sed 'i\First test '

sed insert text

Here the text is added before the specified line.

$ echo "Another test" | sed 'a\First test '

sed append

Here the text is added after the specified line.

Well, what about adding text in the middle?

Easy, look at the following example:

$ sed '2i\This is the inserted line.' myfile

sed insert line

And the appending works the same way, but look at the position of the appended text:

$ sed '2a\This is the appended line.' myfile

sed append line

The same flags are used but with a location of insertion or appending.

Modifying Lines

To modify a specific line, you can use the (c) flag like this:

$ sed '3c\This is a modified line.' myfile

sed modify line

You can use a regular expression pattern, and all lines that match that pattern will be modified.

$ sed '/This is/c Line updated.' myfile

sed pattern match

Transform Characters

The transform flag (y) works on characters like this:

$ sed 'y/123/567/' myfile

sed transform character

The transformation is applied to all data and cannot be limited to a specific occurrence.

Print Line Numbers

You can print line numbers using the (=) sign like this:

$ sed '=' myfile

sed line numbers

However, by combining -n with the equal sign, the sed command displays only the numbers of the lines that contain a match.

$ sed -n '/test/=' myfile

sed hide lines

Read Data From a File

You can use the (r) flag to read data from a file.

You can specify a line number or a text pattern to choose where the file's contents should be inserted.

$ cat newfile

$ sed '3r newfile' myfile

sed read data from file

The content is just inserted after the third line as expected.

And this is using a text pattern:

$ sed '/test/r newfile' myfile

sed read match pattern

Cool right?

Useful Examples

We have a file that contains text with a placeholder, and another file that contains the data that should be filled in at that placeholder.

We will use the (r) and (d) flags to do the job.

The word DATA in the first file is a placeholder for real content that is stored in another file (newfile in this example).

We will replace it with the actual content:

$ sed '/DATA>/ {

r newfile

d}' myfile

sed repalce placeholder

Awesome!! As you can see, the placeholder location is filled with the data from the other file.

This is just a very small intro about sed command. Actually, sed Linux command is another world by itself.

The only limitation is your imagination.

I hope you enjoyed what we've introduced today about string manipulation using the sed command.

Thank you.


Bash Scripting Part6 – Create and Use Bash Functions

Before we talk about bash functions, let's discuss this situation: when writing bash scripts, you'll find yourself using the same code in multiple places. If you get tired of writing the same lines of code again and again in your bash script, it would be nice to write the block of code once and call it anywhere in your script. The bash shell allows you to do just that with functions. Bash functions are blocks of code that you can reuse anywhere in your code; anytime you want to use such a block in your script, you simply type the name given to the function. We are going to talk about how to create your own bash functions and how to use them in shell scripts.


Creating a function

You can create a function like this:

function functionName {

}

Or like this:

functionName() {

}

The parentheses in the second form are left empty; they simply mark the name as a function. In both forms, values are passed to the function when you call it, as we will see in the Passing Parameters section.

Using Functions

#!/bin/bash

function myfunc {

echo "Using functions"

}

total=1

while [ $total -le 3 ]

do

myfunc

total=$(( $total + 1 ))

done

echo "Loop finished"

myfunc

echo "End of the script"

Here we’ve created a function called myfunc and in order to call it, we just typed its name.

bash functions

The function can be called as many times as you want.

Notice: what happens if you try to use a function before it is defined?

#!/bin/bash

total=1

while [ $total -le 3 ]

do

myfunc

total=$(( $total + 1 ))

done

echo "Loop End"

function myfunc {

echo "Using function ..."

}

echo "End of the script"

bash functions call before declare

Oh, it's an error, because at that point no such function exists yet; a function must be defined before it is called.

Another note: bash function names must be unique; otherwise, a new definition silently overrides the old one without any error.

#!/bin/bash

function myfunc {

echo "The first function definition"

}

myfunc

function myfunc {

echo "The second function definition"

}

myfunc

echo "End of the script"

bash functions override definition

As you can see, the second function definition takes control from the first one without any error so take care when defining functions.

Using the return Command

The return command returns an integer from the function.

There are two ways of using return command; the first way is like this:

#!/bin/bash

function myfunc {

read -p "Enter a value: " value

echo "adding value"

return $(( $value + 10 ))

}

myfunc

echo "The new value is $?"

bash functions return command

The myfunc function adds 10 to the $value variable and returns the sum, which we then display through the $? variable.

Don't execute any other command before reading the function's value, because the $? variable holds the exit status of the last executed command.
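Here is a minimal sketch of that pitfall (the function and values are just for illustration):

#!/bin/bash

function myfunc {
return 15
}

myfunc
result=$?                  # capture the return value immediately
echo "Some other command"  # this echo replaces $? with its own status (0)
echo "Captured: $result, current \$?: $?"

The captured value is still 15, but $? itself now reflects the last echo, not the function.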

This return method only handles integers (values from 0 to 255). What about returning strings?

Using Function Output

The second way of returning a value from a bash function is command substitution. This way, you can return anything from the function.

#!/bin/bash

function myfunc {

read -p "Enter a value: " value

echo $(( $value + 10 ))

}

result=$( myfunc)

echo "The value is $result"

bash functions output

Passing Parameters

We can treat bash functions as small reusable snippets, and that's OK, but we also want functions to work like an engine: we give them something, and they return a result based on what we provide.

You can use the positional parameters to process the arguments passed to the function: the script name stays in the $0 variable, while the arguments passed to the function are available as $1, $2, $3, and so on.

You can get the number of passed parameters to the function using the ($#) variable.

We pass parameters like this:

myfunc $val1 10 20

The following example shows how to use the ($#) variable:

#!/bin/bash

function addnum {

if [ $# -gt 2 ]

then

# If more than 2 parameters were passed

echo "Incorrect parameters passed"

else

# Otherwise add them

echo $(( $1 + $2 ))

fi

}

echo -n "Adding 10 and 15: "

value=$(addnum 10 15)

echo $value

echo -n "Adding three numbers: "

value=$(addnum 10 15 20)

echo $value

bash functions pass parameters

The addnum function checks how many parameters were passed to it.

If more than two are passed, it prints “Incorrect parameters passed”; otherwise, it adds the first two parameters and echoes the sum. That's why the second call, with three numbers, prints the error message.

Note that the parameters passed to the script are not automatically visible inside the function; if you try to use them there directly, it fails:

#!/bin/bash

function myfunc {

echo $(( $1 + $2 + $3 + $4))

}

if [ $# -eq 4 ]

then

value=$( myfunc)

echo "Total= $value"

else

echo "Passed parameters like this: myfunc a b c d"

fi

bash functions unknown parameters

Instead, you have to send them to the function like this:

#!/bin/bash

function myfunc {

echo $(( $1 + $2 + $3 + $4))

}

if [ $# -eq 4 ]

then

value=$(myfunc $1 $2 $3 $4)

echo "Total= $value"

else

echo "Passed parameters like this: myfunc a b c d"

fi

bash functions parameters

Now it works!!

Processing Variables in Bash Functions

Every variable we use has a scope; the scope is the variable's visibility within your script.

You can define two types of variables:

  • Global
  • Local

Global Variables

Global variables are visible and valid anywhere in the bash script; you can even read their values from inside a function.

If you declare a global variable within a function, you can read its value outside the function as well.

Any variable you declare is global by default, so if you define a variable outside a function, you can use it inside the function without problems:

#!/bin/bash

function myfunc {

input=$(( $input + 10 ))

}

read -p "Enter a number: " input

myfunc

echo "The new value is: $input"

bash functions global variables

If you change the variable's value inside the function, the value is also changed outside the function.

How do you avoid that? Use local variables.

Local Variables

If a variable will only be used inside the function, you can declare it as a local variable using the local keyword, like this:

local tmp=$(( $val + 10 ))

So if you have two variables with the same name, one inside the function and one outside, they won't affect each other.

#!/bin/bash

function myfunc {

local tmp=$(( $val + 10 ))

echo "The Temp from inside function is $tmp"

}

tmp=4

myfunc

echo "The temp from outside is $tmp"

bash functions local variables

When you use the $tmp variable inside the myfunc function, it doesn’t change the value of the $tmp which is outside the function.

Passing Arrays As Parameters

What will happen if you pass an array as a parameter to a function:

#!/bin/bash

function myfunc {

echo "The parameters are: $@"

arr=$1

echo "The received array is ${arr[*]}"

}

my_arr=(5 10 15)

echo "The old array is: ${my_arr[*]}"

myfunc $my_arr

bash functions pass arrays

The function only takes the first value of the array variable.

You should disassemble the array into its single values, then use these values as function parameters. Finally, pack them into an array in the function like this:

#!/bin/bash

function myfunc {

local new_arr

new_arr=("$@")

echo "Updated value is: ${new_arr[*]}"

}

my_arr=(4 5 6)

echo "Old array is ${my_arr[*]}"

myfunc ${my_arr[*]}

bash functions pass arrays solution

The array variable was rebuilt thanks to the function.

Recursive Function

Recursion enables a function to call itself from within its own body.

The classic example of a recursive function is calculating factorials. To calculate the factorial of 3, use the following equation:

3! = 1 * 2 * 3

Instead, we can use the recursive function like this:

x! = x * (x-1)!

So to write the factorial function using bash scripting, it will be like this:

#!/bin/bash

function fac_func {

if [ $1 -eq 1 ]

then

echo 1

else

local tmp=$(( $1 - 1 ))

local res=$(fac_func $tmp)

echo $(( $res * $1 ))

fi

}

read -p "Enter value: " val

res=$(fac_func $val)

echo "The factorial of $val is: $res"

bash recursive function

Using recursive bash functions is so easy!

Creating Libraries

Now we know how to write functions and how to call them, but what if you want to use the same bash functions in different script files without copying and pasting them into every file?

You can create a library for your functions and point to that library from any file as you need.

By using the source command, you can embed the library file script inside your shell script.

The source command has an alias which is the dot. To source a file in a shell script, write the following line:

. ./myscript

Let’s assume that we have a file called myfuncs that contains the following:

function addnum {

echo $(( $1 + $2 + $3 + $4))

}

Now, we will use it in another bash script file like this:

#!/bin/bash

. ./myfuncs

result=$(addnum 10 10 5 5)

echo "Total = $result"

bash functions source command

Awesome!! We’ve used the bash functions inside our bash script file, we can also use them in our shell directly.

Use Bash Functions From Command Line

Well, that is easy. If you read the previous post about signals and jobs, you already have an idea: source our functions file from the .bashrc file, and the functions become available directly in the bash shell. Cool!

Edit the .bashrc file and add this line:

. /home/likegeeks/Desktop/myfuncs

Make sure you type the correct path.

Now the function is available for us to use in the command line directly:

$ addnum 10 20

bash functions use from shell

Note: you may need to log out and log back in (or source .bashrc again) before the functions become available in your shell.

Another note: if you give a function the same name as a built-in command or an existing binary, your function will override it, so take care when choosing function names.
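For example, here is a small sketch; the function name ls is chosen deliberately to show the problem:

#!/bin/bash

function ls {
echo "This is my ls function, not the real ls"
}

ls              # runs the function instead of the ls binary
command ls      # bypasses the function and runs the real ls
unset -f ls     # removes the function definition again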

I hope you like the post. Keep coming back.

Thank you.


Linux Bash Scripting Part5 – Signals and Jobs

In the previous post, we talked about input, output, and redirection in bash scripts. Today we will learn how to run and control bash scripts on a Linux system. Until now, we have only run scripts interactively from the command line, but that isn't the only way. This post describes the different ways to control your Linux bash scripts, starting with the signals the system can send to them.


Linux Signals

These are the most common Linux system signals:

Num    Name        Description

1      SIGHUP      Process hangup.

2      SIGINT      Process interruption (Ctrl+C).

3      SIGQUIT     Process quit.

9      SIGKILL     Process termination (cannot be caught).

15     SIGTERM     Process termination request.

18     SIGCONT     Process continuation after stopping.

19     SIGSTOP     Process stop without termination (cannot be caught).

20     SIGTSTP     Process stop or pause from the terminal (Ctrl+Z).

By default, your bash scripts don't handle these signals, but you can program a script to recognize certain signals and run commands based on which signal was sent.

Stop a Process

To stop a running process, you can press Ctrl+C, which generates a SIGINT signal that interrupts the current process running in the shell.

$ sleep 100

Ctrl+C

stop process

Pause a Process

The Ctrl+Z key combination generates a SIGTSTP signal, which stops (pauses) the process running in the shell while leaving it in memory.

$ sleep 100

Ctrl+Z

pause process

The number between brackets, here (1), is the job number.

If you try to exit the shell while you have a stopped job assigned to it, bash warns you.

The ps command is used to view the stopped jobs.

ps -l

ps -l

In the S column (process state), a stopped job shows the T state (stopped or traced).

If you want to terminate a stopped job, you can kill its process using the kill command:

kill processID

Trap Signals

To trap signals, you can use the trap command. If the script receives a signal listed in the trap command, it stops its normal processing and instead runs the command you specified for that signal.

You can trap signals using the trap command like this:

#!/bin/bash

trap "echo 'Ctrl-C was trapped'" SIGINT

total=1

while [ $total -le 3 ]

do

echo "#$total"

sleep 2

total=$(( $total + 1 ))

done

Every time you press Ctrl+C, the signal is trapped and the message is printed.

trap signal

If you press Ctrl+C, the echo statement specified in the trap command is printed instead of stopping the script. Cool, right?

Trapping The Script Exit

You can trap the shell script exit using the trap command like this:

#!/bin/bash

# Add the EXIT signal to trap it

trap "echo Goodbye..." EXIT

total=1

while [ $total -le 3 ]

do

echo "#$total"

sleep 2

total=$(( $total + 1 ))

done

trap exit

When the bash script exits, the Goodbye message is printed as expected.

Also, if you exit the script before finishing its work, the EXIT trap will be fired.

Modifying Or Removing a Trap

You can reissue the trap command with new options like this:

#!/bin/bash

trap "echo 'Ctrl-C is trapped.'" SIGINT

total=1

while [ $total -le 3 ]

do

echo "Loop #$total"

sleep 2

total=$(( $total + 1 ))

done

# Trap the SIGINT

trap "echo ' The trap changed'" SIGINT

total=1

while [ $total -le 3 ]

do

echo "Second Loop #$total"

sleep 1

total=$(( $total + 1 ))

done

modify trap

Notice how the script manages the signal after changing the signal trap.

You can also remove a trap by using two dashes: trap -- SIGNAL

#!/bin/bash

trap "echo 'Ctrl-C is trapped.'" SIGINT

total=1

while [ $total -le 3 ]

do

echo "#$total"

sleep 1

total=$(( $total + 1 ))

done

trap -- SIGINT

echo "I just removed the trap"

total=1

while [ $total -le 3 ]

do

echo "Loop #2 #$total"

sleep 2

total=$(( $total + 1 ))

done

Notice how the script processes the signal before removing the trap and after removing the trap.

$ ./myscript

Ctrl+C

remove trap

The first Ctrl+C was trapped and the script continues running while the second one exits the script because the trap was removed.

Running Linux Bash Scripts in Background Mode

In the output of the ps command, you can see processes that run in the background and are not tied to a terminal.

We can do the same with our scripts: just place an ampersand (&) after the command.

#!/bin/bash

total=1

while [ $total -le 3 ]

do

sleep 2

total=$(( $total + 1 ))

done

$ ./myscript &

run in background

Once you’ve done that, the script runs in a separate background process on the system and you can see the process id between the square brackets.

When the script dies,  you will see a message on the terminal.

Notice that while the background process is running, it still uses your terminal for STDOUT and STDERR, so if an error occurs, you will see the error message along with the normal output.

run script in background

The background process will exit if you exit your terminal session.

So what if you want to continue running even if you close the terminal?

Running Scripts without a Hang-Up

You can keep your bash scripts running in the background even after you exit the terminal session by using the nohup command.

The nohup command blocks any SIGHUP signals sent to the process, which prevents it from exiting when you close your terminal.

$ nohup ./myscript &

linux bash nohup command

After running the nohup command, you can’t see any output or error from your script. The output and error messages are sent to a file called nohup.out.

Note: if you run several commands with nohup from the same directory, all of their output ends up in the same nohup.out file.
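If you prefer each script to have its own log instead of sharing nohup.out, you can redirect the output yourself; a small sketch (myscript.log is just an example file name):

$ nohup ./myscript > myscript.log 2>&1 &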

Viewing Jobs

To view the current jobs, you can use the jobs command.

#!/bin/bash

total=1

while [ $total -le 3 ]

do

echo "#$total"

sleep 5

total=$(( $total + 1 ))

done

Then run it.

$ ./myscript

Then press Ctrl+Z to stop the script.

linux bash view jobs

Run the same bash script but in the background using the ampersand symbol and redirect the output to a file just for clarification.

$ ./myscript > outfile &

linux bash list jobs

The jobs command shows both the stopped and the running jobs. Add the -l parameter to also view the process IDs:

jobs -l

Restarting Stopped Jobs

The bg command is used to restart a job in background mode.

$ ./myscript

Then press Ctrl+Z

Now it is stopped.

$ bg

linux bash restart job

After using bg command, it is now running in background mode.

If you have multiple stopped jobs, you can do the same by specifying the job number to the bg command.
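For example, assuming the stopped job appears as job number 2 in the jobs listing:

$ bg %2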

The fg command is used to restart a job in foreground mode.

$ fg 1

Scheduling a Job

The Linux system provides 2 ways to run a bash script at a predefined time:

  • at command.
  • cron table.

The at command

This is the format of the command

at [-f filename] time

The at command can accept different time formats:

  • Standard time format like 10:15.
  • An AM/PM indicator like 11:15PM.
  • A specifically named time like now, midnight.

You can include a specific date, using some different date formats:

  • A standard date format, such as MMDDYY or DD.MM.YY.
  • A text date, such as June 10 or Feb 12, with or without the year.
  • Now + 25 minutes.
  • 05:15AM tomorrow.
  • 11:15 + 7 days.

We don't want to dig deep into the at command here, so let's keep it simple for now:

$ at -f ./myscript now

linux bash at command

By default, at emails the job's output to the user if the system has email configured; the -M option suppresses that email, so no output is delivered.

To list the pending jobs, use atq command:

linux bash at queue

Remove Pending Jobs

To remove a pending job, use the atrm command:

$ atrm 18

delete at queue

You must specify the job number to the atrm command.

Scheduling Scripts

What if you need to run a script at the same time every day or every month or so?

You can use the crontab command to schedule jobs.

To list the scheduled jobs, use the -l parameter:

$ crontab -l

The format for crontab is:

minute hour day-of-month month day-of-week

So if you want to run a command daily at 10:30, type the following:

30 10 * * * command

The wildcard character (*) indicates that cron will run the command at 10:30 every day of every month.

To run a command at 5:30 PM every Tuesday, you would use the following:

30 17 * * 2 command

The day of the week starts from 0 to 6 where Sunday=0 and Saturday=6.

To run a command at 10:00 on the first day of every month:

00 10 1 * * command

The day of the month is from 1 to 31.

Let’s keep it simple for now and we will discuss the cron in great detail in future posts.

To edit the cron table, use the -e parameter like this:

crontab -e

Then type your command like the following:

30 10 * * * /home/likegeeks/Desktop/myscript

This will schedule our script to run at 10:30 every day.

Note: sometimes you may see an error saying “Resource temporarily unavailable”.

All you have to do is remove the stale PID file (you need to be root to do this):

$ rm -f /var/run/crond.pid

Just that simple!

You can use one of the pre-configured cron script directories like:

/etc/cron.hourly

/etc/cron.daily

/etc/cron.weekly

/etc/cron.monthly

Just put your bash script file in any of these directories and it will run at that interval.
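For example, a minimal sketch that installs our script to run once a day (the destination name is up to you, and the file must be executable; note that on some distributions run-parts skips file names containing a dot, so avoid extensions like .sh here):

$ sudo cp myscript /etc/cron.daily/myscript

$ sudo chmod +x /etc/cron.daily/myscript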

Starting Scripts at Login

In previous posts, we've talked about the shell startup files; I recommend reviewing that post. The files checked at login are:

$HOME/.bash_profile

$HOME/.bash_login

$HOME/.profile

To run your scripts at login, place your code in  $HOME/.bash_profile.

Starting Scripts When Opening the Shell

OK, what about running our bash script whenever a new shell opens? Easy.

Add the command that runs your script to the .bashrc file.

Now whenever you open a shell window, it will execute that command.

I hope you find the post useful. Keep coming back.

Thank you.


Shell Scripting Part4 – Input, Output, and Redirection

In the previous post, we talked about parameters and options in detail. Today we will talk about something very important in shell scripting: input, output, and redirection. You can display the output from your shell scripts in two ways:

  • Display output on the screen.
  • Send output to a file.

Everything is a file in Linux and that includes input and output.


A bash script can juggle up to nine open file descriptors at the same time. The file descriptors 0, 1, and 2 are reserved for the shell's standard streams:

0              STDIN.

1              STDOUT.

2              STDERR.

You can use the above file descriptors to control input and output.

You need to fully understand these three because they are like the backbones of your shell scripting. So we are going to describe every one of them in detail.

STDIN

STDIN stands for standard input which is the keyboard by default.

You can replace STDIN, which is the keyboard by default, with a file by using the input redirection symbol (<); the shell then feeds the file's contents to the command as if they were typed on the keyboard. No magic!!

When you run the cat command with no arguments, it accepts input from STDIN: every line you type is printed back to the screen.
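As a quick sketch (assuming a file called myfile exists), redirecting the file into cat's STDIN produces the same output as passing it as an argument, but the data arrives on descriptor 0:

$ cat < myfile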

STDOUT

This stands for the standard output which is the screen by default.

You can append output to a file using the >> symbol.

If we have a file that already contains data, this symbol adds new data to it without overwriting, like this:

pwd >> myfile

The output generated by pwd is appended to myfile without deleting the existing content.

shell-scripting-append

The following command tries to redirect the output to a file using the > symbol:

ls -l xfile > myfile

shell-scripting-redirect-error

I have no file called xfile on my PC, and that generates an error which is sent to STDERR.

STDERR

This file descriptor is the standard error output of the shell which is sent to the screen by default.

If you need to redirect errors to a log file instead of sending them to the screen, you can do so with the redirection symbol and the error file descriptor.

Redirecting Errors

We can redirect the errors by placing the file descriptor which is 2 before the redirection symbol like this:

ls -l xfile 2>myfile

cat ./myfile

shell-scripting-redirect-error-to-file

As you can see, the error now is in the file and nothing on the screen.

Redirecting Errors and Normal Output

To redirect errors and the normal output, you have to precede each with the proper file descriptor like this:

ls -l myfile xfile anotherfile 2> errorcontent 1> correctcontent

shell-scripting-redirect-error-and-data

The normal ls output is sent to the correctcontent file using the 1> symbol, and the error messages are sent to the errorcontent file using the 2> symbol.

You can redirect normal output and errors to the same file using &> symbol like this:

ls -l myfile xfile anotherfile &> content

shell-scripting-redirect-all-to-file

All errors and normal output are redirected to a file named content.

Output Redirection

There are two ways to redirect output inside a script:

  • Temporary redirection.
  • Permanent redirection.

Temporary Redirections

To temporarily redirect a single line to STDERR, you can use the >&2 notation like this:

#!/bin/bash

echo "Error message" >&2

echo "Normal message"

shell-scripting-temp-redirection

So if we run it, we will see both lines printed normally because as we know errors go to the screen by default.

You can redirect errors to a file like this:

./myscript 2> myfile

shell-scripting-redirect-error-to-file

Shell scripting is awesome! Normal output is sent to the screen, while the echo line that uses the >&2 notation goes to the error file.

Permanent Redirections

If you have a lot of output that needs to be redirected, it's more convenient to set up a permanent redirection using the exec command, like this:

#!/bin/bash

exec 1>outfile

echo "Permanent redirection"

echo "from a shell to a file."

echo "without redirecting every line"

shell-scripting-redirect-all-to-file

If we look at the file called outfile, we will see the output of the echo lines.

Here we redirect STDOUT at the beginning of the script, but what about switching the redirection in the middle of a script, like this:

#!/bin/bash

exec 2>myerror

echo "Script Beginning ..."

echo "Redirecting Output"

exec 1>myfile

echo "Output goes to the myfile"

echo "Output goes to myerror file" >&2

shell-scripting-permenant-redirection

The first exec command redirects all errors to the file myerror, while standard output still goes to the screen.

The statement exec 1>myfile then redirects standard output to the myfile file, and finally the last echo goes to the myerror file thanks to the >&2 notation.

Redirecting Input

You can take input from a file instead of STDIN using the exec command like this:

exec 0< myfile

This command tells the shell to take the input from a file called myfile instead of STDIN. Here is an example:

#!/bin/bash

exec 0< testfile

total=1

while read line

do

echo "#$total: $line"

total=$(( $total + 1 ))

done

shell-scripting-redirect-input

Shell scripting is easy.

You know how to use the read command to get user input. If you redirect the STDIN to a file, the read command will try to read from STDIN which points to the file.

Some Linux system administrators use this technique to read log files for processing, and we will discuss more professional ways to process logs in upcoming posts.

Creating Custom Redirection

You know that there are nine usable file descriptors, but only three of them are used for input, output, and error.

The remaining six file descriptors are available for use for input and output redirection.

The exec command is used to assign a file descriptor for output like this:

#!/bin/bash

exec 3>myfile

echo "This line appears on the screen"

echo "This line stored on myfile" >&3

echo "This line appears on the screen"

shell-scripting-create-redirection

Creating Input File Descriptors

To redirect the input file descriptor, do the following:

1- Save STDIN to another file descriptor.

2- Redirect STDIN to a file.

3- Revert STDIN to its original location.

Look at the following code to understand these steps:

#!/bin/bash

exec 7<&0

exec 0< myfile

total=1

while read line

do

echo "#$total: $line"

total=$(( $total + 1 ))

done

exec 0<&7

read -p "Finished? " res

case $res in

y) echo "Goodbye";;

n) echo "Sorry, this is the end.";;

esac

shell-scripting-create-input-file-descriptor

STDIN is first saved to file descriptor 7, and then STDIN is redirected to a file.

After iterating over the file's lines, STDIN is reverted back to its original location.

The last read command is just there to confirm that STDIN points back to the keyboard and that you can type normally.

Close File Descriptors

File descriptors are closed automatically when the script exits. If you want to close a file descriptor yourself, redirect it to the special symbol &- and it will be closed.

#!/bin/bash

exec 3> myfile

echo "Testing ..." >&3

exec 3>&-

echo "Nothing works" >&3

shell-scripting-closing-file-desciptor

As you can see, it gives a “bad file descriptor” error, because that descriptor no longer exists.

lsof Command

The lsof command is used to list all the open files on the system, including those of background processes.

On many Linux systems like Fedora, the lsof command is located under /usr/sbin.

These are some of the important options for the lsof command:

-p: select by process ID.

-d: select by file descriptor.

You can get the process PID using $$ variable.

shell-scripting-list-opened-desciptors

The -a option is used to AND the results of the -p and -d options.
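For example, you can try it interactively on your current shell before putting it in a script; $$ expands to the PID of the shell itself:

$ lsof -a -p $$ -d 0,1,2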

Now, testing the command from a script:

#!/bin/bash

exec 4> myfile1

exec 5> myfile2

exec 6< myfile3

lsof -a -p $$ -d 0,1,2,4,5,6

shell-scripting-list-custom-descriptors

The shell script creates the file descriptors 4 and 5 for writing and 6 for reading.

Suppressing Command Output

Sometimes you don't want to see any output at all; in that case, you can redirect the output to the black hole, which is /dev/null.

For example, we can suppress errors like this:

ls -al badfile anotherfile 2> /dev/null

And this idea is also used when you want to truncate a file without deleting it completely.

cat /dev/null > myfile

Now you understand input and output, how to redirect them, and how to create your own file descriptors and redirect to them.

I hope you enjoyed it. Keep coming back.

Thank you.
