
Linux Bash Scripting Part5 – Signals and Jobs

In the previous post, we talked about input, output, and redirection in bash scripts. Today we will learn how to run and control bash scripts on a Linux system. Until now, we have only run scripts from the command line interface, but that isn't the only way to run Linux bash scripts. This post describes the different ways to control your Linux bash scripts.

 


Linux uses signals to communicate with running processes. Your bash scripts don't handle these signals by default, but you can program a script to recognize signals and perform commands based on the signal that was sent.

Stop a Process

To stop a running process, you can press Ctrl+C, which generates a SIGINT signal that stops the current process running in the shell.

sleep 100

Ctrl+C


Pause a Process

Pressing Ctrl+Z generates a SIGTSTP signal, which suspends the process running in the shell while leaving the program in memory.

sleep 100

Ctrl+Z

The number between brackets, (1), is the job number.

If you try to exit the shell while a stopped job is assigned to it, bash warns you.

The ps command is used to view the stopped jobs.

ps -l

In the S column (process state), a T indicates that the process is traced or stopped.

If you want to terminate a stopped job, you can kill its process using the kill command:

kill processID

Trap Signals

To trap signals, you can use the trap command. When the script receives a signal listed in the trap command, it intercepts the signal and handles it with the command you specify instead of performing the default action.

You can trap signals using the trap command like this:

#!/bin/bash

trap "echo 'Ctrl-C was trapped'" SIGINT

total=1
while [ $total -le 3 ]; do
    echo "#$total"
    sleep 2
    total=$(($total + 1))
done

Every time you press Ctrl+C while the script is running, the signal is trapped: the echo statement specified in the trap command is printed instead of the script stopping. Cool, right?

Trapping The Script Exit

You can trap the shell script exit using the trap command like this:

#!/bin/bash

# Add the EXIT signal to trap it
trap "echo Goodbye..." EXIT

total=1
while [ $total -le 3 ]; do
    echo "#$total"
    sleep 2
    total=$(($total + 1))
done

When the bash script exits, the Goodbye message is printed as expected.

Also, if you exit the script before it finishes its work, the EXIT trap still fires.

Modifying Or Removing a Trap

You can reissue the trap command with new options like this:

#!/bin/bash

trap "echo 'Ctrl-C is trapped.'" SIGINT

total=1
while [ $total -le 3 ]; do
    echo "Loop #$total"
    sleep 2
    total=$(($total + 1))
done

# Trap the SIGINT
trap "echo ' The trap changed'" SIGINT

total=1
while [ $total -le 3 ]; do
    echo "Second Loop #$total"
    sleep 1
    total=$(($total + 1))
done

Notice how the script handles the signal differently after the trap is changed.

You can also remove a trap by using two dashes: trap -- SIGNAL

#!/bin/bash

trap "echo 'Ctrl-C is trapped.'" SIGINT

total=1
while [ $total -le 3 ]; do
    echo "#$total"
    sleep 1
    total=$(($total + 1))
done

trap -- SIGINT
echo "I just removed the trap"

total=1
while [ $total -le 3 ]; do
    echo "Loop #2 #$total"
    sleep 2
    total=$(($total + 1))
done

Notice how the script processes the signal before the trap is removed and after it is removed.

./myscript

Ctrl+C

The first Ctrl+C was trapped and the script continued running, while the second one terminated the script because the trap had been removed.

Running Linux Bash Scripts in Background Mode

If you look at the output of the ps command, you will see processes that run in the background and are not tied to the terminal.

We can do the same with our scripts: just place an ampersand symbol (&) after the command.

#!/bin/bash

total=1
while [ $total -le 3 ]; do
    sleep 2
    total=$(($total + 1))
done

./myscript &

Once you've done that, the script runs in a separate background process on the system; the shell displays the job number between square brackets, followed by the process ID.

When the script finishes, you will see a completion message on the terminal.

Notice that while the background process is running, it still uses your terminal for STDOUT and STDERR messages, so if an error occurs, you will see the error message along with the normal output.

The background process will exit if you exit your terminal session.

So what if you want the script to keep running even after you close the terminal?

Running Scripts without a Hang-Up

You can keep your Linux bash scripts running in the background even after you exit the terminal session by using the nohup command.

The nohup command blocks any SIGHUP signals sent to the process, which prevents the process from exiting when you close your terminal.

nohup ./myscript &

After running the nohup command, you won't see any output or errors from your script on the terminal. The output and error messages are sent to a file called nohup.out.

Note: if you run multiple commands with nohup from the same directory, they all write to the same nohup.out file, so their output gets mixed together.
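If you want each script to keep its own log instead of sharing nohup.out, you can redirect the output yourself. A minimal sketch (the log file name here is arbitrary):

nohup ./myscript > myscript.log 2>&1 &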

Viewing Jobs

To view the current jobs, you can use the jobs command.

#!/bin/bash

total=1
while [ $total -le 3 ]; do
    echo "#$total"
    sleep 5
    total=$(($total + 1))
done

Then run it.

./myscript

Then press Ctrl+Z to stop the script.

Run the same bash script, but this time in the background using the ampersand symbol, and redirect the output to a file just for clarity:

./myscript > outfile &

The jobs command shows both the stopped and the running jobs:

jobs -l

The -l parameter also shows the process ID of each job.

 Restarting Stopped Jobs

The bg command is used to restart a job in background mode.

./myscript

Then press Ctrl+Z

Now it is stopped.

bg

After using the bg command, the job runs in background mode.

If you have multiple stopped jobs, you can do the same by specifying the job number to the bg command.
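For example, to restart a hypothetical stopped job number 2 in the background:

bg 2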

The fg command is used to restart a job in foreground mode.

fg 1

Scheduling a Job

The Linux system provides two ways to run a bash script at a predefined time:

  • at command.
  • cron table.

The at command

This is the format of the command:

at [-f filename] time

The at command can accept different time formats:

  • Standard time format like 10:15.
  • An AM/PM indicator like 11:15PM.
  • A specifically named time like now, midnight.

You can also include a specific date, using different date formats (see the examples after this list):

  • A standard date format, such as MMDDYY or DD.MM.YY.
  • A text date, such as June 10 or Feb 12, with or without the year.
  • Now + 25 minutes.
  • 05:15AM tomorrow.
  • 11:15 + 7 days.
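As a quick illustration, here are a few hedged examples combining these time and date formats with the -f option, using the same script path as the example below:

at -f ./myscript 10:15

at -f ./myscript 11:15PM

at -f ./myscript now + 25 minutes

at -f ./myscript 05:15AM tomorrow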

We won't dig deep into the at command here; for now, let's keep it simple:

at -f ./myscript now

The at command normally emails any output from the job to the user if the system has email configured; the -M parameter suppresses that email.
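For example, to queue the same script while suppressing the email, something like this should work:

at -M -f ./myscript now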

To list the pending jobs, use the atq command:

atq

Remove Pending Jobs

To remove a pending job, use the atrm command:

atrm 18


You must specify the job number to the atrm command.

Scheduling Scripts

What if you need to run a script at the same time every day or every month or so?

You can use the crontab command to schedule jobs.

To list the scheduled jobs, use the -l parameter:

crontab -l

The format for crontab is:

minute hour dayofmonth month dayofweek command

So if you want to run a command daily at 10:30, type the following:

30 10 * * * command

The wildcard character (*) indicates that cron will execute the command at 10:30 every day of every month.

To run a command at 5:30 PM every Tuesday, you would use the following:

30 17 * * 2 command

The day of the week starts from 0 to 6 where Sunday=0 and Saturday=6.

To run a command at 10:00 on the first day of every month:

00 10 1 * * command

The day of the month is from 1 to 31.
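Putting the fields together, here is a minimal sketch of a complete crontab entry; the 6:15 AM Monday schedule is just an illustration, and the script path is hypothetical:

# minute hour dayofmonth month dayofweek command
15 06 * * 1 /home/likegeeks/Desktop/myscript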

Let’s keep it simple for now and we will discuss the cron in great detail in future posts.

To edit the cron table, use the -e parameter like this:

crontab -e

Then type your command like the following:

30 10 * * * /home/likegeeks/Desktop/myscript

This will schedule our script to run at 10:30 every day.

Note: sometimes you may see an error that says Resource temporarily unavailable.

All you have to do is this:

rm -f /var/run/crond.pid

You need to be the root user to do this.

Just that simple!

You can use one of the pre-configured cron script directories like:

/etc/cron.hourly

/etc/cron.daily

/etc/cron.weekly

/etc/cron.monthly

Just put your bash script file in any of these directories and it will run periodically.
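As a minimal sketch (the directory layout may vary by distribution, and you need root privileges), you could copy the script into the daily directory and make sure it is executable:

sudo cp myscript /etc/cron.daily/

sudo chmod +x /etc/cron.daily/myscript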

Starting Scripts at Login

In previous posts, we talked about the bash startup files, and I recommend you review those posts. These are the files that bash checks at login:

$HOME/.bash_profile

$HOME/.bash_login

$HOME/.profile

To run your scripts at login, place your code in $HOME/.bash_profile.
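As a minimal sketch, you could append a line that calls your script to $HOME/.bash_profile (the script path here is the same hypothetical path used in the crontab example above):

echo "/home/likegeeks/Desktop/myscript" >> $HOME/.bash_profile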

Starting Scripts When Opening the Shell

OK, what about running our bash script when the shell opens? Easy.

Put the command that runs your script in the .bashrc file.

Now whenever you open a new shell window, it will execute that command.

I hope you find the post useful. Keep coming back.

Thank you.


Process Large Files Using PHP

If you want to process large files using PHP, you may use ordinary PHP functions like file_get_contents() or file(), which have limitations when working with very large files. These functions rely on the memory_limit setting in the php.ini file; you can increase its value, but they are still not suitable for very large files because they load the entire file content into memory at some point. Any file larger than the memory_limit setting cannot be loaded into memory, so what if you have a 20 GB file and you want to process it using PHP? Another limitation is the speed of producing output: if you accumulate the output in an array and then output it all at once, it gives a bad user experience. To work around this limitation, we can use the yield keyword to generate immediate results.


SplFileObject Class

In this post, we will use the SplFileObject class, which is part of the Standard PHP Library (SPL).

For our demonstration, I will create a class to process large files using PHP.

The class will take the file name as input to the constructor:

class BigFile
{
    protected $file;

    public function __construct($filename, $mode = "r")
    {
        if (!file_exists($filename)) {
            throw new Exception("File not found");
        }
        $this->file = new SplFileObject($filename, $mode);
    }
}

Now we will define a method for iterating through the file. This method will use the fgets() function to read one line at a time.

You can create another method that uses the fread() function.

Read Text Files

The fgets() function is suitable for parsing text files that include line feeds, while fread() is suitable for parsing binary files.

protected function iterateText()
{
    $count = 0;
    while (!$this->file->eof()) {
        yield $this->file->fgets();
        $count++;
    }
    return $count;
}

This function will be used to iterate through lines of text files.

Read Binary Files

Another function which will be used for parsing binary files:

protected function iterateBinary($bytes)
{
    $count = 0;
    while (!$this->file->eof()) {
        yield $this->file->fread($bytes);
        $count++;
    }
}

Read in One Direction

Now we will define a method that takes the iteration type and returns a NoRewindIterator instance.

We use the NoRewindIterator to enforce reading in one direction.

public function iterate($type = "Text", $bytes = NULL)
{
    if ($type == "Text") {
        return new NoRewindIterator($this->iterateText());
    } else {
        return new NoRewindIterator($this->iterateBinary($bytes));
    }
}

Now the entire class will look like this:

class BigFile
{
    protected $file;

    public function __construct($filename, $mode = "r")
    {
        if (!file_exists($filename)) {
            throw new Exception("File not found");
        }
        $this->file = new SplFileObject($filename, $mode);
    }

    protected function iterateText()
    {
        $count = 0;
        while (!$this->file->eof()) {
            yield $this->file->fgets();
            $count++;
        }
        return $count;
    }

    protected function iterateBinary($bytes)
    {
        $count = 0;
        while (!$this->file->eof()) {
            yield $this->file->fread($bytes);
            $count++;
        }
    }

    public function iterate($type = "Text", $bytes = NULL)
    {
        if ($type == "Text") {
            return new NoRewindIterator($this->iterateText());
        } else {
            return new NoRewindIterator($this->iterateBinary($bytes));
        }
    }
}

Parse Large Files

Let’s test our class:

$largefile = new BigFile("file.csv");

$iterator = $largefile->iterate("Text"); // Text or Binary based on your file type

foreach ($iterator as $line) {
    echo $line;
}

This class should read any large file without memory limitations. Great!

You can use this class in your Laravel projects by autoloading it and adding it to your composer.json file.

Now you can parse and process large files using PHP easily.

Keep coming back.

Thank you.


Shell Scripting Part4 – Input, Output, and Redirection

In the previous post, we talked about parameters and options in detail. Today we will talk about something very important in shell scripting: Input, Output, and Redirection. You can display the output from your shell scripts in two ways:

  • Display output on the screen.
  • Send output to a file.

Everything is a file in Linux and that includes input and output.


Each process can have up to nine file descriptors open at the same time. The first three file descriptors (0, 1, and 2) are reserved for the bash shell's use:

0: STDIN.

1: STDOUT.

2: STDERR.

You can use the above file descriptors to control input and output.

You need to fully understand these three because they are like the backbones of your shell scripting. So we are going to describe every one of them in detail.

STDIN

STDIN stands for standard input which is the keyboard by default.

You can replace STDIN, which is the keyboard by default, with a file by using the input redirection symbol (<); the shell reads data from the file as if it were typed on the keyboard. No magic!

When you type the cat command without anything, it accepts input from STDIN. Any line you type, the cat command prints that line to the screen.
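For example, instead of typing lines yourself, you can feed a file to cat through STDIN (assuming a file called myfile exists):

cat < myfile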

STDOUT

This stands for the standard output which is the screen by default.

You can append output to a file using the >> symbol.

If we have a file that already contains data, we can append more data to it like this:

pwd >> myfile

The output generated by pwd is appended to myfile without deleting the existing content.

The following command tries to redirect the output to a file using the > symbol:

ls -l xfile > myfile

I have no file called xfile on my PC, so this generates an error, which is sent to STDERR.

STDERR

This file descriptor is the standard error output of the shell which is sent to the screen by default.

If you need to redirect the errors to a log file instead of sending them to the screen, you can use the redirection symbol with the error file descriptor.

Redirecting Errors

We can redirect the errors by placing their file descriptor, which is 2, before the redirection symbol like this:

ls -l xfile 2>myfile

cat ./myfile

As you can see, the error is now in the file and nothing appears on the screen.

Redirecting Errors and Normal Output

To redirect both errors and normal output, you have to precede each with its file descriptor like this:

ls -l myfile xfile anotherfile 2> errorcontent 1> correctcontent

The ls command's normal output is sent to the correctcontent file using the 1> symbol, and the error messages are sent to the errorcontent file using the 2> symbol.

You can redirect normal output and errors to the same file using the &> symbol like this:

ls -l myfile xfile anotherfile &> content

All errors and normal output are redirected to a file named content.

Output Redirection

There are two types of output redirection:

  • Temporary redirection.
  • Permanent redirection.

Temporary Redirections

For temporary redirections, you can use the >&2 symbol like this:

#!/bin/bash

echo "Error message" >&2

echo "Normal message"

If we run it, we will see both lines printed normally, because errors go to the screen by default.

You can redirect errors to a file like this:

./myscript 2> myfile

Shell scripting is awesome! The normal output is sent to the screen, while the echo statement that uses the >&2 symbol sends its message to the error file.

Permanent Redirections

If you have a lot of data that needs to be redirected, you can set up a permanent redirection using the exec command like this:

#!/bin/bash

exec 1>outfile

echo "Permanent redirection"

echo "from a shell to a file."

echo "without redirecting every line"


If we look at the file called outfile, we will see the output of the echo lines.

We redirected STDOUT at the beginning of the script, but what about redirecting it in the middle of a script, like this:

#!/bin/bash

exec 2>myerror

echo "Script Begining ..."

echo "Redirecting Output"

exec 1>myfile

echo "Output goes to the myfile"

echo "Output goes to myerror file" >&2

The exec 2>myerror command redirects all errors to the file myerror, while standard output is still sent to the screen.

The statement exec 1>myfile then redirects the standard output to the myfile file, and finally, errors go to the myerror file using the >&2 symbol.

Redirecting Input

You can redirect input to a file instead of STDIN using exec command like this:

exec 0< myfile

This command tells the shell to take the input from a file called myfile instead of STDIN. Here is an example:

#!/bin/bash

exec 0< testfile

total=1
while read line
do
    echo "#$total: $line"
    total=$(( $total + 1 ))
done


Shell scripting is easy.

You know how to use the read command to get user input. If you redirect the STDIN to a file, the read command will try to read from STDIN which points to the file.

Some Linux system administrators use this technique to read log files for processing; we will discuss more professional ways to read logs in upcoming posts.
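As a minimal sketch of that idea (the log path is an assumption; it varies by distribution), the same loop could number the lines of a system log:

#!/bin/bash

# Read a log file line by line through STDIN (the path is an assumption)
exec 0< /var/log/messages

total=1
while read line
do
    echo "#$total: $line"
    total=$(( $total + 1 ))
done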

Creating Custom Redirection

You know that there are nine file descriptors, but you use only three of them: input, output, and error.

The remaining six file descriptors are available for use for input and output redirection.

The exec command is used to assign a file descriptor for output like this:

#!/bin/bash

exec 3>myfile

echo "This line appears on the screen"

echo "This line stored on myfile" >&3

echo "This line appears on the screen"

shell-scripting-create-redirection

Creating Input File Descriptors

To redirect an input file descriptor, do the following:

1- Save STDIN to another file descriptor.

2- Redirect STDIN to a file.

3- Revert STDIN to its original location.

Look at the following code to understand these steps:

#!/bin/bash

exec 7<&0

exec 0< myfile

total=1
while read line
do
    echo "#$total: $line"
    total=$(( $total + 1 ))
done

exec 0<&7

read -p "Finished? " res

case $res in
    y) echo "Goodbye";;
    n) echo "Sorry, this is the end.";;
esac

STDIN is saved to file descriptor 7, and then STDIN is redirected to a file.

After iterating over the file's lines, STDIN is reverted back to its original location.

The last read command is just to confirm that STDIN has been reverted and that you can use the keyboard normally.

Close File Descriptors

File descriptors are closed automatically when the script exits. If you want to close a file descriptor yourself, redirect it to the &- symbol and it will be closed.

#!/bin/bash

exec 3> myfile

echo "Testing ..." >&3

exec 3>&-

echo "Nothing works" >&3

As you can see, it gives a bad file descriptor error because the descriptor no longer exists.

lsof Command

The lsof command is used to list all the files open on the system, including those opened by background processes.

On many Linux systems like Fedora, the lsof command is located under /usr/sbin.

These are some of the important options for the lsof command:

-p: specifies a process ID.

-d: specifies the file descriptor numbers to show.

You can get the PID of the current process using the $$ variable.

The -a option is used to AND the results of the -p and -d options.

Now, testing the command from a script:

#!/bin/bash

exec 4> myfile1

exec 5> myfile2

exec 6< myfile3

lsof -a -p $$ -d 0,1,2,4,5,6


The shell script creates the file descriptors 4 and 5 for writing and 6 for reading.

Suppressing Command Output

Sometimes you don't want to see any output at all. In that case, we redirect the output to the black hole, which is /dev/null.

For example, we can suppress errors like this:

ls -al badfile anotherfile 2> /dev/null
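You can also discard both the normal output and the errors at once, for example:

ls -al badfile anotherfile > /dev/null 2>&1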

And this idea is also used when you want to truncate a file without deleting it completely.

cat /dev/null > myfile

Now you understand input and output, how to redirect them, and how to create your own file descriptors and redirect to them.

I hope you enjoyed it. Keep coming back.

Thank you.
