Archive | GNU/Linux Tips

Installing Kodi on Debian 10 Buster

How do we install Kodi, formerly known as XBMC, on Debian 10 Buster? That is today's topic. Kodi is a free and open-source (GPL) media player with a 10-foot user interface, designed for use with televisions and remote controls on devices running GNU/Linux, OSX, Windows, iOS, and Android. An entertainment center that brings all your digital media together in a beautiful, user-friendly package, Kodi is 100% free and open source. As is well known, you can also extend Kodi with various add-ons to gain many useful features. If sudo is not installed on your system by default, you can install it and add yourself to the /etc/sudoers file. Then open a terminal and become root with the su - command. Although the Debian repositories do not always provide the latest release, we will try to install as recent a version as possible.
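If you need to set up sudo first, here is a minimal sketch (run as root; adding the user to the sudo group is the stock Debian way, and yourusername is a placeholder):

su -

apt install sudo

adduser yourusername sudo

Log out and back in for the group change to take effect.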


Now, let's open the /etc/apt/sources.list file by entering the following command in the terminal:

sudo nano /etc/apt/sources.list

Then paste the following line at the end of the file:

deb http://deb.debian.org/debian buster-backports main

Now let's update our repositories:

sudo apt update

Then let's install Kodi using the following command:

sudo apt install kodi
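Note that packages in backports are not pulled in by default; if apt installs an older Kodi from the main repository, you may need to select the backports release explicitly with the -t flag (a standard apt option):

sudo apt install -t buster-backports kodi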

You can now run Kodi by finding it in the menu and clicking its icon, or by using the following command in the terminal:

kodi

If, for any reason, you later want to remove Kodi, you can use the following command:

sudo apt remove --auto-remove kodi


Recover deleted files on Linux (Beginners Tutorial)

Have you ever deleted an important file by mistake? Who hasn't! Okay, but can you recover it? In this post, you will learn how to recover deleted files on Linux using various programs on different file systems. You will see how to recover deleted files from SD cards, HDDs, and deleted partitions, on Linux file systems such as EXT3 and EXT4 and even on Windows file systems such as FAT32 and NTFS. This is quite a common problem: Linux users frequently install several systems at the same time, and they may delete a partition by mistake during the installation process. So how do you recover files from those deleted partitions? For this, we need to recover the partition using a tool called TestDisk. TestDisk is a powerful partition analysis and data recovery utility. It is shipped with a large number of Linux distributions such as Debian and Ubuntu. The application is also cross-platform and supports a large number of partition tables, such as Intel, MSDOS, and Mac, which are the most popular. It also supports many file systems, such as NTFS and EXT4, as well as less popular ones such as BeOS and ReiserFS.


Recover files from deleted partition

When a file is deleted, the list of clusters occupied by the file is deleted, marking those sectors available for use. If the clusters have not been overwritten, TestDisk can recover the files.

First, start the application like this:

$ testdisk

Next, you have the option to create a new file for the logs. If you want to create one, choose the create option and press Enter. If you don’t want a log file, select the No Log option.

Using testdisk

Next, the disks or partitions recognized by the system will be scanned. In this particular case, sda is the disk that holds the partition we want to recover.

Scan the devices

TestDisk recognizes various types of partition tables. It is usually Intel, unless you are using a specialized one.

Select the partition table

In the next screen, you will see a series of options that the program has. For this particular case, we need to choose the Analyse option.

With this option, the program will exhaustively analyze the disk to find the structure.

Analyse the disk

Then, it will ask about the type of search you wish to do. Usually, choose the Quick Search option.

The partition structure

If you are lucky, you will see the deleted partition. If not, you will have to choose a deeper search.

Using TestDisk

Then, choose the Write option to write the partition table. When finished, restart the system and you will have your partition back!

Recover a partition

Reboot to apply the changes

Note that these steps may take a long time, depending on the disk size.

Depending on the type of file system the partition has, particular instructions apply; they are detailed later in this post.

Recover deleted files from an external drive

Now let us imagine you have an external flash drive and, by mistake, you have deleted some files from it. How do you restore them?

Thanks to TestDisk, the process becomes quite similar to that of a deleted partition. But there are some differences.

To start the program we will use the testdisk command. Also, we can add the flash drive as a parameter like this:

$ sudo testdisk /dev/sdb

TestDisk to recover files

Next, select proceed. Then, choose the partition table type.

Select the partition table

Then, select the Advanced options to recover files.

Advanced options on TestDisk

The next step is selecting the partition and the Undelete option.

Undelete files with TestDisk

Then, you will see all the deleted files on the partition.

Recover files with TestDisk

Now, select the destination folder to place the recovered files. You need to press C on the first option to place the files on the current directory.

Select the destination folder

Finally, you will see this message:

Everything OK with TestDisk

Congratulations! Files restored.

Recover deleted files from SD card

SD cards are commonly used to store multimedia files, so it is advisable to use a program specialized in recovering those.

In this case, we will use the application called PhotoRec, which comes bundled with TestDisk.

First, insert the SD card into the PC. Next, run photorec as root:

$ sudo photorec [device]

Then, you will see the following screen. Select the media, choose Proceed, and press Enter.

Using photorec to recover files from SD

Next, select the partition, then select Options and press Enter.

Select the partition

There you will see the recovery options that will be performed on the SD card.

Photorec options

Press q to return to the previous screen, where you need to choose the types of files you want to recover. This is done by selecting the File Opt option.

Formats to recover

Press the s key to select and deselect all formats. You can also select the types of files you want to recover using the right key. To save the selected options press the b key. Return to the main menu using the q key.

Then, on the main menu, choose the Search option to start the process, and choose the file system.

Select the file system

You will then be presented with two options: Free and Whole. Normally, Free is enough. If you want a deep analysis, choose Whole, but keep in mind that it will slow down the process.

Now, it is necessary to choose the location where the files will be saved. To do this, press the c key.

Select the destination

After choosing the destination, the recovery process will start. Keep in mind that the process is intensive and the system may appear to freeze, so be patient.

In the end, you will see a message informing you of everything that has happened.

Photorec report

Next, check the results.

Check the results

Recover deleted files from NTFS

NTFS is a Windows file system. If you are one of those who dual-boot both systems on one computer, you may need to restore deleted files from a Windows partition with this file system.

To do this, we have a tool called ntfsundelete that is quite simple to use.

First, you need to scan the disk or partition. For example:

$ sudo ntfsundelete /dev/sda1

Using ntfsundelete

Then, we will be able to recover the deleted file with the following command:

$ sudo ntfsundelete [HD_Or_partition] -u -m [filename]

Recovering files using ntfsundelete 

The recovered files now belong to the root user. The last step is to change the ownership of the files using the chown command.
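For example, a quick sketch (the username and file name here are placeholders):

$ sudo chown youruser:youruser other.txt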

Recover Files from FAT32

Another common Windows file system is FAT32. You can recover files from FAT32 by using TestDisk.

So again run testdisk as root user and pass the disk as a parameter:

$ sudo testdisk [partition/HD]

TestDisk is compatible with FAT32 

Then continue with the steps described above to restore the files.

Recover in-memory files (Using the inode)

If you delete a file that is being used by another process, you can restore it from memory using its inode.

To do this, some initial conditions must be met. First, the deleted file MUST remain open by another process. Then you have to find that process, recover the file, and finally change its permissions.

In this case, I will create a file called example.txt using the nano editor and add some text:

$ nano example.txt

Then save the changes and open another terminal window and use the file. For example, with the less command.

$ less example.txt

Using the less command

 Open another terminal session, delete the file and make sure it’s deleted:

$ rm example.txt

$ ls example.txt

Delete the example file

As you can see, the file no longer exists. But we will be able to recover it. To do this, let’s get the number of the process associated with the inode of the file.

$ lsof | grep example.txt

Check the deleted file

You will notice the process and command that is using the file (the less command). From that output, pay attention to the second and fourth columns: these are the PID of the process and the file descriptor, respectively.

Then, locate it with the following command:

$ ls -l /proc/2325/fd/4

Find the process of the deleted file

Then copy it to whatever location you want and that is enough to recover it.

$ sudo cp /proc/2325/fd/4 .

Next, check the results and open the file:

Recover a deleted file using inode  

This way, we can use the inode to recover a deleted file that is still in memory and used by a process.

Recover Deleted Files from EXT4 (Using extundelete)

EXT4 is the default file system on most Linux distributions. It is quite fast, with technical features that the Linux kernel takes good advantage of.

One of the tools used to recover files from an EXT4 file system is extundelete.

Extundelete is an open-source application that allows recovering deleted files from a partition or a disk with an EXT3 or EXT4 file system. It is simple to use and is available in the repositories of most Linux distributions.

To recover a certain file, just use the following command:

$ sudo extundelete [device] --restore-file [pathfile]

For example:

$ sudo extundelete /dev/sdb1 --restore-file home/angelo/other.txt

If you want to recover all the files in a folder, use the --restore-directory option instead:

$ sudo extundelete /dev/sda6 --restore-directory home/angelo

But if you want to restore all files on the partition or disk, the next command would suffice:

$ sudo extundelete /dev/sda6 --restore-all

Using extundelete to recover files

The recovered files will be placed in the RECOVERED_FILES directory. This way, you can recover deleted files using extundelete.

Using debugfs

It is also possible to use the debugfs tool to recover deleted files. This tool also uses the inode number of the deleted file; it works on the EXT family of file systems.

Its operation is quite simple, too. First, you have to open the partition or device:

$ debugfs [device]

For example,

$ sudo debugfs /dev/sdb1

Using debugfs

Then, after a moment, you will land in the debugfs console, where you can search for recently deleted files:

debugfs: lsdel

inodes to recover

In the first column, you will see the inode numbers of the deleted files on that device. You can then work on an inode with the mi (modify inode) command, passing the inode number in angle brackets:

debugfs: mi <inode_number>

And that is it. It is quite easy.
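Alternatively, debugfs also has a dump command that copies an inode's contents straight to a file; a quick sketch (the inode number here is hypothetical, taken from the lsdel output):

debugfs: dump <16> /tmp/recovered_file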

Using ext4magic

Another alternative way to recover deleted files on a disk with an Ext4 file system is to use Ext4magic. This application is also quite simple to use.

The most basic syntax of the application is the following:

$ sudo ext4magic [device] -f [folder_to_scan] -r -d [output_folder]

If I wanted to recover the deleted files from /dev/sdb1 into a folder called files, the command would look similar to this one:

$ sudo ext4magic /dev/sdb1 -r -d files

Using ext4magic to recover files

That is how easy it is to use ext4magic. All this thanks to the fact that Ext4 is a community and open source file system.

Recover overwritten files (Using Scalpel)

Scalpel is another open-source tool that allows you to recover files from formatted drives, overwritten files, and even damaged drives. It is well known for its speed and efficiency. In this sense, it emerges as an alternative worth considering.

Scalpel carves files without the help of filesystems. It tries to extract headers and footers of files and tries to guess the entire file structure using some well-designed algorithms.

Like TestDisk, it is available in the official repositories of most Linux distributions. Therefore, its installation is reduced to the use of the terminal and the package manager of the distribution.

The fastest and easiest way to use Scalpel is as follows:

$ scalpel [device] -o [output_folder]

The output_folder is where Scalpel will place all recovered files. Note that Scalpel creates the output directory itself.

But how does Scalpel know which files to recover? Well, that is defined in the application's configuration file.

This configuration file is usually located at the following location:

/etc/scalpel/scalpel.conf

You can open it with your favorite text editor; there you only have to uncomment the lines for the file formats you want to search for.

Scalpel configuration file

Scalpel will search for whichever file formats you uncomment.
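For illustration, entries in scalpel.conf follow the pattern extension, case sensitivity, maximum size, header, and optional footer; the byte values below are a sketch from memory and may differ slightly in your version:

gif   y   5000000     \x47\x49\x46\x38\x37\x61   \x00\x3b

jpg   y   200000000   \xff\xd8\xff\xe0\x00\x10   \xff\xd9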

Next, run the full Scalpel command and in the output folder, you will see the recovered files.

$ sudo scalpel /dev/sdb1 -o recovered_files1

Using scalpel to recover files

Sometimes, Scalpel restores parts of the file. That depends on the health of the drive and how much data has been corrupted.

Also, there are many carving algorithms you can use, but we covered the basic way of carving data here.

Recover files from a non-bootable system

This is a delicate case because we need to boot from a live CD of Ubuntu or another similar Linux distribution. Once we have booted, we can use TestDisk to try to recover the data.

In this case, we would have to use an external drive to save the data to. On the other hand, in case TestDisk can't do the job, we can also try extundelete or ext4magic, as long as the partition is Ext4.

If it does not work, you could try regenerating the partition using TestDisk as explained above.

Conclusion

It is possible to delete files accidentally. The idea is to know the appropriate tools and techniques to recover these files.

In this post, we have covered several circumstances and different file systems that could help avoid such problems.

Keep coming back.

Thank you.


Expect command and how to automate shell scripts like magic

In the previous post, we talked about writing practical shell scripts, and we saw how easy it is to write one. Today we are going to talk about a tool that does magic to our shell scripts: the Expect command, or the Expect scripting language. Expect is a language that talks to your interactive programs or scripts that require user interaction. It works by expecting input, and then the Expect script sends the response without any user interaction. You can say that this tool is your robot, which will automate your scripts.


If the Expect command is not installed on your system, you can install it using the following command:

$ sudo apt-get install expect

Or on Red Hat based systems like CentOS:

$ sudo yum install expect

Expect Command

Before we dive in, let's look at the main Expect commands used for interaction:

  • spawn: Starts a script or a program, like the shell, FTP, Telnet, SSH, SCP, and so on.
  • expect: Waits for output from the program.
  • send: Sends a reply to the script or program.
  • interact: Lets you interact with the program yourself.

We are going to write a shell script that asks some questions, and then we will make an Expect script that answers them.

First, the shell script will look like this:

#!/bin/bash

echo "Hello, who are you?"

read REPLY

echo "Can I ask you some questions?"

read REPLY

echo "What is your favorite topic?"

read REPLY

Now we will write the Expect script that answers those questions automatically:

#!/usr/bin/expect -f

set timeout -1

spawn ./questions

expect "Hello, who are you?\r"

send -- "Im Adam\r"

expect "Can I ask you some questions?\r"

send -- "Sure\r"

expect "What is your favorite topic?\r"

send -- "Technology\r"

expect eof

The first line defines the expect command path which is #!/usr/bin/expect.

On the second line of code, we disable the timeout. Then we start our script using the spawn command.

We can use spawn to run any program we want or any other interactive script.

The remaining lines are the Expect script that interacts with our shell script.

The last line, expect eof, waits for the end-of-file marker, which means the end of the interaction.

Now, showtime! Let's run our answer bot; make sure you make it executable first.

$ chmod +x ./answerbot

$ ./answerbot

expect command

Cool!! All questions are answered as we expect.

If you get errors about the location of the Expect binary, you can find it using the which command:

$ which expect

We did not interact with our script at all; the Expect program did the job for us.

The above method can be applied to any interactive script or program. Although the above Expect script is very easy to write, Expect scripts may be a little tricky for some people. Well, there is a tool for that.

Using autoexpect

To build an Expect script automatically, you can use the autoexpect command.

autoexpect works like expect, but it builds the automation script for you. The script you want to automate is passed to autoexpect as a parameter; you answer the questions, and your answers are saved in a file.

$ autoexpect ./questions

autoexpect command

A file called script.exp is generated, containing the same code as we wrote above, with some additions that we will leave for now.

autoexpect script

If you run the auto-generated file script.exp, you will see the same answers as expected:

autoexpect script execution

Awesome!! That was super easy.

Many commands produce changeable output, as in the case of FTP programs, and the Expect script may fail or get stuck. To solve this problem, you can use wildcards for the changeable data to make your script more flexible.

Working with Variables

The set command is used to define variables in Expect scripts like this:

set MYVAR 5

To access the variable, precede it with $, like this: $MYVAR
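A minimal sketch putting set and variable access together:

#!/usr/bin/expect -f

set MYVAR 5

puts "MYVAR is $MYVAR"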

To define command line arguments in Expect scripts, we use the following syntax:

set MYVAR [lindex $argv 0]

Here we define a variable MYVAR which equals the first passed argument.

You can get the first and the second arguments and store them in variables like this:

set my_name [lindex $argv 0]

set my_favorite [lindex $argv 1]

Let’s add variables to our script:

#!/usr/bin/expect -f

set my_name [lindex $argv 0]

set my_favorite [lindex $argv 1]

set timeout -1

spawn ./questions

expect "Hello, who are you?\r"

send -- "Im $my_name\r"

expect "Can I ask you some questions?\r"

send -- "Sure\r"

expect "What is your favorite topic?\r"

send -- "$my_favorite\r"

expect eof

Now try to run the Expect script with some parameters to see the output:

$ ./answerbot SomeName Programming

expect command variables

Awesome!! Now our automated Expect script is more dynamic.

Conditional Tests

You can write conditional tests using braces like this:

expect {

"something" { send -- "send this\r" }

"*another" { send -- "send another\r" }

}

We are going to change our script to return different conditions, and we will change our Expect script to handle those conditions.

We are going to emulate different expects with the following script:

#!/bin/bash

let number=$RANDOM

if [ $number -gt 25000 ]; then

echo "What is your favorite topic?"

else

echo "What is your favorite movie?"

fi

read REPLY

A random number is generated every time you run the script and based on that number, we put a condition to return different expects.

Let's make our Expect script that will deal with that:

#!/usr/bin/expect -f

set timeout -1

spawn ./questions

expect {

"*topic?" { send -- "Programming\r" }

"*movie?" { send -- "Star wars\r" }

}

expect eof

expect command conditions

Very clear. If the script hits the topic output, the Expect script sends Programming, and if it hits the movie output, it sends Star wars. Isn't that cool?

If else Conditions

You can use if/else clauses in expect scripts like this:

#!/usr/bin/expect -f

set NUM 1

if { $NUM < 5 } {

puts "\Smaller than 5\n"

} elseif { $NUM > 5 } {

puts "\Bigger than 5\n"

} else {

puts "\Equals 5\n"

}

if command

Note: The opening brace must be on the same line.

While Loops

While loops in the Expect language must use braces to contain the expression, like this:

#!/usr/bin/expect -f

set NUM 0

while { $NUM <= 5 } {

puts "\nNumber is $NUM"

set NUM [ expr $NUM + 1 ]

}

puts ""

while loop

For Loops

To make a for loop in Expect, three fields must be specified, as in the following format:

#!/usr/bin/expect -f

for {set NUM 0} {$NUM <= 5} {incr NUM} {

puts "\nNUM = $NUM"

}

puts ""

for loop

User-defined Functions

You can define a function using proc like this:

proc myfunc { TOTAL } {

set TOTAL [expr $TOTAL + 1]

return "$TOTAL"

}

You can then call it like this:

#!/usr/bin/expect -f

proc myfunc { TOTAL } {

set TOTAL [expr $TOTAL + 1]

return "$TOTAL"

}

set NUM 0

while {$NUM <= 5} {

puts "\nNumber $NUM"

set NUM [myfunc $NUM]

}

puts ""

user-defined functions

Interact Command

Sometimes your Expect script contains sensitive information that you don't want to share with other users of your Expect scripts, like passwords or other data, so you want your script to take this password from you and then continue the automation normally.

The interact command reverts the control back to the keyboard.

When this command is executed, Expect will start reading from the keyboard.

This shell script will ask about the password as shown:

#!/bin/bash

echo "Hello, who are you?"

read REPLY

echo "What is you password?"

read REPLY

echo "What is your favorite topic?"

read REPLY

Now we will write the Expect script that will prompt for the password:

#!/usr/bin/expect -f

set timeout -1

spawn ./questions

expect "Hello, who are you?\r"

send -- "Hi Im Adam\r"

expect "*password?\r"

interact ++ return

send "\r"

expect "*topic?\r"

send -- "Technology\r"

expect eof

interact command

After you type your password, type ++ and control will return from the keyboard back to the script.

The Expect language has been ported to many languages, like C#, Java, Perl, Python, Ruby, and Shell, with almost the same concepts and syntax, due to its simplicity and importance.

The Expect scripting language is used in quality assurance, network measurements such as echo response time, automating file transfers and updates, and many other uses.

I hope you are now supercharged with some of the most important aspects of the Expect command and the autoexpect command, and know how to use them to automate your tasks in a smarter way.

Thank you.


How to write practical shell scripts

In the last post, we talked about regular expressions and saw how to use them in sed and awk for text processing; before that, we discussed the Linux sed command and the awk command. During the series, we wrote small shell scripts but didn't mix things up, so I think we should take a small step further and write a useful shell script. The scripts in this post will help you sharpen your scriptwriting skills. You can send messages to someone by phone or email, but one method, not commonly used anymore, is sending a message directly to a user's terminal. We are going to build a bash script that will send a message to a user who is logged into the Linux system. For this simple shell script, only a few functions are required. Most of the required commands are common and have been covered in our series on shell scripting; you can review the previous posts.


Sending Messages

First, we need to know who is logged in. This can be done using the who command which retrieves all logged in users.

who

shell scripts who command

To send a message, you need the username and their current terminal.

You also need to know whether messages are allowed for that user; you can check this with the mesg command.

mesg

mesg command

If the result shows “is y” that means messaging is permitted. If the result shows “is n”, that means messaging is not permitted.

To check the message status of any logged-in user, use the who command with the -T option.

who -T

If you see a dash (-), messages are turned off; if you see a plus sign (+), messages are enabled.

To allow messages, type the mesg command with the "y" option like this:

mesg y

allow messages

Sure enough, it shows “is y” which means messages are permitted for this user.

Of course, we need another user to communicate with, so in my case I'm going to connect to my PC using SSH. I'm already logged in with my user, so we have two users logged onto the system.

Let’s see how to send a message.

Write Command

The write command is used to send messages between users using the username and current terminal.

Users who are logged into a graphical environment (KDE, Gnome, Cinnamon, or any other) can't receive messages; the user must be logged onto the terminal.

We will send a message from my user likegeeks to the testuser user like this:

write testuser pts/1

write command

Type the write command followed by the user and the terminal and hit Enter.

When you hit Enter, you can start typing your message. After finishing, send the message by pressing the Ctrl+D key combination, which is the end-of-file signal. I recommend reviewing the post about signals and jobs.

Receive message

The receiver can see which user sent the message and from which terminal. EOF means that the message is finished.

I think now we have all the parts to build our shell script.

Creating The Send Script

Before we create our shell script, we need to determine whether the user we want to send a message to is currently logged on the system; this can be done using the who command.

logged=$(who | awk -v usr=$1 '{ if (tolower($1)==tolower(usr)) { print $1; exit } }')

We get the logged-in users using the who command and pipe the output to awk, which checks whether the first field matches the entered user (ignoring case).

The final output from the awk command is stored in the variable logged.

Then we need to check the variable if it contains something or not:

if [ -z "$logged" ]; then

echo "$1 is not logged on."

echo "Exit"

exit

fi

I recommend reading the post about the if statement and how to use it in Bash scripts.

Check logged user

The logged variable is tested to check whether it is empty.

If it is empty, the script prints the message and terminates.

If the user is logged in, the logged variable contains the username.

Checking If The User Accepts Messages

To check if messages are allowed or not, use the who command with -T option.

check=$(who -T | grep -i -m 1 $1 | awk '{print $2}')

if [ "$check" != "+" ]; then

echo "$1 disable messaging."

echo "Exit"

exit

fi

Check message allowed

Notice that we use the who command with -T. It shows a (+) beside the username if messaging is permitted; otherwise, it shows a (-).

Finally, we check whether the messaging indicator is set to the plus sign (+).

Checking If Message Was Included

You can check if the message was included or not like this:

if [ -z "$2" ]; then

echo "Message not found"

echo "Exit"

exit

fi

Getting the Current Terminal

Before we send a message, we need to get the user's current terminal and store it in a variable:

terminal=$(who | grep -i -m 1 $1 | awk '{print $2}')

Then we can send the message:

echo $2 | write $logged $terminal

Now we can test the whole shell script to see how it goes:

$ ./senderscript likegeeks welcome

Let’s see the other shell window:

Send message

Good!  You can now send simple one-word messages.

Sending a Long Message

If you try to send more than one word:

$ ./senderscript likegeeks welcome to shell scripting

One word message

It didn’t work. Only the first word of the message is sent.

To fix this problem, we will use the shift command with the while loop.

shift

while [ -n "$1" ]; do

whole_message=$whole_message' '$1

shift

done

Now one more thing needs to be fixed: instead of sending only $2, we send the whole_message variable.

echo $whole_message | write $logged $terminal

So now the whole script should be like this:
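Assembled from the pieces above, a minimal sketch of the complete script:

#!/bin/bash

# check whether the user is logged on (first match wins, ignoring case)
logged=$(who | awk -v usr=$1 '{ if (tolower($1)==tolower(usr)) { print $1; exit } }')

if [ -z "$logged" ]; then
echo "$1 is not logged on."
echo "Exit"
exit
fi

# check whether the user accepts messages
check=$(who -T | grep -i -m 1 $1 | awk '{print $2}')

if [ "$check" != "+" ]; then
echo "$1 has messaging disabled."
echo "Exit"
exit
fi

# make sure a message was supplied
if [ -z "$2" ]; then
echo "Message not found"
echo "Exit"
exit
fi

# get the user's current terminal
terminal=$(who | grep -i -m 1 $1 | awk '{print $2}')

# rebuild the whole message from the remaining parameters
shift
while [ -n "$1" ]; do
whole_message=$whole_message' '$1
shift
done

echo $whole_message | write $logged $terminal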

If you try now:

$ ./senderscript likegeeks welcome to shell scripting

Complete message

Awesome!! It worked. Again, the point is not the message-sending script itself; the main goal is to review our shell scripting knowledge, use all the parts we've learned together, and see how things work.

Monitoring Disk Space

Let's build a script that reports the ten biggest directories.

If you add the -s option to the du command, it shows summarized totals:

$ du -s /var/log/

The -S option shows the totals for each subdirectory:

$ du -S /var/log/

du command

You should use the sort command to sort the results generated by the du command to get the largest directories like this:

$ du -S /var/log/ | sort -rn

sort command

The -n option sorts numerically, and the -r option reverses the order so the biggest shows first.

The sed = command prints the line number before each line, and 11,$D deletes everything past the tenth line, so only the top ten remain:

sed '{11,$D; =}' |

The N command then joins each line-number line with its data line:

sed 'N; s/\n/ /' |

Then we can clean the output using the awk command, adding a colon and a tab so it reads much better:

awk '{printf $1 ":" "\t" $2 "\t" $3 "\n"}'

$ du -S /var/log/ |

sort -rn |

sed '{11,$D; =}' |

# pipe the first result for another one to clean it

sed 'N; s/\n/ /' |

# formatted printing using printf

awk '{printf $1 ":" "\t" $2 "\t" $3 "\n"}'

Format output with sed and awk

Suppose we have a variable called  MY_DIRECTORIES that holds 2 folders.

MY_DIRECTORIES="/home /var/log"

We will iterate over each directory in the MY_DIRECTORIES variable and get the disk usage using the du command.

So the shell script will look like this:
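A minimal sketch, reusing the pipeline above on each directory:

#!/bin/bash

MY_DIRECTORIES="/home /var/log"

echo "Top Ten Disk Space Usage"

for DIR in $MY_DIRECTORIES; do

echo "The $DIR Directory:"

du -S $DIR 2>/dev/null |
sort -rn |
sed '{11,$D; =}' |
sed 'N; s/\n/ /' |
awk '{printf $1 ":" "\t" $2 "\t" $3 "\n"}'

done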

Monitor disk usage

Good!! Both directories /home and /var/log are shown on the same report.

You can filter files, so instead of calculating the consumption of all files, you can calculate the consumption for a specific extension like *.log or whatever.

One thing I have to mention here: on production systems, you can't rely on disk space reports; instead, you should use disk quotas.

The quota package is specialized for that, but here we are learning how bash scripts work.

Again, the shell scripts we've introduced here are for showing you how shell scripting works; there are a ton of ways to implement any task in Linux.

My post is finished! I tried to reduce the post length and make everything as simple as possible; I hope you like it.

Keep coming back. Thank you.


Regex tutorial for Linux (Sed & AWK) examples

In order to successfully work with the Linux sed editor and the awk command in your shell scripts, you have to understand regular expressions, or regex for short. Since there are many engines for regex, we will use the shell regex and see the power of bash in working with regex. First, we need to understand what regex is; then we will see how to use it. When some people see regular expressions for the first time, they say, "what are these ASCII pukes?!" Well, a regular expression, or regex, is in general a pattern of text that you define and that a Linux program like sed or awk uses to filter text. We saw some of those patterns when introducing basic Linux commands and saw how the ls command uses wildcard characters to filter output.


Types of regex

Many different applications in Linux use different types of regex: the regex included in programming languages (Java, Perl, Python, and so on), Linux programs like sed, awk, and grep, and many other applications.

A regex pattern is interpreted by a regular expression engine, which translates those patterns.

Linux has two regular expression engines:

  • The Basic Regular Expression (BRE) engine.
  • The Extended Regular Expression (ERE) engine.

Most Linux programs work well with BRE engine specifications, though some tools like sed understand only a subset of the BRE rules.

The POSIX ERE engine is shipped with some programming languages. It provides more patterns like matching digits, and words. The awk command uses the ERE engine to process its regular expression patterns.

Since there are many regex implementations, it's difficult to write patterns that work on all engines. Hence, we will focus on the most commonly found regex features and demonstrate how to use them in sed and awk.

Define BRE Patterns

You can define a pattern to match text like this:

echo "Testing regex using sed" | sed -n '/regex/p'

echo "Testing regex using awk" | awk '/regex/{print $0}'

Linux regex tutorial

You may notice that the regex doesn't care where the pattern occurs in the data stream, or how many times it occurs.

The first rule to know is that regular expression patterns are case sensitive.

echo "Welcome to LikeGeeks" | awk '/Geeks/{print $0}'

echo "Welcome to Likegeeks" | awk '/Geeks/{print $0}'

regex character case

The first regex succeeds because the word "Geeks" appears with an uppercase G, while the second line fails because it uses lowercase letters.

You can use spaces or numbers in your pattern like this:

echo "Testing regex 2 again" | awk '/regex 2/{print $0}'

space character

Special Characters

Regex patterns use some special characters, and you can't include them in your patterns directly; if you do, you won't get the expected result.

These special characters are recognized by regex:

.*[]^${}\+?|()

You need to escape these special characters using the backslash character (\).

For example, if you want to match a dollar sign ($), escape it with a backslash character like this:

cat myfile

There is 10$ on my pocket

awk '/\$/{print $0}' myfile

dollar sign

If you need to match the backslash (\) itself, you need to escape it like this:

echo "\ is a special character" | awk '/\\/{print $0}'

special character

Although the forward slash isn't a special character, you still get an error if you use it directly:

echo "3 / 2" | awk '///{print $0}'

regex slash

So you need to escape it like this:

echo "3 / 2" | awk '/\//{print $0}'

escape slash

Anchor Characters

To locate the beginning of a line in a text, use the caret character (^).

You can use it like this:

echo "welcome to likegeeks website" | awk '/^likegeeks/{print $0}'

echo "likegeeks website" | awk '/^likegeeks/{print $0}'

anchor begin character

The caret character (^) matches the start of text:

awk '/^this/{print $0}' myfile

caret anchor

What if you use it in the middle of the text?

echo "This ^ caret is printed as it is" | sed -n '/s ^/p'

caret character

It’s printed as it is like a normal character.

When using awk, you have to escape it like this:

echo "This ^ is a test" | awk '/s \^/{print $0}'

escape caret

This is about matching the beginning of the text; what about matching the end?

The dollar sign ($) checks for the end of a line:

echo "Testing regex again" | awk '/again$/{print $0}'

end anchor

You can use both the caret and dollar sign on the same line like this:

cat myfile
this is a test
This is another test
And this is one more

awk '/^this is a test$/{print $0}' myfile

combine anchors

As you can see, it prints only the line that matches the pattern exactly.

You can filter blank lines with the following pattern:

awk '!/^$/{print $0}' myfile

Here we introduce negation, which is done with the exclamation mark (!).

The pattern searches for empty lines, where there is nothing between the beginning and the end of the line, and negates that to print only the lines that have text.

The dot Character

The dot character is used to match any character except newline (\n).

Look at the following example to get the idea:

cat myfile
this is a test
This is another test
And this is one more
start with this

awk '/.st/{print $0}' myfile

dot character

You can see from the result that it prints only the first two lines, because they contain a character followed by st. The third line does not have that pattern, and the fourth line starts with st, so there is no character before it for the dot to match.

Character Classes

You can match any character with the dot special character, but what if you want to match a set of characters only? You can use a character class.

The character class matches a set of characters; if any of them is found, the pattern matches.

The character class is defined using square brackets [] like this:

awk '/[oi]th/{print $0}' myfile

character classes

Here we search for any th characters preceded by an o or an i.

This comes in handy when you are searching for words that may start with an upper or lower case letter and you are not sure which.

echo "testing regex" | awk '/[Tt]esting regex/{print $0}'

echo "Testing regex" | awk '/[Tt]esting regex/{print $0}'

upper and lower case

Of course, it is not limited to letters; you can use numbers or whatever you want, as long as you get the idea.

Negating Character Classes

What about searching for a character that is not in the character class?

To achieve that, precede the character class with a caret, like this:

awk '/[^oi]th/{print $0}' myfile

negate character classes

So anything is acceptable except o and i.

Using Ranges

To specify a range of characters, you can use the (-) symbol like this:

awk '/[e-p]st/{print $0}' myfile

regex ranges

This matches any character between e and p followed by st, as shown.

You can also use ranges for numbers:

echo "123" | awk '/[0-9][0-9][0-9]/'

echo "12a" | awk '/[0-9][0-9][0-9]/'

number range

You can use multiple and separated ranges like this:

awk '/[a-fm-z]st/{print $0}' myfile

non-continuous range

The pattern here means that a character from a to f, or from m to z, must appear before the st text.

Besides explicit ranges, there are also POSIX special character classes, such as [[:alpha:]] for alphabetic characters and [[:digit:]] for digits:

echo "abc" | awk '/[[:alpha:]]/{print $0}'

echo "abc" | awk '/[[:digit:]]/{print $0}'

echo "abc123" | awk '/[[:digit:]]/{print $0}'

special character classes

The Asterisk

The asterisk means that the preceding character may exist zero or more times:

echo "test" | awk '/tes*t/{print $0}'

echo "tessst" | awk '/tes*t/{print $0}'

asterisk

This pattern symbol is useful for catching misspellings or language variations.

echo "I like green color" | awk '/colou*r/{print $0}'

echo "I like green colour " | awk '/colou*r/{print $0}'

asterisk example

In these examples, whether you type color or colour, it will match, because the asterisk means the "u" character may exist many times or zero times.

To match any number of any character, you can use the dot with the asterisk like this:

awk '/this.*test/{print $0}' myfile

asterisk with dot

It doesn't matter how many characters appear between the words "this" and "test"; any line that matches will be printed.

You can use the asterisk character with the character class.

echo "st" | awk '/s[ae]*t/{print $0}'

echo "sat" | awk '/s[ae]*t/{print $0}'

echo "set" | awk '/s[ae]*t/{print $0}'

asterisk with character classes

All three examples match, because the asterisk means: if the "a" or "e" characters appear zero times or more, the pattern matches.

Extended Regular Expressions

The following are some of the patterns that belong to POSIX ERE:

The question mark

The question mark means the preceding character can exist once or not at all:

echo "tet" | awk '/tes?t/{print $0}'

echo "test" | awk '/tes?t/{print $0}'

echo "tesst" | awk '/tes?t/{print $0}'

question mark

The question mark can be used in combination with a character class:

echo "tst" | awk '/t[ae]?st/{print $0}'

echo "test" | awk '/t[ae]?st/{print $0}'

echo "tast" | awk '/t[ae]?st/{print $0}'

echo "taest" | awk '/t[ae]?st/{print $0}'

echo "teest" | awk '/t[ae]?st/{print $0}'

question mark with character classes

If zero or one of the character class items exists, the pattern matching passes; more than one occurrence and it fails.

The Plus Sign

The plus sign means that the character before it must exist one or more times, so at least once:

echo "test" | awk '/te+st/{print $0}'

echo "teest" | awk '/te+st/{print $0}'

echo "tst" | awk '/te+st/{print $0}'

plus sign

If the "e" character is not found, it fails.

You can use it with character classes like this:

echo "tst" | awk '/t[ae]+st/{print $0}'

echo "test" | awk '/t[ae]+st/{print $0}'

echo "teast" | awk '/t[ae]+st/{print $0}'

echo "teeast" | awk '/t[ae]+st/{print $0}'

plus sign with character classes

If any character from the character class exists, it succeeds.

Curly Braces

Curly braces enable you to specify the number of occurrences of a pattern. They have two formats:

{n}: The regex appears exactly n times.

{n,m}: The regex appears at least n times, but no more than m times.

echo "tst" | awk '/te{1}st/{print $0}'

echo "test" | awk '/te{1}st/{print $0}'

curly braces

In old versions of awk, you should use the --re-interval option for the awk command to make it read curly braces, but in newer versions you don't need it.

echo "tst" | awk '/te{1,2}st/{print $0}'

echo "test" | awk '/te{1,2}st/{print $0}'

echo "teest" | awk '/te{1,2}st/{print $0}'

echo "teeest" | awk '/te{1,2}st/{print $0}'

curly braces interval pattern

In this example, if the “e” character exists one or two times, it succeeds; otherwise, it fails.

You can use it with character classes like this:

echo "tst" | awk '/t[ae]{1,2}st/{print $0}'

echo "test" | awk '/t[ae]{1,2}st/{print $0}'

echo "teest" | awk '/t[ae]{1,2}st/{print $0}'

echo "teeast" | awk '/t[ae]{1,2}st/{print $0}'

interval pattern with character classes

If there are one or two instances of the letter “a” or “e” the pattern passes, otherwise, it fails.

Pipe Symbol

The pipe symbol makes a logical OR between two patterns. If one of the patterns exists, it succeeds; otherwise, it fails. Here is an example:

echo "Testing regex" | awk '/regex|regular expressions/{print $0}'

echo "Testing regular expressions" | awk '/regex|regular expressions/{print $0}'

echo "This is something else" | awk '/regex|regular expressions/{print $0}'

pipe symbol

Don’t type any spaces between the pattern and the pipe symbol.

Grouping Expressions

You can group expressions so the regex engine will consider them one piece.

echo "Like" | awk '/Like(Geeks)?/{print $0}'

echo "LikeGeeks" | awk '/Like(Geeks)?/{print $0}'

grouping expressions

Grouping "Geeks" makes the regex engine treat it as one piece, so if "LikeGeeks" or the word "Like" exists, it succeeds.

Practical examples

We saw some simple demonstrations of using regular expression patterns; it's time to put that into action, just for practice.

Counting Directory Files

Let's look at a bash script that counts the files in each of the folders listed in the PATH environment variable.

echo $PATH

To get a directory listing, you must replace each colon with a space:

echo $PATH | sed 's/:/ /g'

Now let’s iterate through each directory using the for loop like this:

mypath=$(echo $PATH | sed 's/:/ /g')

for directory in $mypath; do

done

Great!!

You can get the files on each directory using the ls command and save it in a variable.

You may notice that some directories don't exist; no problem, we can handle that.
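Putting it together, a minimal sketch (it simply counts directory entries, and errors from missing directories are silenced):

#!/bin/bash

mypath=$(echo $PATH | sed 's/:/ /g')

count=0

for directory in $mypath; do

# list the directory, ignoring the ones that don't exist
check=$(ls $directory 2>/dev/null)

for item in $check; do
count=$((count + 1))
done

echo "$directory - $count"

count=0

done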

count files

Cool!! This is the power of regex. These few lines of code count all files in all the PATH directories. Of course, there is a Linux command to do that very easily, but here we discuss how to employ regex on something useful. You can come up with some more useful ideas.

Validating E-mail Address

There are a ton of websites that offer ready-to-use regex patterns for everything, including e-mail, phone numbers, and much more. This is handy, but we want to understand how it works:

username@hostname.com

The username can use any alphanumeric characters combined with dots, dashes, plus signs, and underscores.

The hostname can use any alphanumeric characters combined with dots and underscores.

For the username, the following pattern fits all usernames:

^([a-zA-Z0-9_\-\.\+]+)@

The plus sign means one or more characters must exist, followed by the @ sign.

Then the hostname pattern should be like this:

([a-zA-Z0-9_\-\.]+)

There are special rules for TLDs, or top-level domains: they must be no fewer than 2 and no more than 5 characters. The following is the regex pattern for the top-level domain:

\.([a-zA-Z]{2,5})$

Now we put them all together:

^([a-zA-Z0-9_\-\.\+]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

Let's test that regex against some emails:

echo "name@host.com" | awk '/^([a-zA-Z0-9_\-\.\+]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$/{print $0}'

echo "name@host.com.us" | awk '/^([a-zA-Z0-9_\-\.\+]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$/{print $0}'

validate email

Awesome!! Works great.

This was just the beginning of the regex world, which never ends. I hope after this post you understand these ASCII pukes 🙂 and can use them more professionally.

I hope you like the post.

Thank you.


30 Examples for Awk Command in Text Processing

In the previous post, we talked about the sed command and saw many examples of using it in text processing, and we saw how good it is at this, but it has some limitations. Sometimes you need something more powerful, giving you more control to process data. This is where the awk command comes in. The awk command, or GNU awk in particular, provides a scripting language for text processing. With the awk scripting language, you can define variables, use string and arithmetic operators, use control flow and loops, and generate formatted reports. In fact, you can process log files that contain millions of lines to output a readable report that you can benefit from.


Awk Options

The awk command is used like this:

awk options program file

Awk can take the following options:

-F fs To specify a field separator.

-f file To specify a file that contains awk script.

-v var=value To declare a variable.

We will see how to process files and print results using awk.

Read AWK Scripts

To define an awk script, use braces surrounded by single quotation marks like this:

awk '{print "Welcome to awk command tutorial "}'

awk command

If you type anything, it returns the same welcome string we provided.

To terminate the program, press Ctrl+D. Looks tricky? Don't panic; the best is yet to come.

Using Variables

With awk, you can process text files. Awk assigns some variables for each data field found:

  • $0 for the whole line.
  • $1 for the first field.
  • $2 for the second field.
  • $n for the nth field.

The whitespace character like space or tab is the default separator between fields in awk.

Check this example and see how awk processes it:

awk '{print $1}' myfile

awk command variables

The above example prints the first word of each line.

Sometimes the separator in a file is neither a space nor a tab but something else. You can specify it using the -F option:

awk -F: '{print $1}' /etc/passwd

awk command passwd

This command prints the first field in the passwd file. We use the colon as a separator because the passwd file uses it.

Using Multiple Commands

To run multiple commands, separate them with a semicolon like this:

echo "Hello Tom" | awk '{$2="Adam"; print $0}'

awk multiple commands

The first command makes the $2 field equal Adam. The second command prints the entire line.

Reading The Script From a File

You can type your awk script in a file and specify that file using the -f option.

Our file contains this script:

{print $1 " home at " $6}

awk -F: -f testfile /etc/passwd

read from file

Here we print the username and home path from /etc/passwd; the separator is specified with the capital -F option, which here is the colon.

You can write your awk script file with multiple commands like this:

{

text = $1 " home at " $6

print text

}

awk -F: -f testfile /etc/passwd

multiple commands

Awk Preprocessing

If you need to create a title or a header for your result, you can use the BEGIN keyword. It runs before processing the data:

awk 'BEGIN {print "Report Title"}'

Let's apply it to something where we can see the result:

awk 'BEGIN {print "The File Contents:"}

{print $0}' myfile

begin command

Awk Postprocessing

To run a script after processing the data, use the END keyword:

awk 'BEGIN {print "The File Contents:"}

{print $0}

END {print "File footer"}' myfile

end command

This is useful, you can use it to add a footer for example.

Let’s combine them together in a script file:

BEGIN {

print "Users and thier corresponding home"

print " UserName \t HomePath"

print "___________ \t __________"

FS=":"

}

{

print $1 " \t " $6

}

END {

print "The end"

}

First, the top section is created using BEGIN keyword. Then we define the FS and print the footer at the end.

awk -f myscript /etc/passwd

complete script

Built-in Variables

We saw that the data field variables $1, $2, $3, and so on are used to extract data fields, and we also dealt with the field separator FS.

But these are not the only variables, there are more built-in variables.

The following list shows some of the built-in variables:

FIELDWIDTHS     Specifies the field width.

RS     Specifies the record separator.

FS     Specifies the field separator.

OFS  Specifies the output field separator.

ORS  Specifies the output record separator.

By default, the OFS variable is the space, you can set the OFS variable to specify the separator you need:

awk 'BEGIN{FS=":"; OFS="-"} {print $1,$6,$7}' /etc/passwd

builtin variables

Sometimes, the fields are distributed without a fixed separator. In these cases, the FIELDWIDTHS variable solves the problem.

Suppose we have this content:

1235.96521

927-8.3652

36257.8157

awk 'BEGIN{FIELDWIDTHS="3 4 3"}{print $1,$2,$3}' testfile

field width

Look at the output: there are 3 fields per line, and each field's length is exactly what we assigned in FIELDWIDTHS.

Suppose that your data are distributed on different lines like the following:

Person Name

123 High Street

(222) 466-1234

Another person

487 High Street

(523) 643-8754

In the above example, awk fails to process fields properly because the fields are separated by newlines and not spaces.

You need to set the FS to the newline (\n) and the RS to a blank text, so empty lines will be considered separators.

awk 'BEGIN{FS="\n"; RS=""} {print $1,$3}' addresses

field separator

Awesome! we can read the records and fields properly.

More Variables

There are some other variables that help you to get more information:

ARGC     Retrieves the number of passed parameters.

ARGV     Retrieves the command line parameters.

ENVIRON     Array of the shell environment variables and corresponding values.

FILENAME    The file name that is processed by awk.

NF     Fields count of the line being processed.

NR    The total count of processed records.

FNR     The record number within the current file.

IGNORECASE     To ignore the character case.

You can review the previous shell scripting posts to learn more about these variables.

Let’s test them.

awk 'BEGIN{print ARGC,ARGV[1]}' myfile

awk command arguments

The ENVIRON variable retrieves the shell environment variables like this:

$ awk '

BEGIN{

print ENVIRON["PATH"]

}'

data variables

You can use shell variables without the ENVIRON variable by passing them with the -v option like this:

echo | awk -v home=$HOME '{print "My home is " home}'

awk shell variables

The NF variable holds the count of fields, which lets you get the last field in the record without knowing its position:

awk 'BEGIN{FS=":"; OFS=":"} {print $1,$NF}' /etc/passwd

awk command NF

The NF variable can be used as a data field variable if you type it like this: $NF.

Let’s take a look at these two examples to know the difference between FNR and NR variables:

awk 'BEGIN{FS=","}{print $1,"FNR="FNR}' myfile myfile

awk command FNR

In this example, the awk command defines two input files (the same file, processed twice). The output is the first field value and the FNR variable.

Now, check the NR variable and see the difference:

awk '

BEGIN {FS=","}

{print $1,"FNR="FNR,"NR="NR}

END{print "Total",NR,"processed lines"}' myfile myfile

awk command NR FNR

The FNR variable resets to 1 when awk comes to the second file, but the NR variable keeps counting.

User Defined Variables

Variable names can be anything, but they can't begin with a number.

You can assign a variable as in shell scripting like this:

awk '

BEGIN{

test="Welcome to LikeGeeks website"

print test

}'

user variables

Structured Commands

The awk scripting language supports the if conditional statement.

The testfile contains the following:

10

15

6

33

45

awk '{if ($1 > 30) print $1}' testfile

if command

Just that simple.

You should use braces if you want to run multiple statements:

awk '{

if ($1 > 30)

{

x = $1 * 3

print x

}

}' testfile

multiple statements

You can use else statements like this:

awk '{

if ($1 > 30)

{

x = $1 * 3

print x

} else

{

x = $1 / 2

print x

}}' testfile

awk command else

Or type them on the same line and separate the if statement with a semicolon like this:

else one line

While Loop

You can use the while loop to iterate over data with a condition.

cat testfile

124 127 130

112 142 135

175 158 245

118 231 147

awk '{

sum = 0

i = 1

while (i < 5)

{

sum += $i

i++

}

average = sum / 3

print "Average:",average

}' testfile

while loop

The while loop runs; on each iteration it adds the current field to the sum variable and adds 1 to i, until i becomes 5 and the loop exits.

You can exit the loop using break command like this:

awk '{

tot = 0

i = 1

while (i < 5)

{

tot += $i

if (i == 3)

break

i++

}

average = tot / 3

print "Average is:",average

}' testfile

awk command break

The for Loop

The awk scripting language supports the for loops:

awk '{

total = 0

for (var = 1; var < 5; var++)

{

total += $var

}

avg = total / 3

print "Average:",avg

}' testfile

for loop

Formatted Printing

The printf command in awk allows you to print formatted output using format specifiers.

The format specifiers are written like this:

%[modifier]control-letter

This list shows the format specifiers you can use with printf:

c              Prints numeric output as a string.

d             Prints an integer value.

e             Prints scientific numbers.

f               Prints float values.

o             Prints an octal value.

s             Prints a text string.

Here we use printf to format our output:

awk 'BEGIN{

x = 100 * 100

printf "The result is: %e\n", x

}'

awk command printf

Here is an example of printing scientific numbers.

We are not going to try every format specifier here; you get the concept.
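For instance, a quick sketch mixing the string and integer specifiers:

awk 'BEGIN{printf "%s is %d years old\n", "Adam", 30}'

This prints: Adam is 30 years old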

Built-In Functions

Awk provides several built-in functions like:

Mathematical Functions

If you love math, you can use these functions in your awk scripts:

sin(x) | cos(x) | sqrt(x) | exp(x) | log(x) | rand()

And they can be used normally:

awk 'BEGIN{x=exp(5); print x}'

math functions

String Functions

There are many string functions; you can check the list, but we will examine one of them as an example, and the rest work the same way:

awk 'BEGIN{x = "likegeeks"; print toupper(x)}'

string functions

The function toupper converts character case to upper case for the passed string.
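As another quick taste, length is a standard awk string function that returns the length of the passed string:

awk 'BEGIN{x = "likegeeks"; print length(x)}'

This prints 9.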

User Defined Functions

You can define your own functions and use them like this:

awk '

function myfunc()

{

printf "The user %s has home path at %s\n", $1,$6

}

BEGIN{FS=":"}

{

myfunc()

}' /etc/passwd

user defined functions

Here we define a function called myfunc; then we use it in our script to print output using the printf function.

I hope you like the post.

Thank you.


How to install GNU Octave on Debian 10 Buster?

GNU Octave is free software that uses a language largely compatible with its commercial counterpart, MATLAB. It provides a command-line interface suited to numerically solving linear and nonlinear mathematical problems and to carrying out other numerical experiments. The software, which offers both a command screen and graphical interfaces, has been developed under the GNU Project since 1988 and can also be used as a batch-oriented language. GNU Octave, which may be redistributed and/or modified under the terms of the GNU General Public License, was written by John W. Eaton and many others. Because GNU Octave is free software, you can help make it even more useful by writing and contributing additional functions, or by reporting the problems you encounter. So how do you install GNU Octave, a high-level language designed primarily for numerical computations, on Debian 10 Buster?


First, let's update our repositories with the following command:

sudo apt update

Now you can install GNU Octave with the following command:

sudo apt install octave
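Once the installation finishes, you can check which version landed; octave supports the standard --version flag:

octave --version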

Press Y and then <Enter> to confirm the installation. The APT package manager will download and install all required packages. Once GNU Octave is installed, you can find its icon in the application menu of Debian 10. At the time of this writing, the latest version of GNU Octave was 5.1.0; however, the GNU Octave version in the official package repository is older. If you wish, you can install GNU Octave 5.1.0 on Debian 10 from the Flathub flatpak repository. Flatpak is not installed on Debian 10 by default, but you can easily install it from Debian 10's official package repository. To do that, let's first update our repositories again with the following command:

sudo apt update

Now, install Flatpak with the following command:

sudo apt install flatpak gnome-software-plugin-flatpak

Now add the Flathub Flatpak repository on Debian 10 using the following command:

sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

Now, restart your computer using the following command:

sudo reboot

When your computer comes back up, run the following command to install the latest version of GNU Octave from Flathub:

flatpak install flathub org.octave.Octave

The process may take a while, but it should complete without problems. You can now use the latest version of GNU Octave.
