Convert String Hash

Problem

Need to convert a string to a hash in Perl.



Solution

split can also be used to generate a hash from a string.

In this instance split matches on two separators: the equals sign between each key and value, and the comma between pairs.



Example



$str="mouse=one,cat=two,dog=three";
%hash=split /,|=/, $str;

while(($k,$v)=each(%hash)) {
print $k."=>".$v."\n";
}

Produces:

cat=>two
dog=>three
mouse=>one






Word Counting

Problem

You want to count the words in a file, using Perl.



Solution

Obviously wc -w can be used for this under UNIX.

But it makes a fairly decent demo of the power of split and push! 🙂



Example



while(<>) { push @words, split; }

print "found ".scalar(@words)." words in file\n";

Works nicely, even over multiple lines:


[user@host]/var/log/httpd% echo "test 1 23 4\na b c" | perl -e 'while(<>) {
chomp();
push @words, split;
}
print "found ".scalar(@words)." words in file\n";
'
found 7 words in file






Number Output Lines

Problem

You want to number each line of output.



Solution

Again there are other ways to do this with UNIX, such as grep -vn XXXX filename (which numbers every line not matching XXXX).

But surely there are times you want to number the lines of standard input.



Example



echo "testingntestingn123" | perl -e 'while(<>) {
chomp();
print($..": ".$_."n");

}'
1: testing
2: testing
3: 123

Or all on one line:


echo "testing\ntesting\n123" | perl -ne 'chomp();print($..": ".$_."\n");'
1: testing
2: testing
3: 123






Perl Doc Usage

Problem

You need help on a specific Perl function or module, or just have a general question.



Solution

A Perl installation comes with its own documentation system.

In this post, I’ll just cover how to invoke the various pieces of Perl documentation.

Going forward I’ll add posts describing how to create the different types of Perl documentation.



Example


See the different Perl areas under the documentation system:
perldoc perl

Look up a built-in function by name (e.g. split):
perldoc -f split

Display the source of a module:
perldoc -m Module::Name

Search the Perl FAQ:
perldoc -q pattern

Recursively search the entire Perl installation:
perldoc -r pattern






Importing AWK SED

Problem

You have some way cool awk or sed scripts, but you want to use Perl.



Solution

Easy. Perl comes with two excellent converters, a2p and s2p, which will convert awk or sed scripts to Perl. An excellent tool for learning Perl too!! 🙂



Example


To convert from AWK to Perl:


a2p awkscript

To convert from Sed to Perl:


s2p sedscript


$ date | awk ' { print $(NF-1) } '
WST

$ cat > awkscript
{ print $(NF-1) }

$ date | awk -f awkscript
WST

$ a2p awkscript
#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if $running_under_some_shell;
# this emulates #! processing on NIH machines.
# (remove #! line above if indigestible)

eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches

$[ = 1;     # set array base to 1
$, = ' ';   # set output field separator
$\ = "\n";  # set output record separator

while (<>) {
chomp;      # strip record separator
@Fld = split(' ', $_, 9999);
print $Fld[$#Fld - 1];
}

$ a2p awkscript > perlscript

$ date | ./perlscript
WST






Environment Variables

Problem

You want to set something once and have it remembered between UNIX sessions.



Solution

The simplest way to set environment variables is via the shell rc or profile scripts.



Example


For example, if your shell is zsh, use .zprofile or .zshrc. For bash it is .bashrc or .profile.

To invoke environment variables from cron, use something like this:


30 * * * * zsh -c "(source $HOME/.zprofile; cmd ...)"

This sources your .zprofile before the command is invoked. Additionally, you can use a dot to source a script containing your variables:


. scriptname
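As a quick sanity check on how this propagates, a sketch (GREETING is a made-up example name) showing that an exported variable is inherited by child processes, which is why setting it in a profile script makes it visible everywhere:

```shell
# Export a variable in the current shell...
export GREETING="hello"
# ...and confirm a child shell sees it.
out=$(sh -c 'echo "$GREETING"')
echo "$out"    # prints "hello"
```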






Counting Column Contents

Problem

You want to count the contents of a column, cumulatively totaling each number in a given column.



Solution

To total up the contents of a given column, use following awk code:



Example



nawk -v col=# ' { tot+=$col } END { print tot } '

In this example, run du on the current directory and then total column one (1). Notice awk receives the column number to sum as a variable; replace # with the column you want.


du -ks * | awk -v col=1 ' { tot+=$col } END { print tot } '
11168
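The END block can report more than the running total. A sketch (sample numbers made up) that prints both the total and the average of the column:

```shell
# Total and average a column; the column number is passed in as a variable.
out=$(printf '3\n5\n7\n' | awk -v col=1 '
  { tot += $col; n++ }              # accumulate sum and row count
  END { printf "%d %d", tot, tot/n }  # emit total and average
')
echo "$out"    # total 15, average 5
```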






Regular Expression Matching NOT

Problem

You want to find a line, not matching the pattern.



Solution

I find this invaluable whilst editing crontabs. Or in vi – like this: [^x], where x is the character you want to exclude.



Example


For example after performing a crontab -e, wanting to skip comments.


crontab -e
/^[^#]

Also with sed – substituting tags, etc:


sed 's#<[^>]*>##g' filename

The vi search above says: in your crontab, match the next line whose first character is not a hash (i.e. not a comment).

After performing this once, just type n – for the next match (i.e. next line that does not start with a comment).
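The same negated character class works with grep. A sketch (sample crontab lines made up) that keeps only the non-comment lines:

```shell
# Filter out comment lines: ^[^#] matches lines whose first char is not '#'.
out=$(printf '# comment line\n30 * * * * somejob\n' | grep '^[^#]')
echo "$out"    # only the cron entry survives
```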






Finding Disk Space Hogs

Problem

A common admin task: finding out what is using all the disk space.



Solution

Simplest thing to do, in a known directory – for example /var/log – is to run du -ks *.

That will show all files and directories in the current directory, along with their disk space in kilobytes.

Then cd down into the top ones and rerun. You can also pump the output through sort -n (on older Solaris, field sorts use the +n syntax instead of -k).



Example



cd /var/log
du -ks * | sort -n # Linux
cd httpd
du -ks * | sort -n

With some flavours of UNIX, du shows actual physical disk space being used – whereas df shows disk space being reserved. This is only really noticeable when you remove a file that is still being written to: du shows the current directory is only using x kb, while df still says you are at 100%. 🙂 You need to find and kill the process holding the file open.

Another useful way of finding the largest files is the find command – something like this:


find / -xdev -type f -a -size +20000 -ls

This says: find files over 10MB (20 thousand 512-byte blocks), and do not traverse file systems mounted below this one (-xdev). You can also say -mount to stop find traversing those file systems. That way df and find tally – just re-run it for /var, etc. as required.

Again this can be pumped through to sort:


find / -xdev -type f -a -size +20000 -ls | sort -k7 -n # linux
find / -xdev -type f -a -size +20000 -ls | sort +6n # Solaris - starts field count from zero
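The du-then-sort approach above can be sketched on throwaway data (file names and sizes made up, created in a temp directory) to show the largest entry landing last:

```shell
# Create two files of known size, then list them smallest-to-largest.
tmp=$(mktemp -d)
head -c 1048576 /dev/zero > "$tmp/big"    # 1 MB file
head -c 512     /dev/zero > "$tmp/small"  # 512-byte file
out=$(du -k "$tmp"/* | sort -n | tail -1) # last line is the biggest hog
rm -rf "$tmp"
echo "$out"    # the "big" file ends up last
```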






Display Directory Usage As Percentage

Problem

How many times have you been given the job of clearing down space in a UNIX file system?

Ever wanted to see the percentage used on a directory-by-directory basis?



Solution

Here is a simple bit of code to display the current directory’s percentage of the overall space available on its file system.



Example



(du -ks . | tr '\n' ' '; (df -k . | tail -1)) | gawk ' { printf("%0.0f%%\n",($1/$4)*100); } '

Here is a run-through, checking /usr (hostname replaced with a placeholder):


[root@host usr]# (du -ks . | tr '\n' ' '; (df -k . | tail -1)) | gawk ' { printf("%0.0f%%\n",($1/$4)*100); } '
37%

[root@host usr]# cd java

[root@host java]# (du -ks . | tr '\n' ' '; (df -k . | tail -1)) | gawk ' { printf("%0.0f%%\n",($1/$4)*100); } '
4%

[root@host java]# cd ../X11R6
[root@host X11R6]# (du -ks . | tr '\n' ' '; (df -k . | tail -1)) | gawk ' { printf("%0.0f%%\n",($1/$4)*100); } '
2%

[root@host X11R6]# cd ../lib
[root@host lib]# (du -ks . | tr '\n' ' '; (df -k . | tail -1)) | gawk ' { printf("%0.0f%%\n",($1/$4)*100); } '
15%

[root@host lib]# cd ../share
[root@host share]# (du -ks . | tr '\n' ' '; (df -k . | tail -1)) | gawk ' { printf("%0.0f%%\n",($1/$4)*100); } '
13%
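The arithmetic in the pipeline is just used divided by available, times 100. A sketch with fixed sample numbers (37000 KB used of 100000 KB available, figures made up):

```shell
# $1 is the du total, $2 stands in for df's available-blocks field.
out=$(printf '37000 100000\n' | awk '{ printf "%0.0f%%", ($1/$2)*100 }')
echo "$out"    # 37%
```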


