Thursday, 10 April 2014

Sed/Awk Top Commands (Unix Interview Questions)

In this post I will cover some very useful sed/awk commands. These are quite handy in day-to-day work and also come up often in Unix interviews.

How to Print Only the Blank Lines of a File
sed -n '/^$/p' Test_file.txt
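
If you only want a count of the blank lines rather than the lines themselves, grep can do that too:
grep -c '^$' Test_file.txt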

To Print the First and Last Line Using the Sed Command

sed -n '1p' Test_file.txt

sed -n '$p' Test_file.txt
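
Both prints can also be combined into a single call:

sed -n '1p;$p' Test_file.txt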

To Print All Lines Except the First Line

sed -n '1!p' Test_file.txt

Delete All Lines Except the First Line

sed '1!d' Test_file.txt

How to Get Only the Zero-Byte Files Present in a Directory

ls -ltr | awk '/^-/ { if ($5 == 0) print $9 }'
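
A find-based alternative that avoids parsing ls output (the -maxdepth option is GNU/BSD find, not strict POSIX):

find . -maxdepth 1 -type f -size 0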

How to Add a Header Record and a Trailer Record to a File in Linux

sed -i -e '1i Header' -e '$a Trailer' test_file.txt
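
A quick sketch of the effect on a hypothetical two-line file, assuming GNU sed (where -i edits in place and 1i/$a take the text on the same line):

printf 'line1\nline2\n' > test_file.txt
sed -i -e '1i Header' -e '$a Trailer' test_file.txt
cat test_file.txt              # now shows Header, line1, line2, Trailer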

How to Write Even-Numbered Records to One File and Odd-Numbered Records to Another

awk 'NR % 2 == 0' test_files.txt > even_records.txt
awk 'NR % 2 != 0' test_files.txt > odd_records.txt
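
Both output files can also be produced in a single pass; the output file names here are just examples:

awk 'NR % 2 == 0 { print > "even_records.txt" } NR % 2 != 0 { print > "odd_records.txt" }' test_files.txt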

Remove all empty lines:

sed '/^$/d' test_file.txt

sed '/./!d' test_file.txt
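
An awk one-liner does the same job (NF is zero for an empty line, so empty lines are not printed; note it also drops whitespace-only lines):

awk 'NF' test_file.txt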

Add text at the start of each line:
awk '{print "START" $0}' FILE

Add text at the end of each line:
awk '{print $0 "END"}' FILE
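
The sed equivalents, if you prefer sed for this:
sed 's/^/START/' FILE
sed 's/$/END/' FILE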

To See a Particular Line

For example, if you just want to see the 180th line of a file:

sed -n '180p' testfile.txt
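
The same address syntax also takes a range, so a block of lines can be printed:

sed -n '170,180p' testfile.txt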

To Print a Particular Column of a File

awk -F"," '{print $2}' testfile.txt
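
For a simple single-character delimiter, cut works as well:

cut -d',' -f2 testfile.txt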

To Rename a File with the Current Date

mv test test_$(date +%Y-%m-%d)

Command to extract all lines that have an 8 at the 17th position
grep '^.\{16\}8' testfile.txt > testfile_new.txt
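
An equivalent awk form using substr:
awk 'substr($0,17,1) == "8"' testfile.txt > testfile_new.txt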

To remove the nth line without opening the file, replace n with the line number, e.g. to remove the 5th line:
sed '5d' file1 > file2

To remove multiple lines:
sed -e '1d' -e '5d' file1 > file2
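
With GNU sed, -i edits the file in place so no second file is needed:
sed -i '5d' file1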

To find the top 20 files using the most space

ls -ltr | awk '{print $5, $9}' | sort -n | tail -20
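
du gives a similar listing without depending on ls output columns:

du -ak . | sort -n | tail -20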

To find records in the first file that are not in the second (both files must be sorted)
comm -23 testfile1.txt testfile2.txt
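
If the files are not sorted, a grep-based alternative treats each line of the second file as a fixed whole-line pattern:
grep -F -x -v -f testfile2.txt testfile1.txt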

If you are looking for something that is contained in a file but you don't know which directory it is in, do the following:
find . -name "*" | xargs grep -i something
This will find all of the files in the current directory and below and grep for the string "something" in those files.
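
If any file names contain spaces, the null-delimited form (GNU/BSD find and xargs) is safer:
find . -type f -print0 | xargs -0 grep -i something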

Delete Files: delete all files whose names start with testfile

find . -type f -name "testfile*" -exec rm -f {} \;
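
With GNU/BSD find, the -delete action avoids spawning rm for every match:

find . -type f -name "testfile*" -delete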

Remove blank spaces from a file

sed -e "s/ *//g" testfile.txt > testfile.txt_wo_space
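
To strip tabs as well as spaces, a POSIX character class can be used:

sed -e "s/[[:blank:]]*//g" testfile.txt > testfile.txt_wo_space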

How to grep a large number of files in a directory

Normally you get the error "argument list too long" if you try to grep across a directory containing a very large number of files (for example with a shell glob like /tmp/*). It can be avoided by running grep recursively on the directory instead:

grep -rl "Search Text" /tmp
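
If you need finer control over which files are searched, find with the '+' form of -exec batches the arguments and also sidesteps the limit (-maxdepth is GNU/BSD find):

find /tmp -maxdepth 1 -type f -exec grep -l "Search Text" {} +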


How to count the lines in an EBCDIC file

dd if=ebcdic_file bs=$REC_SIZE > /dev/null

Here $REC_SIZE is the fixed record length of the file; dd reports on stderr how many full records of that size were read, which gives the record (line) count.
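
A sketch of pulling the record count out of dd's summary, assuming GNU dd (which prints "N+0 records in" on stderr) and a hypothetical record length:

REC_SIZE=100                                   # assumed fixed record length in bytes
dd if=ebcdic_file bs=$REC_SIZE of=/dev/null 2>&1 | awk -F'+' 'NR == 1 {print $1}'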
