Wednesday, December 15, 2010

Listing files/directories in EMC Networker Backup

To see which files were backed up, we can use the nsrinfo command. First find the nsavetime with mminfo:
mminfo -q "name=/etc,volume=,level=incr" -r "savetime(22),nsavetime"
Then query the client file index:
nsrinfo -t nsavetime clientName
For example, nsrinfo -t 1233034744 coolclient
nsrinfo -v -t 1233034744 coolclient
The -v option gives detailed information about the files backed up in the save set corresponding to that nsavetime.

Index Backup in different Media Pool in EMC Networker

Quick Note -
Index Backups to different or respective pools
Indices get appropriate-level backups with each client backup that occurs. If you want to force a full index backup for all machines, and you have all machines in a group, you can run (from the command line):

savegrp -O -l full groupName
For example, savegrp -v -O -l full "datavault Volume"

savegrp -v -O -l full -G groupname

For example, savegrp -O -l full mysqlserver -------> this takes a full-level backup of the index & bootstrap records for that group.
where "groupName" is the name of the index group (or any other group that has all clients in it). The -O option saves only bootstrap/index information, not the actual client data itself.

Nsrstage case study for cloned savesets in EMC Networker (Research)

Here is a little piece of research I did on NetWorker SSIDs & clone IDs.

Case 1: original save set -------> clone -------> stage
Created tapes for the Default pool and the Default Clone pool; each pool has its own tape assigned.
Now I created a 1 GB file and wrote it to the Default pool tape:
save -b Default /tmp/1g.file
mminfo -q "name=/tmp" -r ssid,cloneid,nsavetime
Note the ssid and clone id.

Now we will clone this save set to the Default Clone pool.
mminfo -q "volume=<>" -r ssid,cloneid,nsavetime
mminfo -q "volume=<>" -r ssid,cloneid,nsavetime
Compare the ssid across both volumes.

Now relabel the Default tape to make it blank.
nsrstage the Default Clone pool back to the Default pool
and note the ssid and cloneid for the staged save sets:
mminfo -q "volume=<>" -r ssid > 1.txt
nsrstage -v -b Default -m -S -f 1.txt
mminfo -q "<>" -r ssid,cloneid
Compare with the Default Clone values to see whether anything changed.

1) after save
mminfo -q "name=/tmp/1g.tmp" -r ssid,cloneid,nsavetime
ssid clone id save time
3219192121 1289812281 1289812231

2) after cloning
mminfo -q "volume=AZ0061L4" -r ssid,cloneid,nsavetime
AZ0061L4 - Default pool

ssid clone id save time
3219192121 1289812281 1289812231

mminfo -q "volume=AZ0059L4" -r ssid,cloneid,nsavetime
AZ0059L4 - Default Clone pool

ssid clone id save time
3219192121 1289836444 1289812231

3) after staging
mminfo -q "volume=AZ0061L4" -r ssid,cloneid,nsavetime

ssid clone id save time
3219192121 1289837524 1289812231

mminfo -q "volume=AZ0059L4" -r ssid,cloneid,nsavetime

6095:mminfo: no matches found for the query

Case 2: original save set --------------> stage
Create 2 tapes for the Default pool.
Now create the 1 GB file and write it to the Default pool.
Note the ssid and cloneid.
Stage it to the other tape.
Again note the ssid and cloneid on both tapes and verify what changes.

1) after save
mminfo -q "name=/tmp/1g.tmp" -r ssid,cloneid,nsavetime --------> AZ0061L4

ssid clone id save time
3084999581 1289837469 1289837417

2) after staging
mminfo -q "volume=AZ0059L4" -r ssid,cloneid,nsavetime -------------> AZ0059L4

ssid clone id save time
3084999581 1289838604 1289837417

mminfo -q "volume=AZ0061L4" -r ssid,cloneid,nsavetime

6095:mminfo: no matches found for the query

Conclusion: in both cases the ssid and savetime remain unchanged; each clone or stage instance receives a new cloneid, and staging removes the source instance from the media database.

EMC Networker Recover Command usage

Quick Note -
The recover command can be used interactively, or the parameters can be given on the command line itself, as in the case of recovery by SSID.
To use the recover command in interactive mode:
Find the save time using the mminfo command, such as
mminfo -q "name=/tmp" -r volume,level,sumsize,savetime
Inside recover, the ls command shows the save sets for the respective client (or for the server, which is itself a client).
At the recover prompt, enter the changetime command:
changetime -l 10/27/10
or
changetime -l 10/27/10 10:01 am
This will list the files present at that time.
To see the attributes, you can use the ll command inside the interactive recover session.
The changetime command accepts the date as mm/dd/yy, e.g. 10/27/10 10:00 am.

To mark files for recovery, the recover list has to be built.
To build it, use
add -q
To view the recover list:
list -l
To delete a filename from the recover list:
delete <>
Use the destination command to see the default recovery location.
Use the volumes command to see which volumes are required.
Use relocate to specify a new destination directory for the recovered files. Finally, run recover to restore the files to the desired destination.

EMC Networker 7 series Manual Backup through commands

Manual backups are sometimes necessary, so here is the way.
1) Use save -l incr -b Default /tmp
This creates a backup with the manual flag, but the -l flag is ignored.
NetWorker needs a timestamp to base a non-full backup against. That timestamp is the nsavetime of a previous backup (for an incremental, the nsavetime of the most recent backup of the save set).
2) To see this information, use the mminfo command:
mminfo -q "name=/tmp" -r volume,level,sumsize,nsavetime,client,ssid (etc)
Note the nsavetime of the last backup.
3) Now use this time with the -t option; this backs up all changes to the directory since the given time.
This will show as a manual backup in mminfo, but if you compare the sizes you will see it is effectively an incremental backup.
For example: save -q -LL -t <> /tmp
4) As you can see, we've got a full backup, and a subsequent manual backup that is effectively an incremental against the full.
5) This is useful when someone has to take a backup without being given administrative access to the NetWorker server.

Index Recovery in EMC Networker 7 series server

Hi guys, this is a quick note to myself.
Sometimes we need to recover files from tapes whose retention policy has expired, where the online index is no longer present on the server for browsing to the desired file.
To recover all client file indexes for a particular client:
nsrck -L7 clientname
This can also recover NDMP client indexes
(the scanner -i option will not work for an NDMP client index).

To rescan a tape and rebuild the index for a tape whose browse policy has expired, or whose index is not present in the online server index, do the following:
Mount the tape into the drive in read-only mode (make it read-only first, then mount it in the drive), then use one of the following commands.
1) scanner -S <> -im /dev/nst0 -----> scans the index from the tape for the specified SSID; -i builds both the media and online indexes, -m the media database, for the tape mounted in /dev/nst0.
2) scanner -c <> -im /dev/nst0 -----> rebuilds the index for the client specified in the command, reading from /dev/nst0. If the save sets span more than one volume, it will ask for the next tape and for a device name if it is mounted in another drive; otherwise unmount the first tape and mount the next one in the same drive. This rebuilds the whole index for the specified client and will ask for all tapes.
3) scanner -m /dev/nrst8 -----> to rebuild the client online file index for a client from a tape
(this works best for tapes that carry an index backup after every data backup).
Then run nsrck -L7 -t "06/07/99" coolclient
This checks the client coolclient and recovers its index from the tape.

Now we can browse the file for recovery in Online index of the server.

Bootstrap recovery from Crashed Networker server

Quick Note -
1. Install the OS and NetWorker, same version and same patches.
2. Reconfigure the NDMP jukebox.
3. Reset the autochanger device with nsrjb -vHE; this resets the autochanger, ejects backup volumes, reinitializes the element status, and checks each slot for a volume. Use ielem in Linux and sjiielm on other platforms.
4. Inventory the autochanger using nsrjb -I > this helps determine whether the volumes required to recover the bootstrap are located inside the autochanger.
5. Locate the latest bootstrap save set ID. Insert the most recent media volumes used for the scheduled backups into the appropriate device.
6. Insert the first volume of the bootstrap save set into the first drive of the autochanger:
nsrjb -lnv -S slot -f devicename
where slot is where the first volume is located
and devicename is the pathname of the first drive found by the inquire command.
7. Run the scanner -B command to determine the save set ID of the most recent bootstrap on the mounted media. If you do not locate this ID, run scanner -B on the preceding media to locate it.
8. Record the bootstrap save set ID and the volume label from the output.
9. Recover the bootstrap and resource database using the mmrecov command.
10. Load the volume when prompted.
11. The res.R directory is created; copy it to its proper location and restart the service.
12. Then run the scanner command with the -m option to scan the save sets and tapes back into the media database.
13. To identify only appendable volumes, run
mminfo -q '!full' -r "volume,client,mediarec,mediafile,next,name"
To mark a volume read-only:
nsrmm -y -o readonly volumename
Repeat the following for every appendable volume found in the previous step:
mount the tape in the tape drive
nsrjb -ln -f device
then, using the last file and record numbers from the previous mminfo command, issue the scanner command
scanner -f number -r number -m device
This rebuilds the media database to its state before the crash.

Dry Run of Directives in EMC Networker 7 series for Verification

Quick Note -
Let's say there is a directory dir1 containing a subdirectory dir2.
There is a .nsr file in dir1 which says to skip dir2.
To check this, run
save -b Default -nv /dir1
This will match the skip directive and give you a dry run showing the space required, excluding the skipped data.
-n tells NetWorker not to perform the actual backup
-v - verbosity

Recovering from aborted saveset using SSID Recover in EMC Networker 7.4

First find the aborted save set ID (SSID) through the GUI or using the mminfo command, then fire this command from the client system:
recover -d path -s backupserver -iN -S ssid
where path - where you want to recover the files
backupserver - backup server from which you want to recover the files
ssid - SSID of the aborted save set
-iN - specifies the default overwrite response (N = do not overwrite existing files)
We get a listing of the files recovered, and at the end it reports errors for the files that were in transit at the time of the abort.
Hence we can recover from an aborted save set up to the last file that was successfully backed up.

Using this method we can also recover files from one specific SSID only, such as recovering just the incremental backup for that save set. Generally this is useful in combination with the nsrinfo command.

Skip directories/files in EMC Networker 7 series (Directives)

Quick Note -
In the directory you want to control, create a file called .nsr (on Unix/Linux) or nsr.dir (on Windows) containing skip directives such as
+skip: *.tmp
+skip: nsrbackup > where nsrbackup is a directory
The "+" means the directive also applies to subdirectories.
Multiple patterns can be given in a single +skip directive, separated by spaces; there is no need for a separate +skip line for each pattern.

OR

Enter the directory path, then use skip. For example:
<<"/mydir">>
+skip: *.*
This will skip all files and folders in that directory during backup.

Raw file creation in linux

Raw Data files can be created pretty fast in Linux using
dd if=/dev/zero of=/tmp/file1g.tmp bs=1M count=1024
bs = block size
count = No. of blocks

To create a sparse file in Linux using the dd command:
dd if=/dev/zero of=sparse-file bs=1k count=0 seek=5120
This creates a file five megabytes in apparent size, but with no data stored on disk (only metadata).
dd if=/dev/zero of=sparse-file bs=1k count=0 seek=10000 > this creates a file with an apparent size of about 10 MB.
We can create a sparse file on the local computer, but when scp'ing it to a different computer it is expanded from the metadata and transferred at its full data size.

Increasing the count increases the actual data written; count=0 means no actual data.
You can change count to 1, 2, 3, 4, etc.
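To see the effect of a non-zero count on disk, here is a minimal sketch (the path /tmp/mix.tmp is just an example): it seeks past 5120 blocks and then writes 4 real 1 KiB blocks, so the apparent size is over 5 MB while only a few KiB are actually allocated.

```shell
# Seek past 5120 1 KiB blocks, then write 4 real blocks of zeros.
dd if=/dev/zero of=/tmp/mix.tmp bs=1k count=4 seek=5120 2>/dev/null

# Apparent size in bytes vs. 512-byte blocks actually allocated on disk:
stat -c 'apparent=%s bytes, allocated=%b blocks' /tmp/mix.tmp

rm /tmp/mix.tmp
```

The allocated-block count stays far below the apparent size, which is exactly the sparse-file behavior described above.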

UUID file system identification to improve /etc/fstab stability in Linux

Device names in Linux such as sda/sdb/sdc are chosen by the system.
In a SAN environment, LUN ordering can change after certain operations, which alters the device names detected by Linux, causing wrong static mappings from /etc/fstab, or no mapping at all.
To prevent this, it is better to map the mounts using the unique UUID written on every device.
That way the device-to-directory mapping remains the same all the time.

UUIDs are assigned to the individual partitions, i.e. sda1/2/3/4 etc.
An example fstab entry is
UUID= /media/sdb1 xfs defaults,umask=009,gid=47 0 0
Here the file system will get mounted on the /media/sdb1 directory.

Existing UUID can be seen by using this
ls -l /dev/disk/by-uuid/

More details can be found about the file systems using
sudo vol_id /dev/sdb1

or by using
blkid /dev/sdb1

New UUIDs (unique identifiers) can also be written to the drives or LUNs detected.
To assign a new UUID to a newly detected /dev/sda/b/c/d:
1) uuidgen - generate a new UUID
2) tune2fs -U <uuid> /dev/sdc (for example) - for ext2/3/4
3) xfs_admin -U <uuid> /dev/sdc - for xfs
4) reiserfstune -u <uuid> /dev/sdc - for reiserfs
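As a small sketch of the workflow (the mount point /media/data and file system type are placeholders, and the kernel's /proc/sys/kernel/random/uuid stands in for uuidgen here), this generates a fresh UUID and composes a UUID-based fstab line from it:

```shell
# Generate a fresh UUID (same format uuidgen produces); available on any Linux
NEWUUID=$(cat /proc/sys/kernel/random/uuid)

# Compose a UUID-based fstab line; mount point and fs type are example values
echo "UUID=$NEWUUID /media/data xfs defaults 0 0"

# On a real system you would first stamp the UUID onto the file system, e.g.:
#   tune2fs -U "$NEWUUID" /dev/sdc1    (ext2/3/4)
#   xfs_admin -U "$NEWUUID" /dev/sdc1  (xfs)
```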

iSCSI configuration using iscsiadm in Linux

Quick Note -
1. Install the necessary packages.
2. Then /etc/init.d/open-iscsi start
3. Then run the command iscsiadm -m discovery --type=st --portal=
4. The initiator name can be found in /etc/iscsi/initiatorname.iscsi; add it on the filer.
Delete any previous files present under nodes for a previous iSCSI initiator.
5. chkconfig -l open-iscsi > to check the service status per runlevel
6. iscsiadm -m node > to show the discovered targets
7. iscsiadm -m node -P 1 > to show the interface bindings
8. iscsiadm -m node -T <> -p <> -l > to log in and attach the storage as a device under /dev/

xfs_growfs to dynamically grow the xfs file system

Quick Note -
Follow the procedure explained with scenario.
1) We have a 90 GB disk or LUN available on the server
2) Create an 80 GB partition on it using parted, as a GPT partition
parted /dev/sdd
print
mklabel gpt
mkpart
start
end 80GB
quit
3) Format it using mkfs.xfs, mount it, and copy some data into it
4) Unmount /dev/sdd1
5) parted /dev/sdd
print
(Cancel if it prompts about the detected GPT partition)
rm 1 - to remove the partition
mkpart, or type mkpart primary xfs 0 -0
primary
start - 0
end 90GB
quit
6) Just remount it
7) Then run xfs_growfs -d /<> - this reports the data blocks changed from x to y.
check by df -h
DONE

Sparse File concept in Linux

A sparse file is a type of file that attempts to use file system space more efficiently when blocks allocated to the file are mostly empty. This is achieved by writing brief information (metadata) representing the empty blocks to disk instead of the actual "empty" space which makes up the block, using less disk space. The full block size is written to disk as the actual size only when the block contains "real" (non-empty) data.
When reading sparse files, the file system transparently converts metadata representing empty blocks into "real" blocks filled with zero bytes at runtime. The application is unaware of this conversion.

To create a sparse file in Linux using the dd command:
dd if=/dev/zero of=sparse-file bs=1k count=0 seek=5120
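A quick way to verify the metadata-only behavior described above (the path is an example): the apparent size is 5 MiB, almost no blocks are allocated, and reading the file back synthesizes 5 MiB of zero bytes at runtime.

```shell
# Create the 5 MiB sparse file from above
dd if=/dev/zero of=/tmp/sparse-file bs=1k count=0 seek=5120 2>/dev/null

# Compare apparent size with blocks actually allocated (zero or nearly so)
stat -c 'apparent=%s bytes, allocated=%b blocks' /tmp/sparse-file

# Reading it back returns the full 5 MiB of zero bytes
wc -c < /tmp/sparse-file

rm /tmp/sparse-file
```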

EFI GPT Partition Table (Addressing more than 2 TB space in file system)

Making a partition with the parted utility, which can address more than 2TB of space.
The test device is sdd.
1) parted /dev/sdd
2) mklabel gpt > label the disk with an EFI GPT partition table
3) p
4) note the size reported by the above command
5) mkpart
6) press enter for a primary partition
7) specify the partition type
8) give the start of the partition, e.g. 0
9) give the size you want the partition to be, e.g. 3650GB
10) q
11) now the partition /dev/sdd1 is created.
12) mkfs.xfs -f -b size=4k /dev/sdd1 > to make the file system with a 4096-byte block size; -f > force overwrite of an existing file system.
13) mount the partition.

Tuesday, December 14, 2010

Linux based Network monitoring tools

Hi, guys.
Here is a list of the best tools for network administration on the Linux platform.
1)MRTG-RRDtool
2)bmon
3)bwm-ng
4)cacti
5)cbm
6)dstat
7)etherape
8)ipband
9)iftop
10)iperf
iperf -s on the server side
iperf -c on the client side
iperf -c -d to test in both directions
iperf -s -w to set the TCP window (buffer) size on the server
iperf -c -w likewise on the client; note that Linux will double this buffer amount
11)jperf
12)ifstat
13)iptraf
14)ifplugstatus
15)jnettop
16)nload
17)netwatch
18)netperf
19)pktstat
20)speedometer
21)WMND

Shrinking Virtualbox VDI files to save disk space

VirtualBox VDI files grow quite large over time.
The following methods can be used to shrink VirtualBox VDI virtual hard drive files in various environments.

To compact a Windows virtual machine on a Windows host OS:

1) Defragment the drive
2) Use the nullfile.exe tool to write zero bytes over the free space
3) Run the command
VBoxManage modifyvdi "vdi path" compact

To compact a Windows virtual machine on a Linux host OS:
Defragment it.
nullfile it.
VBoxManage modifyhd NAME-OF-YOUR-VIRTUAL-HARD-DRIVE --compact

To compress a Linux virtual machine on a Linux host OS:
sudo dd if=/dev/zero of=/zerofile
sudo rm /zerofile
VBoxManage modifyhd NAME-OF-YOUR-VIRTUAL-HARD-DRIVE --compact

To compress a Linux virtual machine on a Windows host OS:
sudo dd if=/dev/zero of=/zerofile
sudo rm /zerofile
VBoxManage modifyvdi "vdi path" compact

Linux TCP/IP Tuning

Here I will discuss some techniques to improve TCP/IP performance in Linux.
This is basically a quick note to myself.
What happens when the packet is to be transmitted out of NIC?
1) Process writes data to socket file descriptor
2) Kernel encapsulates it into PDU (Protocol Data Unit)
3) PDU's are moved to transmit queue (transmit buffer)
4) Driver copies PDU from head of queue to NIC
5) NIC raises interrupt when the PDU is transmitted

What happens when the packet is received on NIC?
1) NIC receives the frame and uses DMA (Direct Memory Access) to copy it into the receive buffer
2) NIC raises CPU interrupt
3) Kernel handles interrupt and schedules a softirq
4) Softirq is handled to move packets up to the IP layer for routing decision
5) If it's a local delivery, then the packet is decapsulated and placed into Socket's receive buffer.

tc - the traffic control command, used for modifying kernel parameters related to the network stack.

To see collisions, dropped packets & queue statistics, use the following commands:
tc -s qdisc show dev eth0
tc - traffic control
s - statistics
qdisc - queueing discipline

ifconfig eth0 | grep txqueuelen

ip link set eth0 txqueuelen 10001 - to change the transmit queue length of the interface

Network parameter paths in Linux (In this case Ubuntu) (can be changed by /etc/sysctl.conf also for persistent configurations)

1) /proc/sys/net/core/rmem_default - default receive buffer size
2) /proc/sys/net/core/wmem_default - default transmit buffer size
3) /proc/sys/net/core/rmem_max - maximum receive buffer an application may use
4) /proc/sys/net/core/wmem_max - maximum send buffer an application may use
5) /proc/sys/net/ipv4/tcp_rmem - TCP reads
6) /proc/sys/net/ipv4/tcp_wmem - TCP writes

The tcp_* values are triplets in the order "minimum default maximum".

We can read these values directly from /proc. Another way is to use the sysctl -a command as follows:
sysctl -a | grep net.core.r
sysctl -a | grep net.core.w
(run sysctl -p to apply the changes in sysctl.conf)

Terminology while doing socket programming:
read = receive
write = transmit

Thoughts on OS virtualization & Bare Metal virtualization

OS Virtualization - OpenVZ/Virtuozzo/OpenLXC
Bare Metal Virtualization - Vmware/Citrix/Hyper-V

Benefits of OS Virtualization:
1)Complete isolation of the processes running, like the BSD jails.
2)No kernel stack overhead on the system as the containers share the same kernel from the host machine.
3)Around 1-2% total overhead on the system, compared to full virtualization.
4)No locking of resources to the virtual machines. Locked dedicated resources can only be used by that virtual machine in full virtualization model. In OS virtualization it can be shared between multiple isolated environments if the particular container is not using that resource. This provides efficient resource utilization.
5)All Containers are at the same kernel patch level as that of the host OS. Hence less management is required for updating.
6)Fast application deployment through Template Caching & Management.
7)Fast package management through Local repository servers running as containers.
8)VZ architecture uses common file system for the containers, so each virtual environment is just separated by the chroot parameters.
9)The virtual machine can be cloned just by copying the files in one directory to another and creating the config file.
10)OS virtualization parameters :
Files: system libraries, applications, virtualized /proc and /sys, virtualized locks, etc.
Users & groups: each container has its own root user as well as groups.
Process: a container only sees its own processes. PIDs are virtualized, so the init PID appears as 1.
Network: virtual network devices with host-routed & bridged modes, Netfilter module support and private routing tables.
Devices: if needed, any container can be granted access to real devices like network interfaces, serial ports and disk partitions, by passing them through to specific containers or by virtualizing them.
IPC objects: shared memory model support, etc.
11)Two-level disk quotas, fair CPU scheduler & UBC (user bean counters); these parameters can be changed at runtime, eliminating the need for a reboot.
12)The CPU unit parameter gives total control over CPU time utilization priority.
13)I/O scheduler priorities can also be set per container.
14)Live migration using checkpointing.

Limitations of the OS Virtualization:
1)For VPNs, this virtualization technology only supports technologies like PPP & TUN/TAP.
2)IPsec crypto is not supported as of now.
3)The host OS kernel is responsible for all container operations; issues in this kernel can hamper all containers.
4)If the host OS is compromised, all containers are compromised. Strong security policies need to be maintained on host OSes.
5)No complete isolation of resources, which causes some latency issues in certain scenarios.