Monday, July 30, 2007

Today's links

GTD
Liferemix has a whole bunch of productivity blogs
GTD primer

Scan all incoming paper
From unclutterer.com
Ken Silver

iPod fixes

Linux
Dolby 5.1 sound
Mounting ISOs

IPTV info

Wednesday, July 25, 2007

Back of the napkin network speed testing

I've been trying to compare iSCSI, NFS, and general network speed lately.

A while ago, my main Linux system was a dual PIII 500MHz. I was getting 20MB/s over the gigabit ethernet, which seemed slow to me. At work I've seen 40MB/s to 60MB/s. I upgraded the server to an AMD dual core system and now get 60MB/s.

My back of the napkin testing was pushing a gigabyte with dd:
time dd if=/dev/zero bs=1048576 count=1024 of=<destination>

GNU dd also displays MB/s when it finishes; otherwise, divide 1024MB by the elapsed seconds from time.

I figure /dev/zero is the fastest source of bits. I can output to /dev/null to get the fastest data sink. Or local disk or NFS or SMB. But that still didn't measure just the network.
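For example (the file paths here are just placeholders for whatever you're testing):

time dd if=/dev/zero bs=1048576 count=1024 of=/dev/null        # pure memory, the ceiling
time dd if=/dev/zero bs=1048576 count=1024 of=/data/ddtest     # local disk
time dd if=/dev/zero bs=1048576 count=1024 of=/mnt/nfs/ddtest  # disk plus network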

I found ttcp, which sets up a client/server pair. On the sink I run ttcp -r > /dev/null. On the source, I pipe dd into ttcp -t serverip. Netcat by Hobbit could do the same, but now there's a GNU nc that doesn't let you do -lp! I suppose I should just compile my own.

Something like:
On sink: nc -lvnp 5150 > /dev/null
On source: dd if=/dev/zero bs=1048576 count=1024 | nc -v -w 2 serverip 5150


Bill McGonigle has this note about iperf. It looks interesting too.
Iperf's home
TCP tuning

The Iperf tarball has a test directory (read the README in it):
one side: perl server
other: perl client tests remote local | tee /tmp/iperf.log
Tune some stuff
Run it again, tee to /tmp/iperf2.log
grep Mbits.s /tmp/iperf*.log | awk '{print $1, $(NF-1)}' | sort -n +1 | less
If iperf2.log is at the bottom, you got more megabits.
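If you'd rather skip the perl harness, iperf runs fine by hand too (serverip is a placeholder):

On sink: iperf -s
On source: iperf -c serverip -t 10

The -t 10 runs the test for 10 seconds; add -w to play with TCP window sizes while tuning.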

Is it accurate? I don't know. Does it help measure change? Yep. And that's what tuning is about.

Links

Online Media God
Sun BigAdmin Not just for Solaris
Fix RPM db
Google Video (FLV) conversion with ffmpeg

Lifehacking/Productivity
Productivity/Life Hack site
GTD Good Easy on a mac
Bit Literacy Review
Text based todo

Tuesday, July 24, 2007

Why run VMs? part 1

I'm a Unix guy, but I often work in a Windows environment. Usually there's something I need to run that only runs on Windows.

I've used DOSemu, Executor (Mac OS 7), Win4Lin, VMware Workstation (2.x -> 6.x), Bochs, QEMU, Basilisk II, and Wine. All have various capabilities and impacts.

If you just want to run Windows apps, a second computer running Terminal Services over a gigabit net is probably the best for features. You get full hardware access and reasonable display speed (gigabit switches and KVMs are inexpensive).
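From a Linux desktop, rdesktop does the client end (winserver is a placeholder; -a 16 asks for 16-bit color and -z turns on compression):

rdesktop -a 16 -z winserver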

For a Macintosh, VNC over an SSH connection works well on Mac OS X. I've used VNC to control System 7 systems too. I find VNC on a Mac works better than on Windows: Windows doesn't let VNC see the login screen, while OS X does.
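The tunnel is simple enough (mac.example is a placeholder and the ports are the usual VNC defaults, adjust to taste):

ssh -L 5901:localhost:5900 user@mac.example

Then point the VNC client at localhost:5901 and the session rides inside SSH.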

VMware and Win4Lin depend on kernel modules, so any kernel update means a reconfigure. *sigh* And hopefully the new kernel is even supported; otherwise you have a choice to make. I get tired of all the reconfiguring. Of course, Windows doesn't update its kernel often, so a Windows host is more stable in that respect.

I have hopes for QEMU lately. And KVM now that I have a chip with virtualization. I've gotten QEMU running with Solaris as a host, but the networking was a bit tricky. I'd love to see VMware hosted on Solaris but I doubt the port will be done. Hardware is inexpensive enough that having a Linux box is doable.
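For the curious, a minimal QEMU invocation looks like this (win.img is a placeholder disk image; -net user is the easy, slower user-mode networking):

qemu -hda win.img -m 512 -net nic -net user

The KVM-enabled build is usually a kvm or qemu-kvm binary taking the same arguments.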

Today's Links

Linux KVM Wiki
Google 411
ZFS updates

Sunday, July 22, 2007

Cheap storage

Right now it's SATA. If you're running RAID (and you should), SCSI isn't going to buy you much more reliability. It's not going to give you much more speed on the low end either. Anyways, this is about cheap.

You have a PC with slots in it. It's going to be your server. I suggest putting the OS on a RAID1 mirror. Don't put data on it, so the OS can go as fast as possible. The RAID buys some reliability. If it's IDE, no slaves, just masters; mixing them slows things down quite a bit. OK, you can put the CD/DVD on a slave.
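A sketch of the mirror with Linux software RAID (device names are placeholders; most installers will set this up for you):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1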

Get a 4 port SATA card. You do not care about the RAID in it. Most of them are really software RAID anyways and each card does it differently. By doing software RAID, you can use any adapter. Hardware RAID means your data is at the mercy of a brand or even model of that card if it fails. Besides, you're going to be running an iSCSI SAN or an NFS/Samba file server here. Your CPU is dedicated to storage. Your bottleneck will be gigabit ethernet to your clients.

Get 4 SATA drives of the same size. I like to have a standard size that will be available in the future, like 500GB.

Now the hard part: how to power and cool them? If you have a case like the Antec PB180, there's a fan and a 4 drive chamber. Just get some SATA cables and maybe some 4 pin to SATA power adapters.

If not, you need an external drive case. There are some nice ones under $150 with power and cooling for 4 SATA drives. You can also build your own with a PC power supply, a fan, power adapters, and something to bolt the drives to that the fan pulls air through.

Now get some long SATA cables. I've used 42" ones. They do not have to be eSATA. I've run internal SATA from the drives through a card slot to the internal SATA card, and it works fine. Using short SATA extenders gives you a nice disconnect point outside the PC or drive case.

Hook it all up, install your OS and make it a software RAID5 setup. Cheap.
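With Linux software RAID that's one mdadm line (device names are placeholders; I'm using /dev/md3 to match the LVM notes further down):

mdadm --create /dev/md3 --level=5 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mke2fs -j /dev/md3

Or skip the mke2fs and hand /dev/md3 to LVM as a physical volume.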

Which OS? Do you want a NAS or SAN device? I really like ZFS on my servers, and that means Solaris right now. Solaris 10u3 doesn't have an iSCSI target, so that means NAS only. Newer builds of OpenSolaris have an iSCSI target. Linux has an iSCSI target too; it isn't as close to the bleeding edge as OpenSolaris, and many people feel comfortable with it. There's a Linux distribution called OpenFiler that does iSCSI target, Samba, and NFS with a web admin interface for you non-Unix types.
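On Solaris the ZFS version is even shorter (disk names are placeholders; raidz is ZFS's take on RAID5):

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
zfs create tank/share
zfs set sharenfs=on tank/share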

Speeds? On a dual PIII 500MHz I got 20-25MB/s. On a 2.4GHz Xeon or an AMD 4000+ I get about 120MB/s. The PIII can't keep up with gigabit. The other two can do 90MB/s over gigabit. That's faster than local disk on older systems.

4 * 500GB @ $120/ea -> $480 for disk. Put a $500 PC with gigabit ethernet under that and you have 1.5TB of RAID5 for under $1000.

Storage rant #1

I've been doing sysadmin for a while now. I've seen disk space go from over $4 per megabyte(!) to less than $240 for a terabyte. Space requirements have gone up too. You used to be able to fit your compiler, editor, and source on a floppy.

My general rule has been local disk for databases and other things that need locking, NAS for everything else. The local store can be a SAN of course. Centralize storage as much as possible (but no more so) to keep backups from going over the network. Because it's centralized, you can do RAID to increase reliability.

Backups are not archives! Backups are so you can recover your setup, as close as possible to the last good state, if the hardware fails completely. If you want to go back to a point in time, that's an archive.

Backups have changed dramatically over the years. I don't think there's such a thing as an inexpensive tape anymore, at least not one that's dramatically cheaper than disk. I once bought a 2GB 4mm DAT drive to back up my home systems. I probably had 1GB of disk at the time. Now you'll need multiple tapes to span your disks. Because of that, you probably also want an automated tape system.

After you figure out your backup cycle (how far back a backup goes; archives are forever) and how much data you have, you arrive at the total backup storage needed. If you go tape, figure out the cost of an automated drive with a full complement of tapes. You might find it's cheaper to buy a disk farm of some sort for your backup store. Remember, backups are not archives.
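A made-up example to show the arithmetic: say you have 2TB of data, keep weekly fulls for 4 weeks, and see about 5% daily change. That's 4 x 2TB = 8TB of fulls, plus roughly 24 nightly incrementals at 100GB each, another 2.4TB, so budget around 10-11TB of backup store. Now price an autoloader with that many tapes against a pile of SATA disks.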

You'll still probably want to put data offsite. But disks have the advantage of matching the speed of incoming data. Your backups will always go much quicker to disk than to tape, so your backup window gets more breathing room. When you create media for offsite, it's done from the disk farm, so it goes quicker too.

LVM notes

Commands

ls /sbin/vg* /sbin/lv* /sbin/pv*
Each prefix (pv, vg, lv) has scan, display, create, remove, and extend variants: pvscan, vgdisplay, lvcreate, and so on.

Initialize for LVM
  • pvcreate -v /dev/md3
  • pvdisplay /dev/md3
Scan & build /etc/lvmtab stuff
  • vgscan
Create the volume group /dev/vg from /dev/md3
  • vgcreate vg /dev/md3
  • vgdisplay
Show Allocated and Free space
  • sudo vgdisplay | egrep '^[AF].*Size'
Create a logical volume
  • lvcreate --size 2048m vg
  • ls -l /dev/vg   (see what name it got)
  • mke2fs -j /dev/vg/lvol?
  • mkdir /test
  • mount -t ext3 /dev/vg/lvol? /test
  • lvdisplay /dev/vg/lvol?
expand it!
  • umount /dev/vg/lvol?
  • lvextend -L +<size> /dev/vg/lvol?   (e.g. -L +1G)
  • e2fsck -f /dev/vg/lvol?
  • resize2fs -p /dev/vg/lvol?
  • mount it
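Concretely, growing a volume by a gigabyte looks like this (assuming the LV came out as /dev/vg/lvol0 and is mounted on /test; check ls -l /dev/vg for the real name):
  • umount /test
  • lvextend -L +1G /dev/vg/lvol0
  • e2fsck -f /dev/vg/lvol0
  • resize2fs -p /dev/vg/lvol0
  • mount -t ext3 /dev/vg/lvol0 /test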
remove it!
  • umount /dev/vg/lvol?
  • lvremove /dev/vg/lvol?
  • rmdir mountpoint
  • vi /etc/fstab