Wednesday, September 28, 2011

ZFS data corruption and CRC checksum errors

It's been a while since I took some time to analyze the following problem:

server ~ # zpool status -v
  pool: mir-2tb
 state: ONLINE
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed after 9h26m with 1 errors on Mon Sep 26 09:01:29 2011
config:

    NAME                                                       STATE     READ WRITE CKSUM
    mir-2tb                                                    ONLINE       0     0     1
      mirror-0                                                 ONLINE       0     0     2
        disk/by-id/ata-Hitachi_HDS5C3020ALA632_ML0220F30SHWWD  ONLINE       0     0     2
        disk/by-id/ata-SAMSUNG_HD204UI_S2H7J9BZC05894          ONLINE       0     0     2
        disk/by-id/ata-Hitachi_HDS5C3020ALA632_ML0220F30U4RKD  ONLINE       0     0     2

errors: Permanent errors have been detected in the following files:

        /mnt/origin/mir-2tb/diskfailure/sdb1.image.dd



So there is a problem here. What this says is that the same block of data cannot be read back correctly from any of the three disks in the mirror, and that the file sdb1.image.dd has lost data because of it.

Every Monday the checksum error counters go up, because I run a scrub every Monday morning. Once the scrub is done, each disk's counter is 2 checksum errors higher and the pool's counter is 1 higher. So it's not that more and more sections of data are becoming unreadable; the same blocks are simply found again on every scrub.
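For reference, scheduling a weekly scrub like that is nothing more than a cron entry; something along these lines (the pool name is mine, the day and time are just an example):

# /etc/crontab: scrub the mir-2tb pool every Monday at 01:00
0 1 * * 1  root  /sbin/zpool scrub mir-2tb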


How did this happen? What are the odds that three drives each have an unreadable sector at exactly the same block of data? Infinitesimally small, so that's not what's going on here. I see two possible explanations:
- The pool was initially created with a single disk, which apparently has 2 bad sectors or bad blocks. Data was copied onto that pool before there was a second disk to mirror it. Once the copy was done and the source disk was empty, the source disk was added to the pool as a mirror. But because of the bad sectors, those 2 blocks could not be copied over to the new disk, so they hold invalid data there as well. Later a third disk was added, and it has the same problem. Because the bad sectors cannot be read, this will never resolve itself: the data is lost.
- There is a very specific type of data that triggers a bug in ZFS, where the data cannot be stored or the checksum cannot be calculated correctly. The data is there, but the checksum fails because of this bug. No data is actually lost.


I can't remember whether I actually started this pool with one disk and added the others later. I no longer have the source file, so I cannot simply overwrite the damaged data with a good copy. The only option left is to find the broken block and force a write to it. Writing to the sector either makes it readable again, or, if the write fails, forces the disk to reallocate the sector.
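ZFS doesn't tell me the offset directly, but a rough sketch for at least locating it is to read the damaged file until the I/O error hits (the path is the one from the zpool output above, the block size is arbitrary):

# read the damaged file; dd will abort with an I/O error at the bad spot
dd if=/mnt/origin/mir-2tb/diskfailure/sdb1.image.dd of=/dev/null bs=128k

The number of records dd reports as copied, multiplied by the block size, gives the approximate offset of the broken block within the file. Note that because ZFS is copy-on-write, rewriting that part of the file would allocate new blocks elsewhere rather than rewrite the bad area in place, so this only helps with the analysis.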

However, the disks report no problems at all regarding pending sector reallocations. The only thing I notice is this:
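(The attribute table below is smartmontools output; presumably the equivalent of running something like this against each of the disks in the pool:)

smartctl -A /dev/disk/by-id/ata-SAMSUNG_HD204UI_S2H7J9BZC05894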


SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0026   055   055   000    Old_age   Always       -       19396
  3 Spin_Up_Time            0x0023   068   066   025    Pre-fail  Always       -       9994
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       35
  5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   252   252   051    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0024   252   252   015    Old_age   Offline      -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       5924
 10 Spin_Retry_Count        0x0032   252   252   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   252   252   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       70
181 Program_Fail_Cnt_Total  0x0022   098   098   000    Old_age   Always       -       48738283
191 G-Sense_Error_Rate      0x0022   100   100   000    Old_age   Always       -       37
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0002   064   052   000    Old_age   Always       -       31 (Min/Max 14/48)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   252   252   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   252   252   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age   Always       -       86
223 Load_Retry_Count        0x0032   252   252   000    Old_age   Always       -       0
225 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       70



And that's not very helpful either. There is one counter (Program_Fail_Cnt_Total) with a huge raw value, but some googling shows that many users of this disk see the same thing and it isn't a problem for any of them. So that doesn't worry me.

I wish ZFS had more tools to make this kind of analysis easier. Why does the checksum error occur? Which block exactly is having the problem?

So I'm at a loss here... there is a checksum error, but the disks themselves don't report any problem with the data they store. Is the problem therefore in ZFS, or did one disk's problem propagate (via ZFS) to the other disks? I would be very interested to hear anyone's feedback on this.

choosing disks and controllers for ZFS


I looked at my ZFS setup and realised that I have 2 raid-1 setups, both with the same type of disk:

server ~ # zpool status
  pool: mir-2tb
 state: ONLINE
 scrub: none requested
config:

        NAME                                                       STATE     READ WRITE CKSUM
        mir-2tb                                                    ONLINE       0     0     0
          mirror-0                                                 ONLINE       0     0     0
            disk/by-id/ata-Hitachi_HDS5C3020ALA632_ML0220F30SHWWD  ONLINE       0     0     0
            disk/by-id/ata-Hitachi_HDS5C3020ALA632_ML0220F30U4RKD  ONLINE       0     0     0

errors: No known data errors

  pool: mir-2tb2
 state: ONLINE
 scrub: none requested
config:

        NAME                                                       STATE     READ WRITE CKSUM
        mir-2tb2                                                   ONLINE       0     0     0
          mirror-0                                                 ONLINE       0     0     0
            disk/by-id/ata-SAMSUNG_HD204UI_S2H7J9BZC05894          ONLINE       0     0     0
            disk/by-id/ata-SAMSUNG_HD204UI_S2H7J9BZC05884          ONLINE       0     0     0

errors: No known data errors
server ~ #

So that's one ZFS mirror with two Hitachi disks and one ZFS mirror with two Samsung disks. That's bad, because if there is a problem with a certain type of disk (such as the firmware problem with the Samsung HD204UI, see here) there is a high chance that both halves of a mirror fail at once. ZFS can then detect the problem, but possibly not recover from it. So I quickly bought another Hitachi disk and added it to the Samsung mirror, which then allowed me to move a Samsung disk over to the Hitachi mirror pool. So now I have:

server ~ # zpool status
  pool: mir-2tb
 state: ONLINE
 scrub: none requested
config:


        NAME                                                       STATE     READ WRITE CKSUM
        mir-2tb                                                    ONLINE       0     0     0
          mirror-0                                                 ONLINE       0     0     0
            disk/by-id/ata-Hitachi_HDS5C3020ALA632_ML0220F30SHWWD  ONLINE       0     0     0
            disk/by-id/ata-SAMSUNG_HD204UI_S2H7J9BZC05894          ONLINE       0     0     0
            disk/by-id/ata-Hitachi_HDS5C3020ALA632_ML0220F30U4RKD  ONLINE       0     0     0
errors: No known data errors

  pool: mir-2tb2
 state: ONLINE
 scrub: none requested
config:

        NAME                                                       STATE     READ WRITE CKSUM
        mir-2tb2                                                   ONLINE       0     0     0
          mirror-0                                                 ONLINE       0     0     0
            disk/by-id/ata-Hitachi_HDS5C3020ALA632_ML0220F30TG22D  ONLINE       0     0     0
            disk/by-id/ata-SAMSUNG_HD204UI_S2H7J9BZC05884          ONLINE       0     0     0

errors: No known data errors
server ~ #


So now one pool is a three-way mirror and the other a two-way mirror, each with mixed disk types. Availability even improved a bit.
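For the record, the reshuffle itself was just a matter of zpool attach and detach; roughly something like this (a sketch using the device ids above, with the resilver allowed to finish in between):

# attach the new hitachi to the samsung mirror and let it resilver
zpool attach mir-2tb2 disk/by-id/ata-SAMSUNG_HD204UI_S2H7J9BZC05884 disk/by-id/ata-Hitachi_HDS5C3020ALA632_ML0220F30TG22D
# then pull a samsung out of that pool...
zpool detach mir-2tb2 disk/by-id/ata-SAMSUNG_HD204UI_S2H7J9BZC05894
# ...and attach it to the other mirror
zpool attach mir-2tb disk/by-id/ata-Hitachi_HDS5C3020ALA632_ML0220F30SHWWD disk/by-id/ata-SAMSUNG_HD204UI_S2H7J9BZC05894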

Great, problem fixed? Not quite! Just making sure the disks in a pool are of mixed species is not enough.
It's not visible in the zpool status, but the disks of each pool were all on the same SATA controller. I have an onboard SATA controller and a Promise SATA controller. Here is a selective lspci:

00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode] (rev 40)
05:05.0 Mass storage controller: Promise Technology, Inc. PDC40718 (SATA 300 TX4) (rev 02)

So if a controller were to die, corrupt data or have intermittent problems, that could again result in the pool being lost, because ZFS would have no remaining source with correct data to repair from. The fix is easy: make sure the disks in a ZFS pool are spread out over different controllers. That way the chance of losing the whole pool diminishes again.
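Checking which controller a disk actually hangs off is easy enough via sysfs; a quick sketch (the device name is an example):

# the PCI address at the start of the resolved path tells you which controller the disk sits on
readlink -f /sys/block/sda/device

Compare the PCI address in that path with the lspci output above (00:11.0 versus 05:05.0 here).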

Too paranoid? Not really. Before I replaced my server a couple of months ago I was using an onboard sata controller that was corrupting the data transferred to the disks. I never knew until I switched to ZFS.

Don't blow your bits. Spread them around.

Sunday, September 25, 2011

real world usb3 disk performance

So a while ago I bought a new motherboard. It took me a while to make a choice, because I wanted it to be cheap while still having at least 6 SATA ports, PATA too, and support for ECC memory. ZFS is great at keeping the data on your disks correct, but if the data gets corrupted in memory you're still lost, hence the ECC.
I finally chose the Asus M4A88TDM-EVO because it had all the specs I wanted AND included usb3. Great, I thought, just what I need... this is future proof. Too bad the usb3 turned out to be just a separate pci-e 1x card. But it's there, and after compiling a kernel module for it (usb3 requires the xhci module, or xhci_hcd on gentoo) it worked.

Kinda.

It's very spammy in dmesg. Lots of debug info, it seems, or at least info I don't need. Mind you, this is kernel "Linux server 2.6.36-gentoo-r8 #2 SMP Tue Apr 5 08:18:52 CEST 2011 i686 AMD Athlon(tm) II X2 250 Processor AuthenticAMD GNU/Linux", so newer kernels may not have this.

Time to test this puppy out. Since the theoretical limit of usb2 is 60MByte/sec, there is a chance that my disks are sometimes hitting the limit of the bus instead of the speed limit of the platters. Since usb3 is 10x as fast, the bus should never be the bottleneck there.
So would this really matter, in real life?

I used a Hitachi Deskstar 7K1000 HDT721075SLA380 to test with. It's supposed to be able to read at speeds between 86MB/sec (outer disk sections) and 46MB/sec (inner sections). With the ~60MB/sec ceiling of usb2 there is something to gain here. The disk is connected using a LaCie usb-sata external enclosure.
The usb3 controller is this one:
02:00.0 USB Controller: NEC Corporation Device 0194 (rev 03)

I performed some tests, first on usb2. Please don't mind the changing device names; every time I unplug and re-plug the device it may get a new one (/dev/sdl, /dev/sdn, ...). It remains the same drive.

server ~ # hdparm -tT /dev/sdl

/dev/sdl:
 Timing cached reads:   2644 MB in  2.00 seconds = 1322.00 MB/sec
 Timing buffered disk reads:  92 MB in  3.06 seconds =  30.03 MB/sec
server ~ #

So the cached-read figure (~1.3GB/sec) mostly says something about memory and bus throughput; when actually reading from the platters it drops to 30MB/sec. It's an old disk ;-) hdparm is a bit of a synthetic test, so what does a file transfer give us:

write:
server maxx # dd if=bigfile of=/mnt/usb-backup-disk-750g-20081001/bigfile
9170564+1 records in
9170564+1 records out
4695328969 bytes (4.7 GB) copied, 211.23 s, 22.2 MB/s

server linux # dd count=9000000 if=/dev/zero of=/mnt/usb-backup-disk-750g-20081001/bigfile
9000000+0 records in
9000000+0 records out
4608000000 bytes (4.6 GB) copied, 133.498 s, 34.5 MB/s
server linux #


read:
server linux # dd if=/mnt/usb-backup-disk-750g-20081001/bigfile of=/dev/null
9170564+1 records in
9170564+1 records out
4695328969 bytes (4.7 GB) copied, 138.333 s, 33.9 MB/s

read while taking the filesystem out of the loop:
server linux # dd count=10000000 if=/dev/sdk of=/dev/null
10000000+0 records in
10000000+0 records out
5120000000 bytes (5.1 GB) copied, 155.335 s, 33.0 MB/s
[odd, that's actually slower than when reading through the fs... ]


then on usb3:

server ~ # hdparm -tT /dev/sdn

/dev/sdn:
 Timing cached reads:   2434 MB in  2.00 seconds = 1216.78 MB/sec
 Timing buffered disk reads: 104 MB in  3.00 seconds =  34.63 MB/sec
server ~ #

That's a small improvement when reading from the platters. Interestingly, the cached read is slightly slower.

write:
server maxx # dd if=bigfile of=/mnt/usb-backup-disk-750g-20081001/bigfile
9170564+1 records in
9170564+1 records out
4695328969 bytes (4.7 GB) copied, 123.346 s, 38.1 MB/s

write zeros:
server maxx # dd count=9000000 if=/dev/zero of=/mnt/usb-backup-disk-750g-20081001/bigfile
9000000+0 records in
9000000+0 records out
4608000000 bytes (4.6 GB) copied, 122.208 s, 37.7 MB/s


read:
server maxx # dd if=/mnt/usb-backup-disk-750g-20081001/bigfile of=/dev/null
9170564+1 records in
9170564+1 records out
4695328969 bytes (4.7 GB) copied, 119.702 s, 39.2 MB/s

read from disk without fs:
server maxx # dd count=9000000 if=/dev/sdk of=/dev/null
9000000+0 records in
9000000+0 records out
4608000000 bytes (4.6 GB) copied, 122.362 s, 37.7 MB/s


Next up was bonnie. When running the usb3 test my machine caught a kernel panic! Odd... I reset the machine and ran bonnie again. No crash this time. The bonnie++ invocation I settled on:
`bonnie++ -d /mnt/usb-backup-disk-750g-20081001/tmp -s 6576 -c 4 -m usb220110925 -x 3 -u 1000:100`

the result:


[bonnie++ 1.96 results table: alternating usb2 and usb3 runs at a 6576MB test size, covering sequential output (per char, block, rewrite), sequential input (per char, block), random seeks, sequential and random create/read/delete rates, and the corresponding latencies.]

The above results are rather clear: according to bonnie, the usb3 controller gives better performance too. Just to be sure I re-ran the test; the difference was slightly less pronounced, but still there.

In conclusion:
Using a usb3 controller resulted in a 10%-20% performance increase on the same disk compared to the usb2 connection. At times the difference is smaller; rarely is it bigger.

Too bad this usb controller only has 2 connections. I wonder how much performance remains when I start daisy-chaining multiple devices on those connectors...

Friday, September 23, 2011

SSD temperature problems

Some (most) SSDs don't have a temperature sensor. The result is that they can report temperatures that are just wrong, like 127 degrees or -1. For servers this can be a problem, because the server can't tell whether the sensor is defective or there is a real overheating problem. So it spins the fans at maximum speed to try and cool the disk. That increases the power usage slightly and the noise level drastically. Within a DC that may not be a problem (if you have proper hearing protection).

I've tried 3 SSDs and 2 happened to work fine:
60GB OCZ Vertex 2 (OCZSSD2-2VTXE60G) - broken (firmware 1.33)
60GB Corsair Force (CSSD-F60GB2-BRKT) - fine
60GB Corsair Nova (CSSD-V60GB2) - fine

Some disks don't have a sensor but report a fixed temperature. That also works. Intel's 710 SSD range seems to have an actual temperature sensor.
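Checking this before you put a drive in a server is simple enough; attribute 194 (Temperature_Celsius) is the usual one to look at (the device name is an example):

smartctl -A /dev/sda | grep -i -e temperature -e airflow

If the value is stuck at something implausible like 127 or -1, or the attribute is missing entirely, the drive probably has no real sensor.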

So if you plan on using such a consumer-grade SSD in a setup that bases fan speed on disk temperatures (like servers), this may help you figure out a little better what to buy, and what not to buy.

2011-10-18 update:
The OCZ Vertex 3 also does not have a (working) temperature sensor, according to SMART.

Monday, September 19, 2011

4k sectors and aligning for LVM


I've been running an LVM volume spread out over a slowly growing number of disks for a couple of years now. Recently a disk broke and I RMA'd it. It was a Western Digital WD10EADS; I got back a WD10EARS. Note the tiny difference. Same size disk, but now using 4k sectors instead of 512-byte sectors.
Interesting!

You know how new harddrives these days work with 4k sectors, right? And how many disks 'lie' about this and say that they (still) have a sector size of 512 bytes?
I read about 4k sector sizes a while ago and became aware of a number of possible problems that can come up when using those kinds of 'lying' drives.

In general, what's the biggest problem?
If you are using ext3, data is stored in chunks of 4096 bytes (4k) by default. But because the disk claims to store data in 512-byte sectors, the OS will split a 4k write into 8 writes of 512 bytes. That's suboptimal, but the real problem appears when you use partitions. By default, the 1st partition starts at sector 63. That means sectors 0-62 are 512-byte blocks used for the MBR and other things, and sector 63 holds the first data of your 1st partition. And because the ext3 filesystem stores data in 4k chunks [AFAIK!], the first ext3 block fills sectors 63-70 (8x 512-byte sectors). However... sector 63 is not the start of a physical 4096-byte sector on the disk itself. The physical 4k sectors start at logical sectors 0, 8, 16, 24, 32, 40, 48, 56, 64, and so on.
This means that when the OS wants to read that first 4k block on the 1st partition, it sends 8 requests for 512-byte blocks, and the disk needs to read 2 physical sectors, because those 8 blocks are split over 2 actual 4k sectors.
If however the 1st partition had started at sector 64, the 4k block would be exactly aligned with a physical sector on the disk. Then the OS would request 8 blocks and the disk would only need to read one 4k sector. Making that 1st partition start at the right place is called 'alignment'.
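A quick way to see what a drive reports and where your first partition actually starts (a sketch; the device name is an example, and drives that 'lie' will show 512 even for the physical size):

cat /sys/block/sdb/queue/logical_block_size    # what the drive tells the OS, usually 512
cat /sys/block/sdb/queue/physical_block_size   # 4096 only if the drive admits to 4k sectors
fdisk -lu /dev/sdb                             # -u lists partition start/end in sectors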

This isn't too bad when you're just requesting one small block of data, but when you're doing big file operations or the server is heavy on IO, you're in trouble.


For writes the problem is much worse. When writing a misaligned 4k block, the disk first needs to read both physical sectors before it can write, because each is only partially updated. The 1st physical 4k sector gets 512 bytes overwritten, leaving 3584 bytes untouched; the next sector gets 3584 bytes overwritten, with 512 bytes untouched. Those untouched bytes first need to be read by the disk, because it can't do a partial write to a sector; disks can only handle entire sectors at once. So for one write the disk first does 2 reads and then 2 writes.
In the aligned situation it only needs to do one write, because no bytes in the sector are left untouched.

That is the situation when you use partitions. Luckily my disk is used in an LVM volume group and I gave the entire disk to LVM; I did not create a partition. There is no need to when you use LVM.
What does this mean for alignment and performance on this disk? It means I'm probably OK.
Probably? I couldn't find any information about it on the net, which means I want to go and look at the actual data on the disk, and at the LVM source code, to figure out how it lays data out on disk.
A quick performance test on the disk also suggested there is most likely no alignment problem; otherwise the write performance would have been much lower.
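One thing that can be checked without diving into the LVM sources is where LVM actually starts placing data on the physical volume; if that offset (and the extent size) is a multiple of 4k, the extents themselves stay aligned. A sketch, device name is an example:

pvs -o +pe_start --units s /dev/sdb    # offset of the data area, in 512-byte sectors
vgs -o +vg_extent_size                 # extent size of the volume group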

More will follow....

Sunday, September 18, 2011

"windows dynamic disks - mirroring" useless

I run Windows 7 on my desktop. I'm a big Linux fan, but the Linux desktop experience is IMHO still not there yet. Although I must admit the last time I tried was 3 years ago and things have improved. Guess I'm going to give Ubuntu a shot soon :)

Because of my aversion to data loss and downtime, I run Windows on a pair of disks in a mirrored setup. One of the disks is PATA and the other is SATA; it's what I had lying around. Why spend money when you have parts on the shelf? My onboard controller doesn't support setting up a mirror across PATA and SATA; the disks must be on the same type of connection. So instead I configured Windows to do the mirroring for me. Most onboard 'raid' controllers are just a BIOS call that's handled on the CPU anyway.
Therefore there shouldn't be much of a performance difference between mirroring in the OS versus mirroring via the BIOS; that's how it is under Linux in my experience.
In Windows 7 (and older versions of Windows) disks need to be converted to dynamic disks to take part in a raid set, be it a mirror or otherwise. This can be done through the 'computer management' application, in the 'storage' section. So I set it up and all was fine. It spent some time copying the data from the source to the mirror disk and was done. Everything is fine, right?
Not so. Every reboot the sync starts over. All of it. Not just the bits that are out of sync, everything.
That doesn't happen under Linux. There it keeps track so that when you reboot the raid is still in sync.
Not the same for Windows 7.
So what? Well, it means that when I boot Windows my disk performance goes down the drain for a couple of hours until it has re-synced everything. And that sucks.
There are advantages too... if I were using a 'fake' raid controller and the controller died, I would run the risk of losing my data if the configuration is kept in the raid controller and it uses some proprietary format on the disks. A Windows mirror maintains its integrity; you can even move a Windows disk to a new machine and it will just run, no configuration needed. Nice, yet not what I'm looking for.

Time to kick Windows mirroring out of the door and find some other way to do this.

But... you can't. Not while it's syncing. Crazy right? So first I have to wait for an entire sync to be complete, only to break the mirror and remove the sync configuration.
Too bad there are no alternative filesystems for Windows; it's just FAT or NTFS. Linux has many filesystems. Another reason to just choose open source. Or Apple ;-)


Saturday, September 17, 2011

harddisk firmware warnings

A while ago I bought 2 new hard disks, Samsung EcoGreen HD204UIs. Great value in my opinion. Since I don't trust hard disks I put ZFS on them. That gives me a bit more faith that I'll keep my data. Faith... not guarantees. Software bugs could still kill all the data on there. But I digress.
Imagine my surprise when, a couple of weeks later, I happened to read a forum post about these drives having problems with SMART functionality (see here for details: http://sourceforge.net/apps/trac/smartmontools/wiki/SamsungF4EGBadBlocks). I have these drives running under Linux, and SMART checks my disks periodically. If data happened to be written to the disk at the very moment a SMART command was issued, I'd end up with a bad sector. WTF!
I was able to find the firmware patch that Samsung put on their website by googling for it. There was no link to it on the Samsung product page for these drives, and there was no announcement. I patched the drives and they have been humming along happily ever since.

But what about my other drives? Are they OK? Do they have problems that warrant a firmware update? How do I find out, when drive makers don't willingly publish such information for fear of it hurting sales? I have no idea. The most likely way to find out is to run into trouble. I'm not going to spend time googling every month to stay on top of this; I should be notified. Isn't that what 'registering your product' is for? Registration has never done anything for me.

Hard disk manufacturers should step up and provide at least some way of getting notified. If they don't, someone may do it for them.

Friday, September 16, 2011

boring batteries made livelier

Back when I was in college, a guy in our dorm was working on state-of-the-art batteries. He was very enthusiastic about them, but I didn't really understand why. Sure, it would be neat to have a bit more juice in my batteries; it would save me from recharging my phone every day. But there is a law for batteries and power similar to Moore's 'law' for the number of transistors in CPUs: the more capacity something has, the more it will be used. The increase in usage offsets the increase in capacity.
Now, years later, I find myself still using AA batteries. And some AAA too, for things like remote controls. Environmentally conscious as I am, I use rechargeable batteries. I've never really given it much thought. I bought a charger a while ago with some batteries and used them. Then I bought some more rechargeable batteries because I ran short on AAs. And somehow, after a couple of months, they didn't seem to work that well anymore: after charging, the batteries would only work for a short period of time. I assumed it was the charger, so I bought a better one with better batteries. This went on for a while until I had 3 chargers, 20 batteries and a lot of frustration. Which battery did I charge again? Which one is the new one? Is this battery fully charged? Bah, let me charge it again. And then waiting over 12 hours for a battery to charge while the text on its side says it charges in 7 hours.
So I started to do some research. It turns out I had 2 different types of chargers: one for NiCd batteries, and two for NiMH and NiMH/NiCd. Trying to charge NiMH batteries in the NiCd charger apparently didn't work that well. There also was no information on the chargers about their charge rate, or about why one charger would flag certain batteries as bad while the others would not.

I needed something better. I needed something that would test my batteries for proper functioning and would charge them properly.

I have found such a device. It's called the voltcraft ipc-1. The ipc-1 does it all:
It handles NiCd and NiMH.
Bad battery detection is automatic.
It can charge faster than any charger I've ever had (1800mA!), although you probably don't want to charge that fast (see below).
It can re-condition, aka refresh, batteries.

Wait... it can 'refresh' batteries? Well, apparently batteries lose their mojo over time or with repeated usage. By discharging and charging them a number of times, the battery gets back some of its mojo and is able to keep working for you a while longer before you need to replace it. That's great! I never knew that. It was never mentioned on any of the batteries I bought. Doing this manually is a boring task: you need to keep count, and it takes forever. The ipc-1 does it by itself. Just give it 1-4 batteries, tell it to refresh, and it will tell you when it's done. Hooray for ease of use :)

Reading more about batteries and charging, I found out that the faster you charge a battery, the quicker it wears down. This depends a lot on the battery quality: better brands allow for faster charging with less (or perhaps no) wear. So even though the ipc-1 can charge at 1800mA, you probably shouldn't.

I hope this was somewhat informational.

As a parting thought: why are non-rechargeable batteries still being sold? The EU ruled that incandescent light bulbs can no longer be sold because there are many good alternatives these days. Why not do the same for batteries? This is a huge strain on the environment. Many people throw them in the trash instead of recycling them. It's a waste of your money too.....

Wednesday, September 14, 2011

harddisk data safety

When I use a hard disk I know I'm trusting my data, which I care about, to a device that is guaranteed to produce errors.

The rate at which it produces errors is listed in the hard disk's specifications. For a 2TB disk the non-recoverable read error rate can be 1 per 10^15 bits read; that's a good buy. But there are also disks out there with a higher error rate of 1 in 10^14. What does such a spec mean? The disk has many more raw read errors than that, but most can be corrected (by re-reading, error correction calculations, etc). Roughly once in every 10^15 bits read it cannot recover. With 8 bits per byte, that translates to one unreadable sector per 125TB (about 114TiB) read. That sounds like veeeeery little. But is it? If I read the 2TB disk about 60 times, I should expect a read error. One sector is still usually 512 bytes (though this will very soon change to 4k). So with 60 full disk reads you risk losing 512 bytes of data. That's nothing, right?
Think again. If this happens to be in the middle of the file you store your administration in, you had better have a backup. You have backups, right? And not on the same disk, right?

The situation gets worse as disks get larger. Buy a 3TB disk and you only need about 40 full reads; with 4TB disks you're down to about 30. See what I'm getting at? And if you have a 2TB disk with a 10^14 error rate, you can expect a read error every 6 full reads or so. That's data loss, people. Or is it? The read error was 'unrecoverable', but does that mean every retry will fail too? I'd love some feedback on that.
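The back-of-the-envelope math, for anyone who wants to plug in their own disk size and error rate:

# expected full reads of a 2TB disk before hitting one unrecoverable error,
# for a 1-in-10^14 bit error rate (use 10^15 for the better disks)
echo "10^14 / 8 / (2 * 10^12)" | bc -l
# prints roughly 6.25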

Raid is one way to solve this, but beware: with raid-5, and even raid-6, you can make the problem worse! If one disk in your raid-5 set fails, you replace it and a rebuild takes place. That's a full read of all the remaining disks! If any unrecoverable read error occurs during that rebuild then *poof*, data is gone. Cheap raid controllers then stop the rebuild entirely and you have just lost everything.
Using raid-1 or raid-10 is a much better solution if you value your data, but more expensive too.

However, most of the time hard disks are not fully used; new disks especially are mostly empty at first. So what if we keep this error rate but duplicate the data on the disk? If all your data is on the disk twice, the chance of losing it is much lower, and lower still when more copies are made. Doing this yourself (saving 6 copies of the same file on your filesystem) is a lot of extra work.

There are filesystems that do this, somewhat. On ZFS, for instance, you can configure the number of copies that should be made. But I'm looking for something more intelligent: I'm calling on hard disk makers to make as many copies of the existing data as the free space allows. Then, when an error occurs, the disk can just go look at one of the copies and repair the problem area. Problem solved. Simple idea, really. Can it be done? I hope so :)
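For completeness, the ZFS knob in question is the copies property, set per filesystem; the dataset name here is just an example:

zfs set copies=2 mir-2tb/important     # keep two copies of every block in this filesystem
zfs get copies mir-2tb/important

Note that it only applies to data written after the property is set.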

Monday, September 12, 2011

MS word and open-office

I don't like Word. The Microsoft product, that is. For a program that's supposed to make my text look good, it's way too complicated. I use maybe 10% of its capabilities and yet I'm always hunting for buttons to get done what I need to get done. That's just bad user interaction design.
So I use OpenOffice. I don't like OpenOffice.
I like 'open'. Just not the product. It doesn't read my MS Word documents very well: sure, the text comes across and the markup is close, but it's not the same, and it looks awful. This program also has me chasing buttons to figure out how to do things.
Both products suffer from me not knowing how the tool really works. It's like a mix between Windows and Linux. In this case the products are easy to get started with (like Windows) but then you hit a sudden and very steep learning curve (like Linux).

All I want is to do things that should be obvious, like moving tables visually or changing borders and spacing. I know I'm not alone: nobody in the office I work in knows how to do those slightly-beyond-the-basics things either. How do you change the line spacing, and why does it suddenly change sometimes?

How come there is no 'easy' tool that just does the basics? I'm thinking there is real market potential here. I'll just use VI in the meantime ;)

Sunday, September 11, 2011

SSL security - diginotar

In case you didn't know: everyone in between your computer and the site you're looking at can see everything you send to that site and receive from it, including usernames and passwords. That is, if you're not using a secure connection to that website. When is it secure? When the address in your URL bar starts with HTTPS instead of HTTP.

The S in httpS stands for secure. It's made secure using encryption. Anyone can make an encryption key and certificate, but your browser won't trust them. It only trusts a set list of companies that make money by giving out certificates. Those companies are called certificate authorities (CA's).

Recently a certificate authority was hacked. The company is called DigiNotar and they hand out SSL certificates. Unfortunately the hackers got access to the systems in such a way that they could issue themselves certificates for any domain they wished. That means they could pretend to be amazon.com and handle payments, or pretend to be Gmail and capture your username and password.
Of course most browser makers (Apple, Microsoft, Google, Mozilla) quickly removed DigiNotar from their lists of trusted CA's.

But who knows which other CA's have also been hacked? Most hacks go unannounced. Is your 'secure' connection really secure? Goodbye safe feeling when shopping online! Goodbye feeling of privacy when reading your mail. Don't even think of being a political activist.

Luckily there is a solution: http://www.networknotary.org/firefox.html
This Firefox plugin (Perspectives) checks the validity of a received certificate not just from your own viewpoint, but from several viewpoints around the world. That way a local 'man in the middle' attack becomes much harder.

Take note: the DigiNotar hack also yielded a certificate for the Firefox add-ons domain, so an attacker could be sending you a different version of the plugin than the one you're really looking for. But if you're not in the Middle East or China, I suspect the chances of that are slim.

The Perspectives plugin only helps when you're on a website that's talking HTTPS. Most websites default to HTTP unless you specifically ask for HTTPS. That's somewhat annoying: having to type https all the time, forgetting to, etc. There is a plugin to help with this too: http://www.eff.org/https-everywhere
It's not perfect: it only works for a specific list of websites. But you can add your own. It's a start :)

Feel safe, use Perspectives! Be safe, use HTTPS Everywhere!

Saturday, September 10, 2011

jumbo frames at home


I have a home network that runs partly on gigabit. By partly I mean that one section of the network is connected through a gigabit switch, and another section hangs off the 'home router' that services my fiber-optic internet connection (yes, I have fiber to the home, aka FttH).
When transferring a file between 2 machines on the gigabit section of the network I noticed it was taking a loooong time. The transfer speed was in fact barely, if at all, above 100mbit.
Because it was one large file (10GB) I suspected that turning on jumbo frames would help.
So I went into the Windows 7 network adapter settings, turned jumbo frames on with a 7K max frame size, and did the same on my Linux machine. The transfer died. I tried to restart it, but it wouldn't work. I could still ping the Linux box from the Windows box, so something was still working.
SSH worked too, but commands with large outputs would hang; only small commands with small outputs gave a result. Odd! I suspected that large frames were not being transmitted or received correctly and proceeded to lower the MTU in steps until everything worked again. I ended up at an MTU of 3000 bytes. That's only twice the standard MTU of about 1500.
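For reference, stepping the MTU down on the Linux side is a one-liner per attempt (the interface name is an example; the Windows side is changed in the NIC driver properties):

ip link set dev eth0 mtu 7000    # the jumbo frame attempt
ip link set dev eth0 mtu 3000    # where things started working again
ip link show eth0                # shows the currently configured mtu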
It didn't help the transfer speed at all.
I decided to look a bit deeper and found that Linux warns about changing the MTU on this specific network card:
[1201852.913302] r8169: WARNING! Changing of MTU on this NIC may lead to frame reception errors!

Great. This was actually happening. That's not a 'may', it's a 'will'.

For reference, this is an onboard NIC for a motherboard that I bought a month or 2 ago.
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

On the Windows machine the onboard NIC is an RTL8167. Different NIC, so no frame corruption? I hope so.

There are many other possible reasons why these 2 machines can't transfer files at high speed: the gigabit switch is a cheap no-name model (actually it's an SMC switch), the NICs themselves may not be up to the task, the machines may be busy doing other things, and the Windows file sharing protocol is notoriously inefficient. There may also be further optimizations to be had in the offloading settings of these NICs.

I've set the MTU back to the default (1500) and later figured out the 100mbit was because the Linux machine was plugged into the wrong network segment (the 100mbit switch). *DUH*
Regardless, it's still interesting to see the NIC having problems with a bigger MTU.