Posted by JTSF at Monday, July 08, 2013
Read our previous post
Setup 1 - FreeNAS 8.3.1 Test Results
2. Functionality Tests
Test 1. Motherboard failure
- Set up RAID-Z1 (RAID 5 equivalent) on the ECS-NM70, then migrated the drives to an AMD Phenom X2 550 + MSI 785GTM-E45 (version 1.4). All data was preserved.
- The storage pool is imported automatically, with no user intervention required (a manual-import sketch follows this list).
- The storage HDDs do not need to be connected in any particular order.
- Resilvering time depends on the amount of data stored: the more data you have, the longer you have to wait.
- Refer to the lower half of this page for more test details.
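For reference, this is roughly what the migration looks like at the zpool level, using the pool name RAID-Z1 from this test. FreeNAS performs the import on its own, so these commands are only needed if the auto-import fails:

[root@freenas] ~# zpool export RAID-Z1   # optional: cleanly release the pool before moving the disks
[root@freenas] ~# zpool import           # on the new board, list pools available for import
[root@freenas] ~# zpool import RAID-Z1   # import by name; add -f if the pool was never exported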
- Due to a ZFS limitation, it is not possible to expand the storage pool by adding or replacing a single HDD. For example, if I have 3 HDDs in my storage pool (vdev), I need to replace all 3 of them to grow the pool.
- FreeNAS 8.3+ supports ZFS version 28, which only allows a zpool to be expanded in two ways (sketched after the options below).
Option 1: Replace all of the hard disks in a vdev with larger hard drives.
Option 2: Add additional vdevs.
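As a rough sketch, the two options map to the zpool commands below. The device names are placeholders, and FreeNAS normally drives all of this through the Volume Manager GUI rather than the shell:

# Option 1: replace every disk in the vdev with a larger one, waiting for
# each resilver to finish. The extra capacity appears once the last disk is
# done (with autoexpand on, or after an export and re-import of the pool).
zpool set autoexpand=on RAID-Z1
zpool replace RAID-Z1 gptid/OLD-DISK-1 gptid/NEW-LARGER-DISK-1
# ...repeat for each remaining disk in the vdev...

# Option 2: add a second RAID-Z1 vdev, which needs three more disks here.
zpool add RAID-Z1 raidz1 da3 da4 da5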
- No temperature monitoring utility (see the smartctl workaround after this list)
- Very limited apps
- Installation of apps is complex
- Requires many steps to add an iSCSI drive
- Granular control in iSCSI configuration
- Plain vanilla storage OS
- Fast resilver time
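On the missing temperature monitor: smartmontools ships with FreeNAS, so drive temperatures can still be read from the shell. A minimal sketch, assuming the first data disk shows up as /dev/ada0:

[root@freenas] ~# smartctl -a /dev/ada0 | grep -i temperature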
FreeNAS Apps
Officially, there are only three apps: Firefly, MiniDLNA, and Transmission.
Due to time constraints, I did not try out these apps.
Read this PDF for the installation of FreeNAS apps (a.k.a. PBI / Plugins Jail):
PDF: Customizing FreeNAS 8.3 Using the Plugins Jail
Test 2. 1x HDD failure details
I ran into some issues replacing a "failed" drive. Recall that the test procedure involves removing one storage drive from the NAS, formatting it, and installing it back. This issue only arises if that particular hard disk was previously used by FreeNAS. It took me about 20 minutes to get it right.
1. FreeNAS alerts you about the degraded pool the moment it detects an Offline HDD.
The RAID-Z1 volume I created earlier shows a degraded status.
2. The replacement drive was installed before I attempted the replace operation via the GUI.
FreeNAS reports that the replace operation failed.
It detected that the replacement HDD was part of an active zpool in a previous setup.
3. Next, I attempted the following from the shell, but none of it helped (the commands are sketched after this list):
- zpool online
- zpool replace -f
- writing zeros to the drive; the wait was too long, so I cancelled it
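For completeness, the attempts above correspond to commands along these lines. The pool name and device IDs are taken from the logs further down, and /dev/ada1 is a placeholder for the replacement disk, so treat this as a reconstruction rather than an exact transcript:

[root@freenas] ~# zpool online RAID-Z1 7267264605072970270        # try to bring the offline member back online
[root@freenas] ~# zpool replace -f RAID-Z1 7267264605072970270 gptid/4f905d72-e26b-11e2-a897-7427ea059779   # force the replace despite the stale label
[root@freenas] ~# dd if=/dev/zero of=/dev/ada1 bs=1m              # zero the disk to wipe the old ZFS labels (slow; I cancelled it)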
4. Only after I swapped the order of the hard disks did I manage to replace the drive.
Replacement successful.
The volume started resilvering (rebuilding) immediately. Notice the estimated time needed.
This is how I derived the earlier point: resilvering time depends on the amount of data stored.
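As a quick sanity check on that claim: the logs below report a scan rate of about 211M/s over 301G of data, and 301 × 1024 MB ÷ 211 MB/s ≈ 1460 s, i.e. roughly 24 minutes, which lines up with the final "resilvered 100G in 0h24m" line.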
5. After resilvering is complete, detach the offline drive (the shell equivalent is shown below).
With resilvering completed and the offline drive detached, the volume then reported a healthy status.
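From the shell, the same detach is a single command, using the stale member's numeric ID from the status output below:

[root@freenas] ~# zpool detach RAID-Z1 7267264605072970270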
Resilvering Logs
[root@freenas] ~# zpool status
  pool: RAID-Z1
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Jul 2 00:28:51 2013
        177G scanned out of 301G at 211M/s, 0h10m to go
        58.9G resilvered, 58.67% done
config:

        NAME                                              STATE     READ WRITE CKSUM
        RAID-Z1                                           DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            gptid/4cb31dd0-e1a7-11e2-900a-7427ea059779    ONLINE       0     0     0
            replacing-1                                   OFFLINE      0     0     0
              7267264605072970270                         OFFLINE      0     0     0  was /dev/dsk/gptid/087cf83a-e267-11e2-997a-7427ea059779
              gptid/4f905d72-e26b-11e2-a897-7427ea059779  ONLINE       0     0     0  (resilvering)
            gptid/534714cf-e1a7-11e2-900a-7427ea059779    ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~# zpool status
  pool: RAID-Z1
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Jul 2 00:28:51 2013
        297G scanned out of 301G at 211M/s, 0h0m to go
        99.1G resilvered, 98.75% done
config:

        NAME                                              STATE     READ WRITE CKSUM
        RAID-Z1                                           DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            gptid/4cb31dd0-e1a7-11e2-900a-7427ea059779    ONLINE       0     0     0
            replacing-1                                   OFFLINE      0     0     0
              7267264605072970270                         OFFLINE      0     0     0  was /dev/dsk/gptid/087cf83a-e267-11e2-997a-7427ea059779
              gptid/4f905d72-e26b-11e2-a897-7427ea059779  ONLINE       0     0     0  (resilvering)
            gptid/534714cf-e1a7-11e2-900a-7427ea059779    ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~# zpool status -v
  pool: RAID-Z1
 state: ONLINE
  scan: resilvered 100G in 0h24m with 0 errors on Tue Jul 2 00:53:14 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        RAID-Z1                                         ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/4cb31dd0-e1a7-11e2-900a-7427ea059779  ONLINE       0     0     0
            gptid/4f905d72-e26b-11e2-a897-7427ea059779  ONLINE       0     0     0
            gptid/534714cf-e1a7-11e2-900a-7427ea059779  ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~# zpool list
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
RAID-Z1  5.44T   302G  5.14T     5%  1.00x  ONLINE  /mnt
[root@freenas] ~#