Trick Windows 8 into creating a RAID10 (striped mirrors) array

I recently had to upgrade the storage in my desktop and, since I had a few left-over disks, I thought I’d try to build a RAID10 array – RAID10 is very cool because it offers performance close to that of striped arrays while keeping the reliability of mirrored arrays. Wikipedia has a nice write-up if you want the details.

I had two options – using the motherboard’s Intel controller or software RAID. Coming from the Linux world I expected software RAID to be easy enough to configure. I was wrong; apparently in the Windows world the options are more limited (mirrors and stripes), with only the recent Storage Spaces feature in Windows 8 offering more.

In short, Storage Spaces allows building simple (JBOD), mirrored (RAID1) or parity (RAID5) pools, while Disk Management, using dynamic disks, allows the creation of striped or mirrored volumes.

An idea sprang immediately to mind. I fired up a VM to test and it worked, so here are the full steps to create a striped-mirrors array on Windows 8(.1):

  1. Go to storage spaces and create a mirrored pool using two of the disks.
    Storage Spaces: creating a mirrored pool
  2. Repeat using the other disks for another mirror
    Storage Spaces: creating a mirrored pool
  3. You should now have two virtual disks, each essentially a RAID1 array of two drives.
    Storage Spaces: two mirrored pools
  4. Go to Disk Management and remove the volumes on each of the virtual disks
    Delete volume on virtual disk
  5. Create a striped volume using the two now-free virtual disks.
    Striped volume from mirror pools
  6. Done!

Warning: I didn’t test this over the long run; after some performance tests I decided to go with my motherboard’s RAID option instead. I did test in the virtual machine how it behaves when disks go missing, and everything appeared to work fine. Still, since I bet this usage scenario is not certified by Microsoft, you might encounter issues after updating Windows. As always: backup, backup, backup!

Latest openssh disables arcfour and blowfish-cbc

If you move a lot of data using rsync/scp/ssh you have probably found out by now that the arcfour and blowfish ciphers are a lot faster than the others. A comparison can be found in this thread. (TL;DR: arcfour256 is the fastest cipher.)

Today I updated openssh on my Slackware servers, only to be greeted by this error from the scripts doing rsync to my backup server:

[cce_bash]
no matching cipher found: client arcfour server aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.0]
[/cce_bash]

The openssh release notes are clear about this:

Potentially-incompatible changes

* sshd(8): The default set of ciphers and MACs has been altered to remove unsafe algorithms. In particular, CBC ciphers and arcfour* are disabled by default.

I added arcfour256 alongside the default ciphers in /etc/ssh/sshd_config to fix this:

[cce_bash]
ciphers arcfour256,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
[/cce_bash]
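If only some of your clients need the legacy cipher, you can also pin it per host on the client side, so the rest of your ssh traffic keeps the safer defaults. A sketch, with a placeholder host name (the server still has to list arcfour256 in its Ciphers line as above):

```
# ~/.ssh/config -- "backup.lan" is a placeholder for your backup server
Host backup.lan
    Ciphers arcfour256
```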

Warning: as I understand it, arcfour is not as secure as the AES ciphers. On modern machines with hardware AES instructions you shouldn’t, in theory, see a difference. But I do. For example, at home the machine that pushes the most data is an i7 and arcfour is still twice as fast (maybe the Windows version of ssh doesn’t use hardware AES acceleration). Since access to the servers where I push this data is limited to the local LAN and VPN, I considered it acceptable to use a lower-quality cipher. It might not be the same for you.

Use nagios to track and graph your twitter followers

I’ve been using Nagios to monitor servers and devices for years now. Lately, as an exercise to learn more about Nagios, I installed a private instance that I use to monitor everything I can think of – from my personal servers to the number of hours we watch TV.

A nice use I found is monitoring the number of followers my Twitter bots and personal account have. For that I wrote a custom plugin, available in my small repository of plugins at https://github.com/silviuvulcan/nagios-plugins

The plugin pulls the number from the public site – no API access required. Since the data doesn’t change frequently, you could use a custom check interval to poll less often. My entry looks like this:

[cce_nagios]

define host{
use                     linux-server
host_name               twitter
alias                   twitter
address                 twitter.com
}

define service{
use                             local-service,graphed-service         ; Name of service template to use
host_name                       twitter
service_description             Twitter followers: @rtjolla
normal_check_interval           60                    ; check hourly
check_command                   check_twitterfollowers!rtjolla
}

[/cce_nagios]

This creates a host named twitter, and graphs (provided you have nagiosgrapher up and running) the number of followers. Of course you have to install the plugin and create a custom command for it. Mine looks like this:

[cce_nagios]
define command{
command_name check_twitterfollowers
command_line $USER1$/check_twitterfollowers.sh -u $ARG1$
}
[/cce_nagios]
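For the graphing to work, the plugin has to print Nagios performance data after a `|` separator and exit with a standard status code. A minimal sketch of that output contract (this is not the real check_twitterfollowers.sh – the count is hard-coded for illustration; the real plugin scrapes it from the profile page):

```shell
#!/bin/bash
# Sketch of the Nagios plugin output contract. The follower count is
# hard-coded here for illustration only.
FOLLOWERS=1234

# Human-readable status text, then perfdata after the pipe; graphing
# add-ons like nagiosgrapher parse the "label=value" part.
echo "OK: @example has ${FOLLOWERS} followers | followers=${FOLLOWERS}"

exit 0  # 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
```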

Below is the code of the plugin if you are not interested in cloning the repo:

[github file="/silviuvulcan/nagios-plugins/blob/master/check_twitterfollowers.sh"]

Twitter followers graphed by nagios sample

Can you mirror a stripe? (a.k.a. 3 disks raid 10 or 1+0)

TL;DR Yes you can assemble a md device from a stripe and a disk.

Please don’t shoot! 🙂 I’m doing this on a testing server; I have no idea about the long-term stability and/or reliability, and I don’t care, since this is a personal testing server. I wouldn’t do this (at least yet) on a production server.

The why, what and how below:

I am looking to assemble a “Frankenstein” virtualization server (i.e. one built from various existing devices). Among the issues to work around is the fact that I will be using drives of multiple sizes, some of them old and not that fast.

I thought of using a RAID0 to speed up VMs, but that would have to be backed up and would add complexity. I don’t have enough same-size disks to create a RAID10, and I don’t want to manage sync scripts for a testing server. By chance I have two 320GB disks plus one 640GB, and two 500GB disks plus one 1TB. So I thought of the following: create a RAID0 stripe and mirror it with a disk of double the size. I tested this in a virtual machine and it turns out it works just fine:

[cce_bash]
root@v_slackware64c:~# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb /dev/sdc
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@v_slackware64c:~# mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 --bitmap=internal --write-behind /dev/md0 --write-mostly /dev/sdd
mdadm: /dev/md0 appears to be part of a raid array:
level=raid1 devices=2 ctime=Fri Oct 17 11:26:15 2014
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device.  If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 8383424K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
root@v_slackware64c:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdd[1](W) md0[0]
8383424 blocks super 1.2 [2/2] [UU]
[==========>..........]  resync = 51.2% (4296704/8383424) finish=0.3min speed=204604K/sec
bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid0 sdc[1] sdb[0]
8387584 blocks super 1.2 512k chunks

unused devices: <none>
[/cce_bash]

Let’s see what I did above:

mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb /dev/sdc

Creates the md0 stripe (RAID0) device using the whole disks sdb and sdc.

mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 --bitmap=internal --write-behind /dev/md0 --write-mostly /dev/sdd

Creates the md1 mirror (RAID1) device using the md0 array created before and the whole disk sdd.

--bitmap=internal – creates an internal (stored with the metadata) write-intent bitmap

--write-behind – specifies that write-behind mode should be enabled (valid for RAID1 only), so writes to the slower write-mostly device can lag behind the stripe

--write-mostly – this is only valid for RAID1 and means that the md driver will avoid reading from these devices if possible; i.e. it will use the RAID0 stripe for reading (faster) as much as possible

I will let you know how secure/robust this proves to be in the end. Considering this is a testing server, I’m not that worried.
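A setup like this lives or dies by noticing a failed leg early, so it’s worth keeping an eye on /proc/mdstat, where md marks missing mirror legs with an underscore in the status field (e.g. [U_] instead of [UU]). A minimal sketch of a degraded-array check, run here against a captured sample so it works anywhere (on the real server, read /proc/mdstat instead):

```shell
#!/bin/bash
# Degraded-array check sketch: look for an underscore inside the
# [UU]-style status brackets in /proc/mdstat output.
# Sample captured from the test VM; on a real box use:
#   mdstat=$(cat /proc/mdstat)
mdstat='md1 : active raid1 sdd[1](W) md0[0]
      8383424 blocks super 1.2 [2/2] [UU]'

# Match bracket groups like [U_], [_U] or [UU_] that contain a "_".
if echo "$mdstat" | grep -q '\[U*_[U_]*\]'; then
    echo "DEGRADED"
else
    echo "all arrays healthy"
fi
```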