Western Digital My Book Live

Continuing the discussion from [Solved] Poweroff My Book Live (instead of reboot), which turned into a somewhat generic "My Book Live" thread before getting kicked in the kneecap for no longer matching its topic.

For reference, see this now-locked thread for info about

  • powering off the MBL to safely spin down disks (the original topic)
  • the improvements in recent snapshots: non-destructive sysupgrade and functioning crypto

We left off talking about the new-and-improved sysupgrade retaining MBR partition layouts but failing to retain GPT, which is a problem on disks larger than 2 TiB.

Exactly my thought; x86 will have to deal with this at some point, too. I guess I was asking whether anything is already underway in that direction.

Very interesting. Although the thought of a "fake MBR" has been rummaging around in the back of my head, I did not know that "hybrid MBRs" properly exist as a concept; it's not something you stumble upon, at least I didn't.

Edit, quite some time later: While "Hybrid MBR" was a workaround of sorts for some time, it is not a solution anymore and should not be used. See below.

What I do is:

opkg install gdisk
gdisk /dev/sda


#GPT fdisk (gdisk) version 1.0.1
#
#Type device filename, or press <Enter> to exit: /dev/sda
#Caution: invalid main GPT header, but valid backup; regenerating main header
#from backup!
#
#Caution! After loading partitions, the CRC doesn't check out!
#Warning! Main partition table CRC mismatch! Loaded backup partition table
#instead of main partition table!
#
#Warning! One or more CRCs don't match. You should repair the disk!
#
#Partition table scan:
#  MBR: MBR only
#  BSD: not present
#  APM: not present
#  GPT: damaged
#
#Found valid MBR and corrupt GPT. Which do you want to use? (Using the
#GPT MAY permit recovery of GPT data.)
# 1 - MBR
# 2 - GPT
# 3 - Create blank GPT
#
#Your answer: 2

If we update by dd'ing another image onto the hard drive, we can choose option 2, since the GPT partition table is still correct; we can then mount the data partition later and see its data.
I have done that a couple of times with 3 TB, 4 TB, and 5 TB disks, with no data loss at all.

Sure, that's also a way to do it; it always was an option. The whole discussion about a nondestructive sysupgrade really does not apply to you if you accept a broken MBR and repair the GPT after each upgrade.
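For reference, the rest of the gdisk dialogue after answering "2" looks roughly like this (a sketch from memory, assuming gdisk 1.0.x; verify the listing before writing anything):

#Command (? for help): p
#(the partition listing appears here; check that it matches the expected layout)
#Command (? for help): w
#Do you want to proceed? (Y/N): y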

[Edit: Removed the paragraph about "Hybrid MBR" since that's not a viable option anymore.]

As of current snapshots, however, gdisk is not available, with no word on when it will return. (And I can't find the commit that removed it.) fdisk has recently become GPT-aware but seems not to offer the same functionality as gdisk.

Edit: gdisk (and its siblings) has been permanently removed. This poses a bit of a problem, not just for the MBL, and it affects both our solutions, since there's virtually no documentation on how to use fdisk to convert/repair GPT tables or create a "hybrid MBR."

My thought is to disassemble the MBL, connect the HDD to my desktop, and create a GPT partition table via e.g. GParted with the following layout:
/dev/sda1 for BOOT = 8 MB
/dev/sda2 for SYSTEM = 256 MB
/dev/sda3 for SWAP = 256 or 512 MB
/dev/sda4 for STORAGE = 2.9 TB
Am I correct with the layout and partitioning? Do I have to do any additional steps to install LEDE?

I'm not sure if you implied this or not, but for the initial install you dd the ext4-rootfs.img image file to the disk. After that, it will have an MBR partition table containing the first two partitions, and it will already be bootable in the MBL enclosure.
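For illustration, assuming the disk shows up as /dev/sdX on your desktop (double-check the device name first, dd is unforgiving):

gunzip -c wd_mybooklive-ext4-rootfs.img.gz | dd of=/dev/sdX bs=4M
sync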

How you then continue to partition and use the remaining disk space is entirely up to you. Disks of 2 TiB or less can stay on MBR partitions; in your case you need to convert the disk to GPT to use the space beyond 2 TiB.

As of this time, I would recommend two things:
a) using a snapshot build -- it comes with improvements for the MBL, most notably a nondestructive sysupgrade and working crypto (17.01.4 has neither)
b) not only converting MBR to GPT, but creating a "hybrid MBR" as outlined in this post (a rough sketch of the gdisk steps follows below). This saves you from recreating the GPT table after the aforementioned nondestructive sysupgrade.

Both go hand in hand. If you don't care about crypto or the nondestructive sysupgrade, or don't want to go for the snapshot build, the MBL also works perfectly fine with 17.01 proper. Just keep in mind that you will have some minor headaches recreating the partition table when updating in the future.
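A rough sketch of creating the hybrid MBR in gdisk's recovery/transformation menu (prompts abbreviated from memory, gdisk 1.0.x assumed -- and see the edits above: this approach has since been deprecated):

gdisk /dev/sda
#Command (? for help): r
#Recovery/transformation command (? for help): h
#(gdisk asks for up to three GPT partition numbers to mirror into the
#hybrid MBR, e.g. "1 2" for the boot and rootfs partitions)
#Recovery/transformation command (? for help): w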

Thanks for your help. I installed a snapshot build on my spare 1 TiB HDD for testing (a 3 TiB disk is hard to back up).
And one more question, please.
Can you show the output of dmesg | grep NCQ? Mine shows:

[    0.709251] ata1.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 1/32)

Here you go. This is my test/staging machine with a measly 80 GiB disk (as you can see from 156301488 sectors à 512 bytes):

[    0.762151] ata1.00: 156301488 sectors, multi 0: LBA48 NCQ (depth 1/32)

If you don't mind me asking, what do you hope to find out here? NCQ support?

I'm trying to figure out why NCQ doesn't work.
NCQ (depth 0/32) means a driver problem.
NCQ (depth 1/32) means NCQ is supported but disabled.
NCQ (depth 31/32) means NCQ works fine.

I tried echo 31 > /sys/block/sda/device/queue_depth, with no luck.
Here is a commit about our SATA controller from @chunkeey. Maybe you or he can explain how to make NCQ work.

Don't do that: don't modify the default partitions, just add the additional ones for data...
The default table will be:

Number  Start (sector)    End (sector)  Size       Code  Name
   1            8192           24575   8.0 MiB     8300  Linux filesystem # starts at 4MiB
   2           32768          557055   256.0 MiB   8300  Linux filesystem # starts at 16MiB

Those are on the MBR table, but to avoid problems your GPT should be the same (except for your data partition, which will come last and, using GPT, can fill all the free space on the disk).
Then you can add your swap and data partitions as well.

I had mine as this:

Number  Start (sector)    End (sector)  Size       Code  Name
   1            8192           24575   8.0 MiB     8300  boot  # starts at 4MiB
   2           32768          557055   256.0 MiB   8300  rootfs # starts at 16MiB
   3          557056         8945663   4.0 GiB     8200  swap # starts at 272MiB
   4         8945664     11721045134   5.5 TiB     8300  Data # starts at 4368MiB
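For illustration, adding partition 3 above looks roughly like this in gdisk (a sketch with abbreviated prompts; sector numbers and type code taken from the table):

#Command (? for help): n
#Partition number (3-128, default 3): 3
#First sector (...): 557056
#Last sector (...): 8945663
#Hex code or GUID (L to show codes, Enter = 8300): 8200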

Are those gaps essential?

It is better to have the partitions aligned, and the apparent gaps are just alignment: the first partition starts at 4 MiB and, being 8 MiB long, ends at 12 MiB; the second then starts at the next aligned boundary, 16 MiB (see the arithmetic below). Since the MBR sits at the very beginning of the disk, some space is apparently left after it as a safety margin, to protect against head problems (those used to be common on old hard drives: a power failure destroying the MBR, etc.). But those offsets were configured by whoever wrote the Makefile for these images.
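Worked out in 512-byte sectors:

8192 x 512 B  = 4 MiB     (partition 1 start)
24576 x 512 B = 12 MiB    (end of partition 1, i.e. last sector 24575 + 1)
32768 x 512 B = 16 MiB    (partition 2 start, the next 4 MiB boundary)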

I have a My Book Live Duo 8TB.
The current partition scheme is this (on both disks):

(parted) p free
Model: ATA WDC WD40EFRX-68W (scsi)
Disk /dev/sda: 7814037168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start        End          Size         File system     Name     Flags
        34s          30719s       30686s       Free Space
 3      30720s       1032191s     1001472s     linux-swap(v1)  primary  raid
 1      1032192s     5031935s     3999744s     ext3            primary  raid
 2      5031936s     9031679s     3999744s     ext3            primary  raid
 4      9031680s     7814035455s  7805003776s                  primary  raid
        7814035456s  7814037134s  1679s        Free Space

Is there any way to install OpenWrt/LEDE without destroying the data? (i.e. the raid device on /dev/sda4 and /dev/sdb4)

As far as I know GPT is fully supported by OpenWrt, so only the install image (containing the MBR and boot/root partitions) is the problem here.

Someone CMIIW, but I'm inclined to say no. Even if you could convince the partition table to accommodate the OpenWrt partitions and re-use the RAID, which I think would be fiddly but doable: the stock MBL disks use an odd block size of 64k.

Thank you for the quick answer.
By the way, the block size would have been my next question... I think the strange block size is there for performance reasons. Did anyone do any tests with OpenWrt? It would be nice to have a continuously supported OS on the MBLD, but if it fails to perform, it's not an option...

Exactly.

Me ... not for a while. I think I can do some tests later today.

Edit: I did some more tests and I have to extensively edit my numbers regarding the original MBL.
Edit 2: I repeated the tests on OpenWrt with a fully initialized ext4 file system (disabled ext4 lazy init), getting slightly better results at writing and much better results at reading.

This is my testing process:

root@OpenWrt:~# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   452 MB in  2.00 seconds = 225.80 MB/sec
 Timing buffered disk reads: 242 MB in  3.00 seconds =  80.65 MB/sec

root@OpenWrt:~# hdparm -tT --direct /dev/sda

/dev/sda:
 Timing O_DIRECT cached reads:   178 MB in  2.02 seconds =  88.26 MB/sec
 Timing O_DIRECT disk reads: 258 MB in  3.01 seconds =  85.69 MB/sec

root@OpenWrt:/mnt/sda3# time dd if=/dev/zero of=tempfile bs=1M count=1024
1024+0 records in
1024+0 records out
real    0m 37.94s
user    0m 0.01s
sys     0m 10.82s

root@OpenWrt:/mnt/sda3# time dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
real    0m 12.36s
user    0m 0.01s
sys     0m 5.94s

root@OpenWrt:/mnt/sda3# hdparm -W0 /dev/sda
/dev/sda:
 setting drive write-caching to 0 (off)
 write-caching =  0 (off)

root@OpenWrt:/mnt/sda3# time dd if=/dev/zero of=tempfile bs=1M count=1024
1024+0 records in
1024+0 records out
real    0m 48.12s
user    0m 0.01s
sys     0m 10.51s

root@OpenWrt:/mnt/sda3# time dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
real    0m 12.36s
user    0m 0.01s
sys     0m 5.90s
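(The dd throughput figures below are simply 1024 divided by the "real" time of each run, e.g. 1024 / 37.94 ≈ 27 MB/s for the first write test above.)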

It works out like this:

OpenWrt on 4 TB WD Red (WD40EFRX):

  • hdparm direct disk read: 86 MB/s
  • dd write on ext4, write-caching on: 27 MB/s
  • dd write on ext4, write-caching off: 21 MB/s
  • dd read on ext4: 82 MB/s

Original MBL 1TB (presumably WD Green):

  • hdparm direct disk read: 121 MB/s
  • dd write on ext3, write-caching on: 94 MB/s
  • dd write on ext3, write-caching off: 5 MB/s
  • dd read on ext3: 98.9 MB/s

Original MBL 2TB (presumably WD Green):

  • hdparm direct disk read: 117 MB/s
  • dd write on ext3, write-caching on: 40 MB/s
  • dd write on ext3, write-caching off: 5 MB/s
  • dd read on ext3: 42 MB/s

My 1 TB MBL shows the performance I'm used to, but my 2 TB MBL is clearly not performing very well; I assume that's because it has been in use for a long time and is rather full. I will have to look into that, but that drive is scheduled for OpenWrt conversion anyway. Of note is that the original MBL's write performance consistently and significantly breaks down when write-caching is disabled. At any rate, take the comparison values for the original MBL with a grain of salt; the drives are of course of varying degrees of use and vintage.

In comparison, writing to disk under OpenWrt is quite a bit slower, which I assume is owed to the smaller block size. Reading speeds seem to be much less affected by block sizes and turn out to be roughly in the same ballpark as with the original MBL.

My first post here, just to thank you for this valuable information!
My old My Book Live is running the latest snapshot, I followed the "hybrid MBR" advice for my 3 TB disk, and my old MBL is now up and running :slight_smile:

I also found useful information here for the LEDs.

@takimata
Could you please indicate how you are producing your own sysupgrade-compatible images?
As the snapshots don't contain any *sysupgrade.bin file, I guess we have to build our own?
Thanks!

JFTR, I've been working on a lightweight "replacement" for the original MBL's monitorio.sh, i.e. a script that constantly checks for hard disk activity, puts the drive to sleep, and sets the LEDs accordingly. My plan for the next few days (I should never give estimates) is to clean it up a little, and I really want to make it into an opkg-compatible package; then I'll put it on my GitHub.
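Not the actual script, but a minimal sketch of the idea (assuming hdparm is installed; LED handling is omitted since the MBL's LED names would need checking):

#!/bin/sh
# Minimal idle-spindown sketch: watch /proc/diskstats for I/O and put the
# drive into standby after a period of inactivity. Reading /proc/diskstats
# itself causes no disk I/O, so the check won't keep the drive awake.
DISK=sda
CHECK_INTERVAL=10      # seconds between checks
IDLE_CHECKS=60         # spin down after 60 quiet checks (10 minutes)

idle=0
last=""
while true; do
    cur=$(grep " $DISK " /proc/diskstats)
    if [ "$cur" = "$last" ]; then
        idle=$((idle + 1))
    else
        idle=0
        last="$cur"
    fi
    # issue the standby command exactly once when the threshold is reached
    [ "$idle" -eq "$IDLE_CHECKS" ] && hdparm -y "/dev/$DISK"
    sleep "$CHECK_INTERVAL"
done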

What do you mean by "sysupgrade compatible"? You can just sysupgrade using the wd_mybooklive-ext4-rootfs.img.gz image file; you don't even have to unpack it. (Personally, I build my own images using the Image Generator, mainly to include samba and hdparm, but that's about it.)
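For reference, an Image Generator invocation along these lines (the PROFILE name here is a guess; run make info in the Image Generator directory to list the exact profiles, and package names may differ between releases):

make image PROFILE="wd_mybooklive" PACKAGES="samba36-server hdparm"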

I assumed that the image must end in sysupgrade.bin to indicate that it can be applied.
When I was on 17.01.4, I tried to install a snapshot (through the web GUI) but it failed, so I figured I needed a sysupgrade.bin file :slight_smile:
Thanks for the clarification!

It would be very nice to have a package that enables the LEDs, puts the drive to sleep, and also provides the softoff.sh script.