I use BD-RE disks as backup storage (combined with HDDs and cloud storage).

Because the usual way of using optical media involves wiping the whole disk to remove/add/change a file, I decided to experiment with different solutions.

Actually, you can just:

mkfs.vfat /dev/sr0

And have a FAT32 fs with read/write access on a DVD-RW/BD-RE. But FAT32 has limitations and using anything other than FAT is quite slow.

So I decided to try using tar. Originally developed for magnetic tape storage, it seemed like a great solution.

And it turns out it really is a great solution. It is very fast, yet flexible. You can even use compression if you want.
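For example, a compressed archive takes the same flags as an uncompressed one. The sketch below is a dry run against a plain file instead of /dev/sr0 (the flags are identical either way), with a hypothetical demo/ directory standing in for the real backup set. One caveat: gzip-compressed archives cannot be appended to with -r or edited with --delete, so compression suits write-once backups best.

```shell
# Hypothetical demo data; on a real disk you would target /dev/sr0 instead.
mkdir -p demo && echo "hello" > demo/a.txt

# Create a gzip-compressed archive with a 1 MiB record size.
tar -czv --record-size=1M -f archive.tar.gz demo/

# Listing works the same way on a compressed archive.
tar -tzvf archive.tar.gz
```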

Here are some commands I use:

Use the optical drive as the default tar archive path (no need to specify the -f option):

export TAPE=/dev/sr0

List files on disk:

tar -tvnR

Create new archive on disk:

tar -cnvb 2048 <files_to_add>

Add files to disk:

tar -rnvb 2048 <files_to_add>

Delete files from disk:

tar -nvb 2048 --delete <files_to_delete>

Check used space on disk:

tar -tvnR | grep "Block of NULs" | awk '{val=substr($2, 1, length($2)-1); print val/2/1024 " MB "}'

Check available space on disk in MB:

tar -tvnR | grep "Block of NULs" | awk "{val=substr(\$2, 1, length(\$2)-1); print ($(lsblk -bno SIZE $TAPE)-val*512)/1024/1024}"
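If that pipeline looks opaque, the arithmetic behind it can be spelled out on its own. The numbers below are hypothetical stand-ins: a used-block count as tar -R would report it, and the byte size of a 25 GB BD-RE as lsblk would report it.

```shell
# Hypothetical inputs: the block number of the trailing "Block of NULs"
# record (512-byte blocks, as printed by tar -R) and the disk size in
# bytes (as printed by lsblk -bno SIZE /dev/sr0).
used_blocks=204800
disc_bytes=25025314816

# Free space in MB = (total bytes - used bytes) / 1024 / 1024
echo $(( (disc_bytes - used_blocks * 512) / 1024 / 1024 ))
```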

Extract to a folder:

tar -xn -C <output_folder>

The -b 2048 parameter increases the size of the chunks of data fed to the disk drive. The default record size (20x512 bytes, equivalent to -b 20) is too small, which leads to very slow write speeds caused by constant buffer underruns. Alternatively, you can use --record-size=1M. Anything above 64 KB should work, but 1M is my recommendation.
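As a sanity check, -b 2048 and --record-size=1M really are the same thing: 2048 blocks x 512 bytes = 1 MiB. Because tar pads its output to a whole number of records, you can verify this against a plain file (the demo/ directory here is a hypothetical stand-in):

```shell
# tar pads output to whole records, so with -b 2048 the archive size
# is always a multiple of 2048 * 512 = 1048576 bytes.
mkdir -p demo && echo "hello" > demo/a.txt
tar -cb 2048 -f blocked.tar demo/
stat -c %s blocked.tar   # size is a multiple of 1048576
```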

Update!

Turns out F2FS works a lot better than any other option, even tar. It doesn't thrash the optical head when writing lots of small files, yet provides a very convenient interface for managing files. After formatting, you can just mount it with the following flags and use it like any flash drive.

mkfs.f2fs -l empty -O extra_attr,inode_checksum,sb_checksum /dev/sr0 -f
mount -o noatime,nobarrier,fsync_mode=nobarrier /dev/sr0 /mnt/

Note the nobarrier mount option. In essence, a barrier forbids writing any blocks after the barrier until all blocks written before it have been committed to the media. By using barriers, filesystems can make sure their on-disk structures remain consistent at all times. Since optical disks perform better with uninterrupted linear writes and also produce fewer errors that way, we don't want our caches to be fully flushed after every single atomic transaction. Even though Blu-ray has lossless linking, every time you begin writing to the disk there is a small imprecision, which is corrected by the built-in error correction. Barriers mostly matter in the case of a sudden power loss, so use a UPS.

You can also experiment with the async and flush_merge options for even better performance.

