Storage Management | Unraid Docs (2024)

To assign devices to the array and/or cache, first log in to the server's WebGUI. Click on the Main tab and select the devices to assign to slots for parity, data, and cache disks. Assigning devices to Unraid is easy! Just remember these guidelines:

  • Always pick the largest storage device available to act as your parity device(s). When expanding your array in the future (adding more devices to data disk slots), you cannot assign a data disk that is larger than your parity device(s). For this reason, it is highly recommended to purchase the largest HDD available for use as your initial parity device, so future expansions aren't limited to small device sizes. If assigning dual parity disks, your two parity disks can vary in size, but the same rule holds true that no disk in the array can be larger than your smallest parity device.

  • SSD support in the array is experimental. Some SSDs may not be ideal for use in the array due to how TRIM/Discard may be implemented. Using SSDs as data/parity devices may have unexpected/undesirable results. This does NOT apply to the cache / cache pool. Most modern SSDs will work fine in the array, and even NVMe devices are now supported, but know that until these devices are in wider use, we only have limited testing experience using them in this setting.

  • Using a cache will improve array performance. It does this by redirecting write operations to a dedicated disk (or pool of disks in Unraid 6) and moving that data to the array on a schedule that you define (by default, once per day at 3:40AM). Data written to the cache is still presented through your user shares, making use of this function completely transparent.

  • Creating a cache-pool adds protection for cached data. If you only assign one cache device to the system, data residing there before being moved to the array on a schedule is not protected from data loss. To ensure data remains protected at all times (both on data and cache disks), you must assign more than one device to the cache function, creating what is called a cache-pool. Cache pools can be expanded on demand, similar to the array.

  • SSD-based cache devices are ideal for applications and virtual machines. Apps and VMs benefit from SSDs as they can leverage their raw IO potential to perform faster when interacting with them. Use SSDs in a cache pool for the ultimate combination of functionality, performance, and protection.

  • Encryption is disabled by default. If you wish to use this feature on your system, you can do so by adjusting the file system for the devices you wish to have encrypted. Click on each disk you wish to have encrypted and toggle the filesystem to one of the encrypted options. Note, however, that using encryption can complicate recovering from certain types of failure, so do not use this feature just because it is available if you have no need for it.

Unraid recognizes disks by their serial number (and size). This means that it is possible to move drives between SATA ports without having to make any changes in drive assignments. This can be useful for troubleshooting if you ever suspect there may be a hardware-related issue such as a bad port, or if you think a power or SATA cable may be suspect.

NOTE: Your array will not start if you assign or attach more devices than your license key allows.

Normally, following system boot up, the array (complete set of disks) is automatically started (brought on-line and exported as a set of shares). But if there's been a change in disk configuration, such as a new disk added, the array is left stopped so that you can confirm the configuration is correct. This means that any time you have made a disk configuration change you must log in to the WebGUI and manually start the array. When you wish to make changes to disks in your array, you will need to stop the array to do this. Stopping the array means all of your applications/services are stopped, and your storage devices are unmounted, making all data and applications unavailable until you once again start the array. To start or stop the array, perform the following steps:

  1. Log into the Unraid WebGUI using a browser (e.g. http://tower; http://tower.local from a Mac)
  2. Click on Main
  3. Go to the Array Operation section
  4. Click Start or Stop (you may first need to click the "Yes I want to do this" checkbox)

Help! I can't start my array!

If the array can't be started, it may be for one of a few reasons which will be reported under the Array Operation section:

  • Too many wrong and/or missing disks
  • Too many attached devices
  • Invalid or missing registration key
  • Cannot contact key-server
  • This Unraid Server OS release has been withdrawn

Too many disks missing from the array

If you have no parity disks, this message won't appear.

If you have a single parity disk, you can only have up to one disk missing and still start the array, as parity will then help simulate the contents of the missing disk until you can replace it.

If you have two parity disks, you can have up to two disks missing and still start the array.

If more than two disks are missing / wrong due to a catastrophic failure, you will need to perform the New Config procedure.

Too many attached devices

Storage devices are any devices that present themselves as a block storage device EXCLUDING the USB flash device used to boot Unraid Server OS. Storage devices can be attached via any of the following storage protocols: IDE/SATA/SAS/SCSI/USB. This rule only applies prior to starting the array. Once the array is started, you are free to attach additional storage devices and make use of them (such as USB flash devices for assignment to virtual machines). In Unraid Server OS 6, the attached storage device limits are determined by your registration key.

Invalid or missing key

Missing key

A valid registration key is required in order to start the array. To purchase or get a trial key, perform the following steps:

  1. Log into the Unraid WebGUI using a browser (e.g. http://tower from most devices, http://tower.local from Mac devices)
  2. Click on Tools
  3. Click on Registration
  4. Click to Purchase Key or Get Trial Key and complete the steps presented there
  5. Once you have your key file link, return to the Registration page, paste it in the field, then click Install Key.

Expired trial

If the word "expired" is visible at the top left of the WebGUI, thismeans your trial key has expired. Visit the registration page to requesteither an extension to your trial or purchase a valid registration key.

Blacklisted USB flash device

If your server is connected to the Internet and your trial hasn't expired yet, it is also possible that your USB flash device contains a GUID that is prohibited from registering for a key. This could be because the GUID is not truly unique to your device or has already been registered by another user. It could also be because you are using an SD card reader through a USB interface, which also tends to be provisioned with a generic GUID. If a USB flash device is listed as blacklisted, this is a permanent state and you will need to seek an alternative device to use for your Unraid Server OS installation.

Cannot contact key-server

This message will only occur if you are using a Trial license. If you are using a paid-for license then the array can be started without the need to contact the Unraid license server.

If your server is unable to contact our key server to validate your Trial license, you will not be able to start the array. The server will attempt to validate upon first boot with a timeout of 30 seconds. If it can't validate upon first boot, then the array won't start, but each time you navigate or refresh the WebGUI it will attempt validation again (with a very short timeout). Once validated, it won't phone-home for validation again unless rebooted.

This Unraid Server OS release has been withdrawn

If you receive this message, it means you are running a beta or release candidate version of Unraid that has been marked disabled from active use. Upgrade the OS to the latest stable, beta, or release candidate version in order to start your array.

There are a number of operations you can perform against your array:

  • Add disks
  • Replace disks
  • Remove disks
  • Check disks
  • Spin disks up/down
  • Reset the array configuration

NOTE: In cases where devices are added/replaced/removed, etc., the instructions say "Power down" ... "Power up". If your server's hardware is designed for hot/warm plug, power cycling is not necessary and Unraid is designed specifically to handle this. All servers built by LimeTech since the beginning are like this: no power cycle necessary.

Adding disks

Configuring Disks

TBD

Clear v Pre-Clear

Under Unraid a 'Clear' disk is one that has been completely filled with zeroes and contains a special signature to say that it is in this state. This state is needed before a drive can be added to a parity-protected array without affecting parity. If Unraid is in the process of writing zeroes to all of a drive then this is referred to as a 'Clear' operation. This Clear operation can take place as a background operation while using the array, but the drive in question cannot be used to store data until the Clear operation has completed and the drive has been formatted to the desired File System type.

A disk that is being added as a parity drive, or one that is to be used to rebuild a failed drive, does not need to be in a 'Clear' state as those processes overwrite every sector on the drive with new contents as part of carrying out the operation. In addition, if you are adding an additional data drive to an array that does not currently have a parity drive there is no requirement for the drive to be clear before adding it.

You will often see references in the forum or various wiki pages to 'Preclear'. This refers to getting the disk into a 'Clear' state before adding it to the array. The Preclear process requires the use of a third-party plugin. Prior to Unraid v6, this was highly desirable as the array was offline while Unraid carried out the 'Clear' operation, but Unraid v6 now carries out 'Clear' as a background process with the array operational while it is running, so it is now completely optional. Many users still like to use the Preclear process as, in addition to putting the disk into a clear state, it also performs a level of 'stress test' on the drive which can be used as a confidence check on the health of the drive. The Preclear as a result takes much longer than Unraid's more simplistic 'clear' operation. Many users like to Preclear new disks as an initial confidence check and to reduce the chance of a drive suffering from what is known as 'infant mortality', where one of the most likely times for a drive to fail is when it is first used (presumably due to a manufacturing defect).

It is also important to note that after completing a 'Preclear' you must not carry out any operation that will write to the drive (e.g. format it) as this will destroy the 'Clear' state.

Data Disks

This is the normal case of expanding the capacity of the system by adding one or more new hard drives.

The capacity of any new disk(s) added must be the same size or smaller than your parity disk. If you wish to add a new disk that is larger than your parity disk, then you must instead first replace your parity disk. (You could use your new disk to replace parity, and then use your old parity disk as a new data disk).

The procedure is:

  1. Stop the array.
  2. Power down the server.
  3. Install your new disk(s).
  4. Power up the server.
  5. Assign the new storage device(s) to a disk slot(s) using the Unraid WebGUI.
  6. Start the array.
  7. If your array is parity protected then Unraid will now automatically begin to clear the disk as this is required before it can be added to the array.
    • This step is omitted if you do not have a parity drive.
    • If a disk has been pre-cleared before adding it, Unraid will recognize this and go straight to the next step.
    • The clearing phase is necessary to preserve the fault tolerance characteristic of the array. If at any time while the new disk(s) is being cleared, one of the other disks fails, you will still be able to recover the data of the failed disk.
    • The clearing phase can take several hours depending on the size of the new disk(s) and although the array is available during this process Unraid will not be able to use the new disk(s) for storing files until the clear has completed and the new disk has been formatted.
    • The files on other drives in the array will be accessible during a clear operation, and the clear operation should not degrade performance in accessing these other drives.
  8. Once the disk has been cleared, an option to format the disk will appear in the WebGUI. At this point, the disk is added to the array and shows as unmountable, and the option to format unmountable disks is shown.
    • Check that the serial number of the disk(s) is what you expect. You do not want to format a different disk (thus erasing its contents) by accident.
  9. Click the check box to confirm that you want to proceed with the format procedure.
    • A warning dialog will be given warning you of the consequences, as once you start the format the disks listed will have any existing contents erased and there is no going back. This warning may seem a bit like overkill but there have been times that users have used the format option when it was not the appropriate action.
  10. The format button will now be enabled so you can click on it to start the formatting process.
  11. The format should only take a few minutes and after the format completes the disk will show as mounted and ready for use.
    • You will see that a small amount of space will already show as used, which is due to the overheads of creating the empty file system on the drive.

You can add as many new disks to the array as you desire at one time, but none of them will be available for use until they are all cleared and formatted with a filesystem.

Parity Disks

It is not mandatory for an Unraid system to have a parity disk, but it is normal to provide redundancy. A parity disk can be added at any time. Each parity disk provides redundancy against one data drive failing.

Any parity disk you add must be at least as large as the largest data drive (although it can be larger). If you have two parity drives then it is not required that they be the same size, although it is required that they both follow the rule of being at least as large as the largest data drive.

The process for adding a parity disk is identical to that for adding a data disk except that when you start the array after adding it, Unraid will start to build parity on the drive that you have just added.

While parity is being built the array will continue to function with all existing files being available, but the performance in accessing these files will normally be degraded due to contention with the parity build process.

NOTE:

You cannot add a parity disk(s) and data disk(s) at the same time in a single operation. This needs to be split into two separate steps, one to add parity and the other to add additional data space.

Upgrading parity disk(s)

You may wish to upgrade your parity device(s) to larger ones so you can start using larger sized disks in the array, or to add an additional parity drive.

CAUTION: If you take the actions below and only have a single parity drive then you need to bear the following in mind:

  • The array will be unprotected until the parity build on the new drive completes. This means that if a data drive fails during this process you are likely to suffer loss of the data on the failing drive.
  • If you already have a failed data drive then this will remove the ability to rebuild that data drive. In such a situation the Parity Swap procedure is the correct way to proceed.

The procedure to upgrade a parity drive is as follows:

  1. Stop the array.
  2. Power down the server.
  3. Install the new larger parity disk(s). Note that if you do this as your first step then steps 2 & 4 listed here are not needed.
  4. Power up the server.
  5. Assign the larger disk to the parity slot (replacing the former parity device).
  6. Start the array.

When you start the array, the system will once again perform a parity build to the new parity device and when it completes the array will once again be in a protected state. It is recommended that you keep the old parity drive's contents intact until the above procedure completes, because if an array drive fails during this procedure and you cannot complete building the contents of the new parity disk, it is possible to use the old one for recovery purposes (ask on the forum for the steps involved). If you have a dual parity system and wish to upgrade both of your parity disks, it is recommended to perform this procedure one parity disk at a time, as this will allow for your array to still be in a protected state throughout the entire upgrade process.

Once you've completed the upgrade process for a parity disk, the former parity disk can be considered for assignment and use in the array as an additional data disk (depending on age and durability).

Replacing disks

There are two primary reasons why you may wish to replace disks in the array:

  • A disk needs to be replaced due to failure or scheduled retirement (out of warranty / support / serviceability).
  • The array is nearly full and you wish to replace existing data disk(s) with larger ones (out of capacity).

In either of these cases, the procedure to replace a disk is roughly the same, but one should be aware of the risk of data loss during a disk replacement activity. Parity device(s) protect the array from data loss in the event of a disk failure. A single parity device protects against a single failure, whereas two parity devices can protect against losing data when two disks in the array fail. This chart will help you better understand your level of protection when various disk replacement scenarios occur.

Data Protection During Disk Replacements

  • Replacing a single disk: possible with single parity; also possible with dual parity (which still protects against one further failure during the rebuild).
  • Replacing two disks: not possible with single parity; possible with dual parity (but with no protection against a further failure during the rebuild).

Replacing a disk to increase capacity

With modern disks rapidly increasing in capacity you can replace an existing data drive with a larger one to increase the available space in the array without increasing the total count of drives in the array.

Points to note are:

  • If a disk is showing as unmountable then you should resolve this before attempting to upgrade the drive as the rebuild process does not clear an unmountable status.
  • If you have single parity then you are not protected against a different drive failing during the upgrade process. If this happens then post to the forums to get advice on the best way to proceed to avoid data loss.
  • If you have dual parity and you are upgrading a single data drive then you are still protected against another data drive failing during the upgrade process.
  • If you have dual parity you can upgrade two drives simultaneously, but you would then not be protected against another drive failing while doing the upgrade. If this happens then post to the forums to get advice on the best way to proceed to avoid data loss. It is up to you to decide whether to take the route of upgrading two drives one at a time or taking the faster but riskier route of doing them at the same time.
  • Keep the disk that you are replacing with its contents unchanged until you are happy that the upgrade process has gone as planned. This gives a fallback capability if the upgrade has gone wrong for any reason.

To perform the upgrade proceed as follows:

  • Run a parity check if you have not done so recently and make sure that zero errors are reported. Attempting an upgrade if parity is not valid will result in the file system on the upgraded disk being corrupt.
  • Stop the array.
  • Unassign the disk you want to upgrade.
  • Start the array to commit this change and make Unraid 'forget' the current assignment.
    • Unraid will now tell you that the missing disk is being emulated. It does this using the combination of the remaining data drives and a parity drive to dynamically reconstruct the contents of the emulated drive. From a user perspective the system will act as if the drive was still present, albeit with a reduced level of protection against another drive failing.
    • If you started the array in Maintenance mode then this will ensure no new files can be written to the drive during the upgrade process.
    • If you started the array in Normal mode then you will be able to read and write to the emulated drive as if it was still physically present.
  • Stop the array.
    • At this point the array is in the same state as it would be if the drive you have stopped using had failed instead of being unassigned as part of the upgrade process.
  • Assign the (larger) replacement drive to the slot previously used for the drive you are upgrading.
  • Start the array to begin rebuilding the contents of the emulated drive on to the upgraded drive.
    • Since the replacement drive is larger than the one it is replacing, once the contents of the emulated drive have been put onto the replacement drive Unraid will automatically expand the file system on the drive so the full capacity of the drive becomes available for storing data.

Replacing failed/disabled disk(s)

As noted previously, with a single parity disk, you can replace up to one disk at a time, but during the replacement process, you are at risk for data loss should an additional disk failure occur. With two parity disks, you can replace either one or two disks at a time, but during a two disk replacement process, you are also at risk for data loss. Another way to visualize the previous chart:

Array Tolerance to Disk Failure Events

  • A single disk failure:
    • Without parity: data from that disk is lost.
    • With single parity: data is still available and the disk can be replaced.
    • With dual parity: data is still available and the disk can be replaced.
  • A dual disk failure:
    • Without parity: data on both disks is lost.
    • With single parity: data on both disks is lost.
    • With dual parity: data is still available and the disks can be replaced.

NOTE: If more disk failures have occurred than your parity protection can allow for, you are advised to post in the General Support forum for assistance with data recovery on the data devices that have failed.

What is a 'failed' (disabled) drive

It is important to realize what is meant by the term failed drive:

  • It is typically used to refer to a drive that is marked with a red 'x' in the Unraid GUI.
  • It does NOT necessarily mean that there is a physical problem with the drive (although that is always a possibility). More often than not the drive is OK and an external factor caused the write to fail.

If the syslog shows that resets are occurring on the drive then this is a good indication of a connection problem.

The SMART report for the drive is a good place to start.

The SMART attributes can indicate a drive is healthy when in fact it is not. A better indication of health is whether the drive can successfully complete the SMART extended test without error. If it cannot complete this test error-free then there is a high likelihood that the drive is not healthy.

CRC errors are almost invariably cabling issues. It is important to realize that this SMART attribute is never reset to 0, so the aim is simply for the count to stop increasing.
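
An extended SMART test can be started either from the drive's page in the WebGUI or from the console with smartctl. The sketch below assumes the drive appears as /dev/sdX; substitute your actual device:

  smartctl -t long /dev/sdX   # start the extended (long) self-test; it runs in the background on the drive
  smartctl -a /dev/sdX        # check progress and, once finished, review the self-test log and attributes for errors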

  • If you have sufficient parity drives then Unraid will emulate the failed drive using the combination of the parity drive(s) and the remaining 'good' drives. From a user perspective, this results in the system reacting as if the failed drive is still present.

This is one reason why it is important that you have enabled notifications to get alerted to such a failure. From the end-user perspective, the system continues to operate and the data remains available. Without notifications enabled the user may blithely continue using their Unraid server, not realizing that their data may now be at risk and that they need to take some corrective action.

When a disk is marked as disabled and Unraid indicates it is being emulated then the following points apply:

  • Unraid will stop writing to the physical drive. Any writes to the 'emulated' drive will not be reflected on the physical drive but will be reflected in parity, so from the end-user perspective the array seems to be updating data as normal.
  • When you rebuild a disabled drive the process will make the physical drive correspond to the emulated drive by doing a sector-for-sector copy from the emulated drive to the physical drive. You can, therefore, check that the emulated drive contains the content that you expect before starting the rebuild process.
  • If a drive is being emulated then you can carry out recovery actions on the emulated drive before starting the rebuild process. This can be important as it keeps the physical drive untouched for potential data recovery processes if the emulated drive cannot be recovered.
  • If an emulated drive is marked as unmountable then a rebuild will not fix this and the rebuilt drive will have the same unmountable status as the emulated drive. The correct handling of unmountable drives is described in a later section. It is recommended that you repair the file system before attempting a rebuild as the repair process is much faster than the rebuild process, and if the repair process is not successful the rebuilt drive would have the same problem.

A replacement drive does not need to be the same size as the disk it is replacing. It cannot be smaller but it can be larger. If the replacement drive is not larger than any of your parity drives then the simpler procedure below can be used. In the special case where you want to use a new disk that is larger than at least one of your parity drives then please refer to the Parity Swap procedure that follows instead.

If you have purchased a replacement drive, many users like to pre-clear the drive to stress test it first, to make sure it's a good drive that won't fail for a few years at least. The Preclearing is not strictly necessary as replacement drives don't have to be cleared, since they are going to be completely overwritten, but Preclearing new drives one to three times provides a thorough test of the drive and eliminates 'infant mortality' failures. You can also carry out stress tests in other ways such as running an extended SMART test or using tools supplied by the disk manufacturer that run on Windows or macOS.

Normal replacement

This is a normal case of replacing a failed drive where the replacement drive is not larger than your current parity drive(s).

It is worth emphasizing that Unraid must be able to reliably read every bit of parity PLUS every bit of ALL other disks in order to reliably rebuild a missing or disabled disk. This is one reason why you want to fix any disk-related issues with your Unraid server as soon as possible.

To replace a failed disk or disks:

  1. Stop the array.
  2. Power down the unit.
  3. Replace the failed disk(s) with a new one(s).
  4. Power up the unit.
  5. Assign the replacement disk(s) using the Unraid WebGUI.
  6. Click the checkbox that says Yes I want to do this
  7. (optional) Tick the box to start in Maintenance mode. If you start the array in Maintenance mode you will need to press the Sync button to trigger the rebuild. The advantage of doing this in Maintenance mode is that nothing else can write to the array while the rebuild is running, which maximises speed. The disadvantage is that you cannot use the array in the meantime, and until you return to Normal mode you cannot see what the contents of the disk being rebuilt will look like.
  8. Click Start to initiate the rebuild process, and the system will reconstruct the contents of the emulated disk(s) onto the new disk(s) and, if the new disk(s) is/are bigger, expand the file system.
Notes
  • IMPORTANT: If at any point during the replacement process Unraid appears to offer an option to format a drive, do not do so, as this will result in wiping all files belonging to the drive you are trying to replace and updating parity to reflect this.
  • A 'good' rebuild relies on all the other array disks being read without error. If during the rebuild process any of the other array disks start showing read errors then the rebuilt disk is going to show corruption (and probably end up as unmountable), with some data loss highly likely.
  • You must replace a failed disk with a disk that is as big or bigger than the original and not bigger than the smallest parity disk.
  • If the replacement disk has been used before then remove any existing partitions. In theory this should not be necessary but it has been known to sometimes cause problems, so it is better to play safe.
  • The rebuild process can never be used to change the format of a disk - it can only rebuild to the existing format.
  • The rebuild process will not correct a disk that is showing as unmountable when being emulated (as this indicates there is some level of file system corruption present) - it will still show as unmountable after the rebuild, as the rebuild process simply makes the physical disk match the emulated one.

Rebuilding a drive onto itself

There can be cases where it is determined that the reason a disk was disabled is due to an external factor and the disk drive appears to be fine. In such a case you need to take a slightly modified process to cause Unraid to rebuild a 'disabled' drive back onto the same drive.

  1. Stop array
  2. Unassign disabled disk
  3. Start array so the missing disk is registered
  4. Important: If the drive to be rebuilt is a data drive then check that the emulated drive is showing the content you expect to be there, as the rebuild process simply makes the physical drive match the emulated one. If this is not the case then you may want to ask in the forums for advice on the best way to proceed.
  5. Stop array
  6. Reassign disabled disk
  7. (optional) Tick the box to start in Maintenance mode. If you start the array in Maintenance mode you will need to press the Sync button to trigger the rebuild. The advantage of doing this in Maintenance mode is that nothing else can write to the array while the rebuild is running, which maximises speed. The disadvantage is that you cannot use the array in the meantime, and until you return to Normal mode you cannot see what the contents of the disk being rebuilt will look like.
  8. Click Start to initiate the rebuild process and the system will reconstruct the contents of the emulated disk.

This process can be used for both data and parity drives that have been disabled.

Parity Swap

This is a special case of replacing a disabled drive where the replacement drive is larger than your current parity drive. This procedure applies to both the parity1 and the parity2 drives. If you have dual parity then it can be used on both simultaneously to replace 2 disabled data drives with the 2 old parity drives.

NOTE: It is not recommended that you use this procedure for upgrading the size of both a parity drive and a data drive, as the array will be offline during the parity copy part of the operation. In such a case it is normally better to first upgrade the parity drive and then afterward upgrade the data drive using the drive replacement procedure. This takes longer but the array remains available for use throughout the process, and in addition, if anything goes wrong you still have the just-removed drive intact for recovery purposes.

Why would you want to do this? To replace a data drive with a larger one, that is even larger than the Parity drive.

Unraid does not require a replacement drive to be the same size as the drive being replaced. The replacement drive CANNOT be smaller than the old drive, but it CAN be larger, much larger in fact. If the replacement drive is the same size or larger, UP TO the same size as the smallest parity drive, then the simple procedure above can be used. If the replacement drive is LARGER than the Parity drive, then a special two-step procedure is required as described here. It works in two phases:

  • The larger-than-existing-parity drive is first upgraded to become the new parity drive.
  • The old parity drive replaces the old data drive and the data of the failed drive is rebuilt onto it.

As an example, you have a 1TB data drive that you want to replace (the reason does not matter). You have a 2TB parity drive. You buy a 4TB drive as a replacement. The 'Parity Swap' procedure will copy the parity info from the current 2TB parity drive to the 4TB drive, zero the rest, make it the new parity drive, then use the old 2TB parity drive to replace the 1TB data drive. Now you can do as you wish with the removed 1TB drive.

Important Notes
  • If you have purchased a replacement drive, many users like to pre-clear the drive to stress test it first, to make sure it's a good drive that won't fail for a few years at least. The Preclearing is not strictly necessary, as replacement drives don't have to be cleared; they are going to be completely overwritten. But Preclearing new drives one to three times provides a thorough test of the drive and eliminates 'infant mortality' failures.
  • If your replacement drive is the same size or smaller than your current Parity drive, then you don't need this procedure. Proceed with the Replacing a Data Drive procedure.
  • This procedure is strictly for replacing data drives in an Unraid array. If all you want to do is replace your Parity drive with a larger one, then you don't need the Parity Swap procedure. Just remove the old parity drive and add the new one, and start the array. The process of building parity will immediately begin. (If something goes wrong, you still have the old parity drive that you can put back!)
  • IMPORTANT!!! This procedure REQUIRES that the data drive being replaced MUST be disabled first. If the drive failed (has a red ball), then it is already 'disabled', but if the drive is OK but you want to replace it anyway, then you have to force it to be 'failed', by unassigning it and starting and stopping the array. Unraid only forgets a drive when the array is started without the drive, otherwise it still associates it with the slot (but 'Missing'). The array must be started once with the drive unassigned or disabled. Yes, it may seem odd, but it is required before Unraid will recognize that you are trying to do a 'Parity Swap'. It needs to see a disabled data disk with forgotten ID, a new disk assigned to its slot that used to be the parity disk, and a new disk assigned to the parity slot.
  • Obviously, it's very important to identify the drives for assignment correctly! Have a list of the drive models that will be taking part in this procedure, with the last 4 characters of their serial numbers. If the drives are recent Toshiba models, then they may all end in GS or S, so you will want to note the preceding 4 characters instead.
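
One way to build such a list is from the Unraid console, where each drive's model and serial number appear in its /dev/disk/by-id name. This is just an illustrative sketch; the device names on your system will differ:

  ls -l /dev/disk/by-id/ | grep -v part   # one entry per whole drive, named <bus>-<model>_<serial>
  lsblk -o NAME,SIZE,MODEL,SERIAL         # alternative view that also shows each drive's size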

The steps to carry out this procedure are:

Note: these steps are the general steps needed. The steps you take may differ depending on your situation. If the drive to be replaced has failed and Unraid has disabled it, then you may not need steps 1 and 2, and possibly not steps 3 and 4. If you have already installed the new replacement drive (perhaps because you have been Preclearing it), then you would skip steps 5 through 8. Revise the steps as needed.

  1. Stop the array (if it's started)

  2. Unassign the old drive (if it's still assigned). If the drive was a good drive and notifications are enabled, you will get error notifications for a missing drive! This is normal.

  3. Start the array (put a check in the Yes I want to do this checkbox if it appears (older versions: Yes, I'm sure)). Yes, you need to do this. Your data drive should be showing as Not installed.

  4. Stop the array again

  5. Power down

  6. [ Optional ] Pull the old drive. You may want to leave it installed, for Preclearing or testing or reassignment.

  7. Install the new drive (preclear STRONGLY suggested, but formatting not needed)

  8. Power on

  9. Stop the array

    *If you get an "Array Stopping•Retry unmounting diskshare(s)..." message, try disabling Docker and/or VM in Settingsand stopping the array again after rebooting.

  10. Unassign the parity drive

  11. Assign the new drive in the parity slot. You may see more error notifications! This is normal.

  12. Assign the old parity drive in the slot of the old data drive being replaced. You should now have blue drive status indicators for both the parity drive and the drive being replaced.

  13. Go to the Main → Array Operation section. You should now have a Copy button, with a statement indicating "Copy will copy the parity information to the new parity disk".

  14. Put a check in the Yes I want to do this checkbox (older versions: Yes, I'm sure), and click the Copy button. Now patiently watch the copy progress; it takes a long time (~20 hours for 4TB on a 3GHz Core 2 Duo). All of the contents of the old parity drive are being copied onto the new drive, then the remainder of the new parity drive will be zeroed.

    The array will NOT be available during this operation!

    If you disabled Docker and/or VMs in Settings earlier, go ahead and re-enable them now.

    When the copy completes, the array will still be stopped ("Stopped. Upgrading disk/swapping parity.").

    The Start button will now be present, and the description will now indicate that it is ready to start a Data-Rebuild.

  15. Put a check in the Yes I want to do this checkbox (older versions: Yes, I'm sure), and click the Start button. The data drive rebuild begins. Parity is now valid, and the array is started.

    Because the array is started, you can use the array as normal, but for best performance, we recommend you limit your usage.

    Once again, you can patiently watch the progress; it takes a long time too! All of the contents of the old data drive are now being reconstructed on what used to be your parity drive, but is now assigned as the replacement data drive.

That's it! Once done, you have an array with a larger parity drive and a replaced data drive that may also be larger!

Note: many users like to follow up with a parity check, just to check everything. It's a good confidence builder (although not strictly necessary)!

A disk failed while I was rebuilding another

If you only have a single parity device in your system and a disk failure occurs during a data-rebuild event, the data rebuild will be cancelled as parity will no longer be valid. However, if you have dual parity disks assigned in your array, you have options. You can either

  • let the first disk rebuild complete before starting the second, or
  • you can cancel the first rebuild, stop the array, replace the second failed disk, then start the array again

If the first disk being rebuilt is nearly complete, it's probably better to let that finish, but if you only just began rebuilding the first disk when the second disk failure occurred, you may decide rebuilding both at the same time is a better solution.

Removing disks

There may be times when you wish to remove drives from the system.

Removing parity disk(s)

If for some reason you decide you do not need the level of parity protection that you have in place then it is always possible to easily remove a parity disk.

  1. Stop the array.
  2. Set the slot for the parity disk you wish to remove to Unassigned.
  3. Start the array to commit the change and 'forget' the previously assigned parity drive.

CAUTION: If you already have any failed data drives in the array, be aware that removing a parity drive reduces the number of failed drives Unraid can handle without potential data loss.

  • If you started with dual parity you can still handle a single failed drive, but would not then be able to sustain another drive failing while trying to rebuild the already failed drive without potential data loss.
  • If you started with single parity you will no longer be able to handle any array drive failing without potential data loss.

Removing data disk(s)

Removing a disk from the array is possible, but normally requires you to once again sync your parity disk(s) after doing so. This means that until the parity sync completes, the array is vulnerable to data loss should any disk in the array fail.

To remove a disk from your array, perform the following steps:

  1. Stop the array
  2. (optional) Make a note of your disk assignments under the Main tab (for both the array and cache; some find it helpful to take a screenshot)
  3. Perform the Reset the array configuration procedure. When doing this it is a good idea to use the option to preserve all current assignments to avoid having to re-enter them (and possibly make a mistake doing so).
  4. Make sure all your previously assigned disks are there and set the drive you want removed to be Unassigned.
  5. Start the array without checking the 'Parity is valid' box.

A parity-sync will occur if at least one parity disk is assigned and until that operation completes, the array is vulnerable to data loss should a disk failure occur.

Alternative method

It is also possible to remove a disk without invalidating parity if special action is taken to make sure that the disk only contains zeroes, as a disk that is all zeroes does not affect parity. There is no support for this method built into the Unraid GUI, so it requires manual steps to carry out the zeroing process. It also takes much longer than the simpler procedure above.

There is no official support from Limetech for using this method, so you are doing it at your own risk.

Notes

  1. This method preserves parity protection at all times.
  2. This method can only be used if the drive to be removed is a good drive that is completely empty, is mounted and can be completely cleared without errors occurring.
  3. This method is limited to removing only one drive at a time (actually this is not technically true, but trying to do multiple drives in parallel is slower than doing them sequentially due to the contention that arises for updating the parity drive).
  4. As stated above, the drive must be completely empty as this process will erase all existing content. If there are still any files on it (including hidden ones), they must be moved to another drive or deleted.
    • One quick way to clear a drive of files is to reformat it! To format an array drive, you stop the array, and then on the Main page click on the link for the drive and change the file system type to something different than it currently is, then restart the array. You will then be presented with an option to format it. Formatting a drive removes all of its data, and the parity drive is updated accordingly, so the data cannot be easily recovered.
    • Explanatory note: "Since you are going to clear the drive anyway, why do I have to empty it? And what is the purpose of this strange clear-me folder?" Yes, it seems a bit draconian to require the drive to be empty since we're about to clear and empty it in the script, but we're trying to be absolutely certain we don't cause data loss. In the past, some users misunderstood the procedure, and somehow thought we would preserve their data while clearing the drive! This way, by requiring the user to remove all data, and then add an odd marker, there cannot be any accidents or misunderstandings and data loss.

The procedure is as follows:

  1. Make sure that the drive you are removing has been removed from any inclusions or exclusions for all shares, including in the global share settings.
  2. Make sure the array is started, with the drive assigned and mounted.
  3. Make sure you have a copy of your array assignments, especially the parity drive.
    • In theory you should not need this but it is a useful safety net in case the "Retain current configuration" option under New Config doesn't work correctly (or you make a mistake using it).
  4. It is highly recommended to turn on reconstruct write as the write method (sometimes called 'Turbo write'). With it on, the script can run 2 to 3 times as fast, saving hours!
    • However, when using 'Turbo Write' all drives must read without error, so do not use it unless you are sure no other drive is having issues.
    • To enable 'Turbo Write', in Settings → Disk Settings change Tunable (md_write_method) to reconstruct write
  5. Make sure ALL data has been copied off the drive; the drive MUST be completely empty for the clearing script to work.
  6. Double check that there are no files or folders left on the drive.
    • Note: one quick way to clean a drive is to reformat it! (once you're sure nothing of importance is left of course!)
  7. Create a single folder on the drive with the name clear-me - exactly 7 lowercase letters and one hyphen
  8. Run the clear an array drive script from the User Scripts plugin (or run it standalone, at a command prompt).
    • If you prepared the drive correctly, it will completely and safely zero out the drive. If you didn't prepare the drive correctly, the script will refuse to run, in order to avoid any chance of data loss.
    • If the script refuses to run, indicating it did not find a marked and empty drive, then very likely there are still files on your drive. Check for hidden files. ALL files must be removed!
    • Clearing takes a loooong time! Progress info will be displayed.
    • For best performance, make sure there are no reads/writes happening to the array. The easiest way to do this is to bring the array up in maintenance mode.
    • If running in User Scripts, the browser tab will hang for the entire clearing process.
    • While the script is running, the Main screen may show invalid numbers for the drive; ignore them. Important! Do not try to access the drive, at all!
  9. When the clearing is complete, stop the array
  10. Follow the procedure for resetting the array, making sure you elect to retain all current assignments.
  11. Return to the Main page, and check all assignments. If any are missing, correct them. Unassign the drive(s) you are removing. Double-check all of the assignments, especially the parity drive(s)!
  12. Click the check box for Parity is already valid; make sure it is checked!
  13. Start the array! Click the Start button then the Proceed button (on the warning popup that appears)
  14. (Optional) Start a correcting parity check to ensure parity really is valid and you did not make a mistake in the procedure. If everything was done correctly this should return zero errors.

Alternate Procedure steps for Linux proficient users

If you are happy to use the Linux command line then you can replace steps 7 and 8 by performing the clearing commands yourself at a command prompt. (Clearing takes just as long though!) If you would rather do that than run the script in steps 7 and 8, then here are the 2 commands to perform:

umount /mnt/diskX
dd bs=1M if=/dev/zero of=/dev/mdX status=progress

(where X in both lines is the number of the data drive being removed)

Important!!! It is VITAL you use the correct drive number, or you will wipe clean the wrong drive! That's why using the script is recommended, because it's designed to protect you from accidentally clearing the wrong drive.
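
Before unmounting and clearing, it is worth one last check from the console that the drive really is empty apart from the clear-me marker folder. A minimal sketch, again using X as the drive number:

  ls -la /mnt/diskX                  # should show only the clear-me folder
  find /mnt/diskX -type f | wc -l    # should print 0 - no files left, hidden or otherwise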

Checking array devices

Storage Management | Unraid Docs (10)

When the array is started, there is a button under Array Operations labelled Check. Depending on whether or not you have any parity devices assigned, one of two operations will be performed when clicking this button.

It is also possible to schedule checks to be run automatically at User-defined intervals under Settings → Scheduler. It is a good idea to do this as an automated check on array health so that problems can be noticed and fixed before the array can deteriorate beyond repair. Typical periods for such automated checks are monthly or quarterly and it is recommended that such checks should be non-correcting.

Parity check

If you have at least one parity device assigned, clicking Check will initiate a Parity-check. This will march through all data disks in parallel, computing parity and checking it against stored parity on the parity disk(s).

You can continue to use the array while a parity check is running, but the performance of any file operations will be degraded due to drive contention between the check and the file operation. The parity check will also be slowed while any such file operations are active.

By default, if an error is found during a Parity-check the parity disk will be updated (written) with the computed data and the Sync Errors counter will be incremented. If you wish to run purely a check without writing corrections, uncheck the checkbox that says Write corrections to parity before starting the check. In this mode, parity errors will be noted but not actually fixed during the check operation.

A correcting parity check is started automatically when starting the array after an "Unsafe Shutdown". An "Unsafe Shutdown" is defined as any time that the Unraid server was restarted without having previously successfully stopped the array. The most common cause of Sync Errors is an unexpected power-loss, which prevents buffered write data from being written to disk. It is highly recommended that users consider purchasing a UPS (uninterruptible power supply) for their systems so that Unraid can be set to shut down tidily on power loss, especially if frequent offsite backups aren't being performed.

It is also recommended that you run an automatic parity check periodically and this can be done under Settings → Scheduler. The frequency is up to the user, but monthly or quarterly are typical choices. It is also recommended that such a check is set as non-correcting, because if a disk is having problems there is a chance of corrupting your parity if you set such a check to be correcting. The only acceptable result from such a check is to have 0 errors reported. If you do have errors reported then you should take pre-emptive action to try and find out what is causing them. If in doubt, ask questions in the forum.

Read check

If you configure an array without any parity devices assigned, the Check option will start a Read check against all the devices in the array. You can use this to check disks in the array for unrecoverable read errors, but know that without a parity device, data may be lost if errors are detected.

A Read Check is also the type of check started if you have disabled drives present and the number of disabled drives is larger than the number of parity drives.

Check history

Any time a parity or read check is performed, the system will log the details of the operation and you can review them by clicking the History button under Array Operations. These are stored in a text file under the config directory on your Unraid USB flash device.
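
If you want to read that history outside the WebGUI, the log can be viewed directly from the console. The filename below is an assumption based on recent releases, so check your own config folder if it is not present:

  cat /boot/config/parity-checks.log   # one line per completed check (date, duration, speed, errors found)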

Spin up and down disks

If you wish to manually control the spin state of your rotational storage devices or toggle your SSD between active and standby mode, these buttons provide that control. Know that if files are in the process of being accessed while using these controls, the disk(s) in use will remain in an active state, ignoring your request.

When disks are in a spun-down state, they will not report their temperature through the WebGUI.

Reset the array configuration

If you wish to remove a disk from the array or you simply wish to start from scratch to build your array configuration, there is a tool in Unraid that will do this for you. To reset the array configuration, perform the following steps:

  1. Navigate to the Tools page and click New Config
  2. You can (optionally) elect to have the system preserve some of the current assignments while resetting the array. This can be very useful if you only intend to make a small change as it avoids having to re-enter the details of the disks you want to leave unchanged.
  3. Click the checkbox confirming that you want to do this and then click Apply to perform the operation
  4. Return to the Main tab and your configuration will have been reset
  5. Make any adjustments to the configuration that you want.
  6. Start the array to commit the configuration. You can start in Normal or Maintenance mode.

Notes:

  • Unraid will recognize if any drives have been previously used by Unraid, and when you start the array as part of this procedure the contents of such disks will be left intact.
  • There is a checkbox next to the Start button that you can use to say 'Parity is Valid'. Do not check this unless you are sure it is the correct thing to do, or unless advised to do so by an experienced Unraid user as part of a data recovery procedure.
  • Removing a data drive from the array will always invalidate parity unless special action has been taken to ensure the disk being removed only contains zeroes.
  • Reordering disks after doing the New Config without removing drives does not invalidate parity1, but it DOES invalidate parity2.

Undoing a reset

If for any reason after performing a reset, you wish to undo it, perform the following steps:

  1. Browse to your flash device over the network (SMB)
  2. Open the Config folder
  3. Rename the file super.old to super.dat
  4. Refresh the browser on the Main page and your array configuration will be restored
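
The same rename can also be done from the Unraid console instead of over the network, since the flash device is mounted at /boot. A minimal sketch, assuming the backup file is present:

  ls -l /boot/config/super.*                         # confirm super.old exists
  mv /boot/config/super.old /boot/config/super.dat   # restore the previous array configuration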

Notifications

TBD

Status Reports

Unraid can be configured to send you status reports about the state of the array.

Some important points about these reports:

  • They only tell you if the array currently has any disks disabled or showing read/write errors.
  • The status is reset when you reboot the system, so it does not tell you what the status was in the past.
  • IMPORTANT: The status report does not take into account the SMART status of the drive. You can therefore get a status report indicating that the array appears to be healthy even though the SMART information might indicate that a disk might not be too healthy.

SMART Monitoring

Unraid can be configured to report whether SMART attributes for a drive are changing. The idea is to try and tell you in advance if drives might be experiencing problems, even though they have not yet caused read/write errors, so that you can take pre-emptive action before a problem becomes serious and thus might potentially lead to data loss. You should have notifications enabled so that you can see these notifications even when you are not running the Unraid GUI.

SMART monitoring is currently only supported for SATA drives and is not available for SAS drives.

Which SMART attributes are monitored can be configured by the user, but the default ones are:

  • 5: Reallocated Sectors count
  • 187: Reported uncorrected errors
  • 188: Command timeout
  • 197: Current Pending Sector Count
  • 198: Uncorrectable sector count
  • 199: UDMA CRC error count

If any of these attributes change value then this will be indicated on the Dashboard by the icon against the drive turning orange. You can click on this icon and a menu will appear that allows you to acknowledge that you have seen the attribute change, and then Unraid will stop telling you about it unless it changes again.

You can manually see all the current SMART information for a drive by clicking on its name on the Main tab in the Unraid GUI.
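The same information is also available from a console session via the smartctl utility that Unraid ships with. A minimal sketch (replace sdX with the device identifier shown on the Main tab):

smartctl -a /dev/sdX    # full SMART report for the drive
smartctl -A /dev/sdX    # just the attribute table (includes IDs 5, 187, 188, 197, 198, 199)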

Prior to Unraid 6.9.0 there was only one pool supported and it was always called cache. Starting with Unraid 6.9.0 multiple pools are supported and the names of these pools are user defined. When multiple pools are present then any (or all) of them can have the functionality that was available with the cache in earlier Unraid releases.

If you are running Unraid 6.9.0 or later then any reference you find in documentation to cache can be considered as applying to any pool, not just one that is actually named cache.

Why use a Pool?

There are several reasons why a user might want to use a pool in Unraid.

It is worth pointing out that these uses are not mutually exclusive as a single pool can be used for multiple Use Cases.

Unraid 6.9 (or later) also supports multiple pools so it is possible to have individual pools dedicated to specific Use Cases.

Cache

The way that Unraid handles parity means that the speed of writing to a parity protected array is lower than might be expected from the raw speed of the array disks. If a pool is configured to act as a cache for a User Share then the perceived speed of writing to the array is that supported by the pool rather than the speed of writing directly to the array.

A particular User Share can only be associated with one pool at a time, but it is not necessary for all User Shares to be associated with the same pool.

Docker application Storage

Docker containers basically consist of 2 parts: the binaries, which are typically stored within the docker.img file, are static and only updated when the container updates; and the working set, which is meant to be mapped external to the docker container (typically as a container specific subfolder within the appdata share). There are good reasons to hold both categories on a Pool:

  • Writes are much faster than when held on the array as they are not slowed down by the way in which Unraid updates parity for a parity protected array
  • The working set can be accessed and updated faster when stored on a Pool.
  • It is not necessary to have array disks spun up when a container is accessing its binaries or using its working set.

VM vdisks

Most VMs will have one (or more) vdisk files used to emulate a hard disk or ISO files to emulate a CD-ROM.

Performance of VMs is much better if such files are on a Pool rather than on an array drive.

Pool Modes

There are two primary modes of operating a pool in Unraid:

Single device mode

When the number of disk slots for the pool is set to one, this is referred to as running in single device mode. In this mode, you will have no protection for any data that exists on the pool, which is why multi-device mode is recommended. However, unlike in multi-device mode, while in single device mode you are able to adjust the filesystem for the cache device to something other than BTRFS. It is for this reason that there are no special operations for single mode. You can only add or remove the device from the system.

NOTE: If you choose to use a non-BTRFS file system for your pool device operating in single mode, you will not be able to expand to a multi-device pool without first reformatting the device with BTRFS. It is for this reason that BTRFS is the default filesystem for a pool, even when operating in single device mode.

Multi-Device mode

When more than one disk is assigned to the pool, this is referred to as running in multi-device mode. This mode utilizes a BTRFS specific implementation of RAID 1 in order to allow for any number of devices to be grouped together in a pool. Unlike a traditional RAID 1, a BTRFS RAID1 can mix and match devices of different sizes and speeds and can even be expanded and contracted as your needs change. To calculate how much capacity your BTRFS pool will have, check out this handy btrfs disk usage calculator. Set the Preset RAID level to RAID-1, select the number of devices you have, and set the size for each. The tool will automatically calculate how much space you will have available.
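As a rough rule of thumb, a two-device BTRFS RAID1 pool gives you the capacity of the smaller device; for example a 500GB and a 1TB device together yield about 500GB of protected space. For an existing pool you can see the actual allocation from a console session with a command of the form (assuming the pool is mounted at /mnt/cache):

btrfs filesystem usage /mnt/cache    # overall and per-device space allocation for the pool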

Here are typical operations that you are likely to want to carry out on the pool:

  • Back up the pool to the array
  • Switch the pool to run in multi-device mode
  • Add disks
  • Replace a disk

Backing up the pool to the array

The procedure shown assumes that there are at least some Docker and/or VM related files on the cache disk; some of these steps are unnecessary if there aren't.

  1. Stop all running Dockers/VMs
  2. Settings → VM Manager: disable VMs and click apply
  3. Settings → Docker: disable Docker and click apply
  4. Click on Shares and change to "Yes" all User Shares with "Use cache disk:" set to "Only" or "Prefer"
  5. Check that there's enough free space on the array and invoke the mover by clicking "Move Now" on the Main page
  6. When the mover finishes check that your pool is empty

Note that any files on the pool root will not be moved as they are not part of any share and will need manual attention; the commands below show one way to spot them.
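A minimal sketch for checking what is left on the pool from a console session, assuming the pool is named cache and therefore mounted at /mnt/cache:

find /mnt/cache -maxdepth 1 -type f    # loose files sitting in the pool root (mover will not touch these)
du -sh /mnt/cache/* 2>/dev/null        # how much data remains under each top-level folder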

You can then later restore files to the pool by effectively reversing the above steps:

  1. Click on all shares whose content you want on the pool and set the "Use cache:" option to "Only" or "Prefer" as appropriate.
  2. Check that there's enough free space on the pool and invoke the mover by clicking "Move Now" on the Main page
  3. When the mover finishes check that your pool now has the expected content and that the shares in question no longer have files on the main array
  4. Settings → Docker: enable Docker and click apply
  5. Settings → VM Manager: enable VMs and click apply
  6. Start any Dockers/VMs that you want to be running

Switching the pool to multi-device mode

If you want a multi-device pool then the only supported format for this is BTRFS. If it is already in BTRFS format then you can follow the procedure below for adding an additional drive to a pool.

If the cache is NOT in BTRFS format then you will need to do the following:

  1. Use the procedure above for backing up any existing content you want to keep to the array.
  2. Stop the array
  3. Click on the pool on the Main tab and change the format to be BTRFS
  4. Start the array
  5. The pool should now show as unmountable and offer the option to format the pool.
  6. Confirm that you want to do this and click the format button
  7. When the format finishes you now have a multi-device pool (albeit with only one drive in it)
  8. If you want additional drives in the pool you can (optionally) add them now.
  9. Use the restore part of the previous procedure to restore any content you want on the pool

Adding disks to a pool

Notes:

  • You can only do this if the pool is already formatted as BTRFS

If it is not then you will need to first follow the steps in the previous section to create a pool in BTRFS format.

To add disks to the BTRFS pool perform the following steps:

  1. Stop the array.
  2. Navigate to the Main tab.
  3. Scroll down to the section labeled Pool Devices.
  4. Change the number of Slots to be at least as many as the number of devices you wish to assign.
  5. Assign the devices you wish to the pool slot(s).
  6. Start the array.
  7. Click the checkbox and then the button under Array Operations to format the devices.

Make sure that the devices shown are those you expect - you do not want to accidentally format a device that contains data you want to keep.

Removing disks from a multi-device pool

Notes:

  • You can only do this if your pool is configured for redundancy at both the data and metadata level.
  • You can check what raid level your pool is currently set to by clicking on it on the Main tab and scrolling down to the Balance Status section.
  • You can only remove one drive at a time
  1. Stop the array
  2. Unassign a pool drive.
  3. Start the array
  4. Click on the pool drive
  5. If you still have more than one drive in the pool then you can simply run a Balance operation
  6. If you only have one drive left in the pool then switch the pool RAID level to single as described below

Change Pool RAID Levels

BTRFS can add and remove devices online, and freely convert between RAID levels after the file system has been created.

BTRFS supports raid0, raid1, raid10, raid5, and raid6 (but note that raid5/6 are still considered experimental so use with care, i.e. make sure you have good backups if using these modes), and it can also duplicate metadata or data on a single spindle or multiple disks. When blocks are read in, checksums are verified. If there are any errors, BTRFS tries to read from an alternate copy and will repair the broken copy if the alternative copy succeeds.

By default, Unraid creates BTRFS volumes in a pool with data=raid1 and metadata=raid1 to give redundancy.

For more information about the BTRFS options when using multiple devices see the BTRFS wiki article.

You can change the BTRFS raid levels for a pool from the Unraid GUI by:

  • If the array is not started then start it in normal mode
  • Click on the Pool name on the Main tab
  • Scroll down to the Balance section
  • At this point information (including the current RAID levels) will be displayed.
  • If using Unraid 6.8.3 or earlier then add the appropriate additional parameters to the Options field.

As an example, the following screenshot shows how you might convert the pool from the RAID1 to the SINGLE profile.

Storage Management | Unraid Docs (13)

  • If using Unraid 6.9.0 or later this has been made even easier by giving you a drop-down list of the available options so you can simply select the one you want
  • Start the Balance operation.
  • Wait for the Balance to complete
  • The new RAID level will now be fully operational.
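The same conversion can also be run from a console session with the btrfs balance command. This is a minimal sketch; the pool mount point /mnt/cache and the target profiles are only examples, so adjust them to your setup:

# convert both data and metadata to the raid1 profile
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# convert back to the single profile (e.g. before reducing the pool to one device)
btrfs balance start -dconvert=single -mconvert=single /mnt/cache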

Replace a disk in a pool

Notes:

  • You can only do this if the pool is formatted as BTRFS AND it is set up to be redundant.
  • You can only replace up to one disk at a time from a pool.

To replace a disk in the redundant pool, perform the following steps:

  1. Stop the array.
  2. Physically detach the disk you wish to remove from your system.
  3. Attach the replacement disk (must be equal to or larger than the disk being replaced).
  4. Refresh the Unraid WebGUI when under the Main tab.
  5. Select the pool slot that previously was set to the old disk and assign the new disk to the slot.
  6. Start the array.
  7. If presented with an option to Format the device, click the checkbox and button to do so.

Remove a disk from a pool

There have been times when users have indicated they would like to remove a disk from a pool they have set up while keeping all the data intact. This cannot be done from the Unraid GUI but is easy enough to do from the command line in a console session.

Note: You need to maintain the minimum number of devices for the profile in use, i.e., you can remove a device from a 3+ device raid1 pool but you can't remove one from a 2 device raid1 pool (unless it's converted to a single profile first). Also make sure the remaining devices have enough space for the currently used pool space, or the removal will fail.

With the array running, type the following at the console:

btrfs device remove /dev/sdX1 /mnt/cache

Replace X with the correct letter for the drive you want to remove from the system as shown on the Main tab (don't forget the 1 after it).

If the device is encrypted, you will need slightly different syntax:

btrfs device remove /dev/mapper/sdX1 /mnt/cache

If the drive is an NVMe device, use nvmeXn1p1 in place of sdX1

Wait for the device to be deleted (i.e., until the command completes and you get the cursor back).

The device is now removed from the pool. You don't need to stop the array now, but at the next array stop you need to make Unraid forget the now-deleted member, and to achieve that:

  • Stop the array
  • Unassign all pool devices
  • Start the array to make Unraid "forget" the pool config

If the Docker and/or VM services were using that pool it is best to disable those services before starting, or Unraid will recreate the images somewhere else (assuming they are using /mnt/user paths)

  • Stop array (re-enable docker/VM services if disabled above)
  • Re-assign all pool members except the removed device
  • Start array

Done

You can also remove multiple devices with a single command (as long as the above rule is observed):

btrfs device remove /dev/sdX1 /dev/sdY1 /mnt/cache

but in practice this does the same as removing one device, then the other, as they are still removed one at a time, just one after the other with no further input from you.
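Whichever form you use, you can confirm which devices remain in the pool before going through the stop/re-assign steps above. A minimal sketch, assuming the pool is mounted at /mnt/cache:

btrfs filesystem show /mnt/cache    # lists the devices currently in the pool
btrfs device usage /mnt/cache       # per-device allocation within the pool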

Minimum Free Space for a Pool

This setting is used to help avoid the issue of a pool that is being used for a User Share running out of free space and this then causing errors to occur. The Minimum Free Space setting for a pool tells Unraid when to stop putting new files onto the pool for User Shares that have a Use Cache setting of Yes or Prefer.

Unraid does not take into account file size when selecting a pool, and once Unraid has selected a pool for a file it will not change its mind, so if the file does not fit you get an out-of-space error. The purpose of the Minimum Free Space value is that when the free space falls below the level you set, Unraid will start bypassing the pool and writing directly to the array for any new files. You should therefore set this setting to be larger than the biggest file you intend to be cached on the pool (for example, if you expect to write 40GB files to a cached share, a Minimum Free Space of at least 50GB would be a sensible choice).

In many ways it is analogous to the setting of the same name for User Shares, but it applies to the pool rather than the array disks. It is ignored for User Shares which have a Use Cache setting of Only, and is not relevant if the setting is No.

For Unraid 6.8.3 (and earlier), which only supported a single pool (that was always called cache), this setting can be found under Settings → Global Share Settings.

For Unraid 6.9.0 (and later), which supports multiple pools (with the names being user defined), this setting can be found by clicking on the pool name on the Main tab.

Moving files between a Pool and the array

A topic that seems to come up with some frequency is the process for moving files that belong to shares (e.g. appdata, system), which are normally recommended to be held on a pool device for performance reasons, to or from the array if the need arises.

Moving files from pool to array

A typical Use Case for this action is to get files off the pool so that you can safely perform some action that you are worried might end up losing any existing contents. The steps are:

  • Disable the Docker and VM services under Settings. This is done to ensure that they will not hold any files open as that would stop mover from being able to move them.
  • Go to the Shares tab and for each share you want to be moved from the cache to the array ensure that the Use Cache setting is set to Yes.
  • Go to the Main tab and manually start the mover process so that it starts transferring the files from the cache to the array.
  • When mover completes the files should now be on the array. You can validate there are no files left behind by clicking on the 'folder' icon at the right side of the cache entry on the Main tab.

Moving files from array to pool

The commonest Use Cases for this are when you have either used the above steps to get files off the cache and now want them back there, or when you have newly added a cache drive and want the files for selected shares (typically appdata and system) to be moved to the cache. The steps are:

  • Disable the Docker and VM services under Settings. This is done to ensure that they will not hold any files open as that would stop mover from being able to move them.
  • Go to the Shares tab and for each share you want to be moved from the array to the cache ensure that the Use Cache setting is set to Prefer.
  • Go to the Main tab and manually start the mover process so that it starts transferring the files from the array to the cache (mover can also be started from the console, as shown after this list).
  • When mover completes the files should now be on the cache.
  • Re-enable the Docker and/or VM services under Settings (if you use them).
  • (optional) Go to the Shares tab and for each share you want all files to always be on the cache set the Use Cache setting to Only to stop any new files for this share being created on the array in the future.
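For reference, mover is an ordinary script shipped with Unraid (at /usr/local/sbin/mover), so the manual-start steps above can also be performed from a console session; the exact invocation may vary slightly between releases:

mover    # start a mover run now, equivalent to pressing "Move Now" / "Move" on the Main tab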

Multiple Pools

As of version 6.9, you can create multiple pools and manage them independently. This feature permits you to define up to 35 named pools, each with up to 30 storage devices. Pools are created and managed via the Main page.

  • Note: A pre-6.9.0 cache disk/pool is now simply a pool named "cache". When you upgrade a server which has a cache disk/pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of config/disk.cfg and into a new file, config/pools/cache.cfg. If you later revert back to a pre-6.9.0 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache. As long as you reassign the correct devices, data should remain intact.

When you create a user share or edit an existing user share, you can specify which pool should be associated with that share. The assigned pool functions identically to the current cache pool operation.

Something to be aware of: when a directory listing is obtained for a share, the Unraid array disk volumes and all pools which contain that share are merged in this order:

  1. the pool assigned to the share
  2. disk1 through disk28, in order
  3. all the other pools, in strverscmp() order

A single-device pool may be formatted with either xfs, btrfs, or (deprecated) reiserfs. A multiple-device pool may only be formatted with btrfs.

Note: Something else to be aware of: let's say you have a 2-device btrfs pool. This will be what btrfs calls "raid1" and what most people would understand to be "mirrored disks". Well, this is mostly true in that the same data exists on both disks but not necessarily at the block-level. Now let's say you create another pool, and what you do is un-assign one of the devices from the existing 2-device btrfs pool and assign it to this new pool. Now you have two single-device btrfs pools. Upon array Start a user might understandably assume there are now two pools with exactly the same data. However, this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will do a wipefs on that device so that upon mount it will not be included in the old pool. This of course effectively deletes all the data on the moved device.

Moving files between pools

There is no built-in support for moving files between pools. If you want to do this you can use the mover application, applying the techniques described earlier, in two steps:

  • Move the files from pool1 to the main array
  • Move the files from the array to pool2

The alternative is to do it manually, in which case you can move files directly between the pools, for example with rsync as sketched below.

Do not forget that if any of the files belong to a Docker container and/or VM then the services must be disabled for the files to be moved successfully.
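A minimal sketch of a manual copy between two pools from a console session; pool1 and pool2 are placeholder pool names and appdata is just an example share. Verify the copy before deleting anything from the source pool:

# copy the share from one pool to the other, preserving permissions and timestamps
rsync -avh /mnt/pool1/appdata/ /mnt/pool2/appdata/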

Selecting a File System type

Each array drive in an Unraid system is set up as a self-contained filesystem. Unraid currently supports the following file system types:

  • XFS: This is the default format for array drives on a new system. It is a well-tried Linux file system and deemed to be the most robust.

    • XFS is better at recovering from file system corruption than BTRFS or ZFS (which can happen after unclean shutdowns or system crashes).
    • If used on an array drive then each XFS format drive is a single self-contained file system.
  • ZFS: This is a newer file system introduced with Unraid 6.12 that supports advanced features not available with XFS.

    • It supports detecting file content corruption (often colloquially known as bit-rot) by internally using checksum techniques
    • If used on array drives then each ZFS format drive is an individual free-standing ZFS file system.
    • It can support a single file system spanning multiple drives. Normally each drive would be of the same size, but if not then only the amount of space equivalent to that on the smallest drive will be used.
    • In multi-drive mode various levels of RAID can be supported. The default in Unraid for a cache pool is RAID1 so that data is stored redundantly to protect against drive failure.
    • It is an option supported when using a cache pool spanning multiple drives that need to run as a single logical drive as this needs the multi-drive support.
    • In multi-drive mode in the cache pool the usable space is always a multiple of the smallest drive (if they are not the same size).
    • It is thought to be better at recovering from file system corruption than BTRFS, although not as good as XFS.
  • BTRFS: This is a newer file system that supports advanced features not available with XFS. It is considered not quite as stable as XFS but many Unraid users have reported it seems as robust as XFS when used on array drives where each drive is a self-contained file system. Some of its features are:

    • It supports detecting file content corruption (often colloquially known as bit-rot) by internally using checksum techniques.
    • If used on array drives then each BTRFS format drive is an individual free-standing BTRFS file system.
    • It can support a single file system spanning multiple drives, and in such a case it is not necessary that the drives all be of the same size. It is better than ZFS at making use of available space in a multi-drive pool where the drives are of different sizes.
    • In multi-drive mode various levels of RAID can be supported (although these are a BTRFS specific implementation and not necessarily what one expects). The default in Unraid for a cache pool is RAID1 so that data is stored redundantly to protect against drive failure.
    • It is an option supported when using a cache pool spanning multiple drives that need to run as a single logical drive as this needs the multi-drive support.
    • In multi-drive mode in the cache pool the available space is always a multiple of the smallest drive size.
  • ReiserFS: This is supported for legacy reasons for those migrating from earlier versions of Unraid where it was the only supported file system type.

    • There is only minimal involvement from Linux kernel developers in maintaining the ReiserFS drivers on new Linux kernel versions, so the chance of a new kernel causing problems with ReiserFS is higher than for other Linux file system types.
    • A ReiserFS file system has a hard limit of 16TB, and commercial grade hard drives have now reached this size.
    • Write performance can degrade significantly as the file system starts getting full.
    • It is extremely good at recovering from even extreme levels of file system corruption.
    • It is now deprecated for use with Unraid and should not be used by new users. Support for ReiserFS is due to be removed from the Linux kernel by 2025 and at that point Unraid will also likely stop supporting ReiserFS, so existing users should be looking to move off ReiserFS in their Unraid system.

These formats are standard Linux formats and as such any array drive can easily be removed from the array and read on any Linux system. This can be very useful in any data recovery scenario. Note, however, that the initial format needs to be done on the Unraid system as Unraid has specific requirements around how the disk is partitioned that are unlikely to be met if the partitioning is not done on Unraid. Unfortunately, these formats cannot be read as easily on Windows or macOS systems, as those OSes do not recognize the file system formats without additional software being installed that is not freely obtainable.

A user can use a mixture of these file system types in their Unraid system without it causing any specific issues. In particular, the Unraid parity system is file system agnostic as it works at the physical sector level and is not even aware of the file system that is in use on any particular drive.

In addition drives can be encrypted. A point to note about using encryption is that if you get any sort of file system corruption then encryption can make it harder (and sometimes impossible) to recover data on the corrupted file system.

If using a cache pool (i.e., multiple drives) then the supported types are BTRFS or ZFS and the pool is formatted as a single entity. By default, this will be a version of RAID1 to give redundancy, but other options can be achieved by running the appropriate btrfs command.

Additional file formats are supported by the Unassigned Devices and Unassigned Devices Plus plugins. These can be useful when you have drives that are to be used for transfer purposes, particularly to systems that do not support standard Linux formats.

Setting a File System type

The File System type for a new drive can be set in 2 ways:

  1. Under Settings → Disk Settings the default type for array drives and the cache pool can be set.
    • On a new Unraid system this will be XFS for array drives and BTRFS for the cache.
  2. Explicitly for individual drives by clicking on a drive on the Main tab (with the array stopped) and selecting a type from those offered.
    • When a drive is first added the file system type will show as auto, which means use the setting specified under Settings → Disk Settings.
    • Setting an explicit type over-rides the global setting
    • The only supported format for a cache containing more than one drive is BTRFS.

Creating a File System (Format)

Before a disk can be used in Unraid, an empty file system of the desired type needs to be created on the disk. This is the operation commonly known as "format" and it erases any existing content on the disk.

WARNING:

If a drive has already been formatted by Unraid then if it now shows as unmountable you probably do NOT want to format it again unless you want to erase its contents. In such cases, the appropriate action is usually instead to use the File System check/repair process detailed later.

The basic process to format a drive once the file system type has been set is:

  • Start the array
  • Any drives where Unraid does not recognize the format will be shown as unmountable and there will be an option to format unmountable drives
  • Check that ALL the drives shown as unmountable are ones you want to format. You do not want to accidentally format another drive and erase its contents
  • Click the check box to say you really want to format the drive.
  • Carefully read the resulting dialog that outlines the consequences
  • The Format button will now be enabled so if you want to go ahead with the format click on it.
  • The format process will start running for the specified disks.
    • If the disk has not previously been used by Unraid then it will start by rewriting the partition table on the drive to conform to the standard Unraid expects.
  • The format should only take a few minutes but if the progress does not automatically update you might need to refresh the Main tab.

Once the format has completed the drive is ready to start being used to store files.

Drive shows as unmountable

A drive can show as unmountable in the Unraid GUI for two reasons:

  • The disk has never been used in Unraid and you have just added it to a new disk slot in the array. In this case, you want to follow the format procedure shown above to create a new empty file system on the drive so it is ready to receive files.

  • File system corruption has occurred. This means that the file system driver has noticed some inconsistency in the file system control structures. This is not infrequent if a write to a disk fails for any reason and Unraid marks the disk as disabled, although it can occur at other times as well.

Note: If a disk is showing as both unmountable and disabled (has a red 'x' against it in the GUI) then the check/repair process can be carried out on the disk that is being 'emulated' by Unraid prior to carrying out any rebuild process. It is always worth doing the repair before any rebuild as, if a disk is showing as unmountable while being emulated, then it will also show as unmountable after the rebuild (all the rebuild process does is make the physical disk match the emulated one). The process for repairing a file system is much faster than the rebuild process so there is not much point in wasting time on a rebuild if the repair is not going to work. Also, if there are any problems running the repair process on the emulated disk then the physical disk is still untouched, giving a fall-back data recovery path.

IMPORTANT: You do not want to format the drive in this case as this will write an empty file system to the drive and update parity accordingly, and you would therefore lose the contents of the drive.

It is worth noting that an unmountable disk caused by file system corruption is not something that can be repaired using the parity drive, as it is basically not a result of a write to a disk failing but of incorrect data being written (apparently successfully) to the data drive and parity updated accordingly. Such corruption can be due to either a software issue, or something like bad RAM corrupting the in-memory data before it is written.

The file system has a level of redundancy in the control structures so it is normally possible to repair the damage that has been detected. Therefore when you have an unmountable disk caused by file system corruption you want to use the file system check/repair process documented below to get the disk back into a state where you can mount it again and see all its data.

If you are at all unsure of the best way to proceed it is often a good idea to make a post in the forums and attach your system's diagnostics zip file (obtained via Tools → Diagnostics) so you can get feedback on your issue.

Checking a File System

If a disk that was previously mounting fine suddenly starts showing as unmountable then this normally means that there is some sort of corruption at the file system level. This most commonly occurs after an unclean shutdown but could happen any time a write to a drive fails or if the drive ends up being marked as disabled (i.e. with a red 'x' in the Unraid GUI). If the drive is marked as disabled and being emulated then the check is run against the emulated drive and not the physical drive.

IMPORTANT: At this point, the Unraid GUI will be offering an option to format unmountable drives. This will erase all content on the drive and update parity to reflect this, making recovering the data impossible/very difficult, so do NOT do this unless you are happy to lose the contents of the drive.

To recover from file system corruption one needs to run the tool that is appropriate to the file system on the disk. Points that users new to Unraid often misunderstand are:

  • Rebuilding a disk does not repair file system corruption
  • If a disk is showing as being emulated then the file system check and/or repair are run against the emulated drive and not the physical drive.

Preparing to test

The first step is to identify the file system of the drive you wish to test or repair. If you don't know for sure, then go to the Main page of the WebGUI, and click on the name of the drive (Disk 3, Cache, etc). Look for File system type, and you will see the file system format for your drive (should be xfs, btrfs or reiserfs).

If the file system is XFS or ReiserFS then you must start the array in Maintenance mode, by clicking the Maintenance mode check box before clicking the Start button. This starts the Unraid driver but does not mount any of the drives.

If the file system is BTRFS, then frequently you want to run a scrub rather than a repair, as that both checks the BTRFS file system and can also fix many BTRFS errors. A scrub operation is run with the array started in Normal mode and NOT in Maintenance mode. If you want to run a repair then you will need to start the array in Maintenance mode.

Note: Details will need to be added for ZFS file systems after Unraid 6.12 is released with ZFS support built in.

Running the Test using the WebGUI

The process for checking a file system using the Unraid GUI is as follows:

  1. Make sure that you have the array started in the correct mode. If necessary stop the array and restart in the correct mode by clicking/unclicking the Maintenance Mode checkbox next to the Start button.
  2. From the Main screen of the WebGUI, click the name of the disk that you want to test or repair. For example, if the drive of concern is Disk 5, then click on Disk 5. If it's the Cache drive, then click on Cache. If in Maintenance mode then the disks will not be mounted, but the underlying /dev/mdX type devices that correspond to each diskX in the Unraid GUI will have been created. This is important as any write operation against one of these 'md' type devices will also update parity to reflect that the write has happened.
  3. You should see a page of options for that drive, beginning with various partition, file system format, and spin down settings.
  4. The section following that is the one you want, titled Check Filesystem Status. There is a box with the 2 words Not available in it. This is the command output box, where the progress and results of the command will be displayed. Below that is the Check button that starts the test, followed by the options box where you can type in options for the test/repair command.
  5. The tool that will be run is shown and the status at this point will show as Not available. The Options field may include a parameter that causes the selected tool to run in check-only mode so that the underlying drive is not actually changed. For more help, click the Help button in the upper right.
  6. Click on the Check button to run the file system check
  7. Information on the check progress is now displayed. You may need to use the Refresh button to get it to update.
  8. If you are not sure what the results of the check mean you should copy the progress information so you can ask a question in the forum. When including this information as part of a forum post mark it as code (using the <?> icon) to preserve the formatting as otherwise it becomes difficult to read.

Running the Test using the command line

XFS and ReiserFS

You can run the file system check from the command line for ReiserFS and XFS as shown below if the array is started in Maintenance mode, by using a command of the form:

xfs_repair -v /dev/mdX

or

reiserfsck -v /dev/mdX

where X corresponds to the diskX number shown in the Unraid GUI. Using the /dev/mdX type device will maintain parity. If the file system to be repaired is an encrypted XFS one then the command needs to be modified to use the /dev/mapper/mdX device.

If you ever need to run a check on a drive that is not part of the array, or if the array is not started, then you need to run the appropriate command from a console/terminal session. As an example, for an XFS disk you would use a command of the form:

xfs_repair -v /dev/sdX1

where X corresponds to the device identifier shown in the Unraid GUI. Points to note are:

  • The value of X can change when Unraid is rebooted so make sure it is correct for the current boot
  • Note the presence of the '1' on the end to indicate the partition to be checked.
  • The reason for not doing it this way on array drives is that although the disk would be repaired, parity would be invalidated, which can reduce the chances of recovering a failed drive until valid parity has been re-established.
  • If you run this form of the command on an array disk you will invalidate parity, so it is not recommended except in exceptional circumstances.

BTRFS

A BTRFS file system will automatically check the data as part of reading it, so often there is no need to explicitly run a check. If you do need to run a check you do it with the array started in Normal mode using the scrub command that is covered in more detail in the Scrub section.

You can run the file system check from the command line for BTRFS as shown below if the array is started in Maintenance mode, by using commands of the form:

btrfs check --readonly /dev/mdX1

where X corresponds to the diskX number shown in the Unraid GUI. Using the /dev/mdX type device will maintain parity. If the file system to be repaired is an encrypted one then the command needs to be modified to use the /dev/mapper/mdX device.

If you ever need to run a check on a drive that is not part of the array, if the array is not started, or if the disk is part of a pool, then you need to run the appropriate command from a console/terminal session. As an example you would use a command of the form:

btrfs check --readonly /dev/sdX1

for pools which are outside the Unraid parity scheme,

where X corresponds to the device identifier shown in the Unraid GUI. Points to note are:

  • The value of X can change when Unraid is rebooted so make sure it is correct for the current boot
  • Note the presence of the '1' on the end to indicate the partition to be checked.
  • The reason for not doing it this way on array drives is that although the disk would be repaired, parity would be invalidated, which can reduce the chances of recovering a failed drive until valid parity has been re-established.
  • If you run this form of the command on an array disk you will invalidate parity, so it is not recommended except in exceptional circumstances.

ZFS

This section should be completed once Unraid 6.12 has been released with ZFS support included as a standard feature.

Repairing a File System

You typically run this just after running a check as outlined above, but if skipping that, follow steps 1-4 of the check procedure to get to the point of being ready to run the repair. It is a good idea to enable the Help built into the GUI to get more information on this process.

If the drive is marked as disabled and being emulated then the repair is run against the emulated drive and not the physical drive. It is frequently done before attempting to rebuild a drive as it is the contents of the emulated drive that is used by the rebuild process.

  1. Remove any parameters from the Options field that would cause the tool to run in check-only mode.
  2. Add to the Options field any additional parameters that were suggested by the check phase. If not sure then ask in the forum.
    • The Help built into the GUI can provide guidance on what options might be applicable.
  3. Press the Check button to start the repair process. You can now periodically use the Refresh button to update the progress information.
  4. If the repair does not complete for any reason then ask in the forum for advice on how best to proceed if you are not sure.
    • If repairing an XFS formatted drive then it is quite normal for the xfs_repair process to give you a warning saying you need to provide the -L option to proceed. Despite this ominous warning message this is virtually always the right thing to do and does not result in data loss.
    • When asking a question in the forum and including the output from the repair attempt as part of your post, mark it as code (using the <?> icon) to preserve the formatting as otherwise it becomes difficult to read.
  5. If the repair completes without error then stop the array and restart in normal mode. The drive should now mount correctly.

After running a repair you may well find that a lost+found folder is created on the drive with files/folders with cryptic names (this will then show as a User Share of the same name). These are folders/files for which the repair process could not determine the name. If you have good backups then it is often not worth trying to sort out the contents of the lost+found folder; instead restore from the backups. If you really need to sort out the contents then the Linux file command can be used on a file to help determine what kind of data is in the file so you can open it. If there is a lot of content in lost+found it may not be worth the trouble unless it is important.
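As a quick sketch of how the file command can be used from a console session (disk3 is just an example; adjust the path to the drive that was repaired):

file /mnt/disk3/lost+found/*    # report the detected type of each recovered file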

If at any point you do not understand what is happening then ask in theforum.

Preparing to repair

If you are going to repair a BTRFS, XFS or ReiserFS file system then you always want the array to be started in Maintenance mode.

Running the Repair using the WebGUI

XFS and ReiserFS

The process for repairing a file system using the Unraid GUI is as follows:

  1. Make sure that you have the array started in the correct mode. If necessary stop the array and restart in the correct mode by clicking/unclicking the Maintenance Mode checkbox next to the Start button.
  2. From the Main screen of the WebGUI, click the name of the disk that you want to test or repair. For example, if the drive of concern is Disk 5, then click on Disk 5. If it's the Cache drive, then click on Cache. If in Maintenance mode then the disks will not be mounted, but the underlying /dev/mdX type devices that correspond to each diskX in the Unraid GUI will have been created. This is important as any write operation against one of these 'md' type devices will also update parity to reflect that the write has happened.
  3. You should see a page of options for that drive, beginning with various partition, file system format, and spin down settings.
  4. The section following that is the one you want, titled Check Filesystem Status. There is a box with the 2 words Not available in it. This is the command output box, where the progress and results of the command will be displayed. Below that is the Check button that starts the repair.
  5. This is followed by the options box where you can type in options. To run a repair you need to remove the -n option. If repairing an XFS system you often get prompted to also use the -L option, and if that happens you rerun the repair adding that option here.
  6. The tool that will be run is shown and the status at this point will show as Not available. The Options field may include a parameter that causes the selected tool to run in check-only mode so that the underlying drive is not actually changed. For more help, click the Help button in the upper right.
  7. Click on the Check button to run the file system check
  8. Information on the check progress is now displayed. You may need to use the Refresh button to get it to update.
  9. If you are not sure what the results of the check mean you should copy the progress information so you can ask a question in the forum. When including this information as part of a forum post mark it as code (using the <?> icon) to preserve the formatting as otherwise it becomes difficult to read.

BTRFS

A lot of the time running the Scrub operation will be able to detect (and, if you have a redundant pool, correct) many errors.

In the event that you need more than this you need the array to be started in Maintenance mode, and then the Check option can be used to run the btrfs check program to check file system integrity on the device.

The Options field is initialized with --readonly which specifies check-only. If repair is needed, you should run a second Check pass, setting the Options to --repair; this will permit btrfs check to fix the file system.

The BTRFS documentation suggests that its --repair option be used only if you have been advised by "a developer or an experienced user". As of August 2022, the SLE documentation recommends using a Live CD, performing a backup and only using the repair option as a last resort.

After starting a Check, you should Refresh to monitor progress and status. Depending on how large the file system is, and what errors might be present, the operation can take a long time to finish (hours). Not much info is printed in the window, but you can verify the operation is running by observing the read/write counters increasing for the device on the Main page.

There is another tool, named btrfs restore, that can be used to recover files from an unmountable filesystem without modifying the broken filesystem itself (i.e., non-destructively), but it is not supported by the Unraid GUI.
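If you do want to try it from a console session, a minimal sketch looks like the following; /dev/sdX1 is the affected device partition and /mnt/disks/recovery is a placeholder for some other mounted location (for example an Unassigned Devices disk) with enough free space to receive the files:

# copy whatever can be read from the broken filesystem to another location (non-destructive)
btrfs restore -v /dev/sdX1 /mnt/disks/recovery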

Running the Repair using the command line

XFS and ReiserFS

You can run the file system repair from the command line for ReiserFS and XFS as shown below if the array is started in Maintenance mode, by using a command of the form:

xfs_repair /dev/mdX

or

reiserfsck /dev/mdX

where X corresponds to the diskX number shown in the Unraid GUI. Using the /dev/mdX type device will maintain parity. If the file system to be repaired is an encrypted XFS one then the command needs to be modified to use the /dev/mapper/mdX device.

If you ever need to run a repair on a drive that is not part of the array, or if the array is not started, then you need to run the appropriate command from a console/terminal session. As an example, for an XFS disk you would use a command of the form:

xfs_repair /dev/sdX1

where X corresponds to the device identifier shown in the Unraid GUI. Points to note are:

  • The value of X can change when Unraid is rebooted so make sure it is correct for the current boot
  • Note the presence of the '1' on the end to indicate the partition to be checked.
  • The reason for not doing it this way on array drives is that although the disk would be repaired, parity would be invalidated, which can reduce the chances of recovering a failed drive until valid parity has been re-established.
  • If you run this form of the command on an array disk you will invalidate parity, so it is not recommended except in exceptional circumstances.

BTRFS

You can run the file system check from the command line for BTRFS as shown below if the array is started in Maintenance mode, by using a command of the form:

btrfs check --readonly /dev/sdX1

where X corresponds to the device identifier shown in the Unraid GUI. Points to note are:

  • The value of X can change when Unraid is rebooted so make sure it is correct for the current boot
  • Note the presence of the '1' on the end to indicate the partition to be checked.

In the event that you want to go further and actually try to repair the file system you can run

btrfs check --repair /dev/sdX1

but you are advised to only do this after getting advice in the forum, as sometimes the --repair option can damage a BTRFS file system even further.

ZFS

This section should be completed once Unraid 6.12 has been released withZFS support included as a standard feature.

Changing a File System type

There may be times when you wish to change the file system type on a particular drive. The steps are outlined below.

IMPORTANT: These steps will erase any existing content on the drive, so make sure you have first copied it elsewhere before attempting to change the file system type if you do not want to lose it.

  1. Stop the array
  2. Click on the drive whose format you want to change
  3. Change the format to the new one you want to use. Repeat if necessary for each drive to be changed
  4. Start the array
  5. There will now be an option on the Main tab to format unmountable drives, showing which drives these will be. Check that only the drive(s) you expect show.
  6. Check the box to confirm the format and then press the Format button.
  7. The format will now start. It typically only takes a few minutes. There have been occasions where the status does not update, but refreshing the Main tab normally fixes this.

If anything appears to go wrong then ask in the forum, attaching your system's diagnostics zip file (obtained via Tools → Diagnostics) to your post.

Notes:

  • For SSDs you can erase the current contents using

    blkdiscard /dev/sdX

    at the console, where 'X' corresponds to what is currently shown in the Unraid GUI for the device. Be careful that you get it right as you do not want to accidentally erase the contents of the wrong drive.

Converting to a new File System type

There is the special case of changing a file system where you want to keep the contents of the drive. The commonest reason for doing this is users who ran an older version of Unraid where the only supported file system type was ReiserFS (which is now deprecated) and who want to switch the drive to using either the XFS or BTRFS file system instead. However, there may be users who want to convert between file system types for other reasons.

In simplistic terms the process is:

  1. Copy the data off the drive in question to another location. This can be elsewhere on the array or anywhere else suitable.
    • You do have to have enough free space to temporarily hold this data
    • Many users do such a conversion just after adding a new drive to the array as this gives them the free space required.
  2. Follow the procedure above for changing the file system type of the drive. This will leave you with an empty drive that is now in the correct format but that has no files on it.
  3. Copy the files you saved in step 1 back to this drive
  4. If you have multiple drives that need to be converted then do them one at a time.

This is a time-consuming process as you are copying large amounts of data. However, most of this is computer time as the user does not need to be continually present closely watching the actual copying steps.

Reformatting a drive

If by any chance you want to reformat a drive to erase its contents while keeping the existing file system type, many users find that it may not be obvious how to do this from the Unraid GUI.

The way to do this is to follow the above process for changing the file system type twice. The first time you change it to any other type, and then once it has been formatted to the new type you repeat the process, this time setting the type back to the one you started with.

This process will only take a few minutes, and as you go parity is updated accordingly.

Reformatting a cache drive

There may be times when you want to change the format used on the cache drive (or some similar operation) and preserve as much of its existing contents as possible. In such cases the recommended way to proceed, and the one least likely to go wrong, is:

  1. Stop array.
  2. Disable Docker and VM services under Settings.
  3. Start array. If you have correctly disabled these services there will be NO Docker or VMs tab in the GUI.
  4. Set all shares that have files on the cache and do not currently have Use Cache: Yes to Use Cache: Yes. Make a note of which shares you changed and what setting they had before the change.
  5. Run mover from the Main tab; wait for completion (which can take some time if there are a lot of files); check the cache drive contents, which should be empty. If it's not, STOP, post diagnostics, and ask for help.
  6. Stop array.
  7. Set the cache drive's desired format to XFS or BTRFS; if you only have a single cache disk and are keeping that configuration, then XFS is the recommended format. XFS is only available as a selection if there is only 1 (one) cache slot shown while the array is stopped.
  8. Start array.
  9. Verify that the cache drive and ONLY the cache drive shows unformatted. Select the checkbox saying you are sure, and format the drive.
  10. Set any shares that you changed to Use Cache: Yes earlier back to Use Cache: Prefer if they were originally Cache: Only or Cache: Prefer. If any were Cache: No, set them back that way.
  11. Run mover from the Main tab; wait for completion; check the cache drive contents, which should be back the way they were.
  12. Change any share that was set to Use Cache: Only back to that option.
  13. Stop array.
  14. Enable Docker and VM services.
  15. Start array.

There are other alternative procedures that might be faster if you are Linux aware, but the one shown above is the one that has proved most likely to succeed without error for the average Unraid user.

BTRFS Operations

If you want more information about BTRFS then the Wikipedia BTRFS article is a good place to start.

There are a number of operations that are specific to BTRFS formatted drives that do not have a direct equivalent in the other formats.

Balance

Unlike most conventional filesystems, BTRFS uses a two-stage allocator. The first stage allocates large regions of space known as chunks for specific types of data, then the second stage allocates blocks like a regular filesystem within these larger regions. There are three different types of chunks:

  • Data Chunks: These store regular file data.
  • Metadata Chunks: These store metadata about files, including among other things timestamps, checksums, file names, ownership, permissions, and extended attributes.
  • System Chunks: These are a special type of chunk which stores data about where all the other chunks are located.

Only the type of data that the chunk is allocated for can be stored in that chunk. The most common case these days when you get a -ENOSPC error on BTRFS is that the filesystem has run out of room for data or metadata in existing chunks, and can't allocate a new chunk. You can verify that this is the case by running btrfs fi df on the filesystem that threw the error. If the Data or Metadata line shows a Total value that is significantly different from the Used value, then this is probably the cause.

What btrfs balance does is to send things back through the allocator, which results in space usage in the chunks being compacted. For example, if you have two metadata chunks that are both 40% full, a balance will result in them becoming one metadata chunk that's 80% full. By compacting space usage like this, the balance operation is then able to delete the now-empty chunks and thus frees up room for the allocation of new chunks. If you again run btrfs fi df after you run the balance, you should see that the Total and Used values are much closer to each other, since balance deleted chunks that weren't needed anymore.

The BTRFS balance operation can be run from the Unraid GUI by clicking on the drive on the Main tab and using the Balance section of the resulting dialog. The current status information for the volume is displayed. You can optionally add parameters to be passed to the balance operation and then start it by pressing the Balance button.
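The space check and a (filtered) balance can also be run from a console session. A minimal sketch, assuming a pool mounted at /mnt/cache; the usage=75 filter value is only an example:

btrfs filesystem df /mnt/cache              # compare Total vs Used for the Data, Metadata and System chunk types
btrfs balance start -dusage=75 /mnt/cache   # repack only data chunks that are less than 75% full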

Scrub

Scrubbing involves reading all the data from all the disks and verifying checksums. If any values are not correct and you have a redundant BTRFS pool then the data can be corrected by reading a good copy of the block from another drive. The scrubbing code also scans on read automatically. It is recommended that you scrub high-usage file systems once a week and all other file systems once a month.

You can initiate a check of the entire file system by triggering a file system scrub job. The scrub job scans the entire file system for integrity. It automatically attempts to report and repair any bad blocks that it finds along the way. Instead of going through the entire disk drive, the scrub job deals only with data that is actually allocated. Depending on the allocated disk space, this is much faster than performing an entire surface scan of the disk.

The BTRFS scrub operation can be run from the Unraid GUI by clicking on the drive on the Main tab and running scrub from the resulting dialog.
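A scrub can also be started and monitored from a console session with commands of the form (assuming the pool is mounted at /mnt/cache):

btrfs scrub start /mnt/cache     # start a scrub in the background
btrfs scrub status /mnt/cache    # show progress and any checksum errors found so far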

Unassigned drives are drives that are present in the server running Unraid that have not been added to the array or to a cache pool.

It is important to note that all such drives that are plugged into the server at the point you start the array count towards the Unraid Attached Devices license limits.

Typical uses for such drives are:

  • Plugging in removable drives for the purposes of transferring files or backing up drives.
  • Having drives dedicated to a specific use (such as running VMs) where you want higher performance than can be achieved by using array drives.

It is strongly recommended that you install the Unassigned Devices (UD) plugins via the Apps tab if you want to use unassigned drives on your system. There are 2 plugins available:

  1. The basic Unassigned Devices plugin provides support for file system types supported as standard in Unraid.
  2. The Unassigned Devices Plus plugin extends the file system support to include options such as exFAT and HFS+.

You should look at the Unassigned Devices support thread for these plugins to get more information about the very extensive facilities offered and guidance on how to use them.

Array Write Modes

Unraid maintains real-time parity, and the performance of writing to the parity protected array in Unraid is strongly affected by the method that is used to update parity.

There are fundamentally 2 methods supported:

  • Read/Modify/Write
  • Turbo Mode (also known as reconstruct write)

These are discussed in more detail below to help users decide which modes are appropriate to how they currently want their array to operate.

Setting the Write mode

The write mode is set by going to Settings → Disk Settings and looking for the Tunable (md_write_method) setting. The 3 options are:

  • Auto: Currently this operates just like setting the read/modify/write option but is reserved for future enhancement
  • read/modify/write
  • reconstruct write (a.k.a. Turbo write)

To change it, click on the option you want, then the Apply button. The effect should be immediate so you can change it at any time.
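For scripting (for example, switching to Turbo write only during large scheduled transfers), the same tunable can also be changed from a console session via the mdcmd utility. This is a sketch based on how community scripts commonly toggle it; the numeric values are assumptions, so verify them against the GUI on your release before relying on it:

mdcmd set md_write_method 1    # assumed value for reconstruct write (Turbo write)
mdcmd set md_write_method 0    # assumed value for read/modify/write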

The different modes and their implications are discussed in more detail below.

Read/Modify/Write mode

Historically, Unraid has used the "read/modify/write" method to update parity and to keep parity correct for all data drives.

Say you have a block of data to write to a drive in your array, and naturally you want parity to be updated too. In order to know how to update parity for that block, you have to know the difference between this new block of data and the existing block of data currently on the drive. So you start by reading in the existing block and comparing it with the new block. That allows you to figure out what is different, so now you know what changes you need to make to the parity block, but first, you need to read in the existing parity block. So you apply the changes you figured out to the parity block, resulting in a new parity block to be written out. Now you want to write out the new data block and the parity block, but the drive head is just past the end of the blocks because you just read them. So you have to wait a long time (in computer time) for the disk platters to rotate all the way back around until they are positioned to write to that same block. That platter rotation time is the part that makes this method take so long. It's the main reason why parity writes are so much slower than regular writes.

To summarize, for the "read/modify/write" method, you need to:

  • read in the parity block and read in the existing data block (can be done simultaneously)
  • compare the data blocks, then use the difference to change the parity block to produce a new parity block (very short)
  • wait for platter rotation (very long!)
  • write out the parity block and write out the data block (can be done simultaneously)

That's 2 reads, a calc, a long wait, and 2 writes.

The advantages of this approach are:

  • Only the parity drive(s) and the drive being updated need to be spun up.
  • Minimises power usage as array drives can be kept spun down when not being accessed
  • Does not require all the other array drives to be working perfectly

Turbo write mode

More recently Unraid introduced the Turbo write mode (often called "reconstruct write").

We start with that same block of new data to be saved, but this time we don't care about the existing data or the existing parity block. So we can immediately write out the data block, but how do we know what the parity block should be? We issue a read of the same block on all of the other data drives, and once we have them, we combine all of them plus our new data block to give us the new parity block, which we then write out! Done!

To summarize, for the "reconstruct write" method, you need to:

  • write out the data block while simultaneously reading in the data blocks of all other data drives
  • calculate the new parity block from all of the data blocks, including the new one (very short)
  • write out the parity block

That's a write and a bunch of simultaneous reads, a calc, and a write, but no platter rotation wait! The upside is it can be much faster.

The downside is:

  • ALL of the array drives must be spinning, because they ALL are involved in EVERY write.
  • Increased power draw due to the need to keep all drives spinning
  • All drives must be reading without error.

Ramifications

So what are the ramifications of this?

  • For some operations, like parity checks, parity builds, and drive rebuilds, it doesn't matter, because all of the drives are spinning anyway.
  • For large write operations, like large transfers to the array, it can make a big difference in speed!
  • For a small write, especially at an odd time when the drives are normally sleeping, all of the drives have to be spun up before the small write can proceed.
  • And what about those little writes that go on in the background, like file system housekeeping operations? EVERY write at any time forces EVERY array drive to spin up. So you are likely to be surprised at odd times when checking on your array, expecting all of your drives to be spun down, and finding every one of them spun up for no discernible reason.
  • So one of the questions to be faced is: how do you want your various write operations to be handled? Take a small scheduled backup of your phone at 4 in the morning. The backup tool determines there's a new picture to back up, so it tries to write it to your Unraid server. If you are using the old method, the data drive and the parity drive have to spin up, then this small amount of data is written, possibly taking a couple more seconds than Turbo write would take. It's 4am, do you care? If you were using Turbo write, then all of the drives will spin up, which probably takes somewhat longer than any time saved by using Turbo write to save that picture (but a couple of seconds faster in the save). Plus, all of the drives are now spinning, uselessly.
  • Another possible problem: if you were in Turbo mode and watching a movie streaming to your player, a write kicks in to the server and starts spinning up ALL of the drives, causing that well-known pause and stuttering in your movie. Who wants to deal with the whining that starts then?

Currently, you only have the option to use the old method or the new (currently the Auto option means the old method). The plan is to add the true Auto option that will use the old method by default, *unless* all of the drives are currently spinning. If the drives are all spinning, then it slips into Turbo. This should be enough for many users. It would normally use the old method, but if you planned a large transfer or a bunch of writes, then you would spin up all of the drives - and enjoy faster writing.

The auto method has the potential of the system automatically switching modes depending on current array activity, but this has not happened so far. The problem is knowing when a drive is spinning, and being able to detect it without noticeably affecting write performance, ruining the very benefits we were trying to achieve. If on every write you have to query each drive for its status, then you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background keeping near-instantaneous track of spin status, and providing a single flag for the writer to check, whether they are all spun up or not, so it knows which method to use.

Many users would like tighter and smarter control of which write mode is in use. There is currently no official way of doing this but you could try searching for "Turbo Write" on the Apps tab for unofficial ways to get better control.

Using a Cache Drive

It is possible to use a Cache Drive/Pool to improve the perceived speed of writing to the array. This can be done on a share-by-share basis using the Use Cache setting available for each share by clicking on the share name on the Shares tab in the GUI. It is important to realize that using the cache has not really sped up writing files to the array - it is just that such writes now occur when the user is not watching them.

Points to note are:

  • The Yes setting for Use Cache causes new files for the share to initially be written to the cache and later moved to the parity protected array when mover runs (mover can also be run manually; see the example after this list).
  • Writes to the cache run at the full speed the cache is capable of.
  • It is not uncommon to use SSDs in the cache to get maximum performance.
  • Moves from cache to array are still comparatively slow, but since mover is normally scheduled to run when the system is otherwise idle this is not visible to the end-user.
  • There is a Minimum Free Space setting under Settings → Global Share settings, and if the free space on the cache falls below this value Unraid will stop trying to write new files to the cache. Since Unraid does not know a file's final size when it first creates it, it is recommended that this setting be at least as large as the biggest file you expect to write to the share, so that Unraid does not select the cache for a file that will not fit in the space available. This stops the write failing with an 'out of space' error when the free space is exhausted.
  • If there is not sufficient free space on the cache then writes will start bypassing the cache and revert to the speeds that would be obtained when not using the cache.
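
If you do not want to wait for the schedule, mover can normally also be started by hand from a console. This is only a sketch based on the standard Unraid mover script; the exact invocation may vary between releases.

    # Trigger an immediate move of cached files to the array
    mover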

Read Modes

Normally read performance is determined by the maximum speed that a file can be read off a drive. Unlike some other forms of RAID, an Unraid system does not utilize striping techniques to improve performance as every file is constrained to a single drive.

If a disk is marked as disabled and being emulated then Unraid needs to reconstruct its contents on the fly by reading the appropriate sectors of all the good drives and the parity drive(s). In such a case the read performance is going to be determined primarily by the slowest drives in the system.

It is also worth emphasizing that if there is any array operation going on, such as a parity check or a disk rebuild, then read performance will be degraded significantly due to drive head movements caused by disk contention between the two operations.

(Cache) Pools

Unraid supports the use of cache pools that are separate from the main array and work differently from a performance perspective, and these should be considered when performance is a prime criterion. If a pool consists of multiple drives then Unraid mandates that it is formatted using the BTRFS file system.

BTRFS supports a variety of RAID profiles and these will perform more like a traditional RAID system, giving much higher throughput than the main Unraid array.
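
As an illustration of how such a profile is applied (this is normally done through the GUI; the mount point and target profile below are examples only), BTRFS converts a pool between RAID profiles using balance filters:

    # Convert both data and metadata to the raid1 profile
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache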

Recovery after drive failure tends to be harder and more prone to lead to data loss, which is one disadvantage of using pools for everything.
