OS on a HDD and RAID 0 with two 6TB HDDs?
Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5
Soon I will build a new PC with an ASRock Z170 Extreme4 board and an i7-6700K CPU. I have read everything available online, but I couldn't find answers to the following. Is it possible to plug in one HDD and install just the OS (Windows 10 Pro x64) on it, and additionally plug in two 6TB HDDs (for data) running in RAID 0 (to speed up HDD performance)? Can the OS use the whole 12TB, or is there a limit somewhere? I guess I'll never fill those 12TB, but the 6TB Western Digital Red Pro (WD6001FFWX) is very fast (~190 MByte/s read/write in a website test), so with RAID 0 it should reach ~380 MByte/s. Is RAID 0 with 3 HDDs possible (18TB), and can the OS work with that size? Is it necessary to install the 'Intel Rapid Storage' driver for RAID 0? (And must I additionally configure the RAID 0 in the OS?) In my experience, the more tools installed, the worse the OS runs. Maybe I could instead add a PCIe (3.0) x1 RAID controller card, and it would work without a driver? Thanks. (BTW: no, no SSDs. ;-)
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28
> Is it possible to plug in one HDD and install just the OS (Windows 10 Pro x64) on it?

Yes.

> ...and additionally plug in two 6TB HDDs (for data) running in RAID 0 (to speed up HDD performance)?

Yes.

> Can the OS use the whole 12TB, or is there a limit somewhere?

If we're talking Windows: as long as it is not the boot partition, and as long as the array is initialized as GPT (not MBR), you can access the entire size of the array. Every Windows OS since XP x64 (including Server 2003 x64) can handle GPT drives larger than 2TB. Only 64-bit Windows versions support booting from arrays larger than 2TB, and only under UEFI.

> Is RAID 0 with 3 HDDs possible (18TB), and can the OS work with that size?

Yes. See the same note above about GPT vs. MBR.

> Is it necessary to install the 'Intel Rapid Storage' driver for RAID 0? (And must I additionally configure the RAID 0 in the OS?)

In most cases, yes, you need to install the software driver. The onboard RAID function of most chipsets is "fake" RAID: all the calculations are handled by the driver, and hence use up CPU time. Beyond that, you will need to create the RAID array in the software if the RAID firmware is BIOS-based rather than UEFI; BIOS-based RAID firmware is still limited to 2TB arrays.

> Maybe I could instead add a PCIe (3.0) x1 RAID controller card, and it would work without a driver?

No, you will still need to install a driver so the OS knows how to properly initialize and use the hardware. Then the conversation goes back to "fake" RAID vs. hardware RAID (it depends on where the calculations are handled), and whether the RAID card supports UEFI or BIOS.
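The 2TB MBR ceiling mentioned above falls straight out of the partition-table arithmetic. A quick back-of-the-envelope check in Python (numbers only, no real disks involved):

```python
# MBR stores a partition's sector count in a 32-bit field, so with the
# classic 512-byte sector size the largest addressable partition is:
SECTOR_BYTES = 512
mbr_limit = (2**32) * SECTOR_BYTES           # 2 TiB exactly
print(mbr_limit // 2**40, "TiB")             # -> 2 TiB

# Two 6 TB drives in RAID 0 blow well past that ceiling,
# which is why the array has to be initialized as GPT.
array_bytes = 2 * 6 * 10**12
print(round(array_bytes / 2**40, 1), "TiB")  # -> 10.9 TiB
```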
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13
With NTFS you'll be fine: the on-disk format allows up to 2^64 - 1 clusters, and the Windows implementation supports up to 2^32 - 1 clusters per volume, which with 64KB clusters is just under 256 TB. (source). Either way, 18TB is nowhere near the limit.

If you put the three disks in RAID 0 and then go to install Windows, when you get to the point of the installation where it asks where you want to install it, you'll likely have to click the button to load the drivers needed to access the disks. So make sure you put the x64 RAID drivers for that Intel controller on a flash drive (the actual driver files: .inf, .sys, etc., not the EXE installer that you would use once you're already in Windows), point the installer at the flash drive, and it should work fine.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
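The cluster arithmetic above can be sanity-checked in a few lines (a sketch; the 2^32 - 1 cluster figure is Microsoft's documented implementation cap for NTFS volumes):

```python
CLUSTER = 64 * 1024                  # largest standard NTFS cluster size

fmt_limit = (2**64 - 1) * CLUSTER    # theoretical on-disk format limit
win_limit = (2**32 - 1) * CLUSTER    # Windows implementation limit

print(win_limit / 2**40)             # just under 256 TiB

# Even an 18 TB three-disk RAID 0 array is nowhere near it:
assert 18 * 10**12 < win_limit
```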
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340
> In most cases, yes, you need to install the software driver. The onboard RAID function of most chipsets is "fake" RAID: all the calculations are handled by the driver, and hence use up CPU time.

True, but that is much less relevant these days with multicore CPUs. Who cares if it uses one core out of 4 or more? And a CPU core is probably a lot more capable than the chip on a RAID card.
Send message Joined: 25 Nov 01 Posts: 21688 Credit: 7,508,002 RAC: 20
Note that with RAID 0 you will lose all data if any one disk fails. I hope you keep backups?... If you can afford RAID 0, then you can afford the additional disks to go RAID 10. (Also, look up Linux mdraid and/or dedicated RAID controllers.) Why not go with SSDs? Happy super fast crunchin', Martin

See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3)
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28
> True, but that is much less relevant these days with multicore CPUs. Who cares if it uses one core out of 4 or more?

On a RAID 0 or 1 configuration I would agree with you, since both operations are simple and require no parity calculations. It mattered a lot to me when using RAID 5 and 6: the speed at which the hardware RAID processed parity for writes was amazing, a night-and-day difference. There is definitely still a need for hardware XOR engines if you want the absolute best in speed.
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13913 Credit: 208,696,464 RAC: 304
> And a CPU core is probably a lot more capable than the chip on a RAID card.

Not really: the CPU is under the control of the OS, and the OS has to watch over and manage everything, while a dedicated hardware RAID controller is much, much faster. A CPU can do lots of things very quickly, but dedicated hardware, even at much lower clock speeds, can do the one thing it was designed for very, very much more quickly. That's why, towards the end of the bitcoin mining frenzy, the big operators moved from PCs stuffed with as many AMD GPUs as they could fit to ASICs (Application-Specific ICs): the ASICs used a lot less power and pumped out a lot more work than a system full of video cards.

EDIT: OzzFan beat me to it.

Grant
Darwin NT
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13913 Credit: 208,696,464 RAC: 304
> Why not go with SSDs?

Doesn't like them. *shrug*

Grant
Darwin NT
Send message Joined: 8 Dec 05 Posts: 630 Credit: 59,973,836 RAC: 0
Seconded - anyone using RAID 0 on such large drives must be nuts unless the data really is temporary and expendable. I use an array of 6x1TB drives as RAID 10 with mdadm software RAID. A disk fails about once a year, and it's fairly trivial to replace and resync. I'd go for more smaller drives rather than fewer larger ones, because the resync is quicker and the data rate is faster. I also keep the array synced with another machine of similar capacity, in case something really bad happens. An array of 6TB drives could take half a day or more to resync, and bigger drives mean bigger storage and more time spent on housekeeping. I cannot stress enough that RAID is not a substitute for keeping backups!

Also, disk speed and capacity are of practically negligible benefit for SAH, because the work is all FPU/GPU based. A couple of SSDs and a dedicated NAS for media make quite a decent system.
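The "half a day or more to resync" figure checks out with simple arithmetic: a rebuild has to stream the whole member disk, so time is roughly capacity divided by sustained transfer rate. A sketch (the 150 MB/s sustained rate is an assumption, not a measured value):

```python
def resync_hours(capacity_tb, rate_mb_s):
    """Rough rebuild time: whole-disk capacity / sustained rate."""
    return capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600

print(round(resync_hours(1, 150), 1))   # 1 TB member:  -> 1.9 hours
print(round(resync_hours(6, 150), 1))   # 6 TB member: -> 11.1 hours
```

So a 6TB member at a typical sustained rate lands right around the quoted half-day, while a 1TB member is done in under two hours, which is the argument for more, smaller drives.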
Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0
RAID 10 has saved my bacon on so many occasions now that I will do it every time, whether with HDDs or SSDs, for gaming or for precious development work. I'm also looking at NAS options in addition to my existing offsite backup, because you never know when some crazy person will burn your house down, or a crazed mother will use the development machine for Facebook.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340
OzzFan & Grant: Good points - I stand corrected.
Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57
According to Microsoft, each GPT partition can have a maximum of 18 exabytes (~18.8 million terabytes) of space. Having 12+ TB in a RAID 0 configuration would make me very nervous; I do not understand how you can trust that configuration and not trust an SSD. However, the Z170 chipset supports RAID 0, 1, 5 & 10, so if you wanted to configure a RAID 10 setup you have the option. Depending on the tools Intel gives us, you might even be able to start with RAID 0 and move to RAID 10 later without losing any data.

When configuring your RAID volume you will also have a block (stripe) size to configure. This setting can greatly affect your disk performance, much like the allocation unit size when formatting a partition.

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Send message Joined: 9 Jun 99 Posts: 15184 Credit: 4,362,181 RAC: 3
> Note that with RAID 0 you will lose all data if any one disk fails. I hope you keep backups?...

I can attest that RAID 0 isn't a good setup for saving data. I just lost a complete array when the fan in my NAS died: the two 4TB HDDs heated to above 90 degrees Celsius, and one of them died completely. I'm still in a heated dispute with the reseller of the NAS over who is going to pay for two new 4TB drives. So just don't do it. The same goes for the JBOD option. If you want to run 3 drives anyway, go for RAID 5, although 4 drives is better for that.

In my case, I lost close to 7 terabytes of films, music, series, and books overnight. Luckily we still had quite a lot of it surviving here and there on our various systems, but the huge ISO folder we had is gone. The trouble with RAID 0 is that you cannot just plug a surviving disk into a docking station and read the contents under Linux or so. It is unreadable, because the data is striped over the two disks: parts of one file are written to disk 1, parts to disk 2. Why there is an R in this array's name is anyone's guess, because there is no redundancy when it goes wrong. :(
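The point about a lone surviving disk being unreadable can be illustrated with a toy stripe in Python (4-byte "stripes" on two in-memory "disks"; real arrays use 64KB+ stripes, but the effect is the same):

```python
from itertools import zip_longest

STRIPE = 4
data = b"The quick brown fox jumps over the lazy dog."

# RAID 0 write: deal out consecutive stripes to alternating disks.
chunks = [data[i:i + STRIPE] for i in range(0, len(data), STRIPE)]
disk1 = b"".join(chunks[0::2])     # even-numbered stripes only
disk2 = b"".join(chunks[1::2])     # odd-numbered stripes only
print(disk1)                       # half the bytes: gibberish on its own

# Reading the file back requires interleaving stripes from BOTH disks:
c1 = [disk1[i:i + STRIPE] for i in range(0, len(disk1), STRIPE)]
c2 = [disk2[i:i + STRIPE] for i in range(0, len(disk2), STRIPE)]
rebuilt = b"".join(a + b for a, b in zip_longest(c1, c2, fillvalue=b""))
assert rebuilt == data             # lose either disk and this fails
```

Either disk alone holds only every other chunk of the file, which is why a surviving RAID 0 member in a docking station is unrecoverable without its partner.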
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28
> Why there is an R in this array's name is anyone's guess, because there is no redundancy when it goes wrong. :(

That's why there's a 0 in RAID 0. No redundancy.
Send message Joined: 25 Nov 01 Posts: 21688 Credit: 7,508,002 RAC: 20
> Note that with RAID 0 you will lose all data if any one disk fails. I hope you keep backups?...

For RAID 0, the "R" can be backronymed to "Rapid", giving: RAID 0: Rapid Array of Inexpensive Data. Keep searchin', Martin

PS: "RAID" was originally read as "Redundant Array of Inexpensive Disks"; marketing re-spun that to "Redundant Array of Independent Disks". The re-spin completely misses the reason RAID was originally developed: it was purely a cost-saving exercise, taking advantage of the then newly available low-cost small-format HDDs, back in the days when 'normal' HDDs were the size of washing machines! RAID is also another beautiful example of how in IT you really can have your cake and eat it, all in freedom and at low cost, until marketing 'sells' it to you!

(PPS: And then there is the argument about using local distributed storage vs. a ridiculously expensive pair of SANs... but that is for another thread!)

See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3)
Send message Joined: 8 Dec 05 Posts: 630 Credit: 59,973,836 RAC: 0
RAID 0: Risky Array of Irretrievable Data!
AMDave Send message Joined: 9 Mar 01 Posts: 234 Credit: 11,671,730 RAC: 0
|
Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5
I found no answer on the web about the following. I will install Windows 10 Pro x64. I could buy 512n or 512e HDDs; the 4Kn ones are a little bit costly. AFAIK, 512e HDDs are slower because internally they must translate 512-byte logical sectors into 4 KB physical sectors (8x 512-byte sectors). Is this speed difference still the case, or do the newest HDDs (and which?) do this faster? If I buy 512n HDDs, will Windows 10 Pro x64 (and current and coming software, tools, apps...) still work properly with 512-byte sectors, both with and without RAID 0? Thanks.
Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57
> I found no answer on the web about the following...

Windows has supported Advanced Format since the release of Windows 8.0/Server 2012. It looks like all of my current WD Green & Red drives have 4K physical sectors, but since I am using Windows 7 they present 512-byte logical sectors to the OS. I might have to pull one of my RAID drives and load Windows 10 on it to see whether the drives are Advanced Format 512e or 4Kn.

The logical sector size will not by itself be faster or slower; it depends on the data sizes you are working with. Reading/writing lots of small files is actually slower on a 4Kn drive than on a 512 drive.

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
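The 512e penalty asked about above comes from the read-modify-write a drive must perform when a write doesn't cover a whole 4 KB physical sector: the drive reads the sector, patches in the 512 logical bytes, then rewrites it. A rough model (`physical_io_bytes` is a hypothetical helper; real drives also have caches that soften this):

```python
PHYS = 4096  # physical sector size on a 512e / 4Kn drive

def physical_io_bytes(offset, length, phys=PHYS):
    """Bytes the drive must actually move to service one write request."""
    first = offset // phys
    last = (offset + length - 1) // phys
    sectors = last - first + 1
    aligned = offset % phys == 0 and length % phys == 0
    # Partially covered sectors must be read first, then written back,
    # doubling the media traffic for that request.
    return sectors * phys * (1 if aligned else 2)

print(physical_io_bytes(0, 4096))   # aligned 4K write -> 4096 bytes moved
print(physical_io_bytes(512, 512))  # lone 512 B write -> 8192 bytes moved
```

Under this model a 512-byte write costs 16x its payload in media traffic, which is why 512e drives look fine on large sequential transfers but lose ground on small, unaligned writes.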
Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57
> I found no answer on the web about the following...

Did some tests, and the drives still came up as 512 logical. After doing some research I found out why: Intel RST does not support 4K-native sector size devices. So you will have to buy a controller with 4Kn support if you want to use them, and if you're doing that you might as well go with SAS instead of SATA, as I mentioned in your last drive thread.

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
©2025 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.