1-7 of 7 Answers
It's because the storage manufacturers measure in base 10, while the OS measures in base 2: the drive maker counts 1 TB as 10^12 bytes, while the OS counts it in units of 1,024 (2^10) at each step, so 1 "TB" to the OS is really 2^40 bytes. For TB drives the conversion/correction factor is about 0.90949, so a 20 TB HDD works out to 20 * 0.90949 = 18.19 TB actual usable space as reported by the OS.
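If you want to check the arithmetic yourself, here's a minimal Python sketch (purely illustrative math, not any particular tool's output), showing where the 0.90949 factor comes from:

    # Minimal sketch of the base-10 vs base-2 arithmetic above.
    # The 0.90949 "correction factor" is just 10**12 / 2**40; nothing here is drive-specific.

    advertised_tb = 20                    # what the label says: 1 TB = 10**12 bytes
    total_bytes = advertised_tb * 10**12  # 20,000,000,000,000 bytes

    factor = 10**12 / 2**40               # ~0.909495, the TB -> TiB correction
    os_reported = total_bytes / 2**40     # what the OS shows (1 "TB" = 2**40 bytes)

    print(f"correction factor: {factor:.5f}")       # 0.90949
    print(f"OS-reported size:  {os_reported:.2f}")  # ~18.19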
All hard drives have only about 90% usable capacity compared to the advertised size. How computers interpret storage and how HDD manufacturers calculate it are different. This is true for all storage media and is not particular to this drive.
Adding to Pantydropper's answer, it boils down to how HDD manufacturers calculate data storage capacity versus the way that computers define it, i.e., the use of decimal prefixes versus binary prefixes. However, since 1999 there have been units of data measurement created specifically to address the binary-versus-decimal measurement problem, and ever since, the way manufacturers continue to advertise storage capacity has been duplicitous at best and outright false advertising at worst.

Technically, if they wish to keep using gigabyte (GB) and/or terabyte (TB) as the preferred unit of measure, OEMs should either advertise the actual usable amount of storage as reported by the computer (regardless of OS, as Paulson rightly pointed out), or they should use the ISO/IEC 80000 equivalents originally created by the IEC in 1999: gibibyte (GiB), tebibyte (TiB), etc.

20 TB = 18.19 TiB = 20,000 GB = 18,626.5 GiB

HDD manufacturers would rather keep the bigger, rounder 20 figure and the more familiar TB unit than label the drive as 18.19 TiB. The problem is that they're largely allowed to: legal challenges to the current, misleading practice have been unsuccessful, because other standards (notably government ones) favor the "human-friendly" decimal-based definition, even though those standards were formed before the more accurate units created specifically to resolve the confusion existed. That, and the use of fine print as CYA, i.e., coming clean in impossibly small type, or buried somewhere in the documentation, that an antiquated and misleading definition of storage capacity is being used.
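Just to make that equivalence chain concrete, here's a small illustrative Python sketch (the 20 TB figure is the only input; the rest is dividing the same byte count by four different unit sizes):

    # Sketch reproducing the chain above: one byte count, four units.
    total_bytes = 20 * 10**12  # a "20 TB" drive

    print(f"{total_bytes / 10**12:8,.2f} TB")   # 20.00 TB    (decimal, manufacturer's unit)
    print(f"{total_bytes / 2**40:8,.2f} TiB")   # 18.19 TiB   (binary)
    print(f"{total_bytes / 10**9:10,.1f} GB")   # 20,000.0 GB
    print(f"{total_bytes / 2**30:10,.1f} GiB")  # 18,626.5 GiB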
It's a difference in how manufacturers vs. OSes measure capacity, i.e., GB (gigabyte, 1,000 MB) vs. GiB (gibibyte, 1,024 MiB). I know it's a silly-sounding term; easiest to accept it for the sake of the explanation and then never use it again. That, plus filesystem overhead. Scale both up to 20 TB and you do "lose" about 1.8 TB overall. Chalk it up to overzealous marketing to get those nice round numbers.
The only discrepancy is the wrong units being used by your OS. 20 TB (terabytes) is roughly 18.2 TiB (tebibytes), so the advertised capacity is almost exactly right. My SSD reports 2,000,381,014,016 bytes of usable space, which easily works out to about 2 TB. But if you want the 1.81 TB my OS says it has, you have to divide by 1,099,511,627,776 (the number of bytes in a tebibyte). In a way it's a loophole where storage manufacturers realized, "hey, we can make our drives look a little bit bigger AND, bonus, we're actually correct!" I think if you look at the drive stats in macOS it actually gives you the correct units; I don't remember whether it would report 1.81 TiB or 2 TB, though. Anyway, it's the same thing.
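If you want to reproduce that division, here's a rough Python sketch (the byte count is the one from this answer; substitute whatever your own drive reports):

    # Rough sketch: the same reported byte count expressed in decimal and binary units.
    reported_bytes = 2_000_381_014_016

    print(f"{reported_bytes / 10**12:.2f} TB")   # ~2.00, the manufacturer's unit
    print(f"{reported_bytes / 2**40:.2f} TiB")   # ~1.82; an OS that truncates displays 1.81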
Every HDD loses around 10% of its advertised capacity; this has always been the case.
Ask Seagate.
