It wasn't sensible, given the short life of DNA. One of those sci-fi ideas that caught media and technophile attention, but wasn't ever going to go anywhere.
Project Silica appears to be attempting very high density, very long life storage, though.
I'm most excited about the potential for crystal-based storage. Right now there's work being done to etch silica glass internally, allowing for incredibly long-term preservation and durability. It can even be rewritten, though the tech is definitely best suited to archival purposes and is being pursued primarily by movie companies wanting high-quality storage.
DNA also sounds interesting, though it doesn't seem like a good way of preserving data long term. DNA is very fragile, which makes it an odd route to take for long-term archiving.
Yeah the 5D quartz disk is very cool.
Anyway, if you think about storage density, DNA isn't that "odd". With DNA you can store dozens of copies of the data, plus parity checks, in a very small space, so even if some of it gets corrupted you can still recover it. I get that organic material has its limits, but the density is just mind-blowing.
Density is definitely amazing in DNA; it's just so fragile. Even our own bodies suffer constant degradation of their DNA... I wonder if they could take that concept and make something sturdier by using slightly different molecules to make up the chains.
Maybe shorter chains with stronger cross bonding & a gentle method of reading the chain could also help?
It's definitely an interesting route, and it'll be cool to see what happens with it over the next 10-15 years.
I recall watching a documentary (on Curiosity Stream maybe? I'm no longer subscribed) on data storage longevity. It covered DNA storage (I think this PBS video w/ transcript provides more recent coverage of its developments) as well as holographic storage, which I could only find the Wikipedia page for.
As for which one I think might be the future, it's tough to say. Tape is pretty good and cheap but slow for offline storage. Archival media will probably end up all being offline storage, although I could see a case for holographic/optical storage being near line. Future online storage will probably remain a tough pickle: cheap, plentiful, fast; select at most two, maybe.
There's no authoritative list of instances since federation isn't required, but tools like lemmyverse.net will give you a solid list of the ones discoverable from the most well known federations.
Excited for you! I'm going from 1x 12tb USB drive to 4x internal 18tb drives. I'm building the NAS from scratch and keeping my other server for its current services (mostly Plex). My parts have been defective though, so it's all just sitting waiting for a replacement mobo.
It's a data maintenance feature that repairs data in storage pools that is incorrect or incomplete. It works on BTRFS volumes or RAID 5/6 storage pools. It's scheduled to run monthly on my NAS; I guess it kicked off now because I upgraded my drives from 4x4TB to 4x18TB.
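For anyone curious what a scrub conceptually does: it walks the pool, re-checksums every block, and repairs any copy that doesn't match using a known-good mirror. Here's a simplified toy sketch of that idea in Python (an illustration of the concept, not actual BTRFS code; the function name is made up):

```python
import zlib

# Toy model of a scrub pass over a two-way mirror: each block has a
# stored checksum. If one copy's checksum doesn't match, overwrite it
# with the copy that does.

def scrub(mirror_a: list[bytes], mirror_b: list[bytes], checksums: list[int]) -> int:
    """Repair corrupt blocks in either mirror; return how many were fixed."""
    fixed = 0
    for i, want in enumerate(checksums):
        ok_a = zlib.crc32(mirror_a[i]) == want
        ok_b = zlib.crc32(mirror_b[i]) == want
        if ok_a and not ok_b:
            mirror_b[i] = mirror_a[i]
            fixed += 1
        elif ok_b and not ok_a:
            mirror_a[i] = mirror_b[i]
            fixed += 1
        # if neither copy matches, a real system reports an unrecoverable error
    return fixed

blocks = [b"alpha", b"bravo", b"charlie"]
sums = [zlib.crc32(b) for b in blocks]
a = list(blocks)
b = list(blocks)
b[1] = b"br4vo"             # silent corruption on one mirror
print(scrub(a, b, sums))    # -> 1
print(b[1])                 # -> b'bravo'
```

This is also why scrubbing matters on big new drives: silent corruption only gets caught when something actually re-reads and re-verifies the data.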
The amazing thing about those is that they halve the rebuild time. With large drives you get rebuild times of over 24 hours, which is actually frightening.
Setup is a one-time thing, and yes, you need to be careful about it, but I bet software support will come as soon as those get more mainstream.
Never ever going to buy Seagate again after the crap they've pulled on their Exos drives.
They simply decided to completely trash SMART and spin-down commands. The drives won't give you useful SMART data, nor will they ever actually spin down, and you can't force it; the drive will report itself as spun down, but in reality it's still spinning.
Time to update your criteria, friend. Seagate hasn't been top of the failure stack for like 8 years now; the 3TB scandal era is long since past. Now it's WD who has been shitting on quality control: sending out faulty SSDs that wipe user data, bait-and-switching HDD customers with a cheaper, much worse-performing technology (SMR) WITHOUT TELLING THEM, then basically blowing corporate raspberries at everyone when people complain.
While I agree they were the best, HGST also hasn't existed as a non-WD product for years...
Care to elaborate? Seagate is one of my favorite brands, and I read lots of reviews and tech articles before purchasing any components. I'm curious to learn what I've missed about them. Thx
A lot of people have very strong opinions of brands based on a woefully inadequate sample size. Typically this comes from a higher than expected failure rate, possibly even much higher than expected. It could've been a bad model, a bad batch at manufacturing, improper handling from the retailer, or even an improper running environment. But even the greediest data hoarders only have a few dozen drives, often in just a couple of environments and use-cases.
Very few of these results are actually meaningful trends. For every person that swears by WD and will never touch a Seagate, there's someone else that swears by Seagate and will never touch another WD. HGST and Toshiba seem to have a very slight edge on reliability, but it's very small. And there are still people that refuse to touch them because of the "Death Star" drives many years ago.
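To put a rough number on the sample-size point: assume (purely for illustration, the 2% figure is made up) every brand has an identical 2% annualized failure rate and a hoarder owns 30 drives. A quick binomial calculation shows how often chance alone produces a scary-looking cluster:

```python
from math import comb

# If every brand truly failed at the same 2%/year rate (assumed figure),
# how often would a 30-drive collection see 3+ failures in one year
# purely by chance?

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p = 30, 0.02
print(f"P(3+ failures out of {n}): {prob_at_least(3, n, p):.1%}")
# -> P(3+ failures out of 30): 2.2%
```

About 2% of such collections see 3+ failures in a year, and nearly half see at least one, so across a big forum there will always be people with genuinely awful (but statistically unremarkable) experiences of every brand.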
It's also very difficult to predict which models will have high failure rates. By the time it becomes clear one is a lemon, they're already EoL.
I avoid buying WD new because of their (IMHO completely illegal) stance on warranty, but I'm comfortable buying their stuff used.
Don't worry too much about brand. Instead go for specs and needs. Follow a good backup strategy and you'll be fine
HGST is a part of WD and has been for quite a while.
But a big part of why the average consumer drive kind of sucks is that there's way more money in enterprise-level drives, so very few resources get put toward client drives.
Owned by, yes. Have their operations actually been integrated though? I haven't checked in a long time, but it was still a separate division last time I did.
ZFS and BTRFS could update their codebase to account for these (if they haven't already), but I agree that their extra mechanical parts worry me. I really don't care about speed - if you run enough HDDs in your RAID then you get enough speed by proxy. If you need better speeds then you should start looking into RAM/SSD-caching etc. I'd rather have better reliability than speed, because I hate spinning rust's short lifespan as-is.
If they want it so much why don’t they pay him? Sounds like if it weren’t for him (and the others he seems to allude to) we wouldn’t have this opportunity.