SSDs, and why Solid State isn’t Always Solid

In the year 2020, I am a huge skeptic of SSDs, or Solid State Drives. There are many applications where they are good, such as embedded devices, SD cards, Compact Flash, etc. But when it comes to replacing the hard disk drive in computers, I have a lot of doubt to this day. I break it down below.

  1. They Make No Noise: Bullshit! In fact, last night, while live-streaming to YouTube, I heard what I thought was birds chirping. This has happened a few times in my attic-level bedroom, and I was in the basement. Coincidentally, every time I tried to tape it with my iPhone, the EMF between the phone and the SSD must've canceled each other out. I don't mind hearing traditional drive noises, but if it's something closer to birds… that's unnerving.
  2. They Make Computers Faster: No difference for me; I believe it's theoretical. In reality, it depends on what type of interface/bus (the hardware link that sits between the actual drive and the CPU) the machine has. You could have a really fast drive, but if it's up against a crappy bus, or the throughput is lousy, you're not gaining anything.
  3. The fear of static electricity: I'd fret more about an ESD mistake than about a hard drive crash. Even on a dry day where I zap myself on any of my systems, I'd fear it would be that one instance after which I would never trust an SSD ever again.
  4. Speed my ass: A constant issue I have with faster SATA-grade hard drives is the speed versus the amount of storage (last point below). I've noticed modern drives that spin their magnetic disks faster are more prone to drive corruption (caused by a software glitch originating from the hardware) or, worse, a complete drive failure, where something inside literally stops working.
  5. And More to Store (SATA under the ol' magnets, bullshit under SSD): In the mid-2000s, SATA brought power users the ability to store any media over 500 gigs, beyond its predecessor (PATA, or IDE). So with more to store, on a larger disk format, at a speed as fast as USB 2.0, it boils down to this: they can fail more often than IDE drives, which means more time playing recovery mode, figuring out whether the data can be saved. While SSDs would presumably only crash by means of a software mishap, the concern is that SSDs would probably be worse to recover. In reality, who can live on 250 gigs of data in the end? Power users should be expected to work with at least 500 gigs, a terabyte if we're being generous. Apple has led the way for years in exploiting users' inexperience to pander to consumers' need for speed, at the price of power users.

SCSI? Of course I'd still defend SCSI, or its modern cousin SAS, the SATA-like drive that uses SATA-style connectors but needs a dedicated controller (the "brain") to make that SAS drive work like one. And yes, while many of my LaCie drives have SATA disks inside, that doesn't mean they are always reliable. In fact, except for the FireWire one, I had to either replace a disk, repurpose the drive, or just wipe it out. The reason the FireWire one still works is that a) I only used the USB 3.0 port once, and b) forcing it to a max of 800 Mbps perhaps helped the drive's sanity by keeping it away from its theoretical ability of 3 Gbps. Never believe the cute claims on the packaging; that's where the line between consumers and power users gets drawn.

Knocking on wood: except for mobile drives for notebooks, across all my desktop drives, from ones I bought new to ones I got second-hand, the ol' fashioned PATA works much better than the SATAs I own. Right now, I have to recover crucial media that defined my 20s, which got moved temporarily onto a SATA drive that has failed not once, but twice, without ever showing a S.M.A.R.T. pre-fail warning. And it's a Deskstar-branded drive. Go figure.
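For what it's worth, a drive's S.M.A.R.T. status can be checked by hand before trusting it with anything important, rather than waiting for a warning that may never come. A minimal sketch using smartctl from the smartmontools package (the device path /dev/sda is a placeholder; substitute your own drive):

```shell
# Overall health self-assessment reported by the drive (PASSED or FAILED).
sudo smartctl -H /dev/sda

# Full attribute table; the rows typed "Pre-fail" are the early-warning ones.
# Rising raw values on attributes like Reallocated_Sector_Ct or
# Current_Pending_Sector are a classic sign the drive is on its way out.
sudo smartctl -A /dev/sda
```

Of course, as the Deskstar above shows, a drive can still die twice without S.M.A.R.T. ever flagging anything, so treat a clean report as a data point, not a guarantee.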
