Did I just brick my SAS drive?
I was trying to make a pool with the other 5 drives and this one kept giving errors. As a complete beginner, I turned to GPT…
What can I do? Is that drive bricked for good?
Don’t clown on me, I understand my mistake in running shell scripts from AI…
EMPTY DRIVES NO DATA
The initial error was:

Edit: sde and sda are the same drive; the name just changed for some reason. Also, I know it was 100% my fault and preventable 😞
**Edit:** from LM22, output of `sudo sg_format -vv /dev/sda` (broken link)
BIG EDIT:
For people that can help (btw, thx a lot), some more relevant info:
Exact drive model: SEAGATE ST4000NM0023 XMGG
HBA model and firmware: `lspci | grep -i raid` gives `00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode]`. It's an LSI card. Bought it here
Kernel version / distro: I was using TrueNAS when I formatted it. Now troubleshooting on another PC running Linux Mint 22 (kernel 6.8.0-38-generic)
Whether the controller supports DIF/DIX (T10 PI): output of `lspci -vv` (broken link)
Whether other identical drives still work in the same slot/cable: yes, all the other 5 drives worked when I set up a RAIDZ2, and a couple of them are the exact same model of HDD
COMMANDS: This is what I got for each command (broken link)
Solved by y0din! Thank you soo much!
Thanks for all the help 😁



That’s good news — what you’re seeing now is the expected state.
A quick clarification first:
Power cycle means exactly what you did: shut the machine down completely and turn it back on. There is no command involved. You did the right thing.
Regarding the current status:
The drive showing up in Disks but marked as unknown is normal
At this point the disk has:
No partition table
No filesystem
“Unknown” here does not indicate a problem, only that nothing has been created on it yet
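If you want to confirm that "unknown" really just means blank, a quick sketch — demonstrated here against an image file so it's safe to copy-paste; on the real hardware you'd point the same commands at `/dev/sdX` (double-check the device name first):

```shell
# A sparse image file stands in for the blank drive; substitute your
# real device (e.g. /dev/sdX) only once you are sure of the target.
truncate -s 8M blank.img

# wipefs with no options only *lists* signatures -- on a truly blank
# drive it prints nothing at all.
wipefs blank.img

# blkid likewise finds no partition table or filesystem signatures and
# exits non-zero; here that is the expected result, not an error.
blkid blank.img || echo "no signatures found (expected for a blank drive)"
```

No output from `wipefs` and nothing found by `blkid` is exactly the state described above: no partition table, no filesystem, nothing created yet.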
About sg_readcap:
`sg_readcap -l` is correct. There is no direct "comparison" mode; running it separately on sda and sdb is exactly what was intended
The important thing is that both drives now report sane, consistent values (logical block size, capacity, no protection enabled)
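For reference, these are the fields worth comparing between the two drives. The sample text below approximates sg3_utils output for a healthy 4 TB drive with 512-byte sectors — exact wording can vary between sg3_utils versions, so check it against your actual output:

```shell
# Illustrative sample of `sudo sg_readcap -l /dev/sdX` output
# (format approximated from sg3_utils; yours may differ slightly):
readcap_sample='Read Capacity results:
   Protection: prot_en=0, p_type=0, p_i_exponent=0
   Last LBA=7814037167 (0x1d1c0beaf), Number of logical blocks=7814037168
   Logical block length=512 bytes'

# prot_en=0 means T10 protection information (DIF) is NOT enabled --
# the state you want after recovering from a bad sg_format.
echo "$readcap_sample" | grep -o 'prot_en=[01]'

# Both drives should report the same logical block size (512 here).
echo "$readcap_sample" | grep -o 'Logical block length=[0-9]* bytes'
```

If `prot_en`, the block length, and the block count match across both drives, they are reporting the same sane geometry.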
Next steps:
Yes, the next step is normal disk setup, just like with any new drive:
Create a partition table (GPT is typical)
Create one or more partitions
Create a filesystem (or add it back into ZFS if that’s your goal)
At this stage the drive has transitioned from “unusable” to functionally recovered. From here on, you’re no longer fixing a problem — you’re just provisioning storage.
If you plan to put it back into TrueNAS/ZFS, it’s usually best to let TrueNAS handle partitioning and formatting itself rather than doing it manually on Linux.
Nice work sticking with the process and verifying things step by step.
Oh my, what a ride! I got everything up and running in a RAIDZ2 with the 6 × 4 TB drives! (Soon I will add another 4 × 1 TB in an Icy Dock as a separate vdev.)
Everything works now with no errors! 🥳
I could not have fixed this without your help. You are a lifesaver and probably saved this drive from the landfill lol. I honestly can’t thank you enough for your continuous support throughout many days!
You are the light that shows that there are still good people on the internet that want to help, and not just lurkers that laugh and move on and treat everything as content instead of a person on the other side sharing something that is important to them.
In my case I was in need of help, and like one comment put it: Out of the 50 messages of ridicule, one person will actually go out of their way and help.
I learned soo much and a good lesson too!
Thanks again for your help, and I will remember this interaction for the rest of my self-hosting journey! I’m serious.
Keep helping others and sharing your knowledge. I will pay this kind gesture forward in the new year, and help others more with the things that I know. 🫡
(Please don’t delete this convo, might help someone in the future)
Thanks again and Happy Holidays!
I wish you all the best in the New Year! 🤗 🎉
That’s genuinely great to hear, and I’m glad it worked out.
You did the hard part here: you kept testing methodically, provided solid data, and were willing to slow down and verify assumptions instead of guessing. That’s why this ended in a clean recovery instead of a dead drive.
For what it’s worth, I’ve hit more than a few of these bumps myself. I started out self-taught on an IBM XT back in 1987, when I was about six years old, and the learning process has never really stopped. Situations like this are just part of how you build real understanding over time.
This is also a good example of how enterprise hardware behaves very differently from consumer gear. Nothing here was “obvious” as a beginner, and the outcome reinforces an important lesson: unusable does not mean broken. You handled it the right way.
I’m especially glad if this thread is kept around. These kinds of issues come up regularly, and having a complete, factual troubleshooting trail will help the next person who runs into the same thing.
Enjoy the RAIDZ2 setup, and good luck with the additional vdev. Paying this forward is exactly how these communities stay useful.
Happy holidays, and all the best in the new year. 🥳