Did I just brick my SAS drive?
I was trying to make a pool with the other 5 drives and this one kept giving errors. As a complete beginner I turned to GPT…
What can I do? Is that drive bricked for good?
Don’t clown on me, I understand my mistake in running shell scripts from AI…
EMPTY DRIVES NO DATA
The initial error was:

Edit: sde and sda are the same drive; the name just changed for some reason. Also, I know it was 100% my fault and preventable 😞
**Edit:** from LM22, output of sudo sg_format -vv /dev/sda (broken link)
BIG EDIT:
For people that can help (btw, thx a lot), some more relevant info:
Exact drive model: SEAGATE ST4000NM0023 XMGG
HBA model and firmware: lspci | grep -i raid gives “00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode]”. It’s an LSI card. Bought it here
Kernel version / distro: I was using TrueNAS when I formatted it. Now troubleshooting on another PC (6.8.0-38-generic), Linux Mint 22
Whether the controller supports DIF/DIX (T10 PI): output of lspci -vv (broken link)
Whether other identical drives still work in the same slot/cable: yes, all the other 5 drives worked when I set up a RAIDZ2, and a couple of them are the exact same model of HDD
COMMANDS: this is what I got for each command: (broken link)
Solved by y0din! Thank you soo much!
Thanks for all the help 😁



Thanks for the continued support! ❤
I’ve attached an identical Seagate SAS drive from the server.
To confirm, it is the same LSI card that was in the TrueNAS server. I pulled it out of the server and put it into the troubleshooting machine, where I run the commands.
It is this one: 01:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
I did not see your other reply lol, I will also try this command that you recommended:
sudo sg_format --format --size=512 --fmtpinfo=0 --pfu=0 /dev/sdb
Also, the sg_format ran for less than 5 minutes, very quick. However, if I recall correctly, it did say it was completed.
**Note:** the “bricked” drive now shows up as sdb
The identical working drive is installed as sda
Here is the dmesg -T > dmesg-full.txt output with the identical drive attached
Here is the output from the following commands (for each drive, separately):
sudo lspci -nnkvv
sudo lsblk -o NAME,MODEL,SIZE,PHY-SeC,LOG-SeC,ROTA
sudo fdisk -l /dev/sdX
sudo sg_inq -vv /dev/sdX
sudo sg_readcap -ll /dev/sdX
sudo sg_modes -a /dev/sdX
sudo sg_vpd -a /dev/sdX
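For reference, the per-drive commands above can be wrapped in a small script so both drives get captured the same way every time. This is only a sketch: adjust the device list to your system, and note that it skips a missing device or a missing sg3_utils install rather than failing.

```shell
#!/bin/sh
# Run the same SCSI diagnostics against each listed device.
# Missing devices and a missing sg3_utils install are skipped,
# so the script is safe to re-run on any machine.
collect_diag() {
    dev="$1"
    if [ ! -e "$dev" ]; then
        echo "skipping $dev (not present)"
        return 0
    fi
    if ! command -v sg_inq >/dev/null 2>&1; then
        echo "skipping $dev (sg3_utils not installed)"
        return 0
    fi
    echo "=== diagnostics for $dev ==="
    sg_inq -vv "$dev"
    sg_readcap -ll "$dev"
    sg_modes -a "$dev"
    sg_vpd -a "$dev"
    echo "=== end of $dev ==="
}

for d in /dev/sda /dev/sdb; do
    collect_diag "$d"
done
```

Running it as root once per troubleshooting session keeps the outputs for both drives in one place, which makes side-by-side comparison much easier.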
Thanks again for all the help, I await your reply. :)
I will let you know the results of sudo sg_format --format --size=512 --fmtpinfo=0 --pfu=0 /dev/sdb as soon as it’s done.
Thanks for the update, that’s helpful.
Confirming that the controller is a Broadcom / LSI SAS2308 and that it’s the same HBA that was used in the original TrueNAS system removes one major variable. It means the drive is now being tested under the same controller path it was previously attached to.
The device mapping you described is clear:
sda = known-good identical drive
sdb = the problematic drive
Running:
sudo sg_format --format --size=512 --fmtpinfo=0 --pfu=0 /dev/sdb
as you did is the correct next step to normalize the drive’s format and protection settings.
A few general notes while this is in progress:
At this point it makes sense to pause any further investigation until the current sg_format has fully completed and the system has been power-cycled.
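As a side note while waiting: SAS drives generally report a format "progress indication" through REQUEST SENSE while a FORMAT UNIT is running, so from a second terminal you can usually poll it. The sketch below assumes sg3_utils is installed and uses /dev/sdb purely as an example device name.

```shell
#!/bin/sh
# One-shot progress check for an in-flight FORMAT UNIT.
# The drive exposes a progress percentage via REQUEST SENSE;
# sg_requests --progress reads and prints it.
check_format_progress() {
    dev="$1"
    if ! command -v sg_requests >/dev/null 2>&1; then
        echo "sg3_utils not installed; cannot query $dev"
        return 0
    fi
    sg_requests --progress "$dev" 2>/dev/null \
        || echo "no progress reported by $dev"
}

check_format_progress /dev/sdb   # example device; use your actual drive
```

If the drive has finished (or never started) a format, no progress indication is returned, which is itself useful information.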
Once that’s done, the next step will be a direct comparison between sdb and the known-good sda using:
sudo sg_readcap -ll, checking:
Reported logical and physical sector sizes
Protection / PI status
As a general note going forward: on Linux / FreeBSD it’s safer to reference disks by persistent identifiers (e.g. /dev/disk/by-id/ or UUID on Linux, which is safer though less human-readable, or glabel on FreeBSD) rather than /dev/sdX, as device names can change across boots or hardware reordering, as you have now experienced.
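For example, on Linux the persistent names live under /dev/disk/by-id; each entry is a symlink to whatever sdX node the drive currently has, and the name itself encodes model and serial number. A quick way to see the mapping (output is illustrative, it will differ per system):

```shell
#!/bin/sh
# Show each stable by-id disk name and the /dev/sdX node it
# currently resolves to. The by-id names encode model and serial,
# so they survive reboots and cable reshuffles.
ls -l /dev/disk/by-id/ 2>/dev/null | awk '/->/ {print $(NF-2), "->", $NF}'
```

Using those names in pool definitions or scripts means a drive keeps the same identity even if it comes up as a different sdX letter next boot.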
Post the results when you’re ready and the sg_format is complete, and we can continue from there.
Great News!
Format completed, and now the drive is viewable in “Disks” (however, it still shows as unknown compared to the other one; it might just need a normal format).
The comparison command returns “invalid option”; I assumed you meant just -l:
sudo sg_readcap -l /dev/sdb and sudo sg_readcap -l /dev/sda
One question I have: what do you mean by power cycle? Is that another command to run on the problematic drive? If you mean turn the PC off and back on, I will do that right now, just after the drive has completed formatting.
After power cycle (turned the PC off and on)
sudo sg_readcap -l /dev/sdb and sudo sg_readcap -l /dev/sda
Would the next step be formatting of some kind?
That’s good news; what you’re seeing now is the expected state.
A quick clarification first:
Power cycle means exactly what you did: shut the machine down completely and turn it back on. There is no command involved. You did the right thing.
Regarding the current status:
The drive showing up in Disks but marked as unknown is normal
At this point the disk has:
No partition table
No filesystem
“Unknown” here does not indicate a problem, only that nothing has been created on it yet
About sg_readcap:
sg_readcap -l is correct
There is no direct “comparison” mode; running it separately on sda and sdb is exactly what was intended
The important thing is that both drives now report sane, consistent values (logical block size, capacity, no protection enabled)
Next steps:
Yes, the next step is normal disk setup, just like with any new drive:
Create a partition table (GPT is typical)
Create one or more partitions
Create a filesystem (or add it back into ZFS if that’s your goal)
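Done by hand on Linux, those three steps would look roughly like the sketch below. It is hypothetical (targets /dev/sdb with a single ext4 partition) and deliberately guarded, because every step destroys data on the target; skip it entirely if TrueNAS will own the disk.

```shell
#!/bin/sh
# Provisioning sketch: GPT label, one whole-disk partition, ext4.
# Guarded: refuses to run unless explicitly confirmed, since all
# three steps are destructive to the target disk.
provision_disk() {
    dev="$1"
    confirm="$2"
    if [ "$confirm" != "yes-destroy-data" ]; then
        echo "refusing to touch $dev (pass yes-destroy-data to confirm)"
        return 0
    fi
    parted -s "$dev" mklabel gpt               # 1. new GPT partition table
    parted -s "$dev" mkpart primary 1MiB 100%  # 2. one partition, whole disk
    mkfs.ext4 "${dev}1"                        # 3. filesystem on the partition
}

provision_disk /dev/sdb   # dry call: only prints the refusal message
```

Run as root with the confirmation string only after double-checking the device name against lsblk output.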
At this stage the drive has transitioned from “unusable” to functionally recovered. From here on, you’re no longer fixing a problem; you’re just provisioning storage.
If you plan to put it back into TrueNAS/ZFS, it’s usually best to let TrueNAS handle partitioning and formatting itself rather than doing it manually on Linux.
Nice work sticking with the process and verifying things step by step.
Oh my, what a ride! I got everything up and running in a RAIDZ2 with the 6 x 4TB drives! (Soon I will add another 4 x 1TB in an Icy Dock as a separate vdev.)
Everything works now with no errors! 🥳
I could not have fixed this without your help. You are a lifesaver and probably saved this drive from the landfill lol. I honestly can’t thank you enough for your continuous support throughout many days!
You are the light that shows that there are still good people on the internet that want to help, and not just lurkers that laugh and move on and treat everything as content instead of a person on the other side sharing something that is important to them.
In my case I was in need of help, and like one comment put it: Out of the 50 messages of ridicule, one person will actually go out of their way and help.
I learned soo much and a good lesson too!
Thanks again for your help, and I will remember this interaction for the rest of my self-hosting journey! I’m serious.
Keep helping others and sharing your knowledge. I will pay this kind gesture forward in the new year, and help others more with the things that I know. 🫡
(Please don’t delete this convo, might help someone in the future)
Thanks again and Happy Holidays!
I wish you all the best in the New Year! 🤗 🎉
That’s genuinely great to hear, and I’m glad it worked out.
You did the hard part here: you kept testing methodically, provided solid data, and were willing to slow down and verify assumptions instead of guessing. That’s why this ended in a clean recovery instead of a dead drive.
For what it’s worth, I’ve hit more than a few of these bumps myself. I started out self-taught on an IBM XT back in 1987, when I was about six years old, and the learning process has never really stopped. Situations like this are just part of how you build real understanding over time.
This is also a good example of how enterprise hardware behaves very differently from consumer gear. Nothing here was “obvious” as a beginner, and the outcome reinforces an important lesson: unusable does not mean broken. You handled it the right way.
I’m especially glad if this thread is kept around. These kinds of issues come up regularly, and having a complete, factual troubleshooting trail will help the next person who runs into the same thing.
Enjoy the RAIDZ2 setup, and good luck with the additional vdev. Paying this forward is exactly how these communities stay useful.
Happy holidays, and all the best in the new year. 🥳