
I solved it

The solution is in my own answer (see my next post). This post only describes my original problem and what I've tried.

There may be some pointers in it for you, though... or not.

(End of the "I solved it" note.)

First of all, I'm pretty new to Linux. Here's the deal: my old computer's mainboard failed me. That's no problem, I just bought a new one. However, I had been stupid enough to use Intel's RST, which was on the old mainboard but not on the new one. Now the question is whether it is possible to recover the RST RAID without the Intel RST boot expansion. It doesn't look like the disks have automagically been assembled into one volume. It seems to me that it should be possible, but when it comes to RAID and disk/partition management, my knowledge pretty much stops at GParted.

So far I've found that blkid for both disks gives (and only gives):

/dev/sdb: TYPE="isw_raid_member"
/dev/sda: TYPE="isw_raid_member"

That looks all right.
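
As an optional cross-check of the two member disks (sizes, partitions), a plain lsblk restricted to them also works; nothing here writes to the disks:

lsblk -o NAME,SIZE,FSTYPE,TYPE /dev/sda /dev/sdb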

mdadm -E gives me:

mdadm -E /dev/sdb /dev/sda
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.0.00
    Orig Family : 3ad31c33
         Family : 3ad31c33
     Generation : 000006b7
     Attributes : All supported
           UUID : f508b5ef:ce7013f7:fcfe0803:ba06d053
       Checksum : 0798e757 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : 6VYCWHXL
          State : active
             Id : 00000000
    Usable Size : 488391680 (232.88 GiB 250.06 GB)

[Volume0]:
           UUID : 529ecb47:39f4bc8b:0f05dbe3:960195fd
     RAID Level : 0
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 0
    Sector Size : 512
     Array Size : 976783360 (465.77 GiB 500.11 GB)
   Per Dev Size : 488391944 (232.88 GiB 250.06 GB)
  Sector Offset : 0
    Num Stripes : 1907780
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean
     RWH Policy : off

  Disk01 Serial : W2A50R0P
          State : active
             Id : 00000004
    Usable Size : 488391680 (232.88 GiB 250.06 GB)
mdadm: /dev/sda is not attached to Intel(R) RAID controller.
mdadm: /dev/sda is not attached to Intel(R) RAID controller.
/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.0.00
    Orig Family : 3ad31c33
         Family : 3ad31c33
     Generation : 000006b7
     Attributes : All supported
           UUID : f508b5ef:ce7013f7:fcfe0803:ba06d053
       Checksum : 0798e757 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk01 Serial : W2A50R0P
          State : active
             Id : 00000004
    Usable Size : 488391680 (232.88 GiB 250.06 GB)

[Volume0]:
           UUID : 529ecb47:39f4bc8b:0f05dbe3:960195fd
     RAID Level : 0
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 1
    Sector Size : 512
     Array Size : 976783360 (465.77 GiB 500.11 GB)
   Per Dev Size : 488391944 (232.88 GiB 250.06 GB)
  Sector Offset : 0
    Num Stripes : 1907780
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean
     RWH Policy : off

  Disk00 Serial : 6VYCWHXL
          State : active
             Id : 00000000
    Usable Size : 488391680 (232.88 GiB 250.06 GB)

So is it possible to safely reassemble these two disks into a single volume, e.g. with mdadm --assemble?

I'm still unsure about the inner workings of mdadm, so this is a good learning experience for me.
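
As a non-destructive first step (it only reads the metadata, nothing is written), something like the following should be safe while I figure out the actual assembly:

# Print the ARRAY lines mdadm derives from the on-disk IMSM metadata
mdadm --examine --scan --verbose

# Check whether the kernel already knows about any md arrays
cat /proc/mdstat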

lsb_release -a

Distributor ID: Ubuntu
Description:    Ubuntu 19.10
Release:    19.10
Codename:   eoan

uname -a

Linux HPx64 5.3.0-51-generic #44-Ubuntu SMP Wed Apr 22 21:09:44 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Note that the machine is named HPx64 because I've reused the Ubuntu installation, and it is actually Xubuntu.

--- Update 2020-05-15 ---

I found out that setting the IMSM_NO_PLATFORM=1 environment variable has two effects (so far). 1) It removes the "mdadm: /dev/sdb is not attached to Intel(R) RAID controller." warning output from:

mdadm -E /dev/sdb

2) It removes the same "mdadm: /dev/sdb is not attached to Intel(R) RAID controller." output from:

mdadm --assemble /dev/md0 /dev/sdb /dev/sda
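
(The variable only has to be visible to the mdadm process itself, so it can either be given inline per command or exported once for the shell session; for example:)

IMSM_NO_PLATFORM=1 mdadm -E /dev/sdb /dev/sda
# or, for the whole session:
export IMSM_NO_PLATFORM=1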

The status now, after assembling, is that the md0 device has been created in /dev:

cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdb[1](S) sda[0](S)
      5488 blocks super external:imsm

unused devices: <none>

And

mdadm -E /dev/md0 
/dev/md0:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.0.00
    Orig Family : 3ad31c33
         Family : 3ad31c33
     Generation : 000006b7
     Attributes : All supported
           UUID : f508b5ef:ce7013f7:fcfe0803:ba06d053
       Checksum : 0798e757 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : 6VYCWHXL
          State : active
             Id : 00000000
    Usable Size : 488391680 (232.88 GiB 250.06 GB)

[Volume0]:
           UUID : 529ecb47:39f4bc8b:0f05dbe3:960195fd
     RAID Level : 0
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 0
    Sector Size : 512
     Array Size : 976783360 (465.77 GiB 500.11 GB)
   Per Dev Size : 488391944 (232.88 GiB 250.06 GB)
  Sector Offset : 0
    Num Stripes : 1907780
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean
     RWH Policy : off

  Disk01 Serial : W2A50R0P
          State : active
             Id : 00000004
    Usable Size : 488391680 (232.88 GiB 250.06 GB)

And

mdadm --query --detail  /dev/md0

/dev/md0:
           Version : imsm
        Raid Level : container
     Total Devices : 2

   Working Devices : 2


              UUID : f508b5ef:ce7013f7:fcfe0803:ba06d053
     Member Arrays :

    Number   Major   Minor   RaidDevice

       -       8        0        -        /dev/sda
       -       8       16        -        /dev/sdb

So that gets me some of the way, but something is still wrong. It seems that the volume isn't exposed to the system, and examining md0 gives output similar to that for sdb. Any ideas and thoughts are welcome.
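
One thing I'm considering trying next (untested here, going by the mdadm man page) is asking mdadm to start the member volume(s) described inside the already-assembled container via incremental mode:

# Untested idea: incremental mode on a container should start the arrays
# described by its metadata
IMSM_NO_PLATFORM=1 mdadm -I /dev/md0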

Arnefar

2 Answers


!!! Success!!!

Found it. I was trying too hard. All I had to do was:

IMSM_NO_PLATFORM=1 mdadm --assemble --scan --verbose

And wuuupti dooooo, the RAID volume was (re)assembled as /dev/md126:

mdadm --query --detail  /dev/md126p1
/dev/md126p1:
         Container : /dev/md/imsm0, member 0
        Raid Level : raid0
        Array Size : 488388608 (465.76 GiB 500.11 GB)
      Raid Devices : 2
     Total Devices : 2

             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 128K

Consistency Policy : none


              UUID : 529ecb47:39f4bc8b:0f05dbe3:960195fd
    Number   Major   Minor   RaidDevice State
       1       8       16        0      active sync   /dev/sdb
       0       8        0        1      active sync   /dev/sda
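
From here, the remaining step is just mounting whatever filesystem lives on the exposed partition. Assuming /dev/md126p1 is the old data partition, a read-only mount first is the cautious option:

mkdir -p /mnt/rst
mount -o ro /dev/md126p1 /mnt/rst    # remount read-write once it checks out
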
Arnefar

Confirmed mdadm --assemble --scan brought my RST array online

As this was (despite its simplicity) a terrifying exercise, here is my own experience, in the hope of easing the minds of people with similar intentions.

Four drives in RAID 5, created in the BIOS of a Z270 motherboard, for use in Windows 10. The drives were disconnected while fresh-installing Ubuntu 22.04 (and Windows 11 for dual boot) to avoid complications during their respective encryptions.

lsblk -o name,size,fstype,type,mountpoint

Initially included the following in its results (before assembly):

sda                7.3T isw_raid_member disk  
sdb                7.3T isw_raid_member disk  
└─sdb1             7.3T                 part

sdd                7.3T isw_raid_member disk
├─sdd1              16M                 part
└─sdd2             7.3T                 part
sde                7.3T isw_raid_member disk

Ran

sudo apt install mdadm
sudo mdadm --assemble --scan

with results

mdadm: Container /dev/md/imsm0 has been assembled with 4 drives
mdadm: /dev/md/Data_22.3R5 has been assembled with 4 devices and started.

Confirmed the array is now listed in the updated /etc/mdadm/mdadm.conf, and also checked lsblk to find the following changes:

lsblk -o name,size,fstype,type,mountpoint
NAME               SIZE FSTYPE          TYPE  MOUNTPOINT
sda                7.3T isw_raid_member disk  
├─md126           21.8T                 raid5 
│ ├─md126p1         16M                 part  
│ └─md126p2       21.8T ntfs            part  /media/a/Data
└─md127              0B                 md    
sdb                7.3T isw_raid_member disk  
├─sdb1             7.3T                 part  
├─md126           21.8T                 raid5 
│ ├─md126p1         16M                 part  
│ └─md126p2       21.8T ntfs            part  /media/a/Data
└─md127              0B                 md    
sdd                7.3T isw_raid_member disk  
├─sdd1              16M                 part  
├─sdd2             7.3T                 part  
├─md126           21.8T                 raid5 
│ ├─md126p1         16M                 part  
│ └─md126p2       21.8T ntfs            part  /media/a/Data
└─md127              0B                 md    
sde                7.3T isw_raid_member disk  
├─md126           21.8T                 raid5 
│ ├─md126p1         16M                 part  
│ └─md126p2       21.8T ntfs            part  /media/a/Data
└─md127              0B                 md 

The array was automatically mounted as /dev/md126 and automatically showed up in Files (the Nautilus browser) under 'Other Locations'. Confirmed read/write access to the array.
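
Had the array not already been recorded in /etc/mdadm/mdadm.conf, the commonly recommended way (untested here, since it was not needed) to make the assembly persist across reboots on Ubuntu would be:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u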

lloyd.icarus