
Plesk install on Software RAID / failed.

daanse

Regular Pleskian
Hi,
I've got a Debian 8.3 server with software RAID 1, and I followed some how-tos (manual installation of Plesk),
and now I've noticed that my Plesk is on a 30 GB partition instead of the 4 TB one.
I have 2x 4 TB disks.

What can I do? I need specific advice because I'm a little new to this topic.

Please see some outputs:
Code:
Disklabel type: gpt
Disk identifier: 9406FA7C-9383-451F-BC0A-9189C3EAFC7F

Device          Start        End    Sectors  Size Type
/dev/sdb1        4096   67112959   67108864   32G Linux RAID
/dev/sdb2    67112960   68161535    1048576  512M Linux RAID
/dev/sdb3    68161536 4291825663 4223664128    2T Linux RAID
/dev/sdb4  4291825664 7814037134 3522211471  1.7T Linux RAID
/dev/sdb5        2048       4095       2048    1M BIOS boot

Partition table entries are not in disk order.
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D0C48D77-3063-47FE-9F38-A8E5018F3063

Device          Start        End    Sectors  Size Type
/dev/sda1        4096   67112959   67108864   32G Linux RAID
/dev/sda2    67112960   68161535    1048576  512M Linux RAID
/dev/sda3    68161536 3710937500 3642775965  1.7T Linux RAID
/dev/sda4  4291825664 7814037134 3522211471  1.7T Linux RAID
/dev/sda5        2048       4095       2048    1M BIOS boot

and

Code:
Hardware data:

   CPU1: ...
   Memory:  ...
   Disk /dev/sda: 4000 GB (=> 3726 GiB)
   Disk /dev/sdb: 4000 GB (=> 3726 GiB)
   Total capacity 7452 GiB with 2 Disks

and a screenshot.
I have no clue how to repair that ...
I also haven't found a manual for installing Plesk correctly on software RAID.
 

Attachments

  • Bildschirmfoto 2016-02-11 um 21.35.07.jpg
Hi,
thank you. I attached a screenshot.
Is this right? It seems to be empty :/
 

Attachments

  • Bildschirmfoto 2016-02-12 um 08.33.01.jpg
@Daka,

First of all, could you please post the output of the commands "fdisk -l" and "df -h"? That output can be very helpful.

In response to your question "what can I do?", I can say the following.

Each of your hard disks is divided into 5 parts (read: "partitions"): that is a little surprising, since either there was really no need to do this, or something went wrong at partitioning time.

To illustrate: your sda2 and sdb2 partitions do not really add value, AND your sda3 and sdb3 partitions are not equal in size (which makes a RAID configuration a little difficult).

Furthermore, the situation from your question can be explained by the fact that the OS will simply select the first "device" if the mounts of (parts of) the devices are improperly configured.

In your case, the /etc/fstab surely indicates that something went wrong with the mounting of devices (note: mistakes at partitioning time can cause the same end result).


In short, you resolve the problem by entering the proper lines for mounting devices in the /etc/fstab file.

After modifying the /etc/fstab file, just do a software reboot (i.e. from the command line) and check that all mounts are persistent across reboots.
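
Just as an illustration (the md device names, mount points and the ext4 filesystem here are only an example; the real names depend on how your RAID devices were actually created), the relevant /etc/fstab lines could look roughly like this:

Code:
# example /etc/fstab entries - device names, filesystem type and mount points are illustrative only
/dev/md0   /       ext4   defaults   0   1
/dev/md1   /boot   ext4   defaults   0   2

After editing the file, a quick "mount -a" will tell you whether the entries are valid before you reboot.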

Afterwards, remove the Plesk installation (and all other applications) and re-install them, which should work like a charm, as long as the mounts are in line with Plesk requirements (i.e. one has to mount the bigger devices, such as the sda3 device, to the / directory, in order to have Plesk installed in the default directories).

At least, that is the theory.


In practice, there is a big question mark on my side: I am not sure who created the partitions and/or whether RAID is present at this moment.

It seems to be the case that your system was actually intended to have software-based RAID, since the "Linux RAID" partition type on sda and sdb is a common indicator of software-based RAID.

But that makes the five partitions and 10 (!) partition devices even more surprising, since a common software-based RAID 1 setup with five partitions on each disk should result in 5 md devices.

Furthermore, a hosting provider usually delivers a RAID-based system with a pre-installed OS (and certainly not something like what you have at the moment).

In short, present the output of the commands "fdisk -l" and "df -h" and I will know much more!


In general, the creation of RAID-based storage would involve the following steps (in chronological order):

a) create the proper partitions on the disks (sda and sdb), if and only if you want to apply software-based RAID to only some part of your disks (and not the entire 4 TB disks),

NOTE: a "bare metal" server would require this step. However, a server with a pre-installed OS (without software RAID) would NOT require it, since the installation of the OS already involves some partitioning, implying that the remainder of the disks (i.e. the sda or sdb devices) can be synchronized with RAID in full (read: no need for additional partitioning).

NOTE: a normal hosting provider would offer a pre-installed server WITH RAID, given that the (pre-installed) OS should be included in the RAID setup (i.e. the OS should be present on both synchronized disks), implying that the creation of the RAID should normally precede the installation of the OS. Otherwise, it does not make much sense.

b) create the software-based RAID

NOTE: this step is not required for any server with HARDWARE-based RAID.

NOTE: I would strongly recommend software like mdadm in order to create the software-based RAID.

c) mount the RAID devices (often called md1, md2 and so on) by editing /etc/fstab
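
To give an idea of steps b) and c), a minimal sketch with mdadm could look like the following (the partition names and mount point are only an example, and creating an array destroys the data on the partitions used, so this is not something to run on a live system):

Code:
# b) create a RAID 1 array from two equally sized partitions (example device names!)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
# put a filesystem on the new array
mkfs.ext4 /dev/md0
# store the array definition so it is assembled at boot (Debian keeps it in /etc/mdadm/mdadm.conf)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
# c) mount it (and add a matching line to /etc/fstab, as described above)
mkdir -p /mnt/data
mount /dev/md0 /mnt/data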


I would strongly recommend removing the Plesk instance (and other applications) completely before any partitioning and/or RAID creation takes place.

I would also strongly recommend providing us with the output of the fdisk and df commands before doing anything.


Anyway, hope the above helps or explains a little bit.

Regards....
 
@Daka,

Forgot to ask the following:

1 - can you provide the contents of /etc/mdadm.conf?
2 - can you provide the output of the command "cat /proc/mdstat"?
3 - can you provide the output of the commands "mdadm --detail /dev/md<n>" with <n> the numbers you find from the command under point 2.

Note that the above commands assume that mdadm has been actually used.

If /etc/mdadm.conf does not exist, then the commands from points 2 and 3 do not have to be executed.
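
In copy-and-paste form (the md number is just a placeholder, use whatever numbers show up in /proc/mdstat):

Code:
cat /etc/mdadm.conf
cat /proc/mdstat
mdadm --detail /dev/md0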

Regards.....
 
Hi, thank you.
I installed the Debian OS from the Server Robot, pre-installed.
I restarted the server and it was good to go.
The software RAID should have been set up automatically.
It seems that was not the case.
Anyway, I played around with sda3, so the size difference is my fault.
I could set everything up from scratch, but I didn't find detailed instructions.
I chose Debian minimal,
and a Plesk custom installation.
I marked everything I wanted to have, except Nginx ;-)....


Code:
   Disk /dev/sda: 4000 GB (=> 3726 GiB)
   Disk /dev/sdb: 4000 GB (=> 3726 GiB)
   Total capacity 7452 GiB with 2 Disks

Network data:
   eth0  LINK: yes
         MAC:  x
         IP:   x
         IPv6: x
         Intel(R) PRO/1000 Network Driver

root@web-host04 ~ # cat /etc/mdadm.conf
cat: /etc/mdadm.conf: No such file or directory
root@web-host04 ~ # cat /proc/mdstat
Personalities : [raid1]
unused devices: <none>
root@web-host04 ~ # mdadm --detail /dev/md1
mdadm: cannot open /dev/md1: No such file or directory
root@web-host04 ~ # mdadm --detail /dev/md
mdadm: cannot open /dev/md: No such file or directory
root@web-host04 ~ # fdisk-l
-bash: fdisk-l: command not found
root@web-host04 ~ # fdisk -L

Usage:
fdisk [options] <disk>      change partition table
fdisk [options] -l [<disk>] list partition table(s)

Options:
-b, --sector-size <size>      physical and logical sector size
-c, --compatibility[=<mode>]  mode is 'dos' or 'nondos' (default)
-L, --color[=<when>]          colorize output (auto, always or never)
-l, --list                    display partitions end exit
-t, --type <type>             recognize specified partition table type only
-u, --units[=<unit>]          display units: 'cylinders' or 'sectors' (default)
-s, --getsz                   display device size in 512-byte sectors [DEPRECATED]

-C, --cylinders <number>      specify the number of cylinders
-H, --heads <number>          specify the number of heads
-S, --sectors <number>        specify the number of sectors per track

-h, --help     display this help and exit
-V, --version  output version information and exit

For more details see fdisk(8).
root@web-host04 ~ # df -h
Filesystem           Size  Used Avail Use% Mounted on
udev                  32G     0   32G   0% /dev
IPxxxxxx1:/nfs  247G  140G   95G  60% /root/.oldroot/nfs
overlay               32G  5.3G   27G  17% /
tmpfs                 32G  4.0K   32G   1% /dev/shm
tmpfs                 32G  226M   32G   1% /run
tmpfs                5.0M     0  5.0M   0% /run/lock
tmpfs                 32G     0   32G   0% /sys/fs/cgroup

root@web-host04 ~ # fdisk -l

Disk /dev/ram0: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram1: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram2: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram3: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram4: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram5: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram6: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram7: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram8: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram9: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram10: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram11: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram12: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram13: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram14: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram15: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/loop0: 2 GiB, 2097152000 bytes, 4096000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9406FA7C-9383-451F-BC0A-9189C3EAFC7F

Device          Start        End    Sectors  Size Type
/dev/sdb1        4096   67112959   67108864   32G Linux RAID
/dev/sdb2    67112960   68161535    1048576  512M Linux RAID
/dev/sdb3    68161536 4291825663 4223664128    2T Linux RAID
/dev/sdb4  4291825664 7814037134 3522211471  1.7T Linux RAID
/dev/sdb5        2048       4095       2048    1M BIOS boot

Partition table entries are not in disk order.
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D0C48D77-3063-47FE-9F38-A8E5018F3063

Device          Start        End    Sectors  Size Type
/dev/sda1        4096   67112959   67108864   32G Linux RAID
/dev/sda2    67112960   68161535    1048576  512M Linux RAID
/dev/sda3    68161536 3710937500 3642775965  1.7T Linux RAID
/dev/sda4  4291825664 7814037134 3522211471  1.7T Linux RAID
/dev/sda5        2048       4095       2048    1M BIOS boot

Partition table entries are not in disk order.
root@web-host04 ~ #
 
@Daka,

In response to your last post, we can see and conclude that

- cat /proc/mdstat is "empty": no RAID is active,
- df -h shows some uncommon lines: at least the "overlay" filesystem type is not a good sign (one should not want this),
- df -h shows that your (large) 4 TB hard disks are not mounted, as confirmed before by the empty /etc/fstab,
- the partition type of sda1 to sda5 and sdb1 to sdb5 has been set to "Linux RAID" manually: this is not good if RAID is inactive,

and the above is just a rough outline.

To be honest, any manual creation of RAID and mounting of the RAID device (i.e. md0 or md1, etc.) to / (in order to have Plesk in a proper directory with enough space) is dangerous.

One of the reasons is the simple fact that the overlay filesystem has already been mounted on /.

In short, a complete re-install of the whole system would be strongly recommended.

Any idea on how you want to proceed? Go for the full re-installation?

By the way, note that this is a Plesk forum, and this topic thread is actually not about Plesk-related issues.

Regards.....
 
Hm, I would do the following (the commands are also combined in one block below):
1. Set the image to Debian 8 minimal 64-bit (I would expect that to include an auto-created RAID 1; maybe choosing "minimal" was bad?)
2. Restart the server
3. # apt-get update
4. # apt-get install wget
5. # Plesk
5.1 wget http://autoinstall.plesk.com/plesk-installer
5.2 sh ./plesk-installer
5.3 Follow the instructions until done.
6. Set rDNS / hostname
7. Jump and be happy / or not.
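
In one block, steps 3 to 5 would be (same commands as above, nothing extra):

Code:
apt-get update
apt-get install wget
wget http://autoinstall.plesk.com/plesk-installer
sh ./plesk-installer
# then follow the installer prompts until it is done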

This is how I proceeded the last time.
I think maybe I'm missing something.

I need more information than what Odin provides, because in the FAQ and the wiki there are always just 1-2 commands "and you are done".

Any advice or workaround for software RAID would be awesome.
It would save time.
 
@Daka,

Can you search your system for the mdadm.conf file and post its contents?

Have a look at /etc/mdadm/mdadm.conf first and, if not there, run the command find / -name mdadm.conf.
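
That is, for example:

Code:
cat /etc/mdadm/mdadm.conf
find / -name mdadm.conf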

This information can be relevant for a workaround for your problem.

Regards....
 