Changing the default database to /dev/sdb


#1

Hey,

I’ve just downloaded the appliance and tried to configure the database, but it is not working.
There is a default database, and I want to change it to use /dev/sdb (80 GB).

I’ve tried to use appliance_console, but it complains that an existing database exists. bundle exec rake db:drop, evm:db:reset, etc. give the same error. appliance_console will go through the setup process and then tell me that the database already exists.

I just want to delete the existing database and recreate it using /dev/sdb as storage, either from the command line or in the appliance.

Has anybody done this? Any clues?


#2

Hey @sergio_ocon,

I’m pretty sure there isn’t anything for this in the console.
We ship with a logical volume mounted for the database to use.
If you want to add more space to that volume you can do that by attaching additional storage and extending the existing volume and filesystem.
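
For example (a rough sketch, assuming the new disk shows up as /dev/vdb and the default vg_data/lv_pg layout with an XFS filesystem that ships on the appliance):

$ # add the new disk to the data volume group
$ sudo vgextend vg_data /dev/vdb
$ # grow the database logical volume into all of the new free space
$ sudo lvextend -l +100%FREE /dev/vg_data/lv_pg
$ # grow the XFS filesystem to match the new volume size
$ sudo xfs_growfs /var/opt/rh/rh-postgresql95/lib/pgsql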

If you specifically want to use a separate disk (for example, if you want the data disk to live on separate backing storage from the system disk), you can unmount the volume and remove its line from /etc/fstab; the console should then prompt you to add a new disk the next time you attempt to configure the database.
This is a bit more involved than I made it seem, and it will also leave some unused space on the base disk where the database partition is, but it is another option.
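
Roughly, for the unmount part (a sketch, assuming the default mount point; stop the appliance processes using the database first):

$ # unmount the database volume
$ sudo umount /var/opt/rh/rh-postgresql95/lib/pgsql
$ # drop its entry from /etc/fstab so it stays unmounted across reboots
$ sudo sed -i '\|/var/opt/rh/rh-postgresql95/lib/pgsql|d' /etc/fstab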

Hopefully this helps.


#3

Hi @carbonin, that sounds good, but I was expecting something easier. It is complex to change the region in the appliance if the database is there from the beginning, and there are several reasons to use different storage. And the CLI is failing because it does not reset the database even though the option is there.

I would love to have one of the following options:

  • By default, use an external hard drive. That would largely reduce the footprint if I am not using the database in all appliances.
  • As an alternative, a script that builds the appliance, so I can choose which database version, Rails version, storage, etc. to use. So I download the source code (using whatever branch I have), configure Ruby and whatever else is needed, and launch the script to get an environment that will keep working even if I reboot it.

#4

This should be rather easy actually. If you allow the default installation to come up after first boot, you should be able to select the “Reset Configured Database” option in the “Database Configuration” menu. This should prompt for a new region number and re-initialize the currently configured database to use the specified region id. If this doesn’t work, that is a separate bug and it would be great if you could open that up as an issue in the manageiq-gems-pending repo.

I think the decision was made to favor ease of use for initial deploy (just turn it on and you can start using ManageIQ) over ease of configuration for more complex deployments (multi-server and multi-region). If the community wants to revisit this decision I would want to bring @Fryguy and @chessbyte into the discussion.

This would be a complex project for the VM appliance, but it describes rather accurately what our OpenShift template accomplishes. One of the results of the re-architecture will be to make the template our default deployment mechanism, so I would be hesitant to spend too much time working on a VM image deploy tool when deploying using containers is the direction we are moving.


#5

Hi @sergio_ocon, configuring a new region will destroy all existing data, so it doesn’t matter if the database is there from before.

In addition to @carbonin’s comments, you have to delete (if they exist) any partitions or logical volumes (LV, VG, and PV).

Three questions:

  1. Do you have an all-in-one appliance, or an appliance only for the database?
  2. Where is the database installed? (Which disk?)
  3. Was the disk /dev/sdb used before? (If yes, you can use dd to wipe the headers of the disk, for example dd if=/dev/zero of=/dev/sdb bs=512 count=1000 to zero the first ~500 KB of the disk; otherwise the appliance won’t recognize it.) Use this with care; see the sketch after this list.
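
A sketch of that wipe (destructive, so double-check the device name first; the wipefs alternative is an addition here, not something the appliance requires):

$ # zero the first 1000 sectors (512 bytes each, about 500 KB) to clear old partition/LVM signatures
$ sudo dd if=/dev/zero of=/dev/sdb bs=512 count=1000
$ # alternatively, wipefs removes known filesystem/LVM signatures
$ sudo wipefs --all /dev/sdb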

#6
  1. One appliance just for the database; these are out-of-the-box appliances.
  2. The default location for MIQ in RHV.
  3. No, I just added 80 GB as a second hard drive, and I am trying to add it.

#7

# lsblk
NAME                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                             11:0    1 1024M  0 rom  
vda                            252:0    0   50G  0 disk 
├─vda1                         252:1    0  512M  0 part /boot
├─vda2                         252:2    0   26G  0 part 
│ ├─vg_system-lv_os            253:0    0  4,5G  0 lvm  /
│ ├─vg_system-lv_swap          253:1    0  5,9G  0 lvm  [SWAP]
│ ├─vg_system-lv_home          253:3    0    1G  0 lvm  /home
│ ├─vg_system-lv_tmp           253:4    0    1G  0 lvm  /tmp
│ ├─vg_system-lv_var_log_audit 253:5    0  512M  0 lvm  /var/log/audit
│ ├─vg_system-lv_var_log       253:6    0   11G  0 lvm  /var/log
│ └─vg_system-lv_var           253:7    0    2G  0 lvm  /var
├─vda3                         252:3    0   10G  0 part /var/www/miq_tmp
├─vda4                         252:4    0    1K  0 part 
└─vda5                         252:5    0 13,5G  0 part 
  └─vg_data-lv_pg              253:2    0 13,5G  0 lvm  /var/opt/rh/rh-postgresql95/lib/pgsql
vdb                            252:16   0   80G  0 disk 
# lvs

  LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_pg            vg_data   -wi-ao----  13,48g                                                    
  lv_home          vg_system -wi-ao----   1,00g                                                    
  lv_os            vg_system -wi-ao----   4,50g                                                    
  lv_swap          vg_system -wi-ao----   5,86g                                                    
  lv_tmp           vg_system -wi-ao----   1,00g                                                    
  lv_var           vg_system -wi-ao----   2,00g                                                    
  lv_var_log       vg_system -wi-ao----  11,00g                                                    
  lv_var_log_audit vg_system -wi-ao---- 512,00m  

#8

SOLVED

What I’ve done:

$ # add the new 80 GB disk to the existing data volume group
$ sudo vgextend vg_data /dev/vdb
$ # extend lv_pg by the free space available on /dev/vdb
$ sudo lvextend /dev/vg_data/lv_pg /dev/vdb
$ # grow the XFS filesystem to fill the extended volume
$ sudo xfs_growfs /var/opt/rh/rh-postgresql95/lib/pgsql

And after that I have 93,5G in /var/opt/rh/rh-postgresql95/lib/pgsql
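
A quick way to confirm the result (both commands only report, they change nothing):

$ # the logical volume should now span both disks
$ sudo lvs vg_data
$ # the mounted filesystem should report the grown size
$ df -h /var/opt/rh/rh-postgresql95/lib/pgsql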

However, I have two other systems each using 13,5 G for nothing. I will use that space for other things, but it should be optional.


#9

You should unmount the filesystem and remove your ‘lv_pg’ and ‘vg_data’ (both are associated with the postgres database). Additionally, you need to delete that logical volume’s line in fstab. Then you will be able to configure a new database from the appliance console menu, and it will ask you which partition to install on.
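
Roughly, after unmounting as described earlier in the thread (destructive; assumes the default layout from the lsblk output above, where /dev/vda5 is the physical volume behind vg_data):

$ # destroy the database logical volume and its volume group
$ sudo lvremove vg_data/lv_pg
$ sudo vgremove vg_data
$ # clear the LVM label from the underlying partition as well
$ sudo pvremove /dev/vda5
$ # then remove the old entry from /etc/fstab and re-run appliance_console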

Edit: sorry, I’m writing from my smartphone and my answer came late. If your database can/needs to run on your boot device, then your solution is great.


#10

Thanks @francisco1080, that should work too.


#11

Has anybody tried this?

appliance_console_cli  --region=10 --internal --password="smartvm" --key --force-key --dbdisk=/dev/sdb

I read about it in a tutorial.