
AWS RDS and MySQL's SUPER privilege

While moving a RackTables database from a classic MySQL server to Amazon's RDS service, I found two things:

  • RackTables requires the SUPER privilege on the MySQL DB to create triggers that run as a different MySQL user than its own, and
  • AWS RDS does not provide SUPER privileges.

When I tried to import the data from the old RackTables DB into the RDS DB, I got the error: You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable).

AWS provides a setting to enable functions, procedures, and triggers for an instance of MySQL on Amazon RDS.

I followed the guide and created a database parameter group with the option log_bin_trust_function_creators = true for the RackTables DB.
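For reference, the same parameter-group setup can be sketched with the AWS CLI; the group name, engine family, and instance identifier below are assumptions about my setup, so adjust them to yours:

```shell
# Create a custom parameter group, enable the option, and attach it
# to the RDS instance. Names and the mysql5.6 family are placeholders.
aws rds create-db-parameter-group \
    --db-parameter-group-name racktables-params \
    --db-parameter-group-family mysql5.6 \
    --description "RackTables: allow trigger creation without SUPER"
aws rds modify-db-parameter-group \
    --db-parameter-group-name racktables-params \
    --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"
aws rds modify-db-instance \
    --db-instance-identifier racktablesdb \
    --db-parameter-group-name racktables-params
```

Note that a dynamic parameter with ApplyMethod=immediate takes effect without rebooting the instance.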

After rerunning the import, the error changed slightly to You do not have the SUPER privilege and binary logging is enabled.

This is caused by the DEFINER statements in the DB dump:

/*!50003 CREATE*/ /*!50017 DEFINER=`root`@`localhost`*/ /*!50003 TRIGGER `EntityLink-before-insert` BEFORE INSERT ON `EntityLink` FOR EACH ROW

On the RDS instance there was no root user, but a rootuser, so I changed the .sql file with the dump to read

/*!50003 CREATE*/ /*!50017 DEFINER=`rootuser`@`localhost`*/ /*!50003 TRIGGER `EntityLink-before-insert` BEFORE INSERT ON `EntityLink` FOR EACH ROW

but the import still failed with the same error, so I checked the user privileges in MySQL:

[root@racktables.example.com tmp]$ mysql -u rootuser -h racktablesrdb.example.us-east-1.rds.amazonaws.com -P 3306 -p -e "select User,Host from mysql.user;"
Enter password: 
+----------------+-----------+
| User           | Host      |
+----------------+-----------+
| racktablesuser | %         |
| rootuser       | %         |
| rdsadmin       | localhost |
+----------------+-----------+

Then I remembered that % does not include localhost in MySQL. Changing the .sql file to

/*!50003 CREATE*/ /*!50017 DEFINER=`rootuser`@`%`*/ /*!50003 TRIGGER `EntityLink-before-insert` BEFORE INSERT ON `EntityLink` FOR EACH ROW

allowed the import to complete successfully.
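Rather than editing the dump by hand, the DEFINER rewrite can be done in one pass with sed; "racktables.sql" is a placeholder for the dump file, and -i.bak keeps a backup of the original:

```shell
# Rewrite every DEFINER clause in the dump to the RDS user and wildcard host.
sed -i.bak 's/DEFINER=`root`@`localhost`/DEFINER=`rootuser`@`%`/g' racktables.sql
```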

Obviously, it's much safer to specify an actual hostname than to use the % wildcard.


Resize physical disk of VMware guest on Linux

What

How to resize the disk of a VMware virtual machine that's using a physical disk as its storage backend. The disk the virtual machine is using is an LVM logical volume, which is very easy to extend.

In this article I'm extending the disk, which is always easier than reducing. If you're reducing your disk, make sure the new size is still MORE than the space used on the file system, or you will lose data. And make a backup before you touch anything.

Why

I'm running Windows 7 in a virtual machine on my Linux host. I figured 20 GB would be plenty for a guest OS I keep for my USB dongles that are not supported in Linux.

After the installation, Windows left me with only 7 GB of free space. And to make use of Microsoft's cheap upgrade to Windows 8 program, 25 GB of disk space is a requirement. Time to grow that 20 GB to 30 GB!

How

Preparation

  • power down the virtual machine
  • go to the directory that holds the virtual machine's disk file with extension .vmdk and make a backup of the file
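The backup step is a single copy; the filename below is an assumption based on my VM's name:

```shell
# Back up the disk descriptor before touching anything.
cp vm-win7.vmdk vm-win7.vmdk.bak
```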

Extend the logical volume

The original size of my LV was 20 GiB, or to be more precise: 2560 logical extents of 8 MiB:

 $ lvdisplay /dev/vg00/vm-win7 
  --- Logical volume ---
  LV Path                /dev/vg00/vm-win7
  LV Name                vm-win7
  VG Name                vg00
  LV UUID                739fos-h2dt-4k9f-Ghai-HiKk-AAAA-9Xqtjz
  LV Write Access        read/write
  LV Creation host, time host, 2012-11-19 21:33:59 +0100
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:9

I increased the LV by 50 % by adding 1280 logical extents:

 $ lvextend -l +1280 /dev/vg00/vm-win7
  Extending logical volume vm-win7 to 30.00 GiB
  Logical volume vm-win7 successfully resized

There's a good reason for sizing the LV in logical extents (-l) rather than in the easier-to-read megabytes/gigabytes with -L, as you will see in the next step.
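As a quick sanity check of the extent math (lvdisplay reported 2560 LEs for 20 GiB, i.e. 8 MiB per extent):

```shell
# 2560 original extents + 1280 added = 3840 extents of 8 MiB each.
echo $(( (2560 + 1280) * 8 / 1024 ))   # new LV size in GiB -> 30
```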

For more information on LV resizing, read Extending a logical volume in the LVM HOWTO.

VM configuration

The LV of the VM has been extended, but the VM is still seeing the original size. The size is defined in the VM's vmdk file:

...
# Extent description
RW 41943040 FLAT "/dev/vg00/vm-win7" 0
...

The LV /dev/vg00/vm-win7 is defined as 41943040 sectors, and one sector is 512 bytes, giving a total of 21474836480 bytes (20 GiB).

Because I used logical extents and not megabytes or gigabytes to extend the LV, calculating the new number of sectors is very easy: 41943040 + 50 % = 62914560. So the line in the vmdk is changed to:

RW 62914560 FLAT "/dev/vg00/vm-win7" 0
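The sector arithmetic can be double-checked in the shell:

```shell
# 50 % more sectors: 41943040 * 3 / 2; at 512 bytes per sector that is 30 GiB.
echo $(( 41943040 * 3 / 2 ))                      # -> 62914560
echo $(( 62914560 * 512 / 1024 / 1024 / 1024 ))   # -> 30 (GiB)
```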

Verify the new size in the virtual machine's settings: under Hardware, go to Hard Disk and you will see the resized capacity under Disk Information.

For more information on VMware's VMDK disk format, read the VMware Virtual Disk Format 5.0 technical note.

Resize the FS in the VM

The LV has been resized and VMware is aware of it, but the VM is still using a partition table and file system for the old size.

  • download the GParted Live CD and connect it to your VM's virtual CD drive
  • boot into GParted and resize the VM's disk to use the newly added space
  • shut down the VM and remove the virtual CD
  • start the Windows VM as usual
  • Windows will want to run a file system check, DO NOT CANCEL THIS!
  • after the check, Windows will reboot and when it comes back the disk will show the new capacity!


Dumping MySQL databases with BackupPC

I'm using automysqlbackup to handle my MySQL dumps. This worked well when launched in a shell, but when BackupPC ran the exact same command as user backuppc, it failed: Backup failed on example.com (DumpPreUserCmd returned error status 256)

After some digging, I found the actual error from the backup script: mkfifo: cannot create fifo `mypipe': Permission denied

or as automysqlbackup calls it: Error: Failed to create the named pipe (fifo) for reading in all databases. Exiting.

My workaround: create a simple wrapper script that calls the automysqlbackup script.

#!/bin/bash
# Run automysqlbackup from $HOME, a directory the backuppc user can
# write to, so the script can create its named pipe (fifo) there.
cd "${HOME}"
/usr/local/bin/automysqlbackup /etc/automysqlbackup/$(hostname -f).conf
exit $?

Granted, this doesn't solve the actual problem, but at least automysqlbackup dumps my DBs.
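For completeness, the wrapper is what BackupPC's DumpPreUserCmd then points at. The script path and name below are assumptions about my setup, and the exact invocation (direct or via ssh to the client) depends on how your BackupPC host reaches the DB server:

```perl
$Conf{DumpPreUserCmd} = '/usr/local/bin/automysqlbackup-wrapper.sh';
```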
