MySQL Auto Increment Fun

Originally Posted 10/21/2014

Coming from a heavy Microsoft SQL Server background, the concept of the AUTO_INCREMENT attribute in MySQL is very familiar.  In SQL Server we call it the IDENTITY attribute, and the two are conceptually very similar: when a row is inserted into the table, the column with the AUTO_INCREMENT attribute automatically gets the next value in a sequence.  But despite the conceptual similarity, there are a number of differences that a SQL Server database administrator should be aware of.  To start, let's create a database and a table to use as we explore the AUTO_INCREMENT attribute.  You can copy the following script to a file on your system named “auto_increment.sql”, or you can simply run the commands from the mysql prompt to create the objects and seed the table with data.

CREATE DATABASE IF NOT EXISTS moderndba;

 USE moderndba;

 CREATE TABLE IF NOT EXISTS autofun
 (
       employee_id        INT UNSIGNED NOT NULL AUTO_INCREMENT
     , employee_name      VARCHAR(150) NOT NULL
     , employee_level     TINYINT UNSIGNED NULL
     , PRIMARY KEY (employee_id)
 );

 INSERT INTO autofun (employee_name, employee_level)
 VALUES
  ('Joe Smith', 0)
, ('Tim Bradford', 1)
, ('Betty Stafford', 1);

If you chose to copy the script to a file, you can run it as follows.

[student@percona56 ~]$ mysql -u root -p < auto_increment.sql

In SQL Server you can insert directly into an IDENTITY column, but you have to turn on IDENTITY_INSERT first. It can only be turned on for one table at a time per session, and you need elevated permissions on the table to do so.
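As a point of comparison, here is roughly what that dance looks like on the SQL Server side. This is a minimal T-SQL sketch, assuming an equivalent table named dbo.autofun with employee_id declared as an IDENTITY column; it is not part of the MySQL demo below.

SET IDENTITY_INSERT dbo.autofun ON;    -- requires ALTER permission on the table

INSERT INTO dbo.autofun (employee_id, employee_name)
VALUES (10, 'John Wiley');             -- explicit values are only accepted while this is ON

SET IDENTITY_INSERT dbo.autofun OFF;   -- only one table per session can have it ON

Now let's see what happens when we try to insert directly into an AUTO_INCREMENT column in MySQL. Run the following statements from the mysql prompt.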

[student@percona56 ~]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.24-72.2-log Percona Server (GPL), Release 72.2, Revision 8d0f85b

Copyright (c) 2009-2015 Percona LLC and/or its affiliates
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> USE moderndba;
 Reading table information for completion of table and column names
 You can turn off this feature to get a quicker startup with -A

 Database changed
 mysql> INSERT INTO autofun
     -> (employee_id, employee_name)
     -> VALUES
     -> (10, 'John Wiley');
 Query OK, 1 row affected (0.03 sec)

 mysql> SELECT *
     -> FROM autofun;
 +-------------+----------------+----------------+
 | employee_id | employee_name  | employee_level |
 +-------------+----------------+----------------+
 |           1 | Joe Smith      |              0 |
 |           2 | Tim Bradford   |              1 |
 |           3 | Betty Stafford |              1 |
 |          10 | John Wiley     |           NULL |
 +-------------+----------------+----------------+
 4 rows in set (0.00 sec)

 mysql>

As you can see, in MySQL you can insert directly into an AUTO_INCREMENT column as the default behavior. But what happens when we add the next row? Will the employee_id be 4, 11, or some other number? Let's see.

mysql> INSERT INTO autofun (employee_name)
     -> VALUES ('Amanda King');
 Query OK, 1 row affected (0.03 sec)

 mysql> SELECT *
     -> FROM autofun;
 +-------------+----------------+----------------+
 | employee_id | employee_name  | employee_level |
 +-------------+----------------+----------------+
 |           1 | Joe Smith      |              0 |
 |           2 | Tim Bradford   |              1 |
 |           3 | Betty Stafford |              1 |
 |          10 | John Wiley     |           NULL |
 |          11 | Amanda King    |           NULL |
 +-------------+----------------+----------------+
 5 rows in set (0.00 sec)

 mysql>

OK, that makes sense. The last row inserted was 10, so the safest choice for the next row is the last value plus one. But if that is the case, surely an explicitly inserted value must be greater than the largest value already in the column, correct? Let's find out by running the following commands.

mysql> INSERT INTO autofun
     -> (employee_id, employee_name)
     -> VALUES
     -> (5, 'Sarah Wagoner');
 Query OK, 1 row affected (0.03 sec)

 mysql> SELECT *
     -> FROM autofun;
 +-------------+----------------+----------------+
 | employee_id | employee_name  | employee_level |
 +-------------+----------------+----------------+
 |           1 | Joe Smith      |              0 |
 |           2 | Tim Bradford   |              1 |
 |           3 | Betty Stafford |              1 |
 |           5 | Sarah Wagoner  |           NULL |
 |          10 | John Wiley     |           NULL |
 |          11 | Amanda King    |           NULL |
 +-------------+----------------+----------------+
 6 rows in set (0.00 sec)

 mysql>

Alright, so you can insert into the middle of the series. In that case, for safety, the next automatic value must be generated from the largest value in the column rather than from the last value inserted, right?

mysql> INSERT INTO autofun (employee_name)
     -> VALUES ('Mia Brown');
 Query OK, 1 row affected (0.03 sec)

 mysql> SELECT *
     -> FROM autofun;
 +-------------+----------------+----------------+
 | employee_id | employee_name  | employee_level |
 +-------------+----------------+----------------+
 |           1 | Joe Smith      |              0 |
 |           2 | Tim Bradford   |              1 |
 |           3 | Betty Stafford |              1 |
 |           5 | Sarah Wagoner  |           NULL |
 |          10 | John Wiley     |           NULL |
 |          11 | Amanda King    |           NULL |
 |          12 | Mia Brown      |           NULL |
 +-------------+----------------+----------------+
 7 rows in set (0.00 sec)

 mysql>

Yes it does. That seems pretty safe, since you could easily run into problems if the next value were generated from the last value inserted while inserts into the middle of the series are allowed. But it brings up a specific risk that DBAs and developers need to be aware of. Remember that in the script above we made the employee_id column an unsigned integer. The maximum value an unsigned integer can store in MySQL is 4,294,967,295. The risk is much lower with an unsigned integer than with, say, a signed tiny integer (1 byte, with a maximum value of 127), but what happens if someone accidentally inserts a row with an employee_id towards the end of the range? Let's take a look and see what happens.

mysql> INSERT INTO autofun
     -> (employee_id, employee_name)
     -> VALUES
     -> (4294967294, 'Brooke Allen');
 Query OK, 1 row affected (0.03 sec)

 mysql> SELECT *
     -> FROM autofun;
 +-------------+----------------+----------------+
 | employee_id | employee_name  | employee_level |
 +-------------+----------------+----------------+
 |           1 | Joe Smith      |              0 |
 |           2 | Tim Bradford   |              1 |
 |           3 | Betty Stafford |              1 |
 |           5 | Sarah Wagoner  |           NULL |
 |          10 | John Wiley     |           NULL |
 |          11 | Amanda King    |           NULL |
 |          12 | Mia Brown      |           NULL |
 |  4294967294 | Brooke Allen   |           NULL |
 +-------------+----------------+----------------+
 8 rows in set (0.00 sec)

 mysql> INSERT INTO autofun (employee_name)
     -> VALUES ('Chris Walters');
 Query OK, 1 row affected (0.03 sec)


 mysql> SELECT *
     -> FROM autofun;
 +-------------+----------------+----------------+
 | employee_id | employee_name  | employee_level |
 +-------------+----------------+----------------+
 |           1 | Joe Smith      |              0 |
 |           2 | Tim Bradford   |              1 |
 |           3 | Betty Stafford |              1 |
 |           5 | Sarah Wagoner  |           NULL |
 |          10 | John Wiley     |           NULL |
 |          11 | Amanda King    |           NULL |
 |          12 | Mia Brown      |           NULL |
 |  4294967294 | Brooke Allen   |           NULL |
 |  4294967295 | Chris Walters  |           NULL |
 +-------------+----------------+----------------+
 9 rows in set (0.00 sec)

 mysql>

So everything is cool so far, but we are now up against the limit of the range for an unsigned integer, and we got there with only two inserts. (We could have gotten there with one insert if we really wanted.) So what happens when we try to add the next auto-incrementing row to the table? Let's find out.

mysql> INSERT INTO autofun (employee_name)
     -> VALUES ('Ben Forest');
 ERROR 1062 (23000): Duplicate entry '4294967295' for key 'PRIMARY'
 mysql>

So we have reached the end of the range: when MySQL tries to generate a value larger than the largest value in the column, it can't, and instead reuses the maximum value for the data type, which leads to a duplicate primary key error. But what if we remove those rows from the table? Is it really looking at the maximum value currently in the table, or at the maximum value ever inserted? Let's see.

mysql> DELETE FROM autofun WHERE employee_name = 'Chris Walters';
 Query OK, 1 row affected (0.05 sec)

 mysql> SELECT *
     -> FROM autofun;
 +-------------+----------------+----------------+
 | employee_id | employee_name  | employee_level |
 +-------------+----------------+----------------+
 |           1 | Joe Smith      |              0 |
 |           2 | Tim Bradford   |              1 |
 |           3 | Betty Stafford |              1 |
 |           5 | Sarah Wagoner  |           NULL |
 |          10 | John Wiley     |           NULL |
 |          11 | Amanda King    |           NULL |
 |          12 | Mia Brown      |           NULL |
 |  4294967294 | Brooke Allen   |           NULL |
 +-------------+----------------+----------------+
 8 rows in set (0.00 sec)

 mysql> INSERT INTO autofun (employee_name)
     -> VALUES ('Ben Forest');
 Query OK, 1 row affected (0.03 sec)

 mysql>  SELECT *
     -> FROM autofun;
 +-------------+----------------+----------------+
 | employee_id | employee_name  | employee_level |
 +-------------+----------------+----------------+
 |           1 | Joe Smith      |              0 |
 |           2 | Tim Bradford   |              1 |
 |           3 | Betty Stafford |              1 |
 |           5 | Sarah Wagoner  |           NULL |
 |          10 | John Wiley     |           NULL |
 |          11 | Amanda King    |           NULL |
 |          12 | Mia Brown      |           NULL |
 |  4294967294 | Brooke Allen   |           NULL |
 |  4294967295 | Ben Forest     |           NULL |
 +-------------+----------------+----------------+
 9 rows in set (0.00 sec)

 mysql> DELETE FROM autofun WHERE employee_name IN ('Ben Forest', 'Brooke Allen');
 Query OK, 2 rows affected (0.02 sec)

 mysql> INSERT INTO autofun (employee_name)
     -> VALUES ('Nina Gardner');
 Query OK, 1 row affected (0.03 sec)

 mysql>  SELECT *
     -> FROM autofun;
 +-------------+----------------+----------------+
 | employee_id | employee_name  | employee_level |
 +-------------+----------------+----------------+
 |           1 | Joe Smith      |              0 |
 |           2 | Tim Bradford   |              1 |
 |           3 | Betty Stafford |              1 |
 |           5 | Sarah Wagoner  |           NULL |
 |          10 | John Wiley     |           NULL |
 |          11 | Amanda King    |           NULL |
 |          12 | Mia Brown      |           NULL |
 |  4294967295 | Nina Gardner   |           NULL |
 +-------------+----------------+----------------+
 8 rows in set (0.00 sec)

 mysql>

So it is not using the largest value currently in the column plus one; it really is using the next value after the largest value ever inserted. The catch is that when you reach the end of the range, instead of raising an error that the next value is out of range, MySQL takes the closest value in the range and tries to use it. Since we are dealing with the table's primary key, it is worth asking what this behavior means for other tables with relationships back to this key.
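If you want to look at this counter directly instead of inferring it from inserts, the server will tell you. Here is a minimal sketch of how to inspect and reset it; note that on InnoDB in 5.6 the counter is also recalculated from the maximum value in the column when the server restarts.

 -- The next value the table will hand out
 SELECT table_name, auto_increment
 FROM information_schema.tables
 WHERE table_schema = 'moderndba'
   AND table_name = 'autofun';

 -- SHOW TABLE STATUS reports the same counter in its Auto_increment column
 SHOW TABLE STATUS FROM moderndba LIKE 'autofun'\G

 -- After cleaning up runaway rows the counter can be moved back down.
 -- InnoDB silently bumps the requested value up to MAX(employee_id) + 1
 -- if it is lower than that, so this is safe to run.
 ALTER TABLE autofun AUTO_INCREMENT = 12;

Keeping an eye on how close auto_increment is to the column type's maximum is a cheap monitoring check that catches exactly the runaway-insert scenario above.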

I understand that this is an edge case, and most of the time you will not need to worry about this. But it is an interesting behavior and one that I have not seen in other relational database management systems. I hope you found this as interesting as I have.

Installing Percona Server 5.6 on CentOS 6

Installing Percona Server on CentOS 6 couldn’t be easier using yum and the Percona repository.  We start with a minimal install of CentOS 6. I am using CentOS 6 because I am using RHEL 6 at work and want to stay closer to that environment when I work on testing things at home.

First, we need to remove the MySQL libraries that ship with CentOS.

[student@percona56 ~]$ sudo yum -y remove mysql-libs

Next, install the Percona repository package using the following command.

[student@percona56 ~]$ sudo yum -y install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm

Now that we have the repository installed, we can verify it by running:

[student@percona56 ~]$ sudo yum list | grep percona

This should return a listing of the packages available in the Percona repository. For this example we will install not only the server but also the client and a few other packages that are helpful to have around when running MySQL. Because several versions are available in the repository, we will explicitly install the 5.6 version of Percona Server. To install our packages we run:

[student@percona56 ~]$ sudo yum install Percona-Server-shared-56.x86_64 Percona-Server-client-56.x86_64 Percona-Server-server-56.x86_64 percona-toolkit percona-xtrabackup

Now that we have the binaries laid down, we need to initialize the data directory and start MySQL. To do that we run:

[student@percona56 ~]$ sudo touch /etc/my.cnf
[student@percona56 ~]$ sudo /usr/bin/mysql_install_db --defaults-file=/etc/my.cnf --force --datadir=/var/lib/mysql --basedir=/usr/ --user=mysql
[student@percona56 ~]$ sudo service mysql start

We now have MySQL running, and the first thing I like to do is secure the installation using the built-in script provided. We can run the script using the command below. Just hit Enter when it asks for the current root password. We will want to set the root password, remove anonymous users, disallow remote login by root, and remove the test database. We will also want to reload the privileges when finished.

[student@percona56 ~]$ /usr/bin/mysql_secure_installation

I also like to install the sys schema. The sys schema is a set of objects that help you analyze data in the Performance Schema. This schema is included by default in 5.7, and is needed by Workbench reports. If you don’t install it now, you can do so through Workbench. I like to go ahead and get it out of the way.

[student@percona56 ~]$ sudo yum -y install git
[student@percona56 ~]$ cd /tmp
[student@percona56 tmp]$ git clone https://github.com/MarkLeith/mysql-sys
[student@percona56 tmp]$ cd mysql-sys
[student@percona56 mysql-sys]$ mysql -u root -p < ./sys_56.sql

But we still can't use the sys schema because we have not turned on the Performance Schema instrumentation on our instance. In fact, we haven't configured our instance at all. Let's do that now by editing the /etc/my.cnf file. We want the file to look like this when finished.

[mysql]

# CLIENT #
port                           = 3306
socket                         = /var/lib/mysql/mysql.sock

[mysqld]

# GENERAL #
user                           = mysql
default-storage-engine         = InnoDB
socket                         = /var/lib/mysql/mysql.sock
pid-file                       = /var/lib/mysql/mysql.pid

# MyISAM #
key-buffer-size                = 32M
myisam-recover                 = FORCE,BACKUP

# SAFETY #
max-allowed-packet             = 16M
max-connect-errors             = 1000000
sql-mode                       = STRICT_ALL_TABLES,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION

# DATA STORAGE #
datadir                        = /var/lib/mysql/

# BINARY LOGGING #
log-bin                        = /var/lib/mysql/mysql-bin
expire-logs-days               = 14
sync-binlog                    = 1

# CACHES AND LIMITS #
tmp-table-size                 = 32M
max-heap-table-size            = 32M
query-cache-type               = 0
query-cache-size               = 0
max-connections                = 500
thread-cache-size              = 50
open-files-limit               = 65535
table-definition-cache         = 4096
table-open-cache               = 4096

# INNODB #
innodb-flush-method            = O_DIRECT
innodb-log-files-in-group      = 2
innodb-log-file-size           = 256M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table          = 1
innodb-buffer-pool-size        = 1536M     #see https://www.percona.com/blog/2007/11/03/choosing-innodb_buffer_pool_size/ for guidance

# LOGGING #
log-error                      = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes  = 1
slow-query-log                 = 1
slow-query-log-file            = /var/lib/mysql/mysql-slow.log

# PERFORMANCE SCHEMA #
performance_schema
performance_schema_instrument = '%=on'
performance_schema_consumer_events_stages_current = ON
performance_schema_consumer_events_stages_history = ON
performance_schema_consumer_events_stages_history_long = ON
performance_schema_consumer_events_statements_current = ON
performance_schema_consumer_events_statements_history = ON
performance_schema_consumer_events_statements_history_long = ON
performance_schema_consumer_events_waits_current = ON
performance_schema_consumer_events_waits_history = ON
performance_schema_consumer_events_waits_history_long = ON
performance_schema_consumer_global_instrumentation = ON
performance_schema_consumer_thread_instrumentation = ON
performance_schema_consumer_statements_digest = ON

At this point we should be ready to restart MySQL and get started.

[student@percona56 ~]$ sudo service mysql restart
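Once the service is back up, a quick sanity check from the mysql prompt confirms that the Performance Schema is enabled and that the sys schema can see it. These are just illustrative queries, assuming the sys schema was installed from the mysql-sys project as shown earlier.

-- Should report ON
SHOW GLOBAL VARIABLES LIKE 'performance_schema';

-- Confirms the sys schema is installed and reports its version
SELECT * FROM sys.version;

-- A first look at Performance Schema data through the sys views
SELECT * FROM sys.user_summary LIMIT 5;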

I will use this base image going forward as the starting point for all my MySQL explorations.

Percona Live 2015 Wrap Up

With Percona Live 2015 in the books, I thought I should take the time to gather my thoughts.  There was a ton of information shared at the conference.  We sent three people to the main conference and one to the new MySQL 101 conference, and we still had to leave excellent presentations on the table.  But we got what we came for.  We feel more comfortable with MySQL in general and with Galera in particular.  We are rolling out a Percona XtraDB Cluster soon, and not only did this conference clear up some questions, it also helped us tweak our architecture to provide better availability for our customers.

I could go on and on about all the technical knowledge we gained here, but I have already posted notes on each of the sessions I attended.  Beyond the technical content, I was impressed by the vibrant and innovative community surrounding MySQL.  We spoke to people solving some of the most challenging OLTP workloads out there.  Not only did they show us that MySQL can support these workloads, they showed us how MySQL is better suited to them than proprietary database platforms.  MySQL gives you the ability to scale at a cost point that proprietary RDBMSs can't match.

I know some will say that it takes application changes to reach these levels of scale with MySQL.  That is true today, but I saw several presentations where the MySQL community is working on solving this problem by developing a layer of abstraction that “traditional” applications can connect to without having to worry about the complexity.  In fact, MariaDB's MaxScale is already providing this type of functionality.

Heading back home and thinking about how to put all this knowledge to good use, I realize that I will face a lot of resistance.  We have a lot invested in our existing proprietary database platforms.  But talking to folks who have been where I sit, I realized this week that if we don't adapt and speed up our rate of innovation, we will become irrelevant.  I for one plan to evolve!

My Session Notes

JSON support in MySQL 5.7 – Percona Live 2015

I attended the “JSON support in MySQL 5.7” session given by Evgeny Potemkin and Manyi Lu from Oracle.  This was my last session of the conference and my head was swimming with knowledge at this point.  In 5.7 they are adding a JSON data type, JSON functions, and the ability to index JSON data.  Before, your options were to store JSON as either text or binary, but you had to manage all the validation, parsing, etc. yourself.  The new JSON data type is:

  • Optimized for read-intensive workloads
  • Parse and validation on insert only
  • Dictionary format (sorted object keys, fast access to array cells by index)
  • In-place updates
  • Smart size handling (64K & 4G)
  • UTF8 encoding
  • Supports all native JSON types (number, string, bool, object, arrays, etc.)
  • Also supports date, time, datetime, timestamp, and others

They gave some sample inserts that I will need to grab from their slides.  The JSON comparator has polymorphic behavior and provides seamless and consistent comparison, so you can compare JSON to JSON or JSON to SQL.  Different data types will always compare as non-equal, so there is no automatic type conversion.  They then showed a sample select using the JSON data type.  They also introduced the concept of a JSON path, which is a way to identify how to get at JSON data.  The details are on the slides and it was too much to record, so I will have to grab it later for the developers.  But $ is used for the document root, [] indexes into an array, and * is used for a wildcard search.  Other functions include:

  • JSN_VALID()
  • JSN_TYPE()
  • JSN_KEYS()
  • JSN_LENGTH()
  • JSN_DEPTH()
  • JSN_CONTAINS_PATH()
  • JSN_REMOVE()
  • JSN_APPEND()
  • JSN_SET()
  • JSN_INSERT()
  • JSN_REPLACE()
  • JSN_MERGE()
  • JSN_ARRAY()
  • JSN_OBJECT()
  • JSN_EXTRACT()
  • JSN_SEARCH()
  • JSN_QUOTE()
  • JSN_UNQUOTE()

They then showed how you can use a SELECT to return the data as JSON using these functions (again, I need to refer to the slides).  They will support both stored and virtual indexes on JSON data; the primary index must be stored.  Stored indexes can be BTREE, full-text, or GIS, while virtual indexes can only be BTREE.  They plan to add instant ALTER for virtual indexes soon.  They have a lot more JSON work on the roadmap, such as full-text indexing and GIS indexes.
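To make the path syntax concrete, here is a rough sketch of how the pieces they demonstrated fit together. The table name is made up for illustration, and since this was all pre-GA at the time of the session, the function names and exact syntax were still subject to change.

 CREATE TABLE employee_docs
 (
       id     INT UNSIGNED NOT NULL AUTO_INCREMENT
     , doc    JSON NOT NULL
     , PRIMARY KEY (id)
 );

 INSERT INTO employee_docs (doc)
 VALUES ('{"name": "Joe Smith", "level": 0, "skills": ["mysql", "python"]}');

 -- $ is the document root and [] indexes into an array
 SELECT JSN_EXTRACT(doc, '$.name')      AS name
      , JSN_EXTRACT(doc, '$.skills[0]') AS first_skill
 FROM employee_docs
 WHERE JSN_VALID(doc);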

Incremental backups available with Xtrabackup and Percona Server – Percona Live 2015

In the session “Incremental backups available with Xtrabackup and Percona Server”, Vladislav Lesin and George Lorch from Percona went over incremental backups.  There are three ways to get the delta: a full scan, using the redo logs, or a combination of the two.  With a full scan you read all pages and back up any page whose last-change LSN is newer than a specific LSN.  The advantages are that you only store changed pages and no server changes are required; the downside is the overhead of the full scan.  The redo logs can be used to roll a database forward from a specific point since they contain the change data.  The advantage of using the redo log is that there is no full scan; the downside is the space consumed.

In the combined method, the server tracks changed-page LSN information in an RB-tree, using a separate thread that reads the redo logs at checkpoint time.  This avoids the full scan.  They shared an edge case where the size of this structure can become an issue.
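For context, the LSNs these strategies revolve around are easy to peek at from SQL. This is just an illustrative check, and the exact output layout varies by version.

 -- The LOG section of the output shows "Log sequence number" (the current LSN)
 -- and "Last checkpoint at". An incremental backup only needs to copy pages
 -- whose last-change LSN is newer than the ending LSN recorded by the previous
 -- backup (the to_lsn in xtrabackup_checkpoints).
 SHOW ENGINE INNODB STATUS\G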

How DevOps Met Gaming Needs – Percona Live 2015

I attended the “How DevOps Met Gaming Needs” session by Sean Chighizola and Peter Garbes of Big Fish Games.  They talked about their migration to a DevOps model.  They went over the history of Big Fish Games, which started in 2002 with PC/Mac games and uses Akamai for delivery.  They use MySQL to run their site, with physical partitioning by feature/service.  Their tooling included Nagios, Cacti, PHP, Perl, Subversion, bash scripts, etc.  Their releases were site-down deployments with all hands on deck.

The transition was hard to manage.  Change is difficult, and there has to be a driver; for Big Fish the driver was the move into the free-to-play market.  They used Fairway Solitaire as an example of a game that made the transition from the digital-download model to the free-to-play model.  Even after the migration it took 20-120 minutes to push out a change, and a change was pushed every day.

Big Fish moved their daily changes to a self-service process, which was very successful.  But it was not all roses: they had a lot of non-standard changes that did not match the expected workflow.  They saw the DevOps model as a way to solve these residual pains, and they leveraged Jez Humble's Continuous Delivery book as a guide.  They were able to increase velocity while maintaining team size, and they also decreased their failure rate.

They discovered that giving developers operational responsibility makes them think like operators.  They use MEM 3.0 (MySQL Enterprise Monitor) to provide self-service, with the Query Analyzer being a key driver for that decision.  Their view is that IT is not in the business of provisioning; it is in the business of engineering processes that put developers in control.  They also went over some MySQL issues they have run into; I'll need to grab their slides to get the links and dive in deeper.

They have been moving from Cacti to Graphite.  They have not yet implemented self-service database changes, but they have a proof of concept in place that restricts the types of changes allowed and provides a framework for where to put scripts.

Encrypting MySQL data at Google – Percona Live 2015

I attended the “Encrypting MySQL data at Google” session with Jeremy Cole and Jonas Oreland, both from Google.  They started with the why of encrypting data.  The threats they are considering are:

  • access through network APIs (mysql client …) — not protecting against this
  • Access from within a running server (ptrace, memory dumping, etc.) – not protecting against this
  • Lost or misplaced disks — ARE PROTECTING
  • Backups — ARE PROTECTING

Not all threats are feasible to protect against: an attacker with unlimited network access and time will be able to own you, and if they can get root they can own you.  You could encrypt data in the columns from the application, but that is a lot of work, you cannot access your data via SQL, and it is incompatible with third-party products.  You can purchase middleware products to provide column encryption (MyDiamo and CryptDB were given as examples), but indexing becomes an issue with these approaches.  You could just encrypt the disks, but the data is unencrypted once the disks are mounted.

In their approach they wanted to encrypt all user data, including temporary tables and the InnoDB data and log files, and to make drive-by data exfiltration impossible; an attacker needs both the data and the keys to decrypt.  InnoDB is organized in 16K pages, and they encrypt all pages except page 0, leaving the page headers and trailers unencrypted.  They also avoid using the same key to encrypt multiple similar pieces of data, so an attacker cannot make guesses based on known attributes of InnoDB pages (e.g. lots of zeros in the middle of a page due to initialization).  They encrypt all log blocks except the first four, and each block is encrypted with a different key.

They use Google's internal key management infrastructure, which is not open source, so you will need to roll your own.  Keys are never stored on disk; they are only used in memory.  They did, however, publish a key management plugin interface so that someone else can write an open source solution.  They rotate keys on the redo logs by writing dummy records to age out blocks as needed.  Temporary tables should age themselves out within MySQL, and binary and relay logs are encrypted with the latest key and similarly age out.  For InnoDB data they keep the key version in the page header, and background threads re-encrypt pages as needed to rotate keys.  Both the number of threads and how many IOPS should be used for rotation are configurable.

The encryption works with replication as well.  Code available at code.google.com/p/google-mysql.