InnoDB: running out of AUTO_INCREMENT values

Most InnoDB primary keys are built on integer columns with the AUTO_INCREMENT option (which is a very good practice, for reasons that are outside the scope of this article). But we have to make sure that we don't run out of AUTO_INCREMENT values. If that happens, we will get errors like this:

ERROR 167 (22003): Out of range value for column 'a' at row 1

Obviously, when creating tables, we should use a type that is sufficiently big, and make it UNSIGNED to avoid wasting half of its range. But there are also some details about AUTO_INCREMENT that we should keep in mind.

First, the value to monitor is not MAX(id), but the AUTO_INCREMENT column in information_schema.TABLES. This column shows the next AUTO_INCREMENT value that will be assigned.
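
For example, a query like the following shows the next value for a given table (my_database and my_table are placeholders):

SELECT AUTO_INCREMENT
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'my_database' AND TABLE_NAME = 'my_table';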

A used value is lost. It doesn't matter if you delete a row: its AUTO_INCREMENT value will not be reused. This is why we should monitor the next value in information_schema: it is possible that we have used up all the AUTO_INCREMENT values even though MAX(id) is much lower.

Values are lost even for transactions that fail or are rolled back. This is the reason why the next AUTO_INCREMENT value is generally higher, sometimes much higher, than MAX(id). This happens especially if we have periodic bulk inserts that add many rows in one transaction. If such an operation fails, it wastes AUTO_INCREMENT values. If it fails often, we could run out of values.
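
A minimal demonstration of the effect, on a throwaway table:

CREATE TABLE t (id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY) ENGINE = InnoDB;
START TRANSACTION;
INSERT INTO t VALUES (NULL); -- reserves and uses value 1
ROLLBACK;                    -- the row is gone, but value 1 will not be reused
INSERT INTO t VALUES (NULL); -- gets value 2
SELECT MAX(id) FROM t;       -- returns 2; the next AUTO_INCREMENT value is 3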

Easy trick

An easy trick is to use a command like this:

ALTER TABLE table_name AUTO_INCREMENT = 1000;

But this only works if all values in the table are lower than the specified AUTO_INCREMENT. It can be useful if a bulk insert failed and no new rows have been added since, but typically it only postpones the problem a bit.
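
For instance, suppose that MAX(id) is 5000 but a failed bulk insert pushed the counter much higher (the numbers here are made up):

SELECT MAX(id) FROM table_name;  -- e.g. 5000
SELECT AUTO_INCREMENT
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'table_name';  -- e.g. 1000000
ALTER TABLE table_name AUTO_INCREMENT = 5001;  -- reclaims the wasted range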

Or we could change all the ids to fill the holes, and then set AUTO_INCREMENT accordingly. This is possible if, for example, the server is not used at night or during the weekend. But in other cases it is not a viable solution: modifying a primary key is not a cheap operation for InnoDB, and most likely we would have to do it for many rows. Also, we would need to lock the table to prevent new rows (with high values) from being inserted in the process.

So, what can we do?

If we have a write-heavy workload, lowering the AUTO_INCREMENT values is not trivial, because new rows are inserted all the time, and changing the id of many rows can be a long operation. If we do it in one big transaction to make things faster, it locks the table. But we can do something similar to what pt-online-schema-change does to alter a table without locks. Let's see the procedure step by step.

For our example, we will assume the following trivial table:

CREATE TABLE employee
(
    id INT UNSIGNED AUTO_INCREMENT,
    name VARCHAR(100),
    PRIMARY KEY (id)
);

1)

First, let's create a table with the same structure. The ids in the new table will start from 1. Let's also add a column that references the original table's id.

CREATE TABLE tmp_employee LIKE employee;
ALTER TABLE tmp_employee ADD COLUMN old_id INT UNSIGNED NULL;
ALTER TABLE tmp_employee ADD UNIQUE INDEX idx_old_id (old_id);

2)

Now we need to add a trigger to the original table, so that every new row is also inserted into the new table:

DELIMITER ||

CREATE TRIGGER employee_ai
    AFTER INSERT
    ON employee
    FOR EACH ROW
BEGIN
    INSERT INTO tmp_employee (old_id, name) VALUES (NEW.id, NEW.name);
END ||

DELIMITER ;

Note that the original table's id and the new table's old_id column are kept in sync.
This is just an example. In the real world, I would also create triggers for DELETE and UPDATE. Those triggers need a reference to the corresponding row in the new table, which is why old_id exists and is indexed.
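
For completeness, such triggers could look like this (a sketch, not part of the original procedure):

DELIMITER ||

CREATE TRIGGER employee_ad
    AFTER DELETE
    ON employee
    FOR EACH ROW
BEGIN
    DELETE FROM tmp_employee WHERE old_id = OLD.id;
END ||

CREATE TRIGGER employee_au
    AFTER UPDATE
    ON employee
    FOR EACH ROW
BEGIN
    UPDATE tmp_employee SET name = NEW.name WHERE old_id = OLD.id;
END ||

DELIMITER ;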

What if the table already has triggers? Fortunately, MySQL 5.7 and MariaDB 10.2 support multiple triggers per timing/event. With older versions, just modify the existing triggers and add the queries you need.
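
On those versions, our trigger from step 2 can be added next to an existing one, and the optional FOLLOWS clause controls the execution order. A sketch, where existing_trigger is a placeholder for a trigger already defined on the table:

CREATE TRIGGER employee_ai
    AFTER INSERT
    ON employee
    FOR EACH ROW
    FOLLOWS existing_trigger
    INSERT INTO tmp_employee (old_id, name) VALUES (NEW.id, NEW.name);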

3)

Copy the rows from the original table to the new table. If the new table already contains some rows (added by our INSERT trigger), we won't copy them again.

INSERT INTO tmp_employee (old_id, name)
    SELECT id, name
        FROM employee
        WHERE
            (SELECT COUNT(*) FROM tmp_employee) = 0
            OR id < (SELECT MIN(old_id) FROM tmp_employee);

We still populate the old_id column, so that the DELETE and UPDATE triggers can find the corresponding rows.

We didn't mention the new table's primary key in the INSERT, so its values are generated automatically, starting from 1.

4)

Until now, nothing we did should have caused any damage. But the next step is more dangerous. Before proceeding, verify that we didn't do anything wrong and that our transactions didn't fail.

5)

If everything’s ok, let’s switch the tables:

RENAME TABLE
    employee TO old_employee,
    tmp_employee TO employee;

This operation is atomic, so no query is expected to fail.

6)

Drop the old table and the old_id column.
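
In code (dropping old_employee also drops the INSERT trigger, which followed the table through the rename):

DROP TABLE old_employee;
ALTER TABLE employee DROP COLUMN old_id;  -- this also drops the idx_old_id index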

Enjoy!

Prepared Statements in MariaDB 10.2

Prepared statements are particularly useful in stored procedures. In fact, they are the only way to execute dynamic SQL, which means, for example, that the list of ORDER BY columns can be in a variable. You do this by composing an SQL string, as you would in other languages.

However, in MySQL and up to MariaDB 10.1, there are some annoying limitations:

  • Prepared statements exist at the connection level. Even if you declare them in a stored procedure, they are global. If you are writing a stored procedure that should be usable in many contexts (maybe it's part of an open source library), there is a risk of overwriting existing prepared statements. And there is no way to get a list of existing prepared statements.
  • A prepared statement is prepared from a literal string or a user variable. A literal string cannot contain variables. A user variable exists at the session level, so again there is a risk of overwriting something, especially if you always call the variable @sql, as a lot of people tend to do.
  • Prepared statements are verbose. You need to run 3 different statements (PREPARE, EXECUTE, DEALLOCATE PREPARE) just to execute one dynamic SQL statement, as the example below shows.
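
To make this concrete, here is the traditional pattern (note the session-level @sql user variable, which any other code running on the same connection could overwrite):

SET @sql = 'SELECT COUNT(DISTINCT user) FROM mysql.user';
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;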

First, let me say that the first problem is not solved. I hope it will be solved in a future version, so that it becomes easier to distribute stored procedure libraries. After all, MariaDB 10.3 will have several improvements to stored procedures.

The second problem is solved in 10.2. You can pass any SQL expression (or, at least, all the expressions I tested) to PREPARE, including a local variable, which means that there will be no conflicts with variables declared by the user. For example:

-- Inside a stored procedure body (DECLARE is only valid there):
DECLARE v_query TEXT DEFAULT 'SELECT COUNT(DISTINCT user) FROM mysql.user';
PREPARE stmt FROM v_query;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

But this example is still verbose. It demonstrates what I stated above: you need at least 3 statements just to execute a dynamic query. MariaDB 10.2 solves this problem by introducing EXECUTE IMMEDIATE, which is basically a shortcut:

DECLARE v_query TEXT DEFAULT 'SELECT COUNT(DISTINCT user) FROM mysql.user';
EXECUTE IMMEDIATE v_query;

As you can see, we don't need PREPARE or DEALLOCATE PREPARE if we use EXECUTE IMMEDIATE. This is nice to have in stored procedures, but it is less useful outside of them, because the statement is reprepared at every execution, even if we executed it before.
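
To tie this to the dynamic ORDER BY use case mentioned at the beginning, here is a minimal sketch of a procedure built around EXECUTE IMMEDIATE (the procedure and parameter names are made up, and stripping backticks is only a crude guard against injection):

DELIMITER ||

CREATE PROCEDURE list_users_sorted(IN p_order_col VARCHAR(64))
BEGIN
    -- The query lives in a local variable: no session-level @sql is needed
    DECLARE v_query TEXT DEFAULT CONCAT(
        'SELECT user, host FROM mysql.user ORDER BY `',
        REPLACE(p_order_col, '`', ''),
        '`'
    );
    EXECUTE IMMEDIATE v_query;
END ||

DELIMITER ;

CALL list_users_sorted('user');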

Enjoy!