How to fix ERROR 1153 (08S01): Got a packet bigger than 'max_allowed_packet'

Error Message

When encountering MySQL Error 1153, you'll see a message similar to:

ERROR 1153 (08S01): Got a packet bigger than 'max_allowed_packet' bytes

What It Means

This error occurs when MySQL attempts to send or receive a data packet that exceeds the size limit defined by the max_allowed_packet system variable. MySQL uses packets for communication between the client and server, and this setting controls the maximum size of any single packet.

The error can happen during various operations involving large amounts of data, such as:

  • Inserting large text, BLOB, or JSON values
  • Running complex queries with large result sets
  • Importing large SQL dumps
  • Performing large transactions
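Because the limit applies to each individual packet, you can often anticipate the error by comparing the size of the data you are about to send with the server's current setting. A minimal sketch, assuming PyMySQL and example connection details:

# Compare a payload's size with the server's packet limit before sending it
import pymysql

conn = pymysql.connect(host="localhost", user="username",
                       password="password", database="mydatabase")
with conn.cursor() as cur:
    cur.execute("SELECT @@GLOBAL.max_allowed_packet")
    limit_bytes = cur.fetchone()[0]

payload = b"x" * 50_000_000  # stand-in for a large BLOB/TEXT value
if len(payload) >= limit_bytes:
    print(f"Payload of {len(payload)} bytes exceeds max_allowed_packet ({limit_bytes} bytes)")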

Common Causes

  1. Large data inserts: Inserting very large text, BLOB, or JSON values
  2. Oversized SQL statements: Extremely long SQL queries or multi-queries
  3. Large result sets: Queries returning massive amounts of data
  4. Default configuration too small: The default max_allowed_packet (4MB on older MySQL servers, 64MB on MySQL 8.0, 16MB for many clients) is too small for the workload
  5. Mismatched settings: Different max_allowed_packet values between client and server (a quick check is sketched after this list)
  6. Data import operations: Importing large database dumps or CSV files
  7. Large transactions: Transactions affecting many rows or large objects
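For cause 5 in particular, confirm that the limit the application configures in its client library matches what the server allows. A small check, assuming PyMySQL (which accepts a client-side max_allowed_packet argument) and example credentials:

# Detect a client/server mismatch in max_allowed_packet
import pymysql

CLIENT_LIMIT = 16 * 1024 * 1024  # value the application passes to the driver

conn = pymysql.connect(host="localhost", user="username",
                       password="password", database="mydatabase",
                       max_allowed_packet=CLIENT_LIMIT)
with conn.cursor() as cur:
    cur.execute("SELECT @@GLOBAL.max_allowed_packet")
    server_limit = cur.fetchone()[0]

if CLIENT_LIMIT != server_limit:
    print(f"Client is capped at {CLIENT_LIMIT} bytes, server at {server_limit} bytes")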

How to Fix

Solution 1: Increase max_allowed_packet Size

Check and increase the current setting:

-- Check current max_allowed_packet value
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Increase max_allowed_packet globally (requires SUPER or
-- SYSTEM_VARIABLES_ADMIN; applies to new connections only, since the
-- session value is read-only)
SET GLOBAL max_allowed_packet = 1073741824; -- 1GB, the protocol maximum

For permanent changes, modify the MySQL configuration file (my.cnf/my.ini):

[mysqld]
max_allowed_packet = 1G

[mysql]
max_allowed_packet = 1G

[mysqldump]
max_allowed_packet = 1G

Then restart MySQL to apply the changes.

Solution 2: Configure Client and Server Consistently

Ensure both client and server use compatible settings:

# Server configuration (my.cnf in server)
[mysqld]
max_allowed_packet = 256M

# Client configuration (my.cnf in client or ~/.my.cnf)
[mysql]
max_allowed_packet = 256M

[mysqldump]
max_allowed_packet = 256M

In application code, set the packet limit through the client library where the driver exposes one (PyMySQL does; check your driver's documentation):

# Python example using PyMySQL
import pymysql

db = pymysql.connect(
    host="localhost",
    user="username",
    password="password",
    database="mydatabase",
    max_allowed_packet=268435456  # 256MB client-side cap
)

Solution 3: Split Large Operations into Smaller Chunks

Divide large operations into smaller pieces:

-- Instead of one INSERT carrying the entire value
INSERT INTO large_table (id, blob_column) VALUES (1, HUGE_BLOB_VALUE);

-- Send the value in pieces: insert the first chunk,
-- then append the remaining chunks with CONCAT
INSERT INTO large_table (id, blob_column) VALUES (1, CHUNK1);
UPDATE large_table SET blob_column = CONCAT(blob_column, CHUNK2) WHERE id = 1;
UPDATE large_table SET blob_column = CONCAT(blob_column, CHUNK3) WHERE id = 1;

For application code, implement chunking logic:

# Chunked insert: send the first piece, then append the rest with CONCAT
# so that no single statement exceeds max_allowed_packet
def insert_large_data(connection, row_id, large_data, chunk_size=1_000_000):
    with connection.cursor() as cursor:
        cursor.execute("INSERT INTO large_table (id, blob_column) VALUES (%s, %s)",
                       (row_id, large_data[:chunk_size]))
        for i in range(chunk_size, len(large_data), chunk_size):
            cursor.execute("UPDATE large_table SET blob_column = CONCAT(blob_column, %s) WHERE id = %s",
                           (large_data[i:i + chunk_size], row_id))
    connection.commit()
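For example, with a PyMySQL connection (hostname, credentials, and the file name are placeholders):

import pymysql

conn = pymysql.connect(host="localhost", user="username",
                       password="password", database="mydatabase")
with open("large_file.bin", "rb") as f:
    insert_large_data(conn, row_id=1, large_data=f.read())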

Solution 4: Use External Storage for Very Large Objects

For extremely large binary objects, consider alternative storage approaches:

-- Store file path instead of content
CREATE TABLE documents (
    id INT PRIMARY KEY,
    filename VARCHAR(255),
    file_path VARCHAR(1024),
    file_size BIGINT,
    upload_time TIMESTAMP
);

-- Then store actual files in the filesystem
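A rough sketch of that pattern, assuming PyMySQL and an example storage directory, copies the file to disk and records only its metadata in the documents table defined above:

# Store the file outside MySQL; keep only metadata in the documents table
import os
import shutil
import pymysql

def store_document(connection, doc_id, source_path, storage_dir="/var/data/documents"):
    filename = os.path.basename(source_path)
    dest_path = os.path.join(storage_dir, filename)
    shutil.copy2(source_path, dest_path)  # the actual bytes live on the filesystem
    with connection.cursor() as cursor:
        cursor.execute(
            "INSERT INTO documents (id, filename, file_path, file_size, upload_time) "
            "VALUES (%s, %s, %s, %s, NOW())",
            (doc_id, filename, dest_path, os.path.getsize(dest_path)),
        )
    connection.commit()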

Solution 5: Optimize Import and Export Operations

For large database dumps:

# When creating dumps, keep each generated statement small
# (--skip-extended-insert writes one row per INSERT statement)
mysqldump --max_allowed_packet=256M --net_buffer_length=8K \
  --skip-extended-insert --databases mydb > mydb_dump.sql

# When importing, raise the client's packet limit
mysql --max_allowed_packet=256M --database=mydb < mydb_dump.sql

Solution 6: Use Streaming APIs for Large Results

For large result sets, use streaming/cursor-based approaches:

# Python example: an unbuffered (server-side) cursor streams rows
# instead of loading the whole result set into client memory
import pymysql
import pymysql.cursors

connection = pymysql.connect(host="localhost", user="username",
                             password="password", database="mydatabase")
with connection.cursor(pymysql.cursors.SSCursor) as cursor:
    cursor.execute("SELECT * FROM huge_table")
    for row in cursor:
        process_row(row)  # application-defined handler
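One trade-off with an unbuffered cursor: the connection stays busy until every row has been read (or the cursor is closed), so avoid issuing other queries on the same connection while iterating.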

Solution 7: Monitor and Adjust Based on Actual Usage

Set up monitoring to track packet sizes and adjust accordingly:

-- Enable query logging to identify large operations
SET GLOBAL general_log = 'ON';
SET GLOBAL log_output = 'TABLE';

-- Later, analyze logs for large queries
SELECT * FROM mysql.general_log
WHERE argument LIKE '%BLOB%' OR LENGTH(argument) > 10000;
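To act on those logs, you can pull the longest logged statements and compare them with the server's current limit. A rough sketch, assuming PyMySQL, example credentials, and that the general log is writing to the mysql.general_log table:

# Report the largest logged statements relative to max_allowed_packet
import pymysql

conn = pymysql.connect(host="localhost", user="username",
                       password="password", database="mysql")
with conn.cursor() as cur:
    cur.execute("SELECT @@GLOBAL.max_allowed_packet")
    limit_bytes = cur.fetchone()[0]
    cur.execute("""
        SELECT event_time, LENGTH(argument) AS stmt_bytes
        FROM mysql.general_log
        ORDER BY stmt_bytes DESC
        LIMIT 10
    """)
    for event_time, stmt_bytes in cur.fetchall():
        print(f"{event_time}: {stmt_bytes} bytes "
              f"({100 * stmt_bytes / limit_bytes:.0f}% of max_allowed_packet)")

Remember to disable the general log afterwards (SET GLOBAL general_log = 'OFF'); it adds noticeable overhead on busy servers.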

Cloud Vendor Considerations

When working with MySQL in cloud environments:

  • AWS RDS/Aurora:
    • Configure max_allowed_packet through Parameter Groups (up to 1GB)
    • Use Performance Insights to monitor packet-related issues
    • For Aurora, consider different settings for writer and reader instances
  • Google Cloud SQL:
    • Set max_allowed_packet using database flags
    • Use the import/export service for large data transfers
    • Consider Cloud Storage for object storage instead of BLOBs
  • Azure Database for MySQL:
    • Configure through server parameters
    • Use Data-in/Data-out service for large migrations
    • Monitor performance metrics for packet-related issues

For all cloud environments:

  1. Be aware of provider-specific limits that may affect packet sizes
  2. Consider service-specific import/export tools rather than raw SQL operations
  3. For very large objects, use cloud object storage (S3, GCS, Azure Blob Storage) instead of database storage
  4. Monitor costs, as increased packet sizes may affect network data transfer charges