sql-splitter

Split SQL dumps at 400+ MB/s

High-performance CLI tool for splitting large SQL dump files into individual table files. Written in Rust. 1.25x faster than the Go version on 10GB files.

Why sql-splitter?

Working with large SQL dumps is painful. Importing a 10GB file takes forever, and you often just need a few tables. sql-splitter lets you split the dump into individual files so you can import only what you need.

  • Blazing Fast: 400+ MB/s throughput
  • Memory Efficient: ~50 MB constant usage
  • Zero-Copy: no garbage collection

Quick Start

Split a SQL file in seconds.

1. Build from source

git clone https://github.com/helgesverre/sql-splitter.git
cd sql-splitter
cargo build --release
2. Split your dump

./target/release/sql-splitter split database.sql --output=tables

That's it! Each table gets its own .sql file in the output directory.

Installation

Using Cargo

If you have Rust 1.70+ installed:

cargo install --git https://github.com/helgesverre/sql-splitter

Build from Source

git clone https://github.com/helgesverre/sql-splitter.git
cd sql-splitter
cargo build --release
sudo cp target/release/sql-splitter /usr/local/bin/

Optimized Build (Best Performance)

Build with CPU-specific optimizations for maximum throughput:

RUSTFLAGS="-C target-cpu=native" cargo build --release

Verify installation:

sql-splitter --version

User Guide

Commands

split

Split a SQL dump file into individual table files.

# Basic usage
sql-splitter split database.sql

# Custom output directory
sql-splitter split database.sql --output=tables

# Show progress
sql-splitter split database.sql --progress

# Split only specific tables
sql-splitter split database.sql --tables=users,posts

# Dry run (preview without writing)
sql-splitter split database.sql --dry-run

analyze

Analyze a SQL file and display table statistics.

sql-splitter analyze database.sql
sql-splitter analyze database.sql --progress

Split Options

Flag        Short   Description                        Default
--output    -o      Output directory                   output
--verbose   -v      Verbose output                     false
--progress  -p      Show progress                      false
--tables    -t      Filter tables (comma-separated)    -
--dry-run   -       Preview without writing            false

Supported Statement Types

sql-splitter recognizes and routes these SQL statement types to their respective table files:

  • CREATE TABLE
  • INSERT INTO
  • CREATE INDEX
  • ALTER TABLE
  • DROP TABLE

Other statements (SELECT, UPDATE, DELETE, etc.) are skipped.
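
To make the routing concrete, here is a minimal sketch of the kind of prefix matching involved. The function names are assumptions for illustration, not sql-splitter's actual internals:

/// Illustrative sketch (assumed names, not the real sql-splitter code):
/// classify one complete statement and return the table whose file it
/// should be appended to, or None if the statement type is skipped.
fn target_table(stmt: &str) -> Option<String> {
    let s = stmt.trim_start();
    let upper = s.to_ascii_uppercase();

    // Statement types whose next identifier is the table name.
    for prefix in ["CREATE TABLE", "INSERT INTO", "ALTER TABLE", "DROP TABLE"] {
        if upper.starts_with(prefix) {
            return first_identifier(&s[prefix.len()..]);
        }
    }
    // CREATE INDEX idx ON <table> (...): the table follows the ON keyword.
    if upper.starts_with("CREATE INDEX") {
        if let Some(pos) = upper.find(" ON ") {
            return first_identifier(&s[pos + 4..]);
        }
    }
    None // SELECT, UPDATE, DELETE, etc. are skipped
}

/// Pull out the first identifier, skipping IF [NOT] EXISTS noise words
/// and stripping trailing punctuation and optional backticks.
fn first_identifier(rest: &str) -> Option<String> {
    let raw = rest
        .split_whitespace()
        .find(|w| !matches!(w.to_ascii_uppercase().as_str(), "IF" | "NOT" | "EXISTS"))?;
    Some(raw.trim_end_matches(';').trim_end_matches('(').trim_matches('`').to_string())
}

For example, target_table("INSERT INTO `users` VALUES (1);") returns Some("users"), while a SELECT statement returns None.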

Examples

Split a large production dump

sql-splitter split production-backup.sql -o tables -p

Extract only specific tables

sql-splitter split database.sql --tables=users,orders,products

Preview before splitting

sql-splitter split database.sql --dry-run

Analyze tables before splitting

sql-splitter analyze database.sql

Import a single table

mysql -u root -p mydb < tables/users.sql

Performance

Benchmarks on Apple M2 Max:

Metric             Value
Parser Throughput  400-500 MB/s
vs Go Version      1.25x faster on 10GB files
Memory Usage       ~50 MB constant
Cold Start         ~5ms

Actual performance depends on disk I/O speed and file complexity.

FAQ

Does it modify my source file?

No. sql-splitter only reads from your source file. It never modifies the original.

What about statements with semicolons in strings?

sql-splitter correctly handles semicolons inside quoted strings. It tracks quote boundaries and escape sequences.
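
As a simplified illustration of that idea (assumed code, not the real parser; it also ignores details like doubled-quote '' escapes), a boundary scanner might look like this:

/// Simplified sketch (assumed, not the actual parser): return the offset
/// just past the next statement-terminating `;`, ignoring semicolons
/// inside quoted strings and backtick identifiers.
fn next_statement_end(buf: &[u8]) -> Option<usize> {
    let mut quote: Option<u8> = None; // currently open quote character
    let mut i = 0;
    while i < buf.len() {
        let b = buf[i];
        match quote {
            Some(q) => {
                if b == b'\\' {
                    i += 1; // backslash escape: skip the escaped byte
                } else if b == q {
                    quote = None; // the string or identifier closes here
                }
            }
            None => match b {
                b'\'' | b'"' | b'`' => quote = Some(b),
                b';' => return Some(i + 1),
                _ => {}
            },
        }
        i += 1;
    }
    None // no complete statement in this buffer yet
}

// The semicolon inside the string literal is not a boundary:
// next_statement_end(b"INSERT INTO t VALUES ('a;b');") == Some(29)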

Can it handle multi-line statements?

Yes. The parser uses streaming and statement boundary detection, not line-based parsing.
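
A rough sketch of such a streaming loop, reusing next_statement_end from the previous answer (again an assumed design, not the actual implementation):

use std::io::{self, BufReader, Read};

/// Read fixed-size chunks, carry any incomplete tail statement into the
/// next read, and hand each complete statement to a callback. Memory
/// stays bounded by the size of the largest single statement.
fn for_each_statement<R: Read>(input: R, mut handle: impl FnMut(&str)) -> io::Result<()> {
    let mut reader = BufReader::new(input);
    let mut pending = Vec::new();
    let mut chunk = [0u8; 64 * 1024];
    loop {
        let n = reader.read(&mut chunk)?;
        if n == 0 {
            break; // end of file
        }
        pending.extend_from_slice(&chunk[..n]);
        // Drain every complete statement currently buffered; whatever is
        // left in `pending` is the start of a multi-line statement.
        while let Some(end) = next_statement_end(&pending) {
            handle(&String::from_utf8_lossy(&pending[..end]));
            pending.drain(..end);
        }
    }
    Ok(())
}

A production version would resume scanning where the last pass stopped instead of rescanning the buffer, but the shape is the same: memory use stays flat regardless of how large the dump is.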

What about backtick-quoted table names?

Both backtick-quoted (`table`) and regular table names are supported.

Why Rust instead of Go?

Rust's zero-cost abstractions and lack of garbage collection enable higher throughput. The Rust version is 1.25x faster than the Go version on large files, with lower memory usage and faster cold starts.

Is it faster than other tools?

sql-splitter is among the fastest SQL splitters available, achieving 400+ MB/s throughput with minimal memory footprint thanks to Rust's ownership model and zero-copy parsing.