# merge

Combine per-table SQL files into a single dump file.

Alias: `mg` (e.g., `sql-splitter mg tables/ -o restored.sql`)
## When to Use This

- **Reassembling after edits** - After using `split` to edit individual tables (e.g., with `redact` or manual fixes), merge them back into a single deployable file
- **Creating subset restores** - Build a dump containing only specific tables for partial database restores or testing
- **Building migration files** - Combine schema/data for specific tables into a migration script for CI/CD deployment
- **Streaming to databases** - Pipe merged output directly to `mysql` or `psql` for one-step restore
- **Compressing output** - Pipe to `gzip` or `zstd` for compressed backups without intermediate files

Use `order` first if you need foreign key-safe ordering: `merge` sorts alphabetically by filename, not by FK dependencies.
## How It Works

The merge process:

- **Discover files** - Scans the input directory for all `.sql` files
- **Filter** - Applies `--tables` (include) and `--exclude` filters
- **Sort** - Sorts files alphabetically by table name (e.g., `accounts.sql` → `users.sql` → `zones.sql`)
- **Stream** - Writes each file's contents sequentially with separator comments

What gets included:

- All `.sql` files in the directory (non-recursive)
- Each file is prefixed with a separator comment: `-- Table: tablename`
- Files are streamed line-by-line (memory-efficient for large dumps)

What does NOT happen:

- `_global.sql` is treated like any other table file (sorted alphabetically, with `_` sorting at the start)
- No FK dependency ordering: files are merged alphabetically. Use `order` on the output if you need topological order.
- No deduplication: if a table appears in multiple files, both are included

Dialect-specific behavior (illustrated in the sketch after this list):

- MySQL: Adds `SET FOREIGN_KEY_CHECKS = 0;` header, `SET FOREIGN_KEY_CHECKS = 1;` footer
- PostgreSQL: Adds `SET client_encoding = 'UTF8';` header
- SQLite: Adds `PRAGMA foreign_keys = OFF;` header, `PRAGMA foreign_keys = ON;` footer
- MS SQL: Adds `SET ANSI_NULLS ON;`, `SET QUOTED_IDENTIFIER ON;`, `SET NOCOUNT ON;` headers
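As an illustration, merging two tables with the default MySQL dialect produces output shaped roughly like the sketch below. The placeholder comments stand in for each file's actual statements, and the tool may emit additional header comments unless `--no-header` is used:

```sql
SET FOREIGN_KEY_CHECKS = 0;

-- Table: accounts
-- ... contents of accounts.sql (CREATE TABLE, INSERT statements) ...

-- Table: users
-- ... contents of users.sql ...

SET FOREIGN_KEY_CHECKS = 1;
```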
```bash
sql-splitter merge <INPUT_DIR> [OPTIONS]
```

## Examples
### Basic Merge

```bash
# Merge all tables from a directory
sql-splitter merge tables/ -o restored.sql

# Preview what would be merged (no files written)
sql-splitter merge tables/ --dry-run
```

### Creating Subset Restores

```bash
# Merge only specific tables
sql-splitter merge tables/ -o users-data.sql --tables users,profiles,settings

# Merge everything except large or sensitive tables
sql-splitter merge tables/ -o partial.sql --exclude logs,audit_trail,sessions
```

### Atomic Restores with Transactions

```bash
# Wrap the entire merge in a transaction
sql-splitter merge tables/ -o restored.sql --transaction
```

This produces a dump that either fully succeeds or fully rolls back on error.

### Streaming to Database

```bash
# MySQL: Pipe directly (omit -o to write to stdout)
sql-splitter merge tables/ | mysql -u user -p database

# PostgreSQL
sql-splitter merge tables/ --dialect postgres | psql "$PG_CONN"

# SQLite
sql-splitter merge tables/ --dialect sqlite | sqlite3 mydb.sqlite
```

### Compressed Output

```bash
# Compress on the fly
sql-splitter merge tables/ | gzip > merged.sql.gz
sql-splitter merge tables/ | zstd > merged.sql.zst

# With progress bar (written to stderr, doesn't interfere with the pipe)
sql-splitter merge tables/ --progress | gzip > merged.sql.gz
```

### Minimal Output (No Headers)

```bash
# Skip the generated header comments and dialect-specific SET statements
sql-splitter merge tables/ -o raw.sql --no-header
```

## Options
| Flag | Short | Description | Default |
|---|---|---|---|
| `--output` | `-o` | Output SQL file | stdout |
| `--dialect` | `-d` | SQL dialect for headers/footers | `mysql` |
| `--tables` | `-t` | Only merge these tables (comma-separated) | all |
| `--exclude` | `-e` | Exclude these tables (comma-separated) | none |
| `--transaction` | | Wrap in BEGIN/COMMIT transaction | false |
| `--no-header` | | Skip header comments | false |
| `--progress` | `-p` | Show progress bar | false |
| `--dry-run` | | Preview without writing files | false |
| `--json` | | Output results as JSON | false |
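These options can be combined in a single invocation. A hedged example with illustrative values, using only the flags documented above:

```bash
# Merge two tables as a PostgreSQL dump, wrapped in a transaction,
# with a progress bar written to stderr
sql-splitter merge tables/ -o subset.sql -d postgres -t users,orders --transaction --progress
```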
## Transaction Wrapping

Use `--transaction` for atomic restores:

```bash
sql-splitter merge tables/ -o restored.sql --transaction
```

Output (dialect-aware):

```sql
-- MySQL
START TRANSACTION;
-- ... table contents ...
COMMIT;

-- PostgreSQL
BEGIN;
-- ... table contents ...
COMMIT;

-- SQLite / MS SQL
BEGIN TRANSACTION;
-- ... table contents ...
COMMIT;
```

## JSON Output
```bash
sql-splitter merge tables/ -o merged.sql --json
```

```json
{
  "input_dir": "tables/",
  "output_file": "merged.sql",
  "dialect": "mysql",
  "dry_run": false,
  "statistics": {
    "tables_merged": 4,
    "bytes_written": 125000,
    "elapsed_secs": 0.015,
    "throughput_kb_per_sec": 8138.02
  },
  "tables": ["orders", "products", "users", "zones"],
  "options": {
    "transaction": false,
    "header": true
  }
}
```

When writing to stdout (no `-o`), JSON is printed to stderr to avoid mixing with SQL output.
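The JSON summary is convenient for scripting. A minimal sketch using `jq` (assuming `jq` is installed, and that with `-o` set the summary is written to stdout as in the example above):

```bash
# Capture the JSON summary while the SQL goes to merged.sql
sql-splitter merge tables/ -o merged.sql --json > stats.json

# Fail the script if no tables were merged (jq -e sets the exit code)
jq -e '.statistics.tables_merged > 0' stats.json
```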
## Composing with Other Tools

### FK-Safe Merge

Merge then reorder for foreign key safety:

```bash
sql-splitter merge tables/ -o temp.sql
sql-splitter order temp.sql -o ordered.sql
```

### Split → Edit → Merge Workflow

```bash
# Split the dump
sql-splitter split dump.sql -o tables/

# Redact sensitive data in specific tables
sql-splitter redact tables/users.sql -o tables/users.sql -c redact.yaml

# Merge back together
sql-splitter merge tables/ -o sanitized.sql
```

### Merge Subset and Validate

```bash
# Create subset dump
sql-splitter merge tables/ -o subset.sql --tables users,orders,products

# Validate FK integrity of the subset
sql-splitter validate subset.sql --check-fk
```

### CI/CD Pipeline Integration

```bash
# Generate deterministic output for diffing
sql-splitter merge tables/ -o merged.sql
sql-splitter order merged.sql -o ordered.sql
git diff ordered.sql
```

## Troubleshooting
### "No .sql files found in directory"

The input path must be a directory containing `.sql` files:

```bash
# Wrong: pointing to a file
sql-splitter merge backup.sql -o out.sql
# Error: no .sql files found in directory: backup.sql

# Correct: point to the directory created by split
sql-splitter merge tables/ -o out.sql
```

### Tables in Wrong Order for FK Constraints
Section titled “Tables in Wrong Order for FK Constraints”merge sorts files alphabetically, not by foreign key dependencies. If you get FK constraint errors during import:
# Option 1: Reorder after mergingsql-splitter merge tables/ -o merged.sqlsql-splitter order merged.sql -o ordered.sql
# Option 2: Use dialect header which disables FK checkssql-splitter merge tables/ -o merged.sql --dialect mysql# Header includes: SET FOREIGN_KEY_CHECKS = 0;
# Option 3: Disable FK checks manually during importmysql -e "SET FOREIGN_KEY_CHECKS=0; SOURCE merged.sql; SET FOREIGN_KEY_CHECKS=1;"Missing Tables in Output
Section titled “Missing Tables in Output”Check if tables were filtered out:
# Preview what will be mergedsql-splitter merge tables/ --dry-run
# Check if table files existls tables/*.sqlTables might be excluded by --exclude or not matched by --tables filter. Filters are case-insensitive.
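One way to cross-check is to list the table names the directory actually contains and compare them against your `--tables`/`--exclude` values, remembering that matching is case-insensitive. A small sketch with illustrative filter values:

```bash
# List the table names merge will see (one per .sql file, non-recursive)
for f in tables/*.sql; do
  basename "$f" .sql
done | sort

# Then re-run the preview with your filters to confirm what survives them
sql-splitter merge tables/ --dry-run --tables users,orders
```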
### _global.sql Not at Start of Output

`_global.sql` (created by `split` for header statements like `SET NAMES`) is sorted alphabetically with the other files. Since `_` sorts before letters, it typically appears first. If you need specific ordering:

```bash
# Exclude _global from the merge, then prepend it manually
sql-splitter merge tables/ -o body.sql --exclude _global
cat tables/_global.sql body.sql > complete.sql
```

### JSON Output Mixed with SQL
Section titled “JSON Output Mixed with SQL”When using --json without -o, both SQL and JSON would go to stdout. The command writes JSON to stderr instead:
# JSON goes to stderr, SQL goes to stdoutsql-splitter merge tables/ --json > merged.sql 2> stats.json
# Or capture just the SQLsql-splitter merge tables/ --json > merged.sqlOut of Disk Space During Merge
Section titled “Out of Disk Space During Merge”The merge streams data and uses 256KB write buffers. For very large merges:
# Check available spacedf -h .
# Stream directly to compressed file (smaller output)sql-splitter merge tables/ | gzip > merged.sql.gz
# Or pipe directly to database (no disk needed)sql-splitter merge tables/ | mysql -u user databaseExit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Error (no files found, write error, invalid directory) |
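Because failures exit non-zero, the command can gate a script or CI step directly. A minimal sketch:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Abort the pipeline if the merge fails (exit code 1)
if ! sql-splitter merge tables/ -o merged.sql; then
  echo "merge failed; leaving previous dump in place" >&2
  exit 1
fi
```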
## See Also

- `split` - Split a dump into per-table files (inverse of merge)
- `order` - Reorder merged output for FK-safe imports
- `validate` - Check FK integrity after merging
- `redact` - Sanitize table data before merging
- JSON Output Schema - Full schema for `--json` output
- Glossary: Streaming - How sql-splitter handles large files