Migration Studio
System and data migration lifecycle for platform transitions, version upgrades, and data moves
Stage Pipeline
Stage Details
Inventory what's being migrated, and identify risks and dependencies
Hats
Inventory every artifact in scope — schemas, data stores, services, integrations, jobs, and configuration. Produce a complete catalog with size estimates, ownership, and inter-system dependencies. Nothing can be migrated safely if it isn't inventoried first.
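The catalog described above could be sketched as a simple structure. This is a minimal illustration, assuming Python; the field names (kind, owner, size_gb, depends_on) and the example artifacts are invented for the sketch, not prescribed by the studio:

```python
# Hypothetical inventory-catalog entry; field names and example artifacts
# are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str                        # e.g. "orders_db.orders"
    kind: str                        # schema | data_store | service | job | config
    owner: str                       # accountable team or person
    size_gb: float                   # rough size estimate for planning
    depends_on: list[str] = field(default_factory=list)  # upstream artifacts

catalog = [
    Artifact("orders_db.orders", "data_store", "payments-team", 120.0,
             depends_on=["orders_db.customers"]),
    Artifact("nightly-reconcile", "job", "data-eng", 0.0,
             depends_on=["orders_db.orders"]),
]

# Completeness check: every declared dependency must itself be catalogued,
# otherwise something in scope escaped the inventory.
names = {a.name for a in catalog}
missing = {d for a in catalog for d in a.depends_on} - names
```

Here `missing` would flag `orders_db.customers` as a dependency that was referenced but never inventoried — exactly the gap this stage exists to close.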
Identify what can go wrong — data loss vectors, downtime windows, compatibility gaps, and blast radius. Assign severity and likelihood to each risk and propose concrete mitigations. Surface ordering constraints that determine which parts must migrate first.
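A risk register with severity and likelihood could look like the following sketch. The 1-5 scales and the severity-times-likelihood exposure score are illustrative assumptions, not a model the studio prescribes:

```python
# Hypothetical risk register; the 1-5 scales and the
# score = severity * likelihood heuristic are illustrative assumptions.
risks = [
    {"risk": "lossy type cast on legacy timestamps", "severity": 5, "likelihood": 3,
     "mitigation": "pre-migration audit query plus an explicit cast rule"},
    {"risk": "downtime window overruns", "severity": 3, "likelihood": 4,
     "mitigation": "rehearse the cutover; parallel-run read traffic"},
]

# Rank risks so mitigation effort goes to the highest exposure first.
ranked = sorted(risks, key=lambda r: r["severity"] * r["likelihood"], reverse=True)
```

Ranking by exposure keeps the register actionable: the top entries are where mitigations and ordering constraints get decided first.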
Map source schemas and systems to the target, and define transformation rules
Hats
Review the schema-mapper's spec for correctness, completeness, and feasibility. Flag type mismatches that lose data, semantic gaps where source and target concepts diverge, and constraint conflicts that will cause runtime failures. Ensure downstream consumers are not broken by the mapping decisions.
Produce field-level mappings from every source entity to its target equivalent. Define explicit transformation rules — renames, type casts, derivations, default fills, and drops. Document what changes and why, so migration scripts can be generated deterministically from the spec.
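A field-level mapping spec with explicit rules might be sketched as below. This is a minimal illustration, assuming Python; the entity (`customer`), field names, and rule vocabulary (rename, cast, drop, default_fill) are invented for the example:

```python
# Hypothetical mapping spec: one entry per source field, with an explicit rule
# so migration scripts can be generated deterministically. Names are illustrative.
MAPPING = {
    "customer.fname":     {"target": "customer.first_name", "rule": "rename"},
    "customer.created":   {"target": "customer.created_at",
                           "rule": "cast", "to": "timestamptz"},  # cast applied by loader
    "customer.full_name": {"target": None, "rule": "drop",
                           "why": "derived in target from first/last name"},
    "customer.tier":      {"target": "customer.tier", "rule": "default_fill",
                           "default": "standard"},
}

def transform(row: dict) -> dict:
    """Apply the spec to one source row; unmapped fields fail loudly."""
    out = {}
    for field_name, value in row.items():
        spec = MAPPING[f"customer.{field_name}"]   # KeyError = unmapped field
        if spec["rule"] == "drop":
            continue
        if spec["rule"] == "default_fill" and value is None:
            value = spec["default"]
        out[spec["target"].split(".", 1)[1]] = value
    return out
```

Because every rename, drop, and default is stated in data rather than buried in script logic, the spec itself is reviewable — which is what the spec-reviewer hat needs.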
Implement migration scripts, adapters, and data transforms
Hats
Verify that migration scripts produce correct output against a non-production target. Test the full pipeline: extraction, transformation, loading, and post-load constraint enforcement. Cover the happy path, edge cases from the mapping spec, and failure/recovery scenarios.
Implement the migration scripts, adapters, and data transforms specified in the mapping document. Every script must be idempotent, logged, and runnable in dry-run mode. Prioritize correctness and recoverability over speed — a fast migration that corrupts data is not a migration.
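The idempotence, logging, and dry-run requirements above can be captured in a small skeleton. This is a sketch under assumptions: `execute` stands in for whatever runs a statement against the target, and the in-memory `applied` set stands in for a persistent migration ledger:

```python
# Minimal sketch of an idempotent, dry-run-capable migration step.
# `execute` and the `applied` ledger are illustrative stand-ins.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("migration")

def run_step(step_id: str, sql: str, applied: set[str],
             execute, dry_run: bool = True) -> bool:
    """Apply one migration step at most once; log every decision."""
    if step_id in applied:               # idempotence: re-runs are no-ops
        log.info("skip %s (already applied)", step_id)
        return False
    if dry_run:                          # dry-run: report, never mutate
        log.info("dry-run %s: would execute %r", step_id, sql)
        return False
    execute(sql)
    applied.add(step_id)                 # record only after success
    log.info("applied %s", step_id)
    return True
```

The default of `dry_run=True` is a deliberate choice in this sketch: a migration script should mutate nothing unless explicitly told to.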
Verify data integrity, functional parity, and performance
Hats
Confirm that downstream consumers and application logic produce identical results when reading from the migrated target instead of the original source. Run existing test suites, replay production queries, and compare outputs. Surface any behavioral difference, no matter how small.
Perform quantitative verification that the migrated data matches the source. Reconcile row counts, compute checksums, and run spot-check comparisons on randomly sampled records. Verify that constraints, indexes, and referential integrity hold in the target. The goal is proof, not confidence.
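The count, checksum, and spot-check reconciliation could be sketched as follows. It assumes both sides are available as mappings from primary key to row; the checksum scheme (sha256 of a canonicalized row) is an illustrative choice:

```python
# Sketch of quantitative reconciliation. Assumes source/target are dicts of
# primary_key -> row; the sha256-of-sorted-items checksum is an assumption.
import hashlib
import random

def row_checksum(row: dict) -> str:
    canon = repr(sorted(row.items()))        # canonical form before hashing
    return hashlib.sha256(canon.encode()).hexdigest()

def reconcile(source: dict, target: dict, sample: int = 3, seed: int = 0):
    """Row counts, per-row checksums, and a seeded random spot check."""
    assert len(source) == len(target), "row count mismatch"
    src_sum = {k: row_checksum(v) for k, v in source.items()}
    tgt_sum = {k: row_checksum(v) for k, v in target.items()}
    mismatched = [k for k in src_sum if tgt_sum.get(k) != src_sum[k]]
    picks = random.Random(seed).sample(sorted(source), min(sample, len(source)))
    spot_ok = all(source[k] == target.get(k) for k in picks)
    return mismatched, spot_ok
```

Seeding the sampler makes the spot check reproducible, so a failed run can be re-examined against the exact same records — proof, not confidence.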
Plan and execute the production cutover with rollback procedures
Hats
Plan and sequence the production cutover. Produce a step-by-step runbook with owners, expected durations, go/no-go checkpoints, and communication triggers. Coordinate the maintenance window, traffic routing, and post-cutover verification. The cutover is a one-shot operation — rehearse it until it's boring.
Design, implement, and test the rollback procedure that restores the source system to its pre-migration state. Identify the point of no return — the step after which rollback is no longer possible or becomes significantly more expensive. Ensure the rollback can execute within the defined RTO.
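The point of no return can be made explicit in the runbook's own data. In this sketch the step names are invented, and a step with no `undo` action marks the boundary past which rollback is unavailable:

```python
# Hypothetical cutover sequence: steps before the point of no return carry an
# `undo` action; a step with undo=None marks the boundary. Names illustrative.
STEPS = [
    {"name": "freeze writes",       "undo": "unfreeze writes"},
    {"name": "final delta sync",    "undo": "discard delta"},
    {"name": "flip traffic",        "undo": None},  # point of no return
    {"name": "decommission source", "undo": None},
]

def rollback_plan(completed: int) -> list[str]:
    """Undo actions for the first `completed` steps, newest first.

    Raises if any completed step lies past the point of no return.
    """
    ponr = next(i for i, s in enumerate(STEPS) if s["undo"] is None)
    if completed > ponr:
        raise RuntimeError("past point of no return; rollback unavailable")
    return [s["undo"] for s in reversed(STEPS[:completed])]
```

Encoding the boundary in data rather than tribal knowledge means the go/no-go checkpoint before the irreversible step can be enforced mechanically during the rehearsed cutover.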
Lifecycle for platform transitions, version upgrades, data moves, and system replacements. Use this studio when work involves moving data or functionality from one system, schema, or platform version to another — where integrity verification and rollback planning are essential. Covers the full arc from inventory through production cutover.