Every legacy system that still runs in production earned its place there. It survived because it does something the business depends on, usually in ways that nobody fully documents and everybody takes for granted. The Access database tracking 15 years of customer orders. The PHP 4 application handling job scheduling. The Excel workbook that somehow runs payroll.
Legacy code migration is the work of replacing those systems without breaking the business processes embedded in them. It sounds like a technology project. It is actually an archaeology project: excavating business logic from code that was written before anyone thought to explain why it works the way it does.
Since 2005, we have migrated systems built on Access, Excel, PHP 4, classic ASP, and .NET into modern Laravel applications. The patterns in this guide come from that work, including the failures that taught us more than the successes.
Why Most Legacy Code Migrations Fail
The most common mistake in legacy system migration is underestimating what the old system actually does. A system that looks simple from the outside (a few forms, a database, some reports) typically contains years of accumulated business logic that nobody remembers implementing.
Three factors make migration harder than it appears.
Undocumented business rules
The original developer added a rule that rejects orders below a certain value on Tuesdays. Nobody remembers why, but removing it breaks something downstream. These rules live in application code, database triggers, stored procedures, and sometimes in the gap between what the system does and what the documentation says it does.
Data quality decay
Legacy databases accumulate inconsistencies over years. Nullable fields that should be required. Duplicate records with slightly different spellings. Date formats that changed partway through 2012 when someone updated the input form. A migration plan that ignores data quality will import these problems into the new system.
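Before writing a migration plan, quantify these problems. The sketch below is a minimal data-quality audit in Python; the field names, the two date formats, and the row structure are illustrative assumptions, not a prescription.

```python
from collections import Counter
from datetime import datetime

# Illustrative: the legacy system switched date formats partway through,
# so the audit must try more than one.
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y")

def parse_date(value):
    """Return a datetime if any known format matches, else None."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    return None

def audit(rows):
    """Count the data-quality problems a migration plan must budget for."""
    report = Counter()
    seen_emails = set()
    for row in rows:
        email = row.get("email")
        if not email:
            report["missing_email"] += 1
        elif email.lower() in seen_emails:
            report["duplicate_email"] += 1
        else:
            seen_emails.add(email.lower())
        if row.get("created") and parse_date(row["created"]) is None:
            report["unparseable_date"] += 1
    return dict(report)
```

Running this against a full export turns "the data is probably a bit messy" into a concrete count of records that need cleansing rules before load.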
Integration dependencies nobody mapped
The legacy system sends a nightly CSV to the accounting package. It exposes an undocumented API that the warehouse team built a script against. It writes to a shared folder that three other processes read from. Each of these is a thread that, if pulled, unravels something.
The naive approach is to rebuild the system from requirements documents. This fails because the requirements documents, if they exist, describe what the system was supposed to do five years ago, not what it actually does today. The gap between specification and reality is where migrations die.
Big Bang Versus Incremental Migration
There are two fundamental approaches to legacy code migration. The first, big bang migration, replaces the old system in a single cutover. The second, incremental migration, replaces the system piece by piece over weeks or months.
Why big bang usually fails
Big bang migration has an appealing simplicity. Build the new system, migrate the data, switch over on a Friday evening, go live on Monday. In practice, this approach fails for systems of any real complexity. The testing window is too short. You discover on Sunday afternoon that the data migration missed 2,000 records because of a schema mismatch in a field you did not know existed.
The rollback trap: You cannot roll back because the old system's data is now 48 hours stale and the business has been entering new records into the new system since Saturday morning. Big bang works for simple, isolated systems. For anything else, it concentrates risk into a single weekend when you have the least capacity to deal with problems.
Incremental migration
Incremental migration replaces functionality in slices. Users might use the new system for order entry while the old system still handles reporting. Over weeks, each module transfers to the new platform until the old system has no remaining responsibilities. This approach reduces risk because each slice is small enough to test thoroughly and roll back independently. It increases complexity because you are running two systems simultaneously, which means maintaining data consistency between them. The trade-off is worth it for any system that the business cannot afford to lose for a weekend.
The Strangler Fig Pattern
The strangler fig is the most reliable pattern we use for legacy system migration. Named after the fig that grows around a host tree, gradually replacing it, the pattern works by intercepting requests to the legacy system and routing them to new code, one feature at a time.
The implementation uses an API facade that sits in front of both the old and new systems. All traffic flows through the facade. Initially, the facade routes everything to the legacy system. As new features are built and tested, the facade routes those specific requests to the new system instead.
Route-level switching
The facade decides per endpoint whether to send traffic to the old or new system. You migrate /orders/create to the new system while /orders/report still hits the old one.
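In its simplest form, route-level switching is just an explicit mapping from endpoint to backend. The sketch below assumes a Python facade with placeholder backend URLs; the endpoint names mirror the example above.

```python
# Minimal route-level switching: one mapping controls the migration state.
# Backend URLs are illustrative placeholders.
LEGACY = "http://legacy.internal"
NEW = "http://new.internal"

# Endpoints move from LEGACY to NEW one at a time as they are migrated.
ROUTES = {
    "/orders/create": NEW,     # migrated and verified
    "/orders/report": LEGACY,  # not yet migrated
}

def resolve_backend(path: str) -> str:
    """Return the backend base URL for a request path.

    Unknown paths default to the legacy system, which is the safe choice
    mid-migration: new code only receives traffic explicitly routed to it.
    """
    return ROUTES.get(path, LEGACY)
```

The defaulting rule matters: anything the team forgot to map keeps working against the old system rather than hitting untested code.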
Feature flags for gradual rollout
Before switching an entire endpoint, route 10% of traffic to the new system and compare results. This catches discrepancies that testing missed.
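A percentage rollout like this needs to be deterministic per user, so the same person always lands on the same system. One common way to get that, sketched here as an assumption rather than a fixed recipe, is to hash a stable identifier into a bucket:

```python
import hashlib

def route_to_new(user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into the 0-99 range.

    Hashing the user id (rather than choosing randomly per request)
    keeps each user on the same system across requests, which matters
    when the two systems do not share session state.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent
```

At 10%, roughly one user in ten sees the new system; raising the percentage widens the audience without moving anyone back.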
Shared data layer
During migration, both systems need access to the same data. Options include a shared database (simplest but creates coupling), event-driven synchronisation (more complex but cleaner), or dual-write (the facade writes to both systems).
The strangler fig works because it makes migration reversible at every step. If the new endpoint has a bug, the facade routes traffic back to the old system in seconds, not hours.
The 80% stall: A common failure mode with this pattern is letting the migration stall at 80% completion. The last 20% of functionality is always the hardest because it contains the most obscure business logic. Set a decommission deadline early and protect the budget for that final push.
Data Migration Strategies
Data migration is where legacy code migrations most commonly fail. Moving application logic is challenging; moving 15 years of accumulated data without losing records, corrupting relationships, or breaking downstream reports is harder.
The ETL pipeline
Extract, Transform, Load is the standard pattern for moving data between systems.
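The value of the pattern is that each stage is separately testable and rerunnable. A skeleton in Python might look like the following; the field names, the ten-digit phone assumption, and the in-memory source and target are all illustrative stand-ins for real database connections.

```python
# Skeleton ETL pipeline: each stage is a plain function so it can be
# tested in isolation and rerun on failure.

def extract(legacy_rows):
    """Pull raw rows from the legacy source (here: any iterable)."""
    yield from legacy_rows

def transform(row):
    """Map a legacy row onto the new schema, normalising as we go."""
    return {
        "name": row["customer_name"].strip(),
        # Assumes ten-digit numbers; zfill restores leading zeros lost
        # when the legacy system stored phone numbers as integers.
        "phone": str(row["phone"]).zfill(10),
    }

def load(rows, target):
    """Write transformed rows to the target store (here: a list)."""
    for row in rows:
        target.append(row)

def run_pipeline(source, target):
    load((transform(r) for r in extract(source)), target)
```

Keeping transform as a pure function of one row makes it trivial to unit-test against the nastiest records found during discovery.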
Handling schema mismatches
The most common data migration failures stem from schema differences that were not caught during planning.
Type changes
The legacy system stored phone numbers as integers. Leading zeros are gone. Dates stored as strings in inconsistent formats. Timestamps without timezone information.
Encoding and integrity
The legacy database used Latin-1. The new system uses UTF-8. Customer names with accented characters break. Foreign key relationships that should exist but do not, because the legacy system never enforced referential integrity.
Build a quarantine table for records that fail transformation. Do not skip them silently. Every quarantined record needs manual review before the migration is considered complete.
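The quarantine approach can be sketched as a wrapper around the transform step. This example assumes a Latin-1 legacy export and illustrative field names; the in-memory list stands in for a real quarantine table.

```python
quarantine = []  # stands in for a real quarantine table

def safe_transform(raw):
    """Transform one legacy row; on failure, quarantine it with a reason.

    Nothing is skipped silently: every failed row is captured alongside
    the exception that explains why it failed.
    """
    try:
        return {
            # The legacy export is Latin-1 bytes; decode before the new
            # system stores it as UTF-8.
            "name": raw["name"].decode("latin-1"),
            # Assumes ten-digit numbers stored as integers in the old system.
            "phone": str(raw["phone"]).zfill(10),
        }
    except (KeyError, AttributeError, UnicodeDecodeError) as exc:
        quarantine.append({"row": raw, "reason": repr(exc)})
        return None
```

After the run, the quarantine list is the manual-review worklist: the migration is not complete until it is empty or every entry has a documented disposition.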
Dual-write and shadow reads
During the transition period, new data enters both systems. The dual-write pattern writes every transaction to both databases. Shadow reads query both and compare results, logging discrepancies without affecting users. Use dual-write only during the active migration window, not as a permanent architecture. The moment you have validated the new system's data, stop writing to the old system.
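Both patterns can live behind one small abstraction. The sketch below uses in-memory dicts as stand-ins for the two databases; in this sketch the legacy store remains the source of truth for reads while discrepancies are logged for investigation.

```python
import logging

log = logging.getLogger("migration")

class DualStore:
    """Dual-write with shadow reads, for the migration window only.

    Writes go to both stores. Reads are served from the legacy store
    (still the source of truth) while the new store's answer is compared
    in the background; any mismatch is logged, never shown to users.
    """

    def __init__(self, legacy, new):
        self.legacy = legacy
        self.new = new
        self.discrepancies = []

    def write(self, key, value):
        self.legacy[key] = value
        self.new[key] = value

    def read(self, key):
        value = self.legacy.get(key)
        shadow = self.new.get(key)
        if shadow != value:
            self.discrepancies.append((key, value, shadow))
            log.warning("shadow read mismatch for %s", key)
        return value
```

Once the discrepancy log stays empty for an agreed validation period, reads flip to the new store and the dual-write is removed.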
Testing and Risk Mitigation
Legacy system migration requires a testing strategy that goes beyond standard application testing. You are not just verifying that new code works; you are verifying that the new system produces identical outcomes to the old one for every scenario the business depends on.
Parallel running
Run both systems simultaneously with the same inputs and compare outputs. This catches the problems that unit tests miss: the edge case where a discount calculation rounds differently, the report that includes records the new system filters out, the nightly batch job that processes records in a different order and produces different totals.
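A comparison harness for parallel running can be very small. The sketch below assumes both systems are reachable as callables and that a zero tolerance is the default, so even rounding differences are flagged.

```python
def compare_runs(inputs, legacy_fn, new_fn, tolerance=0.0):
    """Feed identical inputs to both systems and collect mismatches.

    legacy_fn and new_fn stand in for calls into the two systems. A
    tolerance of 0 flags even rounding differences, which is usually
    what you want when the outputs are money.
    """
    mismatches = []
    for item in inputs:
        old, new = legacy_fn(item), new_fn(item)
        if isinstance(old, float) and isinstance(new, float):
            if abs(old - new) > tolerance:
                mismatches.append((item, old, new))
        elif old != new:
            mismatches.append((item, old, new))
    return mismatches
```

Run it nightly over the day's real inputs: an empty mismatch list is the evidence that lets you retire the old system with confidence.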
Rollback plans
Every migration step needs a tested rollback plan. Not a theoretical rollback plan documented in a wiki. A tested one, rehearsed on production-equivalent data, with a measured rollback time.
The critical question: If we discover a critical issue at 3am on Tuesday, how long does it take to restore the previous state, and what data do we lose? If the answer is "we lose anything entered since the migration step," then your migration step is too large.
Feature flags and change freezes
Feature flags control which users see the new system. Start with internal users, expand to a pilot group, then roll out to everyone. During active migration, freeze changes to the legacy system. Any change to the old system during migration invalidates your testing baseline.
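The staged rollout reduces to a cohort check. The stage value, email addresses, and cohort sets below are illustrative; in practice the stage would live in configuration, not code.

```python
# Cohort-based rollout: each stage widens the audience.
ROLLOUT_STAGE = "pilot"  # progresses: "internal" -> "pilot" -> "everyone"

INTERNAL = {"alice@igc.example"}
PILOT = INTERNAL | {"bob@customer.example"}

def sees_new_system(email: str) -> bool:
    """Decide per user whether the facade shows the new system."""
    if ROLLOUT_STAGE == "internal":
        return email in INTERNAL
    if ROLLOUT_STAGE == "pilot":
        return email in PILOT
    return True  # "everyone"
```

Advancing the stage is a one-line configuration change, and rolling back is the same change in reverse.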
The real failure modes during migration are not dramatic. They are quiet: a database migration script that silently truncates a text field, a batch job that skips records with null values, an integration that stops receiving data because the API endpoint changed. Build monitoring that catches these discrepancies in hours, not weeks.
When to Migrate Versus When to Wrap
Not every legacy system should be replaced. Sometimes the right decision is to leave the legacy system running and wrap it with a modern interface.
An API facade pattern exposes the legacy system's functionality through a modern REST API. New applications consume the facade instead of talking to the legacy system directly. The legacy system continues running, but it is contained.
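The facade's job is translation: a clean, typed interface on one side, the legacy system's quirks on the other. In the sketch below the legacy function, its positional string argument, and its cryptic field names are invented stand-ins for whatever the real system exposes.

```python
def legacy_get_cust(custno_str):
    """Stand-in for the legacy call: string args, cryptic uppercase keys."""
    return {"CUSTNO": custno_str, "CNAME": "ACME LTD", "STAT": "A"}

def get_customer(customer_id: int) -> dict:
    """Modern interface exposed to new applications.

    The facade absorbs the legacy system's conventions (string ids,
    shouted field names, single-letter status codes) so callers never
    see them.
    """
    raw = legacy_get_cust(str(customer_id))
    return {
        "id": int(raw["CUSTNO"]),
        "name": raw["CNAME"].title(),
        "active": raw["STAT"] == "A",
    }
```

New applications depend only on get_customer; if the legacy system is eventually migrated, the facade's implementation changes and its callers do not.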
Consider the full picture when evaluating build-versus-buy decisions around legacy systems. Sometimes the right move is neither migration nor wrapping, but replacing the legacy system with an off-the-shelf tool that did not exist when the original was built.
What IGC Legacy Migration Projects Look Like
We have migrated systems built on technologies that most developers would rather not touch: Access databases with 50,000+ records and no documentation, Excel workbooks with VBA macros that encode an entire pricing engine, PHP 4 applications that predate Laravel by a decade, and .NET systems running on Windows Server 2003.
Each migration follows the same structure.
Discovery (2-4 weeks)
We map what the legacy system actually does, not what anyone thinks it does. This means reading code, interviewing users, and tracing data flows.
Architecture
We choose the migration pattern (usually strangler fig), design the data model for the new system, and plan the migration sequence. Migrate the features with the most pain first to build momentum.
Incremental build
We build and deploy in slices, with each slice tested against the legacy system's outputs. Users transition gradually. At no point does the business lose access to its data or its processes.
Hypercare (4-6 weeks)
After the final cutover, heightened monitoring catches post-migration issues. Most surface within the first two weeks.
Decommission
The legacy system runs in read-only mode for an agreed period (typically 3-6 months), then shuts down. Data is archived according to retention requirements.
Timelines vary. A simple Access-to-web migration might take 6-8 weeks. A complex multi-system migration with data transformation and parallel running typically takes 3-6 months. We provide fixed-price quotes after discovery so there are no surprises.
Replace Your Legacy System
If you are running a system that everyone is afraid to touch, we can help. The first conversation is free, comes with no obligation, and we will tell you honestly whether migration makes sense for your situation, or whether the smarter move is to wrap it and wait.
Book a discovery call →