Integration Patterns

Connecting Business Systems


Most businesses run between five and fifteen systems. A CRM for sales, an ERP for operations, an accounting package, a project management tool, maybe a legacy database that nobody wants to touch but everybody depends on. Each system holds a piece of the picture, and none of them share it willingly.

System integration is the work of making those systems exchange data reliably, on time, and without losing records along the way. It sounds straightforward until you realise that each system has its own data model, its own timing expectations, and its own ideas about what a "customer" or an "order" actually is.

Since 2005, we have built system integration services across more than 50 Laravel applications for UK businesses. The patterns in this guide come from that work: connecting CRMs to ERPs, bridging legacy databases to modern APIs, and keeping data synchronisation running when one system goes down at 2am on a Saturday.


What system integration actually involves

System integration is not just connecting two APIs. It is reconciling differences in data models, timing, reliability guarantees, and update velocities across every system in a business.

Consider a straightforward scenario: a new customer signs up on your website. That customer record needs to reach your CRM, your accounting system, and your project management tool. Each system expects different fields, in different formats, at different times. Your CRM wants the record immediately. Your accounting system wants it when the first invoice is raised. Your project tool wants it when the first project is created.

The naive approach is to write a direct connection from your website to each system. It works for two or three systems. By the time you reach five, you are maintaining ten separate connections, each with its own error handling, its own retry logic, and its own way of failing silently.

The real cost of poor integration is rarely the integration itself. It is the downstream effects: duplicate customer records in your CRM, invoices sent to the wrong address, stock levels that are three hours out of date. These failures erode trust in your data, which leads people to maintain spreadsheets alongside the systems that were supposed to replace spreadsheets.


The four integration architecture patterns

Every system integration project uses one of four fundamental patterns, or a combination of them. The right choice depends on how many systems you are connecting, how fresh the data needs to be, and how much operational complexity you are willing to manage.

Point-to-point

Direct system-to-system connections. System A calls System B's API, gets or sends data, and handles errors itself.

When it works: Two or three systems with clear, stable interfaces. A single CRM integration with your website, for example.

When it breaks: With N systems, point-to-point requires N*(N-1)/2 connections for full mesh connectivity. Five systems means ten connections. Ten systems means 45.

Hub and spoke

A central integration hub handles all data transformation and routing. New systems connect only to the hub, not to every other system.

When it works: Five or more systems where data flows through a central process. ERP integration projects often land here because the ERP becomes the natural hub.

When it breaks: The hub is a single point of failure. If it goes down, every integration stops. You need monitoring, redundancy, and a team that understands the hub's configuration.

Event bus and message broker

Systems publish events ("order created", "customer updated") to a message broker. Other systems subscribe to the events they care about. Publishers do not know or care who is listening.

When it works: Many systems reacting to common business events. If adding a new system should not require changing existing systems, this is the right pattern.

When it breaks: Debugging is harder because the flow is indirect. Without monitoring dashboards and dead letter queue management, you will spend more time debugging than you saved in development.
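The publish/subscribe idea can be sketched in a few lines. This is a minimal in-process illustration, not a real broker: the event name "order.created" and the subscriber handlers are hypothetical.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process event bus: publishers emit named events,
    subscribers register handlers, and neither knows about the other."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # Deliver the event to every handler registered for this name.
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
received = []
# Two hypothetical consumers react to the same business event.
bus.subscribe("order.created", lambda e: received.append(("crm", e["order_id"])))
bus.subscribe("order.created", lambda e: received.append(("erp", e["order_id"])))
bus.publish("order.created", {"order_id": 42})
```

Adding a third consumer means one more subscribe call; the publisher does not change, which is the whole point of the pattern.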

ETL (extract, transform, load)

Batch-oriented processing where data is extracted from source systems, transformed into the target format, and loaded into destination systems on a schedule.

When it works: Reporting, analytics, data warehousing, and any scenario where data does not need to be real-time.

When it breaks: When someone assumes it is real-time. If your stock levels update every four hours via ETL, your website will sell items that are already out of stock.
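The three stages map directly onto three functions. A minimal sketch, with invented field names (`product_code`, `quantity`) standing in for a real source schema:

```python
def extract(rows):
    """Pretend source system: yield raw records as the source stores them."""
    yield from rows

def transform(record):
    """Map source fields onto the target schema and normalise values."""
    return {
        "sku": record["product_code"].strip().upper(),
        "qty": int(record["quantity"]),
    }

def load(records, target):
    """Append transformed records to the destination (a plain list here)."""
    target.extend(records)

# One scheduled batch run: extract everything, transform, load.
warehouse = []
source_rows = [{"product_code": " abc-1 ", "quantity": "3"}]
load([transform(r) for r in extract(source_rows)], warehouse)
```

In production the schedule, checkpointing, and error handling around this loop are where the real work lives; the shape stays the same.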


When CRM and ERP integration goes wrong

CRM integration and ERP integration are the two most common system integration projects we see. They are also the two most likely to fail, and the failures follow predictable patterns.

The duplicate record problem

Your sales team creates a contact in the CRM. Your accounts team creates the same person in the ERP. Now you have two records for the same entity, with slightly different data in each. The CRM says "J. Smith, Acme Ltd". The ERP says "John Smith, Acme Limited".

The robust pattern is a canonical identifier: a single, system-generated ID that both systems use to refer to the same entity. One system is the authority for creating new entities. Other systems reference that authority's ID. This requires clear data modelling before any code is written.
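The canonical-identifier pattern can be sketched as a registry that issues IDs and records how each system's local ID maps back to them. The system names and local IDs below are illustrative:

```python
import itertools

class CustomerRegistry:
    """One authority issues canonical IDs; every other system keeps a
    mapping from its own local record ID to the canonical one."""

    def __init__(self):
        self._next_id = itertools.count(1)
        self._local_to_canonical = {}  # (system, local_id) -> canonical_id

    def create(self):
        """Only the authority creates new entities."""
        return next(self._next_id)

    def link(self, system, local_id, canonical_id):
        self._local_to_canonical[(system, local_id)] = canonical_id

    def resolve(self, system, local_id):
        return self._local_to_canonical[(system, local_id)]

registry = CustomerRegistry()
customer_id = registry.create()
# Both systems keep their own IDs but reference the same canonical entity.
registry.link("crm", "J-SMITH-01", customer_id)
registry.link("erp", "CUST-9931", customer_id)
```

"J. Smith" in the CRM and "John Smith" in the ERP now resolve to the same entity, regardless of how each system spells the name.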

The timing mismatch

Your CRM updates in real-time. Your ERP runs batch imports every 30 minutes. A salesperson closes a deal, the CRM fires a webhook, and the integration tries to create an order in the ERP. But the ERP is mid-batch and rejects the write.

The robust pattern here is a message queue with guaranteed delivery. The integration writes the message to a queue. The ERP consumer picks it up when it is ready. If the ERP is busy, the message waits. If it fails permanently, it lands in a dead letter queue for human review.
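A stripped-down sketch of that flow, using an in-memory deque in place of a real broker. The retry limit and the ERP handler are invented for illustration:

```python
from collections import deque

MAX_ATTEMPTS = 3

def consume(queue, dead_letters, handler):
    """Drain the queue; retry each message up to MAX_ATTEMPTS,
    then park permanent failures in the dead letter queue."""
    while queue:
        message = queue.popleft()
        message["attempts"] += 1
        try:
            handler(message["body"])
        except Exception:
            if message["attempts"] < MAX_ATTEMPTS:
                queue.append(message)        # transient: retry later
            else:
                dead_letters.append(message)  # permanent: human review

queue = deque([
    {"body": {"order_id": 1}, "attempts": 0},
    {"body": {"order_id": 2}, "attempts": 0},
])
dead_letters = []
processed = []

def erp_write(body):
    # Hypothetical consumer: order 2 hits the ERP mid-batch and is rejected.
    if body["order_id"] == 2:
        raise RuntimeError("ERP mid-batch, write rejected")
    processed.append(body["order_id"])

consume(queue, dead_letters, erp_write)
```

Order 1 lands in the ERP; order 2 exhausts its retries and waits in the dead letter queue instead of vanishing.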

The field mapping nightmare

Your CRM has a "Company Type" dropdown with five options. Your ERP has a "Customer Category" field with twelve. Neither maps cleanly to the other. Some CRM values map to two ERP categories depending on context. One ERP category has no CRM equivalent at all.

There is no shortcut here. Field mapping requires a spreadsheet, both teams in the room, and enough time to work through every edge case. The transformation logic that results is often the most complex part of the integration, and the part most likely to need updating as either system evolves.
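Once the spreadsheet exists, the transformation logic often reduces to a lookup table plus explicit handling for the awkward cases. Every value and field name below is hypothetical:

```python
# Hypothetical mapping: CRM "Company Type" -> ERP "Customer Category".
FIELD_MAP = {
    "Limited Company": "UK_LTD",
    "Sole Trader": "SOLE_TRADER",
    "Partnership": "PARTNERSHIP",
}

def map_company_type(crm_value, annual_revenue=0):
    """Translate a CRM value to an ERP category, handling the edge cases
    the mapping spreadsheet surfaced."""
    # One CRM value maps to two ERP categories depending on context.
    if crm_value == "Limited Company" and annual_revenue > 10_000_000:
        return "UK_LTD_ENTERPRISE"
    try:
        return FIELD_MAP[crm_value]
    except KeyError:
        # Unknown values are flagged loudly, never silently dropped.
        raise ValueError(f"No ERP mapping for CRM value {crm_value!r}")
```

The `KeyError` branch matters most: when either system adds a new dropdown value, the integration fails visibly instead of writing garbage.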


Data synchronisation strategies that hold up

Data synchronisation between systems requires clear answers to three questions: which direction does data flow, which system owns which fields, and what happens when two systems change the same record at the same time.

Sync direction

One-way sync
Data flows from source to destinations. Source is authoritative. Simplest model. Use unless you have a specific reason not to.
Two-way sync
Both systems can create and update records. Requires conflict resolution, which adds complexity that is easy to underestimate.
Primary-replica
One system is authoritative. Others receive copies. All writes go back to the primary. Good middle ground.

The source-of-truth-by-field approach

Rather than declaring one system the overall authority, assign ownership at the field level. Your CRM owns contact details. Your ERP owns financial data. Your project tool owns delivery dates. When a field is updated in its owning system, the change propagates outward. When someone tries to update a field in a non-owning system, the integration either rejects the change or flags it for review.

This approach, closely related to the single source of truth principle, eliminates most synchronisation conflicts by making ownership explicit.
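In code, field-level ownership is an explicit map consulted on every write. A minimal sketch with an invented ownership table and field names:

```python
# Hypothetical ownership map: each field belongs to exactly one system.
FIELD_OWNERS = {
    "email": "crm",
    "credit_limit": "erp",
    "delivery_date": "projects",
}

def apply_update(record, field, value, source_system):
    """Accept a field update only from the system that owns the field;
    anything else is returned for review instead of being written."""
    if FIELD_OWNERS.get(field) == source_system:
        record[field] = value
        return None
    return {"field": field, "value": value, "from": source_system}

customer = {"email": "old@example.com", "credit_limit": 5000}
flagged = apply_update(customer, "credit_limit", 10000, "crm")  # CRM does not own this
apply_update(customer, "email", "new@example.com", "crm")       # CRM owns email
```

The CRM's email change goes through; its attempt to touch the credit limit is flagged rather than applied, so the ERP's financial data stays authoritative.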

Conflict resolution

Last write wins
The later update overwrites the earlier one. Simple but lossy.
Source-of-truth wins
The owning system's value always takes precedence. Predictable and safe.
Manual resolution
Flag conflicts for human review. Appropriate for high-value records.

For most UK mid-market businesses, source-of-truth-by-field with manual resolution for exceptions is the right balance between automation and safety.


Legacy system integration without the rewrite

Most businesses have at least one system that is ten or fifteen years old, runs on technology that is no longer mainstream, and is too critical to replace overnight. Legacy system integration is the work of connecting these systems to your modern stack without destabilising them.

1. File-based export and import

The legacy system writes a CSV or XML file to a shared location. A modern service picks it up, parses it, and routes the data. Unglamorous but reliable. Many legacy systems have been exporting flat files since before REST APIs existed.

2. Direct database connection

Connect to the legacy system's database and read or write data directly. This bypasses the legacy system's business logic. If the legacy system validates data on entry, your direct writes skip that validation. Use read-only connections where possible.

3. Wrapper service (strangler fig pattern)

Build a modern API that sits in front of the legacy system. New consumers talk to the wrapper. The wrapper translates requests into whatever the legacy system understands. Over time, you can migrate functionality from the legacy system to the wrapper without changing any consumers.

4. Screen scraping and UI automation

Automate the legacy system's user interface. This is a last resort. It is fragile, slow, and breaks whenever the UI changes. But sometimes it is the only option for systems with no API, no database access, and no file export capability.

The critical rule for legacy integration: Never change the legacy system's behaviour during integration. The legacy system works. People depend on it. Your integration must adapt to the legacy system, not the other way around.
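The first approach above is worth a sketch, because it is the one most teams underestimate. A minimal parser for a flat-file export, with invented column names (`CUSTNO`, `CUSTNAME`) standing in for whatever the legacy system actually writes:

```python
import csv
import io

def import_legacy_export(csv_text):
    """Parse a legacy CSV export and normalise rows for the modern side.
    The legacy system is never touched; we adapt to what it produces."""
    reader = csv.DictReader(io.StringIO(csv_text))
    records = []
    for row in reader:
        records.append({
            # Legacy exports often carry stray whitespace; strip defensively.
            "customer_id": row["CUSTNO"].strip(),
            "name": row["CUSTNAME"].strip(),
        })
    return records

# A file the legacy system might drop into a shared location overnight.
legacy_export = "CUSTNO,CUSTNAME\n 1001 ,Acme Limited\n1002,Beta Ltd\n"
records = import_legacy_export(legacy_export)
```

In practice this runs on a schedule, watches a shared directory, and archives each file after a successful import so a failed run can be replayed.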


API integration patterns for reliable data flow

When systems do expose APIs, the quality of your integration depends on how you handle the things that go wrong. Networks fail, services restart, rate limits trigger, and data arrives in unexpected formats.

Idempotent operations

An idempotent operation produces the same result whether you call it once or ten times. Design every write operation to be idempotent. Use unique request identifiers so the receiving system can recognise and deduplicate retries.
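The deduplication side of that pattern looks like this. A sketch of a receiving system that remembers request IDs; the ID format and payload are illustrative:

```python
class OrderReceiver:
    """Deduplicate writes by unique request ID so retries are safe."""

    def __init__(self):
        self._seen = {}   # request_id -> result of the original call
        self.orders = []

    def create_order(self, request_id, payload):
        if request_id in self._seen:
            # Replayed request: return the original result, write nothing.
            return self._seen[request_id]
        self.orders.append(payload)
        result = {"status": "created", "order": payload}
        self._seen[request_id] = result
        return result

receiver = OrderReceiver()
first = receiver.create_order("req-123", {"sku": "ABC", "qty": 1})
# The network timed out and the sender retried with the same request ID.
retry = receiver.create_order("req-123", {"sku": "ABC", "qty": 1})
```

Calling it twice creates one order, not two, which is exactly the property that makes aggressive retry logic safe.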

Circuit breakers

When a downstream system is failing, continuing to send requests makes things worse. A circuit breaker tracks failure rates and stops sending requests for a cooldown period. This prevents cascade failures where one system's outage takes down every connected system.
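A minimal circuit breaker can be sketched as a failure counter plus a timestamp. The threshold and cooldown values here are arbitrary, and the clock is injectable so the behaviour can be tested without waiting:

```python
import time

class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures;
    reject calls until `cooldown` seconds have passed."""

    def __init__(self, threshold=3, cooldown=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open, request rejected")
            self.opened_at = None   # cooldown elapsed: try again
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()
            raise
        self.failures = 0
        return result

# Demonstrate with a fake clock and a downstream call that is failing.
now = [0.0]
breaker = CircuitBreaker(threshold=2, cooldown=10.0, clock=lambda: now[0])

def flaky():
    raise ValueError("downstream unavailable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ValueError:
        pass   # the breaker records each failure

rejected = False
try:
    breaker.call(lambda: "ok")
except RuntimeError:
    rejected = True   # circuit is open: request short-circuited

now[0] = 11.0   # cooldown elapsed
recovered = breaker.call(lambda: "ok")
```

While the circuit is open the failing system gets breathing room instead of a retry storm, which is what stops one outage cascading through every connected system.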

Dead letter queues

When a message fails processing after all retry attempts, it goes to a dead letter queue rather than being discarded. The dead letter queue is the safety net that means a transient failure at 3am does not result in permanently lost data. Monitor it: an empty queue is healthy, a growing queue means something is wrong upstream.

Rate limiting and backpressure

Most APIs enforce rate limits. Your integration needs to respect them gracefully, not hammer the API until it blocks you. Implement rate limiting on your side, queue requests that exceed the limit, and process them at a sustainable pace.
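A token bucket is a common way to enforce that pace on your side. A sketch with an injectable clock; the rate and capacity are placeholders for whatever the downstream API's limits actually are:

```python
import time

class TokenBucket:
    """Token bucket rate limiter: refill `rate` tokens per second up to
    `capacity`; a request proceeds only when a token is available."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.clock = clock
        self.tokens = capacity
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller should queue the request, not drop it

# Demonstrate with a fake clock: a burst of three against a capacity of two.
now = [0.0]
bucket = TokenBucket(rate=2, capacity=2, clock=lambda: now[0])
burst = [bucket.allow() for _ in range(3)]   # two allowed, third refused
now[0] = 0.5                                 # half a second later: one token refilled
later = bucket.allow()
```

The refused request is the important case: it should go back onto a queue and wait, not be discarded or hammered through anyway.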


Choosing the right integration middleware

If you are connecting more than three systems, you probably need middleware. The choice depends on your throughput requirements, your team's skills, and how much operational overhead you are willing to accept.

RabbitMQ
A mature message broker that handles tens of thousands of messages per second. Good documentation, wide language support. Best for mid-market businesses with moderate throughput who value operational simplicity.
Apache Kafka
A high-throughput event streaming platform with persistent, log-based storage. Handles hundreds of thousands of messages per second. Best for businesses with high data volumes, event sourcing requirements, or the need to replay historical events.
Cloud-native services
Amazon SQS, Azure Service Bus, and Google Pub/Sub. Pay per message, no infrastructure management. Best for businesses already committed to a cloud provider. Be aware of vendor lock-in.

Ask these questions when deciding: How many messages per second? Do you need event replay? Who will operate it? What is your budget model? Under 10,000 messages per second, any option works. Over 100,000, Kafka or a cloud-native service. If you do not have a DevOps team, cloud-native services remove that burden.


How we approach system integration projects

We have been building system integration services since 2005. Over 50 Laravel applications, connecting CRMs, ERPs, accounting packages, legacy databases, and everything in between. Our approach follows a consistent pattern because the problems are consistent.

Discovery
Map every system, every data flow, and every timing requirement
Pattern selection
Choose the right architecture based on your specific needs
Incremental delivery
Connect systems one pair at a time, starting with highest value
Monitoring from day one
Every integration gets a dashboard showing throughput, latency, and errors

Every field mapping, every transformation rule, and every error handling decision is documented. Your team can maintain and extend the integration without depending on us indefinitely.


Connect your systems

If your business is running on disconnected systems and the manual workarounds are slowing you down, we should talk. We will map out what a connected system looks like for your specific setup.

Book a discovery call →