API Integrations

Connecting Systems That Weren't Designed to Talk


Modern businesses run on connected systems. Your application talks to payment processors, email services, accounting software, CRMs, and a dozen other external services. Each connection is an API integration, and each one is a potential point of failure.

Good API integration is invisible. Data flows between systems automatically. Orders appear in your accounting package. Payment confirmations trigger fulfilment workflows. Customer records stay consistent across every platform. Bad API integration creates operational headaches: failed payments that never retry, missing customer data, orders that vanish between systems.

Since 2005, we have built and maintained API integrations across more than 50 Laravel applications. Payment processors, shipping providers, accounting platforms, marketing tools, government services. The difference between an integration that works in a demo and one that works in production comes down to engineering decisions made before the first line of code is written.

If your systems need to talk to each other reliably, and you need a team that understands what "reliably" actually means in practice, book a discovery call and we will walk through your integration requirements.


Why API integration is harder than it looks

Every API integration has the same underlying problem: you are coupling your system to something you do not control. The external service can change behaviour, go offline, rate-limit you, return unexpected data, or deprecate the endpoint you depend on.

This is not a solvable problem in the sense that you can eliminate it. It is a constraint you design around. The goal is not to prevent external services from failing. The goal is to ensure that when they fail, your system continues to operate in a predictable, recoverable state.

Four things make API integration consistently difficult:

You do not control the other side

External APIs change. Providers update their systems, modify response formats, deprecate endpoints. Sometimes they notify you. Sometimes they do not. Your integration needs to handle versions you did not anticipate and degrade gracefully when fields disappear.

Networks are unreliable

Requests fail. Timeouts happen. Services go down. Packets get lost. DNS resolves incorrectly. TLS handshakes fail. An integration that works perfectly in development can fail unpredictably in production. You must assume failure and design for it.

Data does not map cleanly

Your system's concept of a "customer" might not match theirs. Field names differ. Required fields in one system are optional in another. Date formats vary. Currency precision differs. Keeping a single source of truth across connected systems requires deliberate data mapping at every boundary.

Errors are ambiguous

Did the request succeed? Fail? Partially succeed? A timeout does not mean failure. A 500 error might have processed the request before erroring. Different APIs communicate errors differently. Error handling must interpret responses correctly regardless of how they are formatted.


What goes wrong with naive API integration

Most integration code starts the same way. A developer reads the API documentation, writes a function that makes an HTTP request, parses the response, and moves on. The code works in development. It works in the first few weeks of production. Then it fails.

The naive approach treats the external API like a local function call: synchronous, reliable, and consistent. This assumption is false in every particular.

  • No timeout configured. The HTTP client uses its default timeout (often 30 seconds, sometimes infinite). A hung connection blocks threads, exhausts connection pools, and cascades into system-wide failure.
  • No retry logic. Transient failures become permanent failures. Users retry manually, creating duplicate operations.
  • No idempotency. Retries create duplicate records, duplicate charges, duplicate emails. The customer gets charged twice. The order gets placed twice.
  • Inline execution. API calls happen in the request-response cycle. Slow external APIs make your application slow. Failing external APIs make your application fail.
  • No logging. When something fails, no one knows what was sent, what was received, or how long it took. Debugging becomes guesswork.
  • Tight coupling. API client code is scattered throughout the codebase. When the external API changes, dozens of files need updating.

These are not edge cases. They are the normal operating conditions of any third-party API integration that runs long enough in production.


How we build API integrations that work in production

Reliable API integration requires explicit design decisions about failure modes. Every external call needs answers to specific questions before writing code. What happens if this request times out? What if the service is down for an hour? What if the response is malformed? What if we get rate-limited? What if the same request is submitted twice?

We assume failure

Every API call can fail. We design for it from the start. Integrations that assume success work in demos. Integrations that assume failure work in production. Before implementing any integration, we define the failure envelope: every way the call can go wrong and what the system does in each case.

We make operations idempotent

If a request fails and we retry, the same operation must not happen twice. We use unique identifiers (idempotency keys) to prevent duplicate charges, duplicate orders, and duplicate records. Most payment processors and financial APIs support this natively, as documented in Stripe's idempotent requests guide. For APIs that do not, we implement idempotency on our side by tracking operations in a database before sending them.
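The tracking approach for APIs without native idempotency support can be sketched like this (Python for brevity; `send` and the in-memory store are stand-ins for the real transport and a database table):

```python
class IdempotentClient:
    """Sketch: record an operation under a stable key before sending it.

    A retry of the same business operation reuses the same key and
    returns the cached result instead of repeating the side effect.
    """

    def __init__(self, send):
        self.send = send          # stand-in for the real API call
        self.store = {}           # key -> result; production uses a DB table

    def charge(self, order_id, amount_pence):
        # Derive the key from the business operation, not the HTTP request,
        # so a retry of the same order always maps to the same key.
        key = f"charge:{order_id}"
        if key in self.store:
            return self.store[key]  # already processed: no duplicate charge
        result = self.send({"amount": amount_pence, "idempotency_key": key})
        self.store[key] = result
        return result
```

The key design choice is deriving the key from the business operation (the order) rather than generating one per attempt, so retries from any code path collapse to a single charge.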

We log everything (safely)

When an API integration fails at 3am, the logs need to tell us exactly what happened. We log request details, response details, timing, outcome, and a correlation ID linking the request to the business operation that triggered it. Sanitisation is critical: logs must not contain API keys, passwords, credit card numbers, or personal data. For more on maintaining complete operational records, see our approach to audit trails.
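A minimal sketch of the sanitisation step (the key list here is illustrative; a real integration maintains one per provider):

```python
import json

# Illustrative: field names that must never reach the log store.
SENSITIVE_KEYS = {"api_key", "password", "card_number", "authorization"}

def sanitise(payload):
    """Recursively redact sensitive fields from a request/response payload."""
    if isinstance(payload, dict):
        return {
            k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else sanitise(v)
            for k, v in payload.items()
        }
    if isinstance(payload, list):
        return [sanitise(v) for v in payload]
    return payload

def log_request(correlation_id, payload):
    # One structured line per request, safe to ship to any log store.
    return json.dumps({
        "correlation_id": correlation_id,
        "request": sanitise(payload),
    })
```

The correlation ID is what links this line back to the business operation that triggered the call, which is what makes a 3am failure diagnosable.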

We isolate external dependencies

We wrap external API calls in abstraction layers. The rest of the codebase interacts with our wrapper, not the external API directly. When the external API changes, we change the wrapper. Every other file remains unchanged. This pattern also lets us mock the external API for testing, swap providers without touching business logic, and add logging and retry logic in one place.
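The shape of such a wrapper, sketched in Python (the provider endpoint and field names are hypothetical; the point is that translation between our domain and theirs happens in exactly one place):

```python
class PaymentGateway:
    """Thin wrapper isolating the codebase from one provider's API.

    `transport` stands in for the provider's SDK or an HTTP client;
    swapping providers means swapping this class, nothing else.
    """

    def __init__(self, transport):
        self.transport = transport

    def charge(self, amount_pence, currency="GBP"):
        # Map our domain terms onto the provider's wire format here only.
        raw = self.transport.post("/charges", {
            "amount": amount_pence,
            "currency": currency,
        })
        # Normalise the provider's response into our own shape, so the
        # rest of the codebase never sees provider-specific fields.
        return {"id": raw["id"], "succeeded": raw["status"] == "succeeded"}
```

Testing then becomes trivial: pass in a fake transport and the whole business flow runs without touching the network.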


REST API integration patterns for different scenarios

Different integration scenarios call for different patterns. The choice depends on latency requirements, reliability needs, and the capabilities of the external system. We use four primary patterns across our custom web applications.

Synchronous request-response

Send a request, wait for a response, proceed based on the result. Used for real-time data lookups and synchronous operations where the user is waiting (payment authorisation, address validation). The risk is that your application blocks waiting. We mitigate with aggressive timeouts, circuit breakers, and fallback behaviour.

Asynchronous with webhooks

Send a request, receive acknowledgement that it was queued, then receive results via webhook callback. Webhook integration requires particular care: endpoints must verify that requests come from the expected source using HMAC signature verification, acknowledge quickly (return 200 within one second), store the raw payload, and process asynchronously. Duplicate webhooks are normal, so processing must be idempotent.
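The signature check can be sketched as follows (header name and encoding vary by provider; this assumes a hex HMAC-SHA256 of the raw request body, which is the common shape):

```python
import hashlib
import hmac

def verify_signature(secret, raw_body, received_sig):
    """Check that a webhook request came from the expected source.

    `raw_body` must be the bytes as received, before any JSON parsing:
    re-serialising the payload can change byte order and break the check.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(expected, received_sig)
```

Only after this check passes does the handler store the payload and return 200; the actual processing happens later on a queue.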

Queue-based processing

Decouple the triggering of an operation from its execution. The request goes into a queue, a background job processes it, and the result is stored or forwarded. Laravel's queue system handles retry logic, rate limiting, and dead letter routing out of the box. When an external API has a bad hour, queued operations wait and retry rather than failing immediately.
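Laravel provides this machinery natively; the mechanics can be sketched in a few lines (Python here, with an in-memory deque standing in for the real queue backend):

```python
from collections import deque

def process_queue(queue, handler, max_attempts=3):
    """Minimal queue loop: retry failed jobs, dead-letter after max_attempts.

    `queue` holds (job, attempts) pairs; failed jobs go back on the queue
    until they exhaust their attempts, then land in the dead letter list
    for manual review.
    """
    dead_letter = []
    while queue:
        job, attempts = queue.popleft()
        try:
            handler(job)
        except Exception:
            if attempts + 1 >= max_attempts:
                dead_letter.append(job)       # exhausted: needs a human
            else:
                queue.append((job, attempts + 1))  # retry later
    return dead_letter
```

A production queue adds delays between attempts (see the backoff discussion below) rather than retrying immediately, but the control flow is the same.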

Polling

Periodically check an external system for updates. Used when the external system does not support webhooks, when webhook delivery is unreliable, or when you need to synchronise data that changes outside your control. We implement polling with exponential backoff during quiet periods, proper pagination for large datasets, and state tracking to process only changed records.
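One polling tick can be sketched like this (`fetch_since` is a stand-in for the external API call, returning the changed records plus a new cursor):

```python
def poll_changes(fetch_since, state, min_interval=5, max_interval=300):
    """Fetch records changed since the stored cursor, adjusting cadence.

    The interval doubles when nothing changed (exponential backoff during
    quiet periods, capped at max_interval) and resets when there is activity.
    The cursor only advances on success, so a failed tick re-fetches safely.
    """
    records, new_cursor = fetch_since(state["cursor"])
    if records:
        state["cursor"] = new_cursor
        state["interval"] = min_interval   # activity: poll quickly again
    else:
        state["interval"] = min(state["interval"] * 2, max_interval)
    return records
```

Persisting `state` between ticks is what makes this state tracking: only records changed since the last successful sync are processed, however large the upstream dataset grows.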

Requirement | Pattern | Trade-off
User waiting for result | Synchronous | Must handle timeouts gracefully
Operation takes more than 5 seconds | Async with webhooks or polling | More complex state management
High volume (1000+ calls/min) | Queue-based | Latency between trigger and execution
External system unreliable | Queue with dead letter | Need monitoring and manual review
Need eventual consistency | Polling or event-driven | Data may be stale between sync cycles
For a deeper look at how these patterns compose in practice, see our guide to integration patterns.


Error handling, rate limiting, and security

The operational concerns of API integration are where most implementations fall short. Error handling, rate limiting, and security are not afterthoughts. They are load-bearing parts of the design.

Error classification and retry logic

Different errors require different responses. We classify errors into four categories:

Transient errors (retry immediately)
Network timeouts, connection refused, DNS failures, 502/503/504 responses. These often resolve on retry with exponential backoff.
Rate limit errors (retry with delay)
429 Too Many Requests. The request is valid but we are sending too many. Respect the Retry-After header; aggressive retry makes the problem worse.
Client errors (do not retry)
400, 401, 403, 404, 422 responses. The request is malformed or unauthorised. Retrying will not help. Log and alert for investigation.
Ambiguous errors (retry with idempotency)
Timeouts where we do not know if the request was processed, 500 errors that might have partially succeeded. Retry with idempotency keys and verify state before proceeding.

Our retry implementation uses exponential backoff with jitter: immediate first attempt, then delays of 1s, 2s, 4s, 8s before moving the operation to a dead letter queue for manual review.
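That schedule can be sketched as follows (Python; `is_transient` encodes the error classification above, and `sleep` is injectable so the delays are visible in tests):

```python
import random
import time

def retry_with_backoff(call, is_transient, max_attempts=5,
                       base_delay=1.0, sleep=time.sleep):
    """Retry transient failures with exponential backoff and full jitter.

    First attempt is immediate; subsequent delays grow as 1s, 2s, 4s, 8s
    (each randomised between 0 and the cap). A non-transient error or an
    exhausted attempt budget re-raises so the caller can dead-letter it.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            if not is_transient(exc) or attempt == max_attempts - 1:
                raise                      # client error or out of attempts
            cap = base_delay * (2 ** attempt)
            # Full jitter spreads retries so clients don't stampede together
            # the moment the service comes back.
            sleep(random.uniform(0, cap))
```

The jitter matters: without it, every client that failed at the same moment retries at the same moment, recreating the load spike that caused the failure.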

Circuit breakers and rate limiting

If an external service is failing repeatedly, continuing to call it wastes resources and can cascade into broader failure. The circuit breaker pattern detects repeated failures and stops calling the failing service for a period, giving it time to recover. Three states: closed (normal), open (fail fast), half-open (testing recovery). Circuit breakers prevent one bad third-party API integration from bringing down your entire application.
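A minimal sketch of those three states (thresholds here are illustrative; production values depend on the service's error budget):

```python
import time

class CircuitBreaker:
    """Closed: calls pass through. Open: fail fast without calling.
    Half-open: after recovery_time, one trial call is allowed through."""

    def __init__(self, failure_threshold=5, recovery_time=30,
                 clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.recovery_time = recovery_time
        self.clock = clock
        self.failures = 0
        self.opened_at = None      # None means the circuit is closed

    def call(self, fn):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.recovery_time:
                raise RuntimeError("circuit open: failing fast")
            # recovery_time has elapsed: half-open, let one call through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = self.clock()   # trip (or re-trip) the breaker
            raise
        self.failures = 0
        self.opened_at = None      # success closes the circuit
        return result
```

The injectable clock keeps the breaker testable; in an application it wraps the same abstraction layer described earlier, so every caller gets the protection automatically.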

Most external APIs have rate limits. We implement client-side throttling (token bucket or sliding window) to stay within limits rather than hitting limits and dealing with errors. This is more efficient and more reliable.
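The token bucket variant is a few lines (capacity and refill rate come from the provider's documented limits; the clock is injectable for testing):

```python
import time

class TokenBucket:
    """Client-side throttle: spend a token per request, refill over time.

    Bursts up to `capacity` are allowed; sustained throughput settles at
    `rate_per_sec`. A denied acquire means the caller waits or queues.
    """

    def __init__(self, rate_per_sec, capacity, clock=time.monotonic):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.clock = clock
        self.tokens = capacity
        self.last = clock()

    def try_acquire(self):
        now = self.clock()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False               # over the limit: back off, don't send
```

Combined with the queue pattern above, a denied acquire simply means the job waits in the queue a little longer, which is exactly the behaviour the provider's rate limit is asking for.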

Authentication and credential management

API credentials are keys to external systems. They never belong in code or version control. We load credentials from environment variables or secret management systems at runtime, implement OAuth 2.0 with proper token refresh scheduling, and scope permissions to the minimum required. All API calls go over TLS 1.2 or higher with certificate validation enabled.


Third-party API integration for small business

If you run a small or medium-sized business, you probably do not need an enterprise integration platform. You need specific systems connected to each other reliably: your payment processor to your accounting software, your CRM to your email marketing, your order management to your shipping provider.

The challenge for smaller businesses is that integration problems hit harder. A large enterprise has a dedicated team to monitor and fix integration failures. When your Stripe-to-Xero sync breaks at a 20-person company, it might be days before someone notices the invoices stopped reconciling.

This is where API integration for small business differs from enterprise integration. You need:

  • Integrations that recover automatically. Retry logic and queue-based processing mean transient failures resolve without human intervention.
  • Monitoring that tells you before customers do. Alerting on error rates, latency, and queue depth catches problems early.
  • Integrations that are maintainable. Clean abstraction layers mean your team (or ours) can update integrations when external APIs change, without rewriting the application.
  • Integrations that are documented. When something does need attention, clear documentation and structured logs mean anyone on the team can diagnose the issue.

We have been building these kinds of integrations since 2005. Payment processors (Stripe, GoCardless, PayPal), accounting software (Xero, FreeAgent), CRMs (HubSpot, Salesforce), email services (Postmark, Mailgun), shipping providers, government APIs, and dozens of industry-specific systems. The integration patterns are consistent even when the specific APIs are not.


Connect your systems

We build API integration services that connect your systems for the real world, where networks fail, APIs change, and services go down. Not brittle connections that break at the first timeout. Integrations designed for production, with proper failure handling and full visibility.

Book a discovery call →