1. Overview

When we’re building applications in a distributed cloud environment, we need to design for failure. This often involves retries.

Spring WebFlux offers us a few tools for retrying failed operations.

In this tutorial, we’ll look at how to add and configure retries to our Spring WebFlux applications.

2. Use Case

For our example, we’ll use MockWebServer and simulate an external system being temporarily unavailable and then becoming available.

Let’s create a simple test for a component connecting to this REST service:

@Test
void givenExternalServiceReturnsError_whenGettingData_thenRetryAndReturnResponse() {

    mockExternalService.enqueue(new MockResponse()
      .setResponseCode(SERVICE_UNAVAILABLE.code()));
    mockExternalService.enqueue(new MockResponse()
      .setResponseCode(SERVICE_UNAVAILABLE.code()));
    mockExternalService.enqueue(new MockResponse()
      .setResponseCode(SERVICE_UNAVAILABLE.code()));
    mockExternalService.enqueue(new MockResponse()
      .setBody("stock data"));

    StepVerifier.create(externalConnector.getData("ABC"))
      .expectNextMatches(response -> response.equals("stock data"))
      .verifyComplete();

    verifyNumberOfGetRequests(4);
}

3. Adding Retries

There are two key retry operators built into the Mono and Flux APIs.

3.1. Using retry

First, let’s use the retry method, which prevents the application from immediately returning an error and re-subscribes a specified number of times:

public Mono<String> getData(String stockId) {
    return webClient.get()
        .uri(PATH_BY_ID, stockId)
        .retrieve()
        .bodyToMono(String.class)
        .retry(3);
}

This will retry up to three times, no matter what error comes back from the web client.
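
We can see the failure path with a test along the lines of our earlier one. Here's a minimal sketch reusing the mockExternalService and verifyNumberOfGetRequests helper from our use case: with retry(3), the original call plus three retries make four attempts, and since we haven't customized error handling yet, the error finally surfaces as a WebClientResponseException:

@Test
void givenServiceStaysDown_whenGettingData_thenFailAfterFourAttempts() {

    // every attempt, including the three retries, receives a 503
    for (int i = 0; i < 4; i++) {
        mockExternalService.enqueue(new MockResponse()
          .setResponseCode(SERVICE_UNAVAILABLE.code()));
    }

    StepVerifier.create(externalConnector.getData("ABC"))
      .expectError(WebClientResponseException.class)
      .verify();

    verifyNumberOfGetRequests(4);
}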

3.2. Using retryWhen

Next, let’s try a configurable strategy using the retryWhen method:

public Mono<String> getData(String stockId) {
    return webClient.get()
        .uri(PATH_BY_ID, stockId)
        .retrieve()
        .bodyToMono(String.class)
        .retryWhen(Retry.max(3));
}

This allows us to configure a Retry object to describe the desired logic.

Here, we’ve used the max strategy to retry up to a maximum number of attempts. This is equivalent to our first example but allows us more configuration options. In particular, we should note that in this case, each retry happens as quickly as possible.
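
We can also observe this immediate re-subscription outside of WebClient. Here's a small sketch on a plain Mono, using an AtomicInteger as a hypothetical attempt counter: the source fails three times and succeeds on the fourth subscription, which Retry.max(3) still allows:

AtomicInteger attempts = new AtomicInteger();

Mono<String> unreliable = Mono.defer(() -> attempts.incrementAndGet() < 4
    ? Mono.<String>error(new IllegalStateException("boom"))
    : Mono.just("recovered"));

// the three retries happen back to back, with no delay between them
StepVerifier.create(unreliable.retryWhen(Retry.max(3)))
  .expectNext("recovered")
  .verifyComplete();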

4. Adding Delay

The main disadvantage of retrying without any delay is that it doesn't give the failing service time to recover. Retrying immediately may even overwhelm the service, making the problem worse and reducing its chances of recovery.

4.1. Retrying with fixedDelay

We can use the fixedDelay strategy to add a delay between each attempt:

public Mono<String> getData(String stockId) {
    return webClient.get()
      .uri(PATH_BY_ID, stockId)
      .retrieve()
      .bodyToMono(String.class)
      .retryWhen(Retry.fixedDelay(3, Duration.ofSeconds(2)));
}

This configuration adds a two-second delay between attempts, which may increase the chances of success. However, if the server is experiencing a longer outage, we'd have to wait longer; and if we configure every delay to be long, even short blips will slow our service down further.

4.2. Retrying with backoff

Instead of retrying at fixed intervals, we can use the backoff strategy:

public Mono<String> getData(String stockId) {
    return webClient.get()
      .uri(PATH_BY_ID, stockId)
      .retrieve()
      .bodyToMono(String.class)
      .retryWhen(Retry.backoff(3, Duration.ofSeconds(2)));
}

In effect, this adds a progressively increasing delay between attempts — roughly at 2, 4, and then 8-second intervals in our example. This gives the external system a better chance to recover from commonplace connectivity issues or handle the backlog of work.
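
We can check this progression with Reactor's virtual time support. Here's a minimal sketch that disables the jitter we'll discuss next, so the delays are exactly 2, 4, and 8 seconds; after 14 seconds of virtual time, the retries are exhausted and the error propagates:

// attempts fail at t=0s, 2s, 6s and 14s; after the third retry the error propagates
StepVerifier.withVirtualTime(() -> Mono.error(new IllegalStateException("down"))
    .retryWhen(Retry.backoff(3, Duration.ofSeconds(2)).jitter(0d)))
  .expectSubscription()
  .thenAwait(Duration.ofSeconds(2 + 4 + 8))
  .expectError()
  .verify();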

4.3. Retrying with jitter

An additional benefit of the backoff strategy is that it adds randomness or jitter to the computed delay interval. Consequently, jitter can help to reduce retry-storms where multiple clients retry in lockstep.

By default, the jitter factor is set to 0.5, which corresponds to a jitter of at most 50% of the computed delay.

Let’s use the jitter method to configure a different value of 0.75 to represent jitter of at most 75% of the computed delay:

public Mono<String> getData(String stockId) {
    return webClient.get()
      .uri(PATH_BY_ID, stockId)
      .accept(MediaType.APPLICATION_JSON)
      .retrieve()
      .bodyToMono(String.class)
      .retryWhen(Retry.backoff(3, Duration.ofSeconds(2)).jitter(0.75));
}

We should note that the possible range of values is between 0 (no jitter) and 1 (jitter of at most 100% of the computed delay).

5. Filtering Errors

At this point, any error from the service will lead to a retry attempt, including 4xx errors such as 400 Bad Request or 401 Unauthorized.

Clearly, we should not retry on such client errors, as the server's response isn't going to be any different. Therefore, let's see how we can apply the retry strategy only for specific errors.

First, let’s create an exception to represent the server error:

public class ServiceException extends RuntimeException {

    private final int statusCode;

    public ServiceException(String message, int statusCode) {
        super(message);
        this.statusCode = statusCode;
    }
}

Next, we’ll create an error Mono with our exception for the 5xx errors and use the filter method to configure our strategy:

public Mono<String> getData(String stockId) {
    return webClient.get()
      .uri(PATH_BY_ID, stockId)
      .retrieve()
      .onStatus(HttpStatus::is5xxServerError, 
          response -> Mono.error(new ServiceException("Server error", response.rawStatusCode())))
      .bodyToMono(String.class)
      .retryWhen(Retry.backoff(3, Duration.ofSeconds(5))
          .filter(throwable -> throwable instanceof ServiceException));
}

Now we only retry when a ServiceException is thrown in the WebClient pipeline.
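
We can confirm the filter with a test similar to our earlier ones. As a sketch (assuming a NOT_FOUND status constant imported the same way as SERVICE_UNAVAILABLE), a 404 response is not mapped to our ServiceException, so the filter rejects the retry and the error surfaces after a single request:

@Test
void givenClientError_whenGettingData_thenNoRetry() {

    mockExternalService.enqueue(new MockResponse()
      .setResponseCode(NOT_FOUND.code()));

    StepVerifier.create(externalConnector.getData("ABC"))
      .expectError(WebClientResponseException.class)
      .verify();

    verifyNumberOfGetRequests(1);
}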

6. Handling Exhausted Retries

Finally, we can account for the possibility that all our retry attempts were unsuccessful. In this case, the default behavior of the strategy is to propagate a RetryExhaustedException wrapping the last error.
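
For example, a caller could detect this condition with Reactor's Exceptions.isRetryExhausted helper and fall back to a default value. This is only an illustrative sketch, with the fallback value made up for the example:

externalConnector.getData("ABC")
  .onErrorResume(throwable -> Exceptions.isRetryExhausted(throwable)
      ? Mono.just("cached stock data")   // hypothetical fallback
      : Mono.error(throwable));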

Instead, let's override this behavior by using the onRetryExhaustedThrow method and providing a generator for our ServiceException:

public Mono<String> getData(String stockId) {
    return webClient.get()
      .uri(PATH_BY_ID, stockId)
      .retrieve()
      .onStatus(HttpStatus::is5xxServerError, 
          response -> Mono.error(new ServiceException("Server error", response.rawStatusCode())))
      .bodyToMono(String.class)
      .retryWhen(Retry.backoff(3, Duration.ofSeconds(5))
          .filter(throwable -> throwable instanceof ServiceException)
          .onRetryExhaustedThrow((retryBackoffSpec, retrySignal) -> 
              new ServiceException("External Service failed to process after max retries", HttpStatus.SERVICE_UNAVAILABLE.value())));
}

Now the request will fail with our ServiceException at the end of a failed series of retries.
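
We can exercise this full path with a test like our first one. Here's a sketch; note that with the five-second backoff the real delays add up, so in practice we might shorten them for tests. When every response is a 503, we expect our ServiceException after four requests:

@Test
void givenServiceKeepsFailing_whenGettingData_thenFailWithServiceException() {

    // the original call and all three retries receive a 503
    for (int i = 0; i < 4; i++) {
        mockExternalService.enqueue(new MockResponse()
          .setResponseCode(SERVICE_UNAVAILABLE.code()));
    }

    StepVerifier.create(externalConnector.getData("ABC"))
      .expectError(ServiceException.class)
      .verify();

    verifyNumberOfGetRequests(4);
}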

7. Conclusion

In this article, we looked at how to add retries in a Spring WebFlux application using the retry and retryWhen methods.

Initially, we added a maximum number of retries for failed operations. Then we introduced a delay between attempts by configuring the fixedDelay and backoff strategies.

Finally, we looked at retrying for certain errors and customizing the behavior when all attempts have been exhausted.

As always, the full source code is available over on GitHub.
