You live in the cloud. Your app lives in the cloud. Mostly. You’ve decided to add access controls via a simple proxy. Your service is supposed to have “100%” uptime, so of course the proxy has to have “100%” uptime.
So far so good – except that the back end only has 99.9% uptime and your stupid ops people have set up alarms that check service uptime via your proxy. Since you don’t want to get dinged, you figure you’ll retry. No alarms, no problem. Right?
Truth is you’ve just made your app slower. Probably a lot slower. And more expensive. And less stable.
Look at the data
Have a look at this picture: it shows a test of a proxy that retries after 15s.
Let’s focus on the orange data. You’ll have to trust me when I say there are orange dots under the green dots. What you see is that the retry works really well: the typical response time is about 2s. If that fails, we get a response after about 17s (15+2); if that fails, after about 32s (2*15+2); and if that fails, after about 47s (3*15+2). This is great! The proxy works!
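Those latency bands fall straight out of the arithmetic. A minimal sketch, assuming a 15s proxy retry interval and a ~2s back-end response time (both read off the plot, not from any real config):

```python
# Model of the latency bands: the proxy retries every 15s (assumed),
# and a successful attempt takes about 2s (assumed typical latency).
RETRY_INTERVAL = 15.0  # seconds between proxy retries
BASE_LATENCY = 2.0     # typical back-end response time

def observed_latency(failed_attempts: int) -> float:
    """Client-visible latency when the first `failed_attempts` calls time out."""
    return failed_attempts * RETRY_INTERVAL + BASE_LATENCY

# The bands from the plot: 2s, 17s, 32s, 47s
print([observed_latency(n) for n in range(4)])  # [2.0, 17.0, 32.0, 47.0]
```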
Does it though? What should the client do? Should it wait for 50s? Or should it retry 25 times after 2s in the hope that a single call will take the expected 2s? Retry 10 times after 5s to account for some spread? Exponential backoff?
Based on the orange lines the client should absolutely retry every 3-5s. Of course that will kill your proxy and back end, because each of the “timed out” calls will still go through the full proxy/back-end retry cycle. You just DoSed yourself.
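You can put a rough number on that self-DoS. A back-of-the-envelope sketch, with all figures assumed for illustration (a 4s client retry, the worst-case 47s proxy cycle from the plot, and 4 proxy attempts per call):

```python
# Rough amplification estimate: the client fires a new request every 4s
# until one answers, while the proxy keeps running its full retry cycle
# for every call the client has already abandoned. All numbers assumed.
PROXY_ATTEMPTS = 4       # 1 initial call + 3 retries per proxy request
CLIENT_RETRY_EVERY = 4   # seconds between client retries
SLOW_CALL_DURATION = 47  # worst-case proxy cycle from the plot: 3*15 + 2

# Calls the client has in flight over one bad spell...
client_calls = SLOW_CALL_DURATION // CLIENT_RETRY_EVERY + 1
# ...each of which the proxy dutifully retries against the back end.
backend_calls = client_calls * PROXY_ATTEMPTS
print(client_calls, backend_calls)  # 12 48
```

One slow user request fans out into dozens of back-end calls, which is exactly how you take down your own service.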
Of course the blue data is more realistic. Under load there is actual spread; some calls really do take up to 15s. So really you want exponential backoff. But even now you are abandoning calls to the retry pattern and DoSing yourself. Not as badly, but still.
In both of the above cases your client contains retry code. Now, why would you also have retry code in your proxy?
I don’t believe you!
Ok. Just for you I have created this cool little toy on GitHub which lets you walk through this step by step. Let’s say your server takes at least 2s to respond and at most 6s. Let’s model this as a Gaussian, because they are pretty:
So far so good. Now let’s look at the red line and see what happens if we retry. If we retry early, we give up on any chance of the old request being fulfilled and start the wait again from the beginning. What this shows very nicely is that for any retry before you are guaranteed completion at 6s, your performance gets worse.
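You can check this claim with a quick Monte Carlo run. A minimal sketch, assuming response times drawn from a Gaussian with mean 4s and standard deviation 0.7s, re-drawn until they land in the 2–6s window (all parameters are my assumptions, not taken from the toy):

```python
import random

def draw_response(rng):
    """One back-end response time: Gaussian-ish, clamped to [2s, 6s] by redrawing."""
    while True:
        t = rng.gauss(4.0, 0.7)
        if 2.0 <= t <= 6.0:
            return t

def completion_time(retry_after, rng):
    """Total wait when we abandon and restart any attempt slower than retry_after."""
    total = 0.0
    while True:
        t = draw_response(rng)
        if t <= retry_after:
            return total + t
        total += retry_after  # gave up on this attempt, start the wait over

rng = random.Random(42)
for retry_after in (4.5, 5.0, 6.0):
    avg = sum(completion_time(retry_after, rng) for _ in range(20000)) / 20000
    print(f"retry after {retry_after}s -> mean completion {avg:.2f}s")
```

Running this, the mean completion time is lowest when you never retry before the guaranteed 6s: every earlier cutoff throws away progress on attempts that were about to finish.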
How’s that different from the client doing the retry? Admittedly, it isn’t. Except that now the client has to wait until it’s guaranteed that the proxy would have returned!