Serverless React on Lambda - Part 3 (Unexpected Problems of This Project)

Now, it’s not all roses. We did run into a few unexpected problems with serverless React on Lambda, and I’ll go over them briefly.

Those problems are:

  • Backwards-incompatible changes are painful
  • Discovered data problems (But fixed them!)
  • Still need to gather performance metrics
  • Unexpected timeouts on Lambda

Backwards-incompatible changes are painful

First of all, we discovered that backwards-incompatible changes are painful. As I just mentioned, we do continuous deployment for all of our Lambda functions, which means that as soon as something is merged to master, it’s out in production; there’s no real delay.

Suppose you want to change the JSON schema I talked about earlier, say, by renaming one of the parameters.

Well, we did exactly that a couple of weeks after launch, and shortly afterwards discovered that all of our emails were failing. That happened because we hadn’t yet made the corresponding change to the backend that calls our Lambda functions: it was still passing data in the old format, while Lambda immediately expected data in the new format.

So Lambda was just throwing validation errors. We reverted that change very quickly, and we’ve learned that all changes now need to be backwards compatible, at least until the backend has been modified to fit the new structure of the data as well.
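
One way to make a rename like this backwards compatible is to have the Lambda’s schema accept both the old and the new parameter name for a transition period and normalize inside the function. Here’s a minimal sketch using the ajv JSON Schema validator; the field names recipient_name and recipientName are made up for illustration, not our actual schema:

```javascript
// Minimal sketch: accept both the old and the new field name during migration.
// Field names (recipient_name / recipientName) are illustrative only.
const Ajv = require('ajv');
const ajv = new Ajv();

const schema = {
  type: 'object',
  properties: {
    recipientName: { type: 'string' },   // new name
    recipient_name: { type: 'string' },  // old name, still accepted
  },
  // Valid as long as the payload carries at least one of the two variants.
  anyOf: [
    { required: ['recipientName'] },
    { required: ['recipient_name'] },
  ],
};

const validate = ajv.compile(schema);

function normalize(payload) {
  if (!validate(payload)) {
    throw new Error('Validation failed: ' + ajv.errorsText(validate.errors));
  }
  // Map the old name onto the new one, so the rest of the code
  // only ever sees the new format.
  return {
    ...payload,
    recipientName: payload.recipientName ?? payload.recipient_name,
  };
}
```

Once the backend only ever sends the new name, the branch for the old one can be dropped from the schema.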

Discovered data problems (But fixed them!)

We also unexpectedly discovered problems with the data we held in our database. For example, every single one of our emails has a big highlighted call-to-action button.

It might say something like “Log in and accept your invitation” or “Go and submit your review for this person.” We had our emails set up so that an optional invitation token parameter could be passed for each user, which was supposed to indicate that the user hadn’t actually claimed their account on our platform yet.

If the token was present, the call-to-action should say “Register and submit your review”; otherwise, it should just say “Submit your review.” Well, we had a couple of customers contact us and ask why they were being asked to register for an account that they’d already had for a couple of months.

It turned out that the code in our backend wasn’t properly deleting the invitation token, and the backend was just sending that stale information along to Lambda, which meant that our emails were doing exactly what they were coded to do.

They were telling users that they needed to register, because that’s what the data said. So having this robust data validation helped us discover problems that we didn’t think we had, in a very unexpected place, and we were able to solve them pretty quickly.
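
To make that concrete, here’s a minimal sketch of how a call-to-action can branch on an optional invitation token; the component and prop names are hypothetical, and it uses plain React.createElement so it runs without a JSX build step:

```javascript
// Minimal sketch of a CTA that branches on an optional invitation token.
// Component and prop names are illustrative, not the real codebase.
const React = require('react');

function CallToAction({ invitationToken, reviewUrl }) {
  // An invitation token is supposed to mean the user has not yet
  // claimed their account, so they must register first.
  const label = invitationToken
    ? 'Register and submit your review'
    : 'Submit your review';

  return React.createElement(
    'a',
    { href: reviewUrl, className: 'cta-button' },
    label
  );
}

module.exports = { CallToAction };
```

The bug wasn’t in rendering logic like this; it was upstream, where the backend kept sending a token it should have deleted, which is exactly the kind of thing the schema validation surfaced.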

Still need to gather performance metrics

Another problem, though it’s not so much a problem as a lack-of-time issue, is that we haven’t actually gathered performance metrics yet.

I said right at the beginning of the talk that one of the reasons we wanted to do this was that we had a very spiky, very heavy workload. I’ll be honest: I don’t actually know what our workload was like, and I don’t know how much it has improved.

I presume that it has, but this is one of the things we’re looking for help on. We know that we want to move in this direction.

We know that we want to gather performance metrics and make sure that our servers and services are running smoothly; it’s just not something we’ve gotten to yet.
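
When we do get to it, Lambda already reports basic metrics to CloudWatch, so a starting point might look like the sketch below, which pulls a day of duration statistics for a single function. The function name, region, and time window are placeholders, and this is one assumed way to do it rather than anything we have actually set up:

```javascript
// Minimal sketch: pull basic Lambda duration metrics from CloudWatch.
// Function name, region, and time window are placeholders.
const {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} = require('@aws-sdk/client-cloudwatch');

const client = new CloudWatchClient({ region: 'us-east-1' });

async function lambdaDurationStats(functionName) {
  const now = new Date();
  const command = new GetMetricStatisticsCommand({
    Namespace: 'AWS/Lambda',
    MetricName: 'Duration',
    Dimensions: [{ Name: 'FunctionName', Value: functionName }],
    StartTime: new Date(now.getTime() - 24 * 60 * 60 * 1000), // last 24 hours
    EndTime: now,
    Period: 3600, // one datapoint per hour
    Statistics: ['Average', 'Maximum'],
  });
  const { Datapoints } = await client.send(command);
  return Datapoints;
}

lambdaDurationStats('render-email').then(console.log).catch(console.error);
```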

Unexpected timeouts on Lambda

There’s another problem we’ve been having: unexpected timeouts on AWS Lambda. Most of our invocations run perfectly smoothly, and all of our emails are getting sent out.

But we’ve discovered that some of these Lambda functions, after the email has been handed over to Postmark (our email delivery service), just kind of hang out on Lambda. They don’t exit; they just wait until the timeout happens and Lambda kills the process.

We don’t know why, but we’re hoping we can figure out why it’s happening so that we can cut these processes off earlier, because of course every millisecond we spend on Lambda costs a little bit more money. So it’d be really nice to figure out what’s causing these timeouts.
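
We haven’t pinned down the cause, but a common culprit for this pattern in Node.js Lambdas is an open handle, such as a keep-alive socket left over from the HTTP request to the delivery service, keeping the event loop from draining; by default, a callback-style Node handler doesn’t return until the event loop is empty. A minimal sketch of the usual workaround, assuming that’s what is happening in our case, looks like this:

```javascript
// Minimal sketch: don't wait for the event loop to drain before returning.
// Assumes the hang is caused by lingering handles (e.g. keep-alive sockets
// from the HTTP client talking to the email delivery service).
exports.handler = (event, context, callback) => {
  // With this flag set, Lambda freezes the process as soon as the callback
  // is invoked instead of waiting for an empty event loop.
  context.callbackWaitsForEmptyEventLoop = false;

  sendEmail(event) // hypothetical helper that renders and sends the email
    .then(() => callback(null, { status: 'sent' }))
    .catch((err) => callback(err));
};

// Placeholder so the sketch is self-contained.
function sendEmail(event) {
  return Promise.resolve(event);
}
```

Even with that flag set, it would be worth hunting down the open handle itself, since the frozen process (and its socket) gets thawed again on the next warm invocation.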