Building Border Times

Cold Starts Improved 15x When I Re-Wrote My Lambda in Go

January 9, 2026
5 min read

My iOS app's splash screen was taking over 7 seconds to load. Most users probably wouldn't even care, as long as they eventually got what they wanted. But I care.

I care deeply. What the hell takes so long over HTTPS that it requires 7 seconds?

The token validation endpoint was taking 1.3-1.4 seconds on cold start. The times endpoint was even worse—up to 3.8 seconds. For a splash screen that needs both, this was unacceptable.

I've used Go before, but I figured using TypeScript for a mostly AI-driven project would be better. Like, more people write TypeScript, so more training data, yadda yadda.

I dunno, I guess so? But also, what the hell, how slow is this crap? Plus I have to transpile the TypeScript files into JavaScript during a build step. It's just yucko, and I should have started it off in Go to begin with.

The Problem with TypeScript on Lambda

Don't get me wrong—TypeScript is fantastic for Lambda functions. The developer experience is great, the ecosystem is mature, and you can move fast. But there's a cost: the Node.js runtime has overhead.

Here's what a typical TypeScript Lambda cold start looks like:

Init Duration: 715ms
Duration: 670ms
Total: 1,385ms
Memory: 108MB

That 715ms init duration is the Node.js runtime loading your dependencies, parsing your TypeScript-compiled JavaScript, and warming up the V8 engine. It happens every time AWS spins up a new Lambda instance.

For background jobs or APIs where latency doesn't matter? Not good for your wallet, but fine. IT'S FINE. For a mobile app splash screen where every millisecond counts? Brutal.

Here's the same Lambda function rewritten in Go:

Init Duration: 68ms
Duration: 90ms
Total: 158ms
Memory: 30MB

That's roughly a 10x faster cold start and 70% less memory. Warm invocations improve too: 90ms vs 317ms, about 3.5x faster.

The Migration Strategy

If you're interested in doing this and you happen to use AWS with SAM, it's pretty simple: spin up a new stack to test things. That's exactly what I did, and I may end up making it my main stack instead of my side stack.

src-go/
├── go.mod                    # Shared dependencies
├── shared/                   # Reusable code
│   ├── db.go                 # DynamoDB client
│   ├── auth.go               # JWT verification
│   ├── http.go               # Response helpers
│   └── user.go               # User queries
└── lambda/
    ├── validate-token/
    │   └── main.go
    └── times/
        └── main.go

The shared package is key. JWT verification, DynamoDB connections, CORS headers—all the boilerplate lives in one place. Each Lambda handler is thin, just wiring things together.
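As a sketch of what lives in the shared package, here's the kind of response helper `http.go` might hold. The `APIResponse` type and `JSONResponse` name are my own illustrative stand-ins (the real code would use `events.APIGatewayProxyResponse` from the AWS SDK); the point is that JSON encoding and CORS headers are written once and every handler reuses them.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// APIResponse mirrors events.APIGatewayProxyResponse, defined locally so the
// sketch compiles without the AWS SDK.
type APIResponse struct {
	StatusCode int               `json:"statusCode"`
	Headers    map[string]string `json:"headers"`
	Body       string            `json:"body"`
}

// JSONResponse serializes any payload and attaches the CORS headers that
// every endpoint needs, so individual handlers never repeat this boilerplate.
func JSONResponse(status int, payload any) (APIResponse, error) {
	body, err := json.Marshal(payload)
	if err != nil {
		return APIResponse{StatusCode: 500, Body: `{"error":"encoding failed"}`}, err
	}
	return APIResponse{
		StatusCode: status,
		Headers: map[string]string{
			"Content-Type":                "application/json",
			"Access-Control-Allow-Origin": "*",
		},
		Body: string(body),
	}, nil
}

func main() {
	resp, _ := JSONResponse(200, map[string]string{"status": "ok"})
	fmt.Println(resp.StatusCode, resp.Body)
}
```

A handler then becomes a few lines: do the business logic, return `JSONResponse(200, result)`.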

Base Path Mapping for Gradual Rollout

To make this even easier, I didn't replace the existing endpoints. I created new versioned paths:

  • TypeScript: POST /validate → Go: POST /v2/validate
  • TypeScript: GET /v2/times → Go: GET /v3/times

Both stacks share the same API Gateway domain through base path mapping. The iOS app just needed to update two URL strings to switch to the Go endpoints. If something breaks, reverting is a one-line change.

The Results

After deploying to production:

Endpoint                 TypeScript   Go      Improvement
validate-token (cold)    1,385ms      654ms   2x faster
validate-token (warm)    317ms        90ms    3.5x faster
times (cold)             3,826ms      505ms   7x faster
times (warm)             1,601ms      545ms   3x faster
Memory usage             110-127MB    30MB    75% less

The times endpoint also gained in-memory caching. On cache hits, it returns in 4 milliseconds. Not a typo—four milliseconds.

What I Learned

Go's DynamoDB SDK is different. The AWS SDK for Go v2 uses strongly-typed attribute values. Instead of just passing objects, you construct types.AttributeValueMemberS{Value: "foo"}. It's verbose but catches type errors at compile time.

Singleton patterns matter. DynamoDB and SSM clients are expensive to create. I use sync.Once to initialize them on first use and reuse across invocations. The JWT secret from SSM gets cached in memory—subsequent requests skip the SSM call entirely.
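The `sync.Once` pattern looks like this in miniature. The `client` type here is a stand-in for a DynamoDB or SSM client, and `getClient` is an assumed name; the real version would run `config.LoadDefaultConfig` inside the `once.Do` block.

```go
package main

import (
	"fmt"
	"sync"
)

// client stands in for an expensive-to-build AWS client (DynamoDB, SSM).
type client struct{ id int }

var (
	once     sync.Once
	instance *client
	inits    int // counts how many times initialization actually ran
)

// getClient initializes the client exactly once per Lambda execution
// environment; every warm invocation after the first reuses the instance.
func getClient() *client {
	once.Do(func() {
		// Expensive setup (config load, connection warm-up) happens here,
		// and only here, no matter how many goroutines or requests call in.
		inits++
		instance = &client{id: inits}
	})
	return instance
}

func main() {
	a := getClient()
	b := getClient()
	fmt.Println(a == b, inits) // same instance, initialized once
}
```

Caching the JWT secret from SSM works the same way: fetch it inside the `once.Do` (or on first use), hold it in a package-level variable, and every later request skips the SSM call.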

ARM64 is free performance. Go cross-compiles trivially: GOARCH=arm64 go build. AWS Graviton processors are cheaper and faster than x86. There's no reason not to use them.

The shared package is worth the upfront investment. My first Lambda was 400 lines of self-contained code. After extracting common patterns, my second Lambda was 200 lines, most of it business logic. The third will be even smaller.

Should You Rewrite Everything in Go?

Probably not. But you might want to. My friend, you get to choose your own adventure.

I'm finding it quite a bit more useful for the Lambdas attached to API Gateway, and I'll likely consider migrating some of my SQS processes for background jobs.

The splash screen was the perfect candidate. It runs on every app launch, users are actively waiting, and the logic is straightforward—validate a token, query a database, return JSON.

The Takeaway

Lambda cold starts are a solvable problem. If your users are experiencing slow initial requests, consider whether the runtime itself is the bottleneck. Go's 10-15x faster init times can turn a frustrating user experience into a snappy one.

The migration doesn't have to be all-or-nothing. Start with one endpoint, measure the results, and expand from there. Your splash screen users will thank you.

My app Border-Times is supposed to save you time crossing the US-MX border, not make you space out at a loading screen for 7 seconds. Sorry about that, folks; you can have your time back.

Originally published on pixelper.com

© 2026 Christopher Moore / Dead Pixel Studio
