How Shinami Cut Infrastructure Costs by 79% with Latitude.sh
Shinami, the leading developer platform for next-generation blockchains like Sui, Aptos, and Movement, found itself facing a familiar challenge: AWS was getting expensive. As a one-stop shop for developers, they run critical infrastructure, including node, gas station, and wallet services, all designed to help developers improve the user experience of their Web3 applications.
"We run validators and full nodes," explains Ivan Peng, a software engineer at Shinami. "These are compute instances with underlying databases, and there's another process on top that does syncing, querying. There are all sorts of things going on underneath the hood."
The problem wasn't AWS's capabilities. The team appreciated the flexibility: need more compute? Swap out an instance. Need more storage? A simple command-line change or Terraform modification would do it. But that convenience came at a steep price.
What made Shinami's infrastructure particularly well suited to a move from VMs to bare metal was its predictability. Unlike stateless services with spiky traffic patterns, their validators and full nodes ran stable, stateful workloads. "We know it's going to run one stateful workload, and we know the ingress and egress; it's very predictable," Ivan notes.
This realization sparked a question: could they achieve the same performance at a fraction of the cost by moving to bare metal?
Oh, the Wonder of Trusted Recommendations
The path to Latitude.sh came through the Aptos Foundation, itself a Latitude.sh customer. Shinami had been working closely with Aptos, building services on top of their blockchain and running an Aptos validator. When they started exploring bare metal options, Aptos pointed them toward Latitude.sh.
"They had referred us to you, of many other partners, other validators, or whoever they're working with, that they have first-hand experience of them running validators on Latitude.sh specifically," Ivan recalls. The endorsement carried weight. If other blockchain validators were successfully running on Latitude.sh, it was worth serious consideration.
Competitive pricing sealed the deal, but it was the combination of cost savings and a proven track record that made Latitude.sh the clear choice.
Keeping Kubernetes in a Hybrid World
For many companies, migrating to bare metal means rethinking their entire deployment process. But Shinami's team had a non-negotiable requirement: they wanted to keep using Kubernetes.
"We run everything through Kubernetes," Ivan explains. "We have a very strong infrastructure as code philosophy. We adhere to that pretty strongly." Their entire CI/CD pipeline was built around the assumption that workloads (whether stateful or stateless) would run on Kubernetes. Changing that would mean rebuilding fundamental processes.
The solution was a hybrid approach. Shinami kept their EKS control plane on AWS while extending the data plane to include Latitude.sh bare metal instances as worker nodes. The control plane stayed within AWS's managed environment, but the actual workloads could run on cost-effective bare metal.
The biggest technical hurdle was networking. AWS encapsulates everything within VPCs, with numerous networking assumptions baked in, while Latitude.sh takes a more straightforward approach: a VLAN connected to a router with a public IP address. Bridging these two worlds required careful architecture.
This is where Latitude.sh's Cloud Gateway became crucial: "That Cloud Gateway product that you guys have was a very, very good fit for us," Ivan notes. The gateway established private, bidirectional network communication between AWS and the Latitude.sh nodes, enabling the hybrid setup to work seamlessly.
Using AWS's documentation for EKS Hybrid Nodes and Direct Connect, combined with Cloud Gateway, Shinami established private networking between their AWS VPC and their Latitude.sh infrastructure. "That was 99% of the work in terms of spinning up and provisioning a Latitude node for EKS," Ivan says.
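On the AWS side, the key step in that documentation is telling the EKS control plane which remote networks the bare metal nodes and their pods will live on. Here is a minimal sketch of that cluster configuration, assuming the Terraform AWS provider's `remote_network_config` block for EKS Hybrid Nodes; the cluster name, IAM role, subnets, and CIDR ranges are illustrative placeholders, not Shinami's actual values:

```hcl
resource "aws_eks_cluster" "hybrid" {
  name     = "hybrid-cluster"          # placeholder cluster name
  role_arn = aws_iam_role.cluster.arn  # assumes an existing cluster IAM role

  vpc_config {
    subnet_ids = var.subnet_ids        # placeholder subnet list
  }

  # Hybrid nodes authenticate through EKS access entries,
  # which requires API authentication mode.
  access_config {
    authentication_mode = "API_AND_CONFIG_MAP"
  }

  # Advertise the networks reachable over Direct Connect / Cloud Gateway
  # so the control plane can route to nodes and pods running on Latitude.sh.
  remote_network_config {
    remote_node_networks {
      cidrs = ["10.100.0.0/16"]        # placeholder: Latitude.sh VLAN range
    }
    remote_pod_networks {
      cidrs = ["10.101.0.0/16"]        # placeholder: pod CIDR on hybrid nodes
    }
  }
}
```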
The Beautiful Simplicity of the Final Solution
Once the networking was configured, the deployment process was remarkably simple. After installing the Kubernetes requirements, the Latitude.sh node registered itself with EKS. From there, deployments looked nearly identical to their previous AWS-only setup.
"Our deployment template looks almost identical to what we had when we were just exclusively on cloud," Ivan explains. "Now we just have to change the targeting. Say, okay, instead of targeting this EC2 instance on AWS, target the Latitude node. And it's somewhere around five to six lines of code to say, hey, go this way."
The Aptos validator became their test case, a way to prove the concept before expanding further. And the results were immediately evident.
Massive Savings Without Compromise
The performance metrics told one story: there was virtually no change. "In terms of performance metrics on compute and whatnot, there wasn't really anything. It was basically identical, which is actually a good thing," Ivan observes. No regressions meant the migration was technically sound.
But the cost metrics told a dramatically different story.
Their monthly infrastructure bill for the validator dropped from approximately $4,000 to $850, a 79% reduction. When factoring in network egress costs, which fell from $300-500 per month to essentially zero (absorbed by Latitude.sh's 20TB monthly allowance per server), the total savings were even more substantial.
"Yeah, we have our Latitude bill just for the validator specifically, which is 850-ish, 840 bucks a month. So that's down from like 4,000-ish," Ivan confirms.
The Infrastructure-as-Code Experience
For Shinami, the Latitude.sh experience is almost entirely code-driven. As a team that lives and breathes infrastructure as code, they interact with Latitude.sh primarily through Terraform.
"Honestly, we're very IaC-heavy, so we almost exclusively deploy through Terraform," Ivan explains. "95% of what I would interact with Latitude is through the Terraform provider."
The hybrid Kubernetes setup brought another advantage: Shinami could port their existing monitoring and observability tools directly to the Latitude.sh nodes. Metrics automatically flow to their Grafana dashboards, giving them visibility into workloads running on Latitude.sh without any additional configuration.
"Interaction with the Latitude dashboard is actually very minimal. I maybe check it once a month, if I'm like, okay, just check the invoice," Ivan notes.
What Makes Latitude.sh Different
When asked to define Latitude.sh in one sentence, Ivan's response captures what matters most to infrastructure engineers: "Latitude does a very simple thing very well."
He elaborates: "At the end of the day, running bare metal is a very simple product. Here's a bare metal server, here's an IP address to SSH into it, here's uptime, and here's if there are issues, we're transparent about it."
The simplicity doesn't mean a lack of sophistication. It means focusing on doing the fundamentals exceptionally well. For a team like Shinami's, that's exactly what matters.
"You definitely give AWS a run for its money in terms of how the experience is compared to it and literally in terms of cost," Ivan adds. "What I do appreciate most about it is the simplicity behind it and how the product itself is very simple to use, and the communication is very transparent."
Looking Forward
The Aptos validator was just the beginning. With the hybrid approach proven successful, Shinami's plan is to migrate more stateful workloads to Latitude.sh over time. The combination of substantial cost savings, unchanged workflows, and reliable performance makes the path forward clear.
For companies running predictable, stateful workloads on AWS, especially in the blockchain space, Shinami's story offers a blueprint: you don't have to sacrifice your existing tools and processes to achieve dramatic cost reductions. Sometimes, the best solutions are the ones that work seamlessly with what you've already built.
Start building on Latitude.sh today and enjoy a seamless multi-cloud experience, too! Get started.