
A CTO reveals their preferred serverless stack for 2023

Mar 28th, 2023
Interview

Mike Rispoli, CTO of Cause of a Kind

Read on to learn why digital agency Cause of a Kind has adopted a serverless stack for their web development projects.

➡️ State of serverless

➡️ Building with Fauna

➡️ Building for scale in a high throughput gaming application

➡️ Database comparisons for serverless stacks

The modern economy presents a range of challenges for engineering teams, including the need to balance cost efficiency with rapid development cycles. Serverless architectures have emerged as a compelling answer to these challenges. By eliminating the undifferentiated engineering and overhead costs associated with scaling infrastructure, teams can focus on building applications that meet customer needs while keeping cost structures in check. Serverless technology also shortens development cycles, letting teams ship updates and new features more quickly, and it has become an essential tool for teams looking to streamline their operations. Until recently, however, many serverless technologies had not been tested against internet-scale workloads.

Fauna’s Tech Evangelist Luis Colon recently sat down with Michael Rispoli, CTO at leading digital agency and application development firm Cause of a Kind, to discuss their approach to application development and how today’s serverless solutions have unlocked opportunities for Michael’s team and their clients.

In this wide-ranging conversation, Michael laid out his team’s approach to modern application development, his preferred stack, and how to optimize for differentiated feature development.

State of serverless

Luis Colon: Can you tell us about your perspective on the current state of serverless technology in 2023? What is your 2023 tech stack of choice, and why?

Michael Rispoli: In 2023, we are recommending serverless technology for most greenfield projects. Unless there is a special use case, we suggest using serverless databases, functions, and deployment to shift the workload from manually configured infrastructure to tools that do the job for you. We mostly use Next.js with TypeScript as the gold standard for deployment. We typically host our application on Vercel and use Cloudinary for image storage, which helps us bundle up and package our services for clients. In the past, we’ve also used AWS Lambda, Azure Cloud Functions, Digital Ocean, and Heroku depending on requirements. Meanwhile, we recommend Fauna as the data layer since it is a NoSQL database that supports relational data, making it easier for our clients who are used to Postgres or MySQL. It also offers automated backups, global distribution that scales, and strong protection against data loss.
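
For readers who want a concrete picture of how those pieces fit together, here is a minimal sketch of a Next.js API route in TypeScript reading from Fauna with the faunadb JS driver. The `FAUNA_SECRET` environment variable and the `products` collection are hypothetical names used only for illustration, not details from this interview.

```typescript
// pages/api/products.ts: a hypothetical Next.js API route backed by Fauna.
// FAUNA_SECRET and the "products" collection are assumed names, not from the interview.
import type { NextApiRequest, NextApiResponse } from "next";
import faunadb, { query as q } from "faunadb";

const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET as string });

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Page through the collection and resolve each reference to its document.
  const result = await client.query(
    q.Map(
      q.Paginate(q.Documents(q.Collection("products"))),
      q.Lambda("ref", q.Get(q.Var("ref")))
    )
  );
  res.status(200).json(result);
}
```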

Luis Colon: Interesting. In your experience, are there any complex use cases that these serverless technologies can’t handle?

Michael Rispoli: Previously, serverless technology had difficulty handling web sockets or cron jobs. However, many serverless tools have since resolved those issues, and we no longer encounter use cases that mature serverless tooling can’t accommodate.

Building with Fauna

Luis Colon: That’s great. You mentioned a few of the reasons earlier, but let’s dig deeper into your use of Fauna. Can you share why you originally selected Fauna?

Michael Rispoli: Sure, the first thing that comes to mind is that we wanted to do way more with less manual intervention. For example, in the past, we've run into limits with creating indexes in other databases like MongoDB, where you can quickly run out of memory if you're not careful about what you're indexing. In our experience, Fauna is super forgiving in that you don't really have to worry about unused or rarely used indexes eating up your RAM. You can just create an index and use it for the times it's going to get used; it doesn't cost you anything more to have it.
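
As a rough illustration of that workflow, here is what defining and using an index can look like with the faunadb JS driver; the `users` collection and `users_by_email` index are assumed names, not Cause of a Kind's schema.

```typescript
import faunadb, { query as q } from "faunadb";

const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET as string });

// One-time setup: define the index. Because Fauna is serverless, an idle index
// is not holding RAM on a server you manage, so defining it is low-risk.
const createEmailIndex = () =>
  client.query(
    q.CreateIndex({
      name: "users_by_email",
      source: q.Collection("users"),
      terms: [{ field: ["data", "email"] }],
      unique: true,
    })
  );

// Use the index whenever the lookup is actually needed.
const getUserByEmail = (email: string) =>
  client.query(q.Get(q.Match(q.Index("users_by_email"), email)));
```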

Luis Colon: That makes sense. In your experience, how does Fauna help reduce the cognitive load of managing a database?

Michael Rispoli: With Fauna, you don't really have to think about how to manipulate data on hardware to get the most out of it. Fauna takes care of that for you. So you can focus on solving business problems and delivering value to your clients and their customers. And that's really what they want at the end of the day. They don't want to pay you to do low-level optimization work. People just want their product to work well and deliver value to their customers.

When we start talking about the nitty-gritty details of how we're optimizing the database, it can be hard to explain why that work is necessary and why they (our customers) should be paying for it. We're happy that Fauna lets us avoid those conversations entirely. The cognitive load of figuring out how to manipulate data on hardware to get the most juice out of it is not something clients want to fund; they want us to solve business problems, just as they want to solve problems for their own users and customers. Fauna reduces that overhead by taking care of the hardware side for us.

Luis Colon: When you first started using Fauna, what was the learning curve like?

Michael Rispoli: When we first started migrating to Fauna, we were still writing unit tests and mocking Fauna's behavior. We were kind of used to that workflow of creating mocks. But over time, we started to realize that Fauna could do so much for us, and we started to migrate our most critical roles and permissions to it.

With Fauna, we had less code to maintain and fewer bugs that could leak in. This is because we were able to rely on Fauna's built-in features for managing roles and permissions, rather than writing our own custom code for these tasks.

While FQL (Fauna Query Language) took some getting used to, we've discovered that ChatGPT has really helped streamline the development experience, even for developers on our team who have been using Fauna for years, and this would certainly be the case if you're just getting started with Fauna. You're able to describe the query and data access you want, and ChatGPT can write some insanely complex queries for you without a fuss. It has also helped us optimize queries; for example, where we once would have used two, three, or even four separate queries, ChatGPT has helped us distill that down to one.
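
As an example of the kind of consolidation described above, the snippet below is a hypothetical sketch of folding what might otherwise be two round trips into a single FQL query; the `orders` collection, its `customer` reference field, and the function name are invented for illustration.

```typescript
import faunadb, { query as q } from "faunadb";

const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET as string });

// Fetch an order and its related customer document in one query,
// instead of issuing one query for the order and another for the customer.
const getOrderWithCustomer = (orderId: string) =>
  client.query(
    q.Let(
      {
        order: q.Get(q.Ref(q.Collection("orders"), orderId)),
        customer: q.Get(q.Select(["data", "customer"], q.Var("order"))),
      },
      {
        order: q.Select("data", q.Var("order")),
        customer: q.Select("data", q.Var("customer")),
      }
    )
  );
```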

Luis Colon: What advice do you have for people who are just starting out with Fauna?

Michael Rispoli: My advice would be to start with Fauna as a data store and gradually migrate your most critical roles and permissions to it (luckily, you have a lot of control over permissions and roles with Fauna, although it might take some time to get used to this). Take it one step at a time, and don't feel daunted by the fact that you're not using every single feature of the database right away. One of the cool things about Fauna is user-defined functions: custom functions you create within Fauna that let you apply predicate logic to different roles and systems. That's really useful for managing permissions and access control, and it makes things much easier.
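
For anyone starting down that path, here is a rough sketch of the pattern: a user-defined function that applies predicate logic before writing, and a role that exposes it to members of a collection. The `posts` and `users` collections, the `publish_post` function, and the `owner` field are all assumptions for illustration, not details from this project.

```typescript
import faunadb, { query as q } from "faunadb";

// An admin key is assumed, since creating functions and roles requires one.
const admin = new faunadb.Client({ secret: process.env.FAUNA_ADMIN_SECRET as string });

async function setUpAccessControl() {
  // A user-defined function that applies predicate logic before writing:
  // it aborts unless the caller's identity matches the post's "owner" field.
  await admin.query(
    q.CreateFunction({
      name: "publish_post",
      body: q.Query(
        q.Lambda(
          ["postId"],
          q.Let(
            { post: q.Get(q.Ref(q.Collection("posts"), q.Var("postId"))) },
            q.If(
              q.Equals(
                q.CurrentIdentity(),
                q.Select(["data", "owner"], q.Var("post"))
              ),
              q.Update(q.Ref(q.Collection("posts"), q.Var("postId")), {
                data: { published: true },
              }),
              q.Abort("Only the owner may publish this post")
            )
          )
        )
      ),
    })
  );

  // A role for logged-in users: they can read posts and call the function,
  // but cannot write to the collection directly.
  await admin.query(
    q.CreateRole({
      name: "post_reader",
      membership: [{ resource: q.Collection("users") }],
      privileges: [
        { resource: q.Collection("posts"), actions: { read: true } },
        { resource: q.Function("publish_post"), actions: { call: true } },
      ],
    })
  );
}
```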

Building for scale

Luis Colon: One of the common concerns from technology leaders is that serverless technologies aren’t built for scale. Can you tell us about a project you worked on this past year built on serverless technologies that put your stack to the test?

Michael Rispoli: One of our larger projects earlier this year was the VeeFriends game, led by Gary Vaynerchuk. It's a Flappy Bird-style game, and every time a player scores, the data is streamed into Fauna. We were worried about the scale of the game, because we had never worked with that much traffic before. We used the Fauna cost calculator and it looked okay, but we weren't sure what would happen when we hit millions of reads and writes. However, it turned out to be very cost-effective, even when we went into metered usage.
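
To make that write path concrete, a score event in a setup like this might be recorded as one small document per event, roughly as sketched below. The `scores` collection and field names are invented for illustration and are not the actual VeeFriends schema.

```typescript
import faunadb, { query as q } from "faunadb";

const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET as string });

// One small document per score event; Now() records the server-side time.
const recordScore = (playerId: string, score: number) =>
  client.query(
    q.Create(q.Collection("scores"), {
      data: { player: playerId, score, recordedAt: q.Now() },
    })
  );
```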

We did encounter one issue with our use of a persistent connection in Node. We were using a long-running server instead of a cloud function system because we knew there would be heavy, continuous traffic and we didn't want to deal with cold starts. However, we found that the way we were using that persistent connection didn't suit us, and we rapidly turned it off after we went live. Even with this issue, Fauna never skipped a beat. We never lost any data or encountered any problems, even with 10,000 requests a second coming through. We never had to worry about getting hit with a huge bill or losing data, and the whole experience gave us an incredible amount of confidence in Fauna's ability to handle large-scale projects.

Database comparisons for serverless stacks

Luis Colon: It might be helpful to contextualize a database like Fauna relative to some of the other databases available. Can you tell us a bit about your experience building with DynamoDB versus Fauna?

Michael Rispoli: First, Dynamo and Fauna take different approaches to data access patterns. With Dynamo, you have to think about those patterns upfront, which can be challenging when you're working in an agile environment. The reality is that developers don't always have all the details upfront, or the ability to forecast how an application might evolve. Considering every possible access pattern before you start is overwhelming, especially on a large project with many features, and it's fairly easy to code yourself into a corner if you're not careful. Also, while Dynamo allows you to work with denormalized data, creating different data sets can quickly become messy, and you don't want that mess early in the development process.

With Fauna, it's actually quite easy to adapt to these changes. If you're normalizing your data, then most of the work becomes writing different FQL queries or creating new indexes to bring the data together in different ways. Fauna allows you to start putting models together and work with normalized data without worrying about how someone else will fetch a piece of data in an unrelated feature. This is particularly beneficial when you're working on atomic pieces of a story.

Luis Colon: How does Fauna solve this problem?

Michael Rispoli: Fauna is great for this because, in a lot of ways, building reviews is as simple as creating a reviews Collection. We're seeing these use cases in the world of replacing legacy e-commerce and CMS systems. The same goes for something like WordPress if you want commenting on your blog; we can now leverage Fauna for that. It's very easy for us, and we're able to offer it to our end customers as well.
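
As a rough sketch of that "reviews as a Collection" idea, adding the feature can be as small as the snippet below; the collection, index, and field names are assumptions for illustration.

```typescript
import faunadb, { query as q } from "faunadb";

const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET as string });

// One-time setup: a collection for reviews and an index to list them per product.
async function addReviewsFeature() {
  await client.query(q.CreateCollection({ name: "reviews" }));
  await client.query(
    q.CreateIndex({
      name: "reviews_by_product",
      source: q.Collection("reviews"),
      terms: [{ field: ["data", "productId"] }],
    })
  );
}

// Writing and listing reviews from application code.
const addReview = (productId: string, rating: number, body: string) =>
  client.query(q.Create(q.Collection("reviews"), { data: { productId, rating, body } }));

const listReviews = (productId: string) =>
  client.query(
    q.Map(
      q.Paginate(q.Match(q.Index("reviews_by_product"), productId)),
      q.Lambda("ref", q.Get(q.Var("ref")))
    )
  );
```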

One advantage of using Fauna is that it's very easy to use and integrate with other systems. Also, we don't have to maintain the service. Fauna takes care of that, so we can focus on building and providing solutions to our customers.

Multi-tenancy is great in Fauna because I can just create a new database for each customer, and their data is totally isolated. If I then upgrade the service, I don't need to redeploy it to everybody else; each customer has their own database that follows its own schema. This allows us to fill gaps and affordably provide solutions to our customers without having to go out and buy a SaaS solution and sit through a whole enterprise sales pitch.
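
A minimal sketch of that per-customer database pattern, assuming an admin key on the parent database and invented tenant names, might look like this:

```typescript
import faunadb, { query as q } from "faunadb";

// An admin key on the parent database is required to create child databases and keys.
const admin = new faunadb.Client({ secret: process.env.FAUNA_ADMIN_SECRET as string });

async function provisionTenant(tenantName: string) {
  // Each customer gets an isolated child database...
  await admin.query(q.CreateDatabase({ name: tenantName }));

  // ...and a key scoped to that database, which that tenant's deployment uses exclusively.
  const key = (await admin.query(
    q.CreateKey({ database: q.Database(tenantName), role: "server" })
  )) as { secret: string };

  return key.secret;
}
```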

With a traditional database, use cases change and we have to work a migration into our development plan, which costs sprints, time, and velocity. We don't have that with Fauna. We recently worked with a team that had built their application on Postgres, and the whole team kept asking, 'Can we never use this again?' People really missed Fauna. There's some cognitive overhead in learning FQL initially, but once you get the hang of it, you don't want to go back to the old world. Using Postgres came with a lot of conversations about local database setup, installation, and demo database work, all of which just cost the team time and money.

I think people justify the old database model based on assumptions that don't really come to fruition in reality. For example, people say that Postgres is open source and you can host it anywhere. While that's true, not many people actually do that. Also, using an ORM to migrate databases is not as straightforward as it sounds. Complicated migrations where you switch between two similar databases, like Postgres and MySQL, often have problems. So all of these justifications don't really hold up in practice.

If you enjoyed our blog and want to work on challenges related to globally distributed systems and serverless databases, Fauna is hiring.
