20 years in this field has taught me that (1) we move on to new technologies more often because we don't understand the current ones than because the current ones are flawed, (2) we fail to weigh the costs, risks, and setbacks of moving to new technologies, and (3) we don't realize that we're conserving overall complexity and flawedness, just moving it around.
That said, the shifting sands of the economics of compute, disk, and network do tend to favour this or that approach as time goes on. So while FaaSes are just CGI, they aren't just CGI; but we can at least try to be non-doomed with regard to repeating history.
Problem is that these young whippersnappers don't even realize that there is nothing "new" about this tech stack. You can recreate the new sexy with 30-year-old tech. You are talking about a load balancer that redirects requests to individual cgi-scripts based on the URL. They have just given up knowing how to set up and configure physical servers.
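To make that concrete, Python's standard library still ships the 30-year-old version; a rough sketch (the port and directory are arbitrary):

    # serve.py -- URL-path dispatch to CGI scripts, the 30-year-old version.
    # Every script under cgi-bin/ is effectively one "function" per route.
    from http.server import CGIHTTPRequestHandler, HTTPServer

    class Handler(CGIHTTPRequestHandler):
        cgi_directories = ["/cgi-bin"]  # requests under /cgi-bin/ execute the script

    HTTPServer(("", 8000), Handler).serve_forever()

Put a load balancer in front of a few of these and you have the shape of the thing.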
> They have just given up knowing how to set up and configure physical servers.
Or they know how complex and error-prone it can be, and decided to spend their time on other things.
It’s good to know how that stuff works, how to configure an LB, install nginx, rack a server... the way being able to do long division by hand is good to know. But when you’re crunching numbers all day, it’s easier to use a calculator.
More like learning to use a slide rule :) You still have to learn how to set up a load balancer (API Gateway), a firewall (IAM, API Gateway), server config (CloudFormation, API Gateway, S3, etc.), and so on. And those are vendor-specific. Move to Azure or GCP and you have a whole new set of "serverless" servers to learn to configure. About the only thing you have really given up is knowing where your machines are physically.
You've also given up having to buy machines, predict resource needs, over-provision to meet peak demand, and maintain servers for databases, caching, web servers, etc.
If I want to load-test something for a day, I can spin up 20 EC2 instances and spin them back down with a script. Then I can see where my bottlenecks are, provision instances, load balancers, increased disk IOPS, etc. as appropriate, and tear down everything I don't need.
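For what it's worth, the spin-up/spin-down cycle really is a short script; a boto3 sketch, with a placeholder AMI and instance type:

    import boto3

    ec2 = boto3.resource("ec2")

    # Spin up 20 instances for the load test ("ami-12345678" is a placeholder).
    instances = ec2.create_instances(
        ImageId="ami-12345678",
        InstanceType="t3.medium",
        MinCount=20,
        MaxCount=20,
    )

    # ... run the load test, find the bottlenecks ...

    # Tear everything down when done.
    for instance in instances:
        instance.terminate()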
Apples to Apples. Your 20 EC2 instances are just 20 VPS at any VPS provider located geographically where you want to deploy them. Also with a script. You still have not gained anything from your vendor lock-in. IaaS has been around since the 90s.
And what about the load balancers, the database instances, the queuing system, the global CDN, the caching servers, etc.? I could script my own autoscaling strategy that integrates with metrics from the running instances, but why would I when I can click a few buttons and get autoscaling based on CloudWatch metrics, the size of an SQS queue, CPU usage, etc.?
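To put a rough sketch behind the "few buttons": a simple scaling policy driven by SQS queue depth via boto3 (the group name, queue name, and threshold are all made up):

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Step-scaling policy on a hypothetical Auto Scaling group.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-out-on-queue-depth",
        PolicyType="SimpleScaling",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,
        Cooldown=300,
    )

    # CloudWatch alarm on SQS backlog that fires the policy.
    cloudwatch.put_metric_alarm(
        AlarmName="sqs-backlog-high",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "work-queue"}],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=2,
        Threshold=1000,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )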
But as far as "vendor lock-in" goes, it's like developers wrapping database access up in a repository pattern just in case we want to change databases. In the real world, hardly anyone takes on massive infrastructure changes to save a few dollars.
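For anyone who hasn't run into it, the repository pattern in question looks something like this (class and table names are hypothetical):

    from abc import ABC, abstractmethod
    import sqlite3

    class UserRepository(ABC):
        """The abstraction the app codes against, 'just in case' the DB changes."""
        @abstractmethod
        def get(self, user_id: int) -> dict: ...

    class SqliteUserRepository(UserRepository):
        def __init__(self, conn: sqlite3.Connection):
            self.conn = conn

        def get(self, user_id: int) -> dict:
            row = self.conn.execute(
                "SELECT id, name FROM users WHERE id = ?", (user_id,)
            ).fetchone()
            return {"id": row[0], "name": row[1]}

All that indirection buys you a database swap that, in practice, almost never happens.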
On the other hand, there are frameworks like Serverless and Terraform that let you build infrastructure in a cloud-vendor-neutral way.
Again, each piece you have named can be done with "older" tech, which was the original point of this thread. Every few years the tech industry reinvents the same tech, and a new generation of developers thinks manna has fallen from heaven, when in truth it is the same as the last round with new buzzwords attached.
Yes, it can be done, but how efficiently? I couldn't call up the netops guys to buy and provision all of the resources I needed to test scalability in the time it takes me to set up a CloudFormation script.
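And "the time it takes" is roughly this much scripting; a boto3 sketch, with a hypothetical template file:

    import boto3

    cf = boto3.client("cloudformation")

    # "load-test.yaml" is a hypothetical template describing the whole test rig.
    with open("load-test.yaml") as f:
        cf.create_stack(StackName="load-test", TemplateBody=f.read())

    # ... run the scalability test ...

    cf.delete_stack(StackName="load-test")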
In 2008 we had racks of leased servers that sat idle most of the time just so we could stress-test our Windows Mobile apps.
I've been developing professionally for 20 years and 10 years before that as a hobbyist. I know what a pain it is to get hardware for what you need when your company has to manage all of its own infrastructure.
Just setting up EC2 instances and installing software on them doesn't reduce the pain by much. Sure, you're cutting down on your capex, but you still end up babysitting servers and doing the "undifferentiated heavy lifting". I would much rather stand up a bunch of RDS instances.
As far as serverless goes, why manage servers at all when you can create a Lambda function for the lightweight stuff or deploy Docker images with Fargate? That's one less thing to manage, and you can concentrate on development.
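For context, the "lightweight stuff" is a function on this scale (a toy handler behind API Gateway's proxy integration; the greeting logic is made up):

    import json

    def handler(event, context):
        # With API Gateway's proxy integration, the HTTP request arrives as `event`.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"greeting": f"hello, {name}"}),
        }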
I am not disagreeing with you that it is easier than deploying your own infrastructure. But, back to my original point, Lambda functions are not anything new. They are simply an HTTP app that "typically" responds to a single route, and API Gateway is simply a configured proxy routing the "public" routes to your various "functions".
All the parts are easily replaced or scaled however you see fit. Your function can be in any language that can respond to HTTP, on any platform you want. You can put whatever proxy you like in front to define your routes, and get as simple or as complicated as you want.
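To underline that, here is the same kind of "function" as a plain CGI script, sitting behind whatever proxy you like:

    #!/usr/bin/env python3
    # hello.cgi -- the handler sketched above, minus Lambda.
    import json
    import os
    from urllib.parse import parse_qs

    params = parse_qs(os.environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]

    print("Content-Type: application/json")
    print()                                   # blank line ends the CGI headers
    print(json.dumps({"greeting": f"hello, {name}"}))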
Serverless is not serverless; you are just abstracted away from it.
[EDIT] I would add that personally I would spin you up a cluster of Flynn on DigitalOcean :)
With serverless, you automatically get scaling for each endpoint individually, not just for the entire app. If for some reason you get an unexpected ratio of GET requests to POST requests, just the GET Lambda will scale. If I tried to do the same with EC2 instances behind an ELB, I wouldn’t get the same level of granularity.
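Concretely, that granularity falls out of each method being its own function; two hypothetical handlers that API Gateway would map to GET /items and POST /items separately:

    # Each handler deploys as its own Lambda, so each scales (and bills) on its own.

    def get_items(event, context):
        return {"statusCode": 200, "body": "[]"}

    def create_item(event, context):
        return {"statusCode": 201, "body": ""}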
And Lambdas aren’t just about responding to HTTP requests; they are also used to respond to messages, CloudWatch events, files being written to S3, etc. I would hate to have to stand up servers for that. Even if you don’t want to get “locked in” to Lambda, why not serverless Docker?
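An S3-triggered function, for example, never sees HTTP at all; it just receives event records (a sketch, with the bucket wiring omitted):

    def handler(event, context):
        # S3 invokes the function with one record per created object.
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"processing s3://{bucket}/{key}")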