Agreed. I saw it as a general lament against over-engineering. I don't think the point got lost in the super specific example...
You could just as easily level similar rants against the likes of React and its wider ecosystem, TensorFlow, TypeScript (many will disagree), Docker... I'm sure others have their own bugbears.
Much of this is subjective, of course. But to me, it feels like software development is trending towards unnecessarily complicated development processes and architectures.
And the only beneficiaries appear to be the large technology companies.
I suppose in exchange, you're getting a guarantee of maintenance. But is that really worth the additional complexity associated with the common use of these tools?
TypeScript is a funny one. I love the language, but at the same time, I totally agree with the premise that it is unnecessary complexity! And yet I swear by it. I can't explain why there's not more cognitive dissonance there.
JavaScript taught me to love async, then functional programming, and TypeScript taught me to love static types. I'm now desperately wishing for a world of OCaml/Haskell, but where are you going to find teammates using those? And so I'm back at TypeScript.
I think all of these "higher order" languages that do transpiling (including Scala, Clojure, Kotlin, Groovy, etc.) fit this same scenario. Increased toolchain complexity for a decrease in development/test/maintenance complexity.
With any toolchain, though, it can be hard to know where to draw the line for "worth it for this project" until you're already an expert in using the toolchain, at which point, why wouldn't you use it?
You choose the tool chain to fit the anticipated team. If that team is just you, then who cares? Pick the easiest thing for you. But if you’re an expert in the tool chain and you anticipate new people hopping on, then the on-boarding process should definitely be one of your considerations.
The argument for simpler tool chains for simpler projects is that the time it takes to on-board should not outweigh the time saved by the toolchain’s amenities.
So the argument for TypeScript despite its complexities is that you believe its approach to type decorations on top of JS strikes the right balance for ease of on-boarding (IMO fairly straightforward) and supporting code maintainability (IMO a big improvement in 99% of the cases) despite the extra complexity of lengthening the toolchain (IMO it has some counterintuitive parts but you can mostly just forget about it once you find a working setup.)
I won’t use TypeScript if I’m whipping something up quickly, but if I anticipate open sourcing, adding unit tests and CI, etc., then TypeScript feels like a useful addition.
I used to make the same argument about not using TS for prototyping, but the tooling is so good that at this point I generally just change the file extension to ".ts" and write no manual types, relying on the error checks, type hints, and IntelliSense the compiler and language server provide.
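As a minimal sketch of what you get for free in that "no manual types" mode (everything below is inferred, nothing is annotated):

```typescript
// No manual types: the compiler infers everything and still catches mistakes.
const nums = [1, 2, 3];                 // inferred as number[]
const doubled = nums.map((n) => n * 2); // inferred as number[]

// This would be a compile error if uncommented:
// nums.map((n) => n.toUpperCase()); // Property 'toUpperCase' does not exist on type 'number'.

console.log(doubled); // [2, 4, 6]
```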
Yeah exactly, and we can also pick the features we need in a language. E.g. C++ is super complex after C++11, but we can still just pick a small subset to make it easier.
Then some poor soul gets to wrap the old patterns into the new patterns!
But seriously, if modern C++ has one thing going for it, it has a great backwards compatibility and forward refactoring story.
I'm not a huge fan of the "evolved" syntax, and I rarely use C++ anymore, but I like modern C++ substantially more than C++03, which was where I first cut my teeth as a programmer.
> Then some poor soul gets to wrap the old patterns into the new patterns!
This isn't always an option. Google's C++ guidelines generally forbid using exceptions, but concede that you might have to if you're dealing with existing code, particularly on Windows.
What is unnecessarily complex about TypeScript? It's JavaScript, with static typing plus type inference, and pretty nice generics. The ecosystem of modern JS surrounding it is horribly complex but TypeScript itself seems like a fairly straightforward programming language.
It hasn't been a big detriment for me as someone learning Typescript on their own, but it is another moving target for looking up "how do I do x..." and finding most of the forum posts are a little outdated and the latest version of Typescript has a different/better way of doing things than just a year or two ago. I find myself scrolling through github issues comparing my tsconfig to figure out why my stack behaves differently than someone else.
That was my experience with TS maybe two years ago - at this point project scaffolding tools are good enough to generate sane output that I spend a little bit of time upfront but then keep plowing away. Maybe I got better at it as well - but I haven't kept up with TS news in a long time and I don't feel like I'm missing out on stuff or encountering things I don't understand.
I've written >50k LoC of TS in last few months for sure (doing a huge frontend migration for a client) and I can't remember the last time I googled anything TS related. Actually I remember - a month ago I wanted to know how to define a type with a field excluded, took 30 seconds of google.
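(For anyone curious, that's the built-in `Omit` utility type; a quick sketch, with a made-up `User` shape:)

```typescript
interface User {
  id: number;
  name: string;
  passwordHash: string;
}

// Omit<T, K> produces T with the listed keys removed.
type PublicUser = Omit<User, "passwordHash">;

const u: PublicUser = { id: 1, name: "Ada" };
// Including passwordHash here would be a compile error.
```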
Meanwhile the project started out as mixed TS and ES6 because most of the team was unfamiliar with it and there are a few dynamic typing evangelists - we ended up going back and just using TS all over the place, the complexity introduced is minimal and the productivity boost from good tooling on product of this scale is insane.
Typically for me the time cost is in going down rabbit holes trying to improve implicit static types, getting closer to "whole program" functional type inference (TypeScript repeatedly seduces me into this). The decision inflection point is generally not application code but the space between application and script code: things you might otherwise write Perl or Python scripts to accomplish. The types are especially useful in this context because they tell you a lot more about the script than your typical script does, but they also introduce a bunch of overhead for a few lines of code.
Yeah I probably shouldn't complain, I've written less than a couple thousand LoC so I'm still googling a lot, but TS has definitely paid for itself in code clarity already.
At least I can pick it apart and deduce it. I can even use the compiler to give me information about it via editor integration (almost all editors have it now). So if I'm unsure, it might take a few minutes of investigation to completely understand the signature.
I've seen some JavaScript written so tersely it was nearly impossible to figure out without spending possibly hours unwrapping the code. That's the value provided here.
When it comes to spending hours unwrapping code, gotta love js that has undocumented heavy use of string based property access for things like imports. Like the worst of both functional and OOP combined, and generally no IDE support to be had.
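A hypothetical sketch of that pattern (names invented): once the property name is assembled at runtime, jump-to-definition, rename refactors, and unused-code detection all go dark:

```typescript
const handlers: Record<string, () => string> = {
  save: () => "saved",
  load: () => "loaded",
};

// The key is built at runtime, so no tool can statically
// connect this call site to the `save` handler above.
const action = "sa" + "ve";
console.log(handlers[action]()); // "saved"
```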
Implementation approximate, but otherwise true story, both LHS and RHS:
It's funny that you consider that complex; I consider that a pretty normal, expressive type signature that gives the compiler critical information about how I expect to use that function, which in turn allows it to completely squash several classes of bugs I might write.
But I love strong type systems; people who prefer weak type systems would likely consider things like this to get in their way.
Could you elaborate on this point for someone who isn't familiar with typescript (or javascript, for that matter). I'm no stranger to strongly typed languages but that function signature seems pretty complex to me. By elaborate I mean explain what's going on in that signature, and what the critical information it supplies is?
It's from the TypeScript 4.0 beta blog post[0], which describes it as:
> partialCall takes a function along with the initial few arguments that that function expects. It then returns a new function that takes any other arguments the function needs, and calls them together.
The type signature looks like:
type Arr = readonly unknown[];
function partialCall<T extends Arr, U extends Arr, R>(f: (...args: [...T, ...U]) => R, ...headArgs: T) {
First of all, know that in TypeScript, colon separates a variable or parameter name from its type. So in C or Java you'd say "Date lastModified", in TypeScript it's "lastModified: Date".
Now, looking at it piece by piece:
function partialCall
declares a function named partialCall.
<T extends Arr, U extends Arr, R>
says that this is a generic function with type parameters T, U, and R (the first two of which must extend Arr, that is, a readonly array of elements whose types are unknown).
This function's first parameter is:
f: (...args: [...T, ...U]) => R
The name of the parameter is `f`, and the type of this parameter is `(...args: [...T, ...U]) => R`. This means that `f` must be a function whose return type is R, and its parameters must match `...args: [...T, ...U]`.
The `...` in `...args` makes it a rest parameter[1] (variadic parameter in other languages). The type of `...args` is `[...T, ...U]`, which you can think of as the concatenation of the T and U arrays. (It's a bit strange to see `...` in a type specifier, I can't recall having needed this before.)
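A small illustration of spreading inside a tuple type (the aliases here are invented for the example):

```typescript
type Strs = [string, string];
type Nums = [number, number];

// Spreading tuple types concatenates them at the type level.
type Combined = [...Strs, ...Nums]; // [string, string, number, number]

const ok: Combined = ["a", "b", 1, 2];
// const bad: Combined = [1, 2, "a", "b"]; // compile error: order matters
```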
...headArgs: T
says that the `partialCall` function itself is also variadic, and its arguments are gathered up into headArgs. The type of headArgs is T.
I don't know if I'd call this normal or clear compared to the signatures I encounter daily, but it's pretty elegant for a higher-order function that takes a function with any signature, and some arguments that match the given function's parameters, and returns a function with the matched parameters removed. And the implementation of partialCall is just this one line!
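For completeness, here is the full definition from that blog post, plus a small usage sketch (the `greet` example is my own):

```typescript
type Arr = readonly unknown[];

function partialCall<T extends Arr, U extends Arr, R>(
  f: (...args: [...T, ...U]) => R,
  ...headArgs: T
) {
  // The promised one line: return a function that takes the remaining
  // arguments (U) and calls f with head and tail arguments spread together.
  return (...tailArgs: U) => f(...headArgs, ...tailArgs);
}

// Partially apply the first argument of a two-argument function.
const greet = (greeting: string, name: string) => `${greeting}, ${name}!`;
const sayHello = partialCall(greet, "Hello");
console.log(sayHello("world")); // "Hello, world!"
```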
I would say, there is some complexity in reading (and writing) the thing. But there is probably not as much complexity in using it. And you are explicitly codifying the complexity in one place, that would presumably still exist in a dynamically typed language, just implicitly, and likely spread out across the code. This looks pretty nice to me at a glance, just like anything once you write a few of them and come across a few in the wild and take the time to pick them apart they no longer seem so scary.
Offtopic but what's nice about Typescript generics? Typescript doesn't even let you specify variance. It's one of the unsound parts of the language's type system in fact.
You cannot explicitly specify covariant vs contravariant, but TypeScript absolutely does allow you to express these relationships. Unless I misunderstand you.
That said, the type system has come a very long way even in just the last year. The biggest improvements imho being around recursive types (which was one of the biggest holes for a long time imo), literal types / literal type inference, and tuple types.
It's not complete by any means, but it's improving quite rapidly.
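To make the variance point concrete: there are no variance annotations to write, but the checker infers variance from how the type parameter sits in function positions. A sketch (class names invented):

```typescript
class Animal { name = "animal"; }
class Dog extends Animal { bark() { return "woof"; } }

type Producer<T> = () => T;        // T in output position: covariant
type Consumer<T> = (x: T) => void; // T in input position: contravariant

const makeDog: Producer<Dog> = () => new Dog();
const makeAnimal: Producer<Animal> = makeDog;  // OK: covariance

const feedAnimal: Consumer<Animal> = (a) => { console.log(a.name); };
const feedDog: Consumer<Dog> = feedAnimal;     // OK: contravariance

// The reverse assignments are compile errors under strictFunctionTypes.
```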
It appears that function parameter bivariance is still a thing? [1] Although there seems to now be a flag to make this one use of variance correct.
I would assume even Array<T> is still bivariant as well...
Both of those are horribly unsound, just for convenience. Sure convenience and compatibility are Typescript's ultimate goals, but to actually praise it for its generics? That's very strange.
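The array case is easy to demonstrate; a minimal sketch (classes invented) of why treating mutable `Array<T>` covariantly is unsound:

```typescript
class Animal { name = "animal"; }
class Dog extends Animal { bark() { return "woof"; } }

const dogs: Dog[] = [new Dog()];
const animals: Animal[] = dogs; // allowed: arrays are treated covariantly

animals.push(new Animal());     // type-checks fine...
// ...but dogs[1] is now a plain Animal with no bark() at runtime,
// even though its static type says Dog.
```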
> TypeScript absolutely does allow you to express these relationships
How would you express a class with covariant or contravariant type params in Typescript?
Describe the steps to release the simplest-ever code in JavaScript to production: write a js file, host it, done.
The same thing in TS adds at least one step (not to mention the rest of the tooling you will want).
So while I prefer it over JS, there's no arguing that it is more complex, as you now require a build step for a language that only exists because people wanted a language without a build step.
A lot of fortune 500 companies with some developers who missed the trendy stuff still do it that way. I made a medium size website (30 pages) in React with pure javascript and dependencies being script tags in index.html to vendored files.
So not even JSX. I did it that way because it was the easiest way to develop and deploy in that environment
And also don't use modern frameworks like React or Vue, or don't mind sticking all your templates in strings, or in your index.html, and shipping 100kb of template compiler to the user, or writing render functions directly a la Mithril.
my team (in a large enterprise) uses js for scripts using a shebang interpreter declaration, eg
```
#!/usr/bin/env node
console.log("hello cli")
```
While it does depend on node, and there are arguably better crossplatform languages for this purpose, it is a zero-tool chain use case that is very convenient for us.
"Nobody" here means few, or more loosely, much fewer teams than before, not "literally 0 people/teams".
And the group mentioned is (I deduce) not generally individual devs, enterprise devs building some internal thing, and so on, but rather teams at companies doing public-facing SaaS, teams in startups, companies like Amazon/Google/Facebook/Apple all the way to Airbnb, and so on.
Yes, exactly. The larger the team, the more likely someone is a front end expert and wants to use latest cool framework, which will by its nature require a build step. Even for something simpler, you'll probably want it for cache busting, minimization, etc.
At the rate TypeScript is releasing, that'd be a support nightmare. Perhaps a better solution is for TC39 to propose optional types. It could be modeled on TypeScript for sure, but it would still be backward compatible.
Javascript of today borrows liberally from coffeescript of yesterday, so it would make sense for javascript of tomorrow to borrow liberally from typescript of today.
But then if that's an option, I think Typescript will be the last language I migrate to, because Typescript development culture, tending as it does towards overcomplicated solutions to simple problems, is unpalatable to me.
I'm drawn to the idea of using Rust over WASM as a frontend language, and I think I'd rather choose that approach to develop any browser UI where type safety is critical, provided there is no discernible difference in performance (when compared to TS over WASM).
Yes, it's probably a better idea to improve WASM than add a proprietary format (TS is by Microsoft) to the open browsers. Google tried to do the same thing with Dart and it was decried about a decade ago, so now they use it for Flutter.
I think it will be great once support is broad enough.
It might, ironically, increase the current fashion for framework churn, but at least there will be no single language for developers to deride.
In fact, I wonder how ECMAScript will fare in a post WASM world... I suspect it would still thrive tbh.
Or perhaps people will take to other flexible, expressive languages for UI development. Like Python's niche in computer graphics, or Lua in games and AI research.
I can still see myself using JS in that future.
But not for everything.
Can't put my finger on it for you, but it definitely doesn't feel as good. Perhaps because it needs to interop with JavaScript, and JS objects are all over the place with types.
All the JVM languages you list aren't transpiled. They target JVM bytecode just like Java. They're first class, even if Java obviously gets the overwhelming amount of VM level support. Engineers working on the JVM are definitely aware of and want to support non-Java langs.
Scala can target JavaScript. Kotlin is usually used with the JVM, but can target native machine code (and JavaScript too I think?). Transpile was the wrong word for me to use.
> I'm now desperately wishing for a world of OCaml/Haskell, but where are you going to find teammates using those?
Having walked that way, I can recommend you look at F#, and if you aren't a solo developer (or you want to get real regarding jobs and colleagues) make sure you do not have an allergy to C# interop.
You hit the nail on the head with toolchain complexity. Transpiling adds yet another layer in a stack that's already deep. Web isn't my field, but it looks from the outside like observability[1] is also lacking. Some friends who are in web programming mention a lot of younger engineers don't realize how bad it is.
It makes me think of military contracting as a parallel example. (At least in the USA.) The rules required to develop, build, and deliver military equipment to the US are exceedingly complex. And it isn't necessarily a benefit for the Pentagon, as much as it is for the established defense contractors. The barriers to entry in that industry are huge, purely based on the contracting requirements.
So complexity (in tools and process) does not necessarily serve the individual developer the way it serves larger organizations.
> And it isn't necessarily a benefit for the Pentagon, as much as it is for the established defense contractors.
As with many government related procurement systems, there is so much paranoia about abuse, and desire not to repeat various disasters from the past, that the system has by perceived necessity become complicated.
Of course it makes it frightfully expensive to the degree that few companies can actually throw the resources at it to be able to navigate it. The lack of competition can result in projects costing a large amount of money, and the buyers in the agencies having no real options to look elsewhere.
At an e-government company I worked for, we built a service for the state that helped companies navigate the state's procurement system. We created a centralised place for all data to be gathered by the applicant, generated all the forms they needed, and provided a way for the applicant to track the state of their application. It introduced quite a change for the state, dramatically widening the potential pool of companies that could bid for contracts. It also pissed off quite a few of the already entrenched ones :)
It's like a macro version of the story told in Capt. David Marquet's book Turn the Ship Around! Huge piles of bureaucracy, complexity, and waste build up in an organization when risk avoidance is allowed to become the primary goal.
Hawaii Information Consortium (I guess it's now, NIC Hawaii, https://nichawaii.egov.com/), which is a subsidiary of NIC.
HIC was a great company to work for, as was NIC from the limited interactions I had with the larger corporate powers that be. Most websites/services were provided for free to the state / state agencies, instead relying on small fees per transaction on some of the stuff we did, to fund the work that didn't have transactions (I forget the amount, but we're talking something like 50c per transaction).
Being free was a key incentive to getting various agencies to come on-line.
My first boss (RIP) had an internship with a defense contractor his junior year in college. They gave him one project: design a cover for the air intake of an APC or some such.
Easy!!! No, not actually easy, because of all the constraints.
It had to be stowable. So a hard cover was out. It couldn't produce toxic fumes if it caught fire. So most plastics were out. Cloth was a problem because it couldn't get sucked into the intake if someone tried starting the engine without removing the cover.
He designed a heavy canvas cover with metal stiffeners. And snaps. And then had to switch to a drawstring because the snaps failed at -40F. And tended to get clogged with snow.
Whole thing took him three months.
Then there was the friend at college who worked on a VCR to record gun camera footage. Also a bunch of requirements. Higher video quality. And it can't lose lock when the pilot does a hard pullout after firing his munitions. And the total production run? Well, exactly how many fighters does the Air Force have? A couple of thousand?
I think military tech has a problem in that it's trying to keep up with commercially driven tech, which operates on a scale that's 1000 times larger. The military produces a few thousand artifacts to commercial tech's few million.
Those constraints seem a lot more reasonable than what I've seen in other aspects of the business world. At least they are grounded in the realities of the actual purpose. Well, mostly anyway. I'll leave justifications for the $1000 left-handed hammers as an exercise for later.
I've seen (and removed) plenty of requirements that were put in the specification just because. Because someone needed X amount of Y technology in their projects for the year. Because it seemed cool. Because other people were doing it. Because they wanted it on their resume. Because. But not because it was appropriate to the project or its purpose.
I'd say the majority of the work I do now is pruning these nonsense "just because" clauses out.
It's entertaining at least. I regularly shake my head in disbelief as I go through a specification and wonder, "who hired these people?" or "why is the person who hired them still working?"
Fortunately for me I also get to hand these questions back, much like the good uncle who only needs to uncle and not actually parent: "Here, have them back!" I say at the end of the visit.
The hammer was waaaaay more than $1000. But that's because the military said "We'll pay you a total sum of X, but to make it easier to fund the project we'll let you break it into n parts and pay X/n per part".
The contractor decided that to make their cashflow smoother, they'd include "manual impulse force generator" as one of the deliverable parts.
The hammer was "only" $435 and that price is really a reporting artifact.
The hammer was part of a larger contract that included spare parts and R&D effort linked to those parts. When the spending was reported, the same absolute amount of R&D ($420) was allocated to every item, inflating the apparent price of a $15 hammer to $435. By the same token, the engineering work on more complicated systems (e.g., an engine) was an absolute steal at $420 and since the total amount of R&D spending was fixed, nobody really got ripped off.
Because IBM was there 20 years ago and not only left behind a pure-IBM stack, but got their RUP requirements written into the policy manual as a barrier to entry for any non-IBM vendor to take over.
> I'll leave justifications for the $1000 left-handed hammers as an exercise for later.
Those tend to be an accounting artifact, I heard. If you order a thousand different items for a million dollars total then each one of them will show up as costing exactly $1000 no matter what they are.
> I think military tech has a problem in that it's trying to keep up with commercially driven tech, which operates on a scale that's 1000 times larger. The military produces a few thousand artifacts to commercial tech's few million.
Not to excuse military contracting pork and cost padding, but this is a good point that a lot of people seem to miss. There's also the fact a military contract will be for a production run and follow on support.
So they buy a thousand full units and parts/tools to fix and maintain them up front. You can't just go to Autozone and pick up a new tank tread or parts for a jet turbine.
Another huge factor is the service lifetime of military equipment.
There are aircraft that were designed and created in the 1970s that are still in use today. Sure, many of the components and internal systems have been modified and upgraded, but much of the original design is still there and operating.
>> a military contract will be for a production run and follow on support.
When "follow-on support" has to last for 50 years or more, it makes a big difference.
React I have been thinking about, having worked with it a lot and also recently done a React / TS / GraphQL project (GraphQL may have been the poorest choice I made; time will tell).
I think React itself is awesome and I've always enjoyed it. While I'm not an expert, its conceptual foundations and core abstractions felt right, and I do think it makes lots of frontend tasks simpler, especially for non-small projects.
At the same time the frontend ecosystem is a mess, as we all know, and as my CSS skills advance I see just how much could be done with HTML / CSS / limited use of JS and your classic REST app that's not a single-page app, and how much complexity might be avoided and time saved.
At the same time, I don't think it's just "trendiness" that has caused the growth of React and SPA's. I think they give us various wins that we probably take for granted because we're now used to them.
So, the short answer to my long ramble is, I don't know.
But I do think React itself, specifically, hit a sweet spot in terms of being a powerful yet understandable tool, and still remaining a tool, as opposed to a framework.
Me: But we've been doing CRUD for 30 years without an API; this is a small project.
Dev A: But I don't know how, it's not best practice, my team lead agrees, here is a Medium article, get with the times, etc.
Me: Ok so you don't know how to do your job.
Dev B: Here, put the React SPA in a Docker container and run it on some cloud, it's easy...
Me: But all I need is still CRUD, I don't need to scale it, I don't care if it's isolated, it should have been 100 lines of code.
Dev B: it's not best practice, my team lead agrees, here is a Medium article, get with the times, Docker runs everywhere, etc.
Me: But what if I need to connect it to internal resources and use DNS?
Dev B: load balancer! If only you have an AKS cluster!
Me: We work in a 25-person company that needs some very basic CRUD. It will never scale past one deployment, ever. If it does, don't worry, we will pay someone to do it right because we will be swimming in money.
The problem is that people are starting to not know how to do simple things. So you get grumpy admins losing their marbles over the complexity of this BS. Most of the time things could be fixed faster and easier, and deployed and managed more easily, using simple technologies that have been around for 20+ years.
But let's whack it with the Python, Docker, React, NoSQL, GraphDB, K8S hammer because...? Not only that, the juniors don't know multiple technologies anymore; they just know X. So sysadmins complain that devops is pushing more roles on them, while I have developers who have written full Node.js sites and don't know what an IP, a port, TCP, or UDP is...
Dude! You need to run more servers. 250 should do it. I realize its just a simple page with a pair of boxes for name registration, but that's irrelevant! All the cool kids agree. You need infrastructure. It needs to be done as complex as possible! Complexity-as-a-Service won't just happen, you have to want to make it happen. You don't want to be uncool, do you?
*Looks at a specification on the desk* Yep. We'll keep the first and last page. Everything else is gibberish. Next!
Edit: Looks like a few people disagree. Perhaps they might like to explain why I saved a bunch of companies around 100k in AWS fees just by pruning their server requirements? No? And yes, one of the projects was a simple set of web forms that required 7 servers to run. We pruned it down to 3 and that was just for redundancy and load. That's just one example.
i didn't downvote you, but since you're asking – the first part of your comment is needlessly snarky and doesn't really add much beyond "yeah, i'm frustrated by people adding unnecessary complexity just because it's trendy". on the other hand, the part added in the edit sounds like it could be an interesting story! expanding on that would make for a more interesting comment:
> I saved a bunch of companies around 100k in AWS fees just by pruning their server requirements. One of the projects was a simple set of web forms that required 7 servers to run. We pruned it down to 3 and that was just for redundancy and load.
EDIT
and if that thing about cutting down the specification really happened, just tell the story – no need to wrap it in a performance piece:
> I've actually had people come to me with specifications so over-engineered that the whole 20-page doc could be simplified down to a single page without loss of functionality!
EDIT 2
and i'm not saying "never use hyperbole"! just don't make it the whole point of your comment :)
I made that comment once about "Soylent". (Remember Soylent? The nutritional drink?) That company made a big deal about their "tech stack". Not for manufacturing or quality control, but for ordinary web processing. Their order volume was so low that a CGI program on a low-end shared hosting system could do the job. But they were going to "scale", right?
Amusingly, they eventually did "scale". They started selling through WalMart in bulk, and accepted their fate as yet another nutritional drink, on the shelves alongside Ensure Plus and Muscle Milk. Their "tech stack" was irrelevant to that.
That's an amusing story. I suppose the Soylent board met a good salesperson, and the pitch worked. So when the team landed the contract, they had to justify their costs some way... I suspect almost every software project in existence is in some way a victim (or beneficiary...) of fanatical marketing.
I remember working on a simple CMS site, amongst other things, for a pretty large company. When we began the project, we were tasked with re-purposing an overworked Plesk instance to host the site. We eventually managed to do it, but then found the small amount of disk space that was left to us was getting chewed up by logs.
So I reported this to my project manager, suggesting that we procure more HD space. I think I said something about us 'running out of memory on our hard drive'. The PM promised to feed this back to the client... A week later, our PM said that he and the client had resolved the issue. The client will pay for a new server with a stonking 96GB of RAM!!!
That ought to fix our 'memory' issue, right!!?
I mean, it also came with a 1TB HD, a second box for redundancy, dev time for migration, and additional dev time for a switch to Journald, or Logrotate, so I wasn't rushing to point out the misunderstanding... Working with the second box over RedHat Pacemaker was all new to us though, and a complete PITA.
But it was also another 'feature' to sell back to the client. They loved the sound of that. A site that couldn't go down... We made it work, but there was absolutely no real technical need for a Pacemaker cluster and 96GB*2 of RAM. It was just a simple CMS backed site.
Occasionally, that's where such complexity comes from. Not the developers themselves, but some loose cannon of a salesperson. That said, these 'sales people' may even be developers themselves. I often think that's a large part of how and why questionably complicated software exists...
A company I used to work for needed a website. They already had a backend and a REST API (served on the same domain as the website should be) and the old website was served directly by the backend that also served the API.
I am not aware of why it was chosen to retire that and separate the website into its own service - maybe there was a good reason, so I won't comment on that.
However the approach they (or rather some frontend developer) chose was a React front-end (fair enough) with a Node.js backend that translated between the REST API and GraphQL (WTF).
The existing API was served on the same domain so the new React website could've directly interacted with it without any problems. But no, this guy wanted to put "GraphQL" on his resume and so introduced an unnecessary extra component and potential point of failure that the company now has to support.
I made this same tech-stack decision while working for a company in the manufacturing industry. React, GraphQL, Node.js. There was a reason behind it.
The head of the company was pushing to modernize our process flow via software meant to drive manufacturing. We had contractors on site from three different companies who were each using us as a test-bed for their software/manufacturing integration. Every week or two they'd plot out some new data they'd need to move from sales to engineering, manufacturing, QC, etc.
In our case, having GraphQL as a translation layer for the sales website saved everyone involved time. However, I can also see many scenarios where that wouldn't have been the case.
It definitely comes down to using the right tool for the job. Knowing how to identify which tools fit and which don't is one of the most important skills to help new devs develop.
Yeah, GraphQL seems like a cool piece of tech that's overkill for 90% of the places it's used.
I'm a bit confused as to how it's become so popular for normal development when it seems to have a lot more boilerplate and setup than a simple REST API.
I feel like GraphQL is the new NoSQL. Not just that it's mistakenly adopted, but also that it is quite valuable for the right use cases, but the public doesn't seem to understand what those are.
Do you really, _really_ need to support queries? 99/100, I'd guess no, and thus constraining people to a more defined interface and access pattern is simpler.
Oh... you need to fill your RDBMS with a bunch of JSON responses from your 3rd-party API, so you can then decode them in memory, because I actually need to select * where identifier = 'banana' to do analytics.
Meanwhile the API returns highly structured data perfect for an RDBMS, but we don't know how to query SQL without an API in the frontend. So welcome to my hell.
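The mismatch being complained about here can be sketched in a few lines of Python. This is a toy illustration with made-up table and field names (including 'banana'), using in-memory SQLite to stand in for the RDBMS:

```python
import json
import sqlite3

# In-memory SQLite stands in for the real RDBMS; the "API responses" are invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_responses (body TEXT)")            # JSON blobs, as described
db.execute("CREATE TABLE items (identifier TEXT, price REAL)")  # what the data really is

api_responses = [
    {"identifier": "banana", "price": 1.25},
    {"identifier": "apple", "price": 0.99},
]
for resp in api_responses:
    db.execute("INSERT INTO raw_responses VALUES (?)", (json.dumps(resp),))
    db.execute("INSERT INTO items VALUES (?, ?)", (resp["identifier"], resp["price"]))

# The hell described above: pull every blob out and decode it in application memory.
blobs = [json.loads(row[0]) for row in db.execute("SELECT body FROM raw_responses")]
bananas_decoded = [b for b in blobs if b["identifier"] == "banana"]

# What the structured data supported all along: one WHERE clause.
bananas_sql = db.execute(
    "SELECT identifier, price FROM items WHERE identifier = ?", ("banana",)
).fetchall()
```

Both queries return the same banana, but only one of them scales past "load the whole table into memory".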
It's a combination of resume-driven development and premature optimization.
The former doesn't need explaining, but the latter can be explained as developers worrying that they'll need the benefits of GraphQL in some uncertain future and deciding to include it from the start, even though in most cases they never actually reap those benefits while still being plagued by the extra overhead of the technology.
> The existing API was served on the same domain so the new React website could've directly interacted with it without any problems. But no, this guy wanted to put "GraphQL" on his resume and so introduced an unnecessary extra component and potential point of failure that the company now has to support.
There's a simple solution to this kind of problem: let the resume-driven developer use his skills - fire him.
So how would you operate each of these bespoke services? One service uses TCP, one uses UDP, one uses SCTP. What happens if your buffer sizes are incorrect? What about if your keep alives are too aggressive? What happens if a problem in your TCP connection pool makes it seem like there's a networking issue and so you play around with your network settings only to have all your other protocols dive in performance?
I sympathize with the OP for single developers or small teams (by small, I mean a team of 5 not in a larger corporation), but whenever you have more than a single team, you want to keep system management overhead low. Unifying around a single paradigm like gRPC and Docker containers means that not every team will have their own bespoke chroot configuration and you won't have to retune an HTTP client every time you interface with a new service.
I think there's in general too little representation from small-scale or indie developers. I spend a lot of time working with the Gemini protocol outside of work and I appreciate the simpler approach that the protocol takes. I would love to see these sorts of stakeholders have a say in greater net architecture as well, but let's not pretend like this is all complexity for complexity's sake; this stuff is needed at scale.
>So how would you operate each of these bespoke services? One service uses TCP, one uses UDP, one uses SCTP. What happens if your buffer sizes are incorrect?
Such bullshit. Since when do we use UDP for a CRUD service? Since when is SCTP even considered outside of the telecom world?
HTTP and REST-like APIs have been handling simple CRUD properly for 20 years, since way before gRPC was even a thing.
gRPC has its uses for large, complex APIs that require a proper RPC framework. But in 99% of cases, yes, it's overkill.
> Such bullshit. Since when do we use UDP for a CRUD service? Since when is SCTP even considered outside of the telecom world?
You're making a strawman out of this. I'm responding to the following:
"while I have developers that have written full Node.js sites and don't know what an IP, or a Port, or TCP, UDP, is..."
But thank you for the insult.
> HTTP and Rest-like API have been able to handle properly simple CRUD API since 20 years without problems, way before gRPC was even a thing.
Hm, what are you talking about? Are you talking about the client? Are you talking about a load balancer? I don't think anyone is proposing that a simple static blog use gRPC to serve content to its users.
> But in 99% of the case, yes it's an overkill.
Compared to what? I'd argue that CRUD over HTTP is wrong. In fact, folks have been writing CRUD over TCP for ages. Why do we need to massage CRUD syntax over HTTP when we can just stream requests and responses over TCP? TCP predates HTTP by roughly 20 years, so if you're using the historical argument, raw TCP is even older. How much bullshit is there around AJAX and WebSockets and HTTP multipart and HTTP keep-alive when we're just trying to recreate TCP semantics?
So why are you drawing the line at HTTP? Seems arbitrary to me.
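To make the rhetorical point concrete: CRUD over raw TCP really does work with no HTTP framing at all. Here's a deliberately minimal Python sketch (stdlib sockets only; the newline-delimited PUT/GET/DEL command protocol is invented for illustration, not a recommendation):

```python
import socket
import threading

store = {}  # toy in-memory "database"

def handle(conn: socket.socket) -> None:
    # One command per line: "PUT key value", "GET key", "DEL key".
    with conn, conn.makefile("rw") as f:
        for line in f:
            parts = line.strip().split(" ", 2)
            if parts[0] == "PUT":
                store[parts[1]] = parts[2]
                f.write("OK\n")
            elif parts[0] == "GET":
                f.write(store.get(parts[1], "") + "\n")
            elif parts[0] == "DEL":
                store.pop(parts[1], None)
                f.write("OK\n")
            f.flush()

server = socket.create_server(("127.0.0.1", 0))  # port 0 = pick a free port
port = server.getsockname()[1]

def serve() -> None:
    while True:
        client, _ = server.accept()
        threading.Thread(target=handle, args=(client,), daemon=True).start()

threading.Thread(target=serve, daemon=True).start()

# The client just streams requests and responses down one connection.
with socket.create_connection(("127.0.0.1", port)) as c, c.makefile("rw") as f:
    f.write("PUT fruit banana\nGET fruit\n")
    f.flush()
    put_reply = f.readline().strip()
    get_reply = f.readline().strip()
```

Nobody is seriously recommending this over HTTP; the point is that each layer in the stack is a choice, and every layer should pay its way.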
That's a very good argument against the modern framework-containers-k8s stack. We used to be able to keep organization-wide track of things such as TCP services, buffer sizes, timeouts, etc., but now it's all a black box deployed by a guy who cut-and-pasted some YAML from a Medium article, at best.
It's probably acceptable to not learn the details of what you're doing, as hardware is cheap and all that, but it also constitutes a glass ceiling for how much the organization learns. That's a bigger problem in the long run.
Hopefully any organization that decides to use k8s goes into it understanding what they're getting into, but yes if you just get onto the hype train without thinking, you'll probably have a bad time when scaling.
Dev B: Here, put the React SPA in a Docker container and run it on some cloud, it's easy...
I have 100 different devs telling me "do this thing, it's easy" and it's easy for them, but now I have 100 different things to think about and make work together and devs never think about how their tiny thing fits into the big picture.
99% of web apps could be pure HTML + CSS on the front end, and Flask or something on the back end, talking pure SQL to Postgres. Really. That's all you need. And probably 99.99% of the apps that run internally at a single company.
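That whole stack is small enough to sketch. Here's roughly its shape using nothing but the Python stdlib (wsgiref and an in-memory SQLite standing in for Flask and Postgres; the table and data are invented for illustration):

```python
import sqlite3
from wsgiref.util import setup_testing_defaults

# SQLite stands in for Postgres so the sketch runs anywhere.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE greetings (name TEXT)")
db.execute("INSERT INTO greetings VALUES ('world')")

def app(environ, start_response):
    # Plain SQL in, plain HTML out: no ORM, no SPA, no build step.
    rows = db.execute("SELECT name FROM greetings").fetchall()
    items = "".join(f"<li>Hello, {name}!</li>" for (name,) in rows)
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [f"<ul>{items}</ul>".encode("utf-8")]

# Exercise the WSGI app directly, without even starting a server.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status

page = b"".join(app(environ, start_response)).decode("utf-8")
```

Any WSGI server (or Flask, which speaks WSGI) could serve this as-is; the point is how little machinery the common case actually needs.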
Docker facilitates the "throw a Dockerfile onto Ops and that's their problem now" mentality. That's perhaps not intentional, but choosing to over-optimize the developer experience makes life very hard for those who have to support and run the application over time.
> Docker facilitates the "throw a Dockerfile onto Ops and that's their problem now" mentality. That's perhaps not intentional, but choosing to over-optimize the developer experience makes life very hard for those who have to support and run the application over time.
It also enables pushback along the lines of "your container doesn't run, and it's 100% your fault, as it is supposed to contain all its dependencies". So developers have unwittingly played into the hands of ops there ;-)
Agreed, mostly. To be fair I think that, based on my limited experience with React and Docker and Python, those 3 technologies have valid use-cases in smaller orgs. The other technologies I'm not familiar enough with to comment on.
Also, if I was gonna complain about someone, I wouldn't complain about devs; I'd complain about team leads and engineering managers - the folks who should be guiding devs - and the people at the business level who choose those team leads and engineering managers.
The devs are at the bottom of the food chain; it's not their fault.
GraphQL is unfortunately still part of a low-grade hype cycle where more and more companies are using it because "it's the next REST". But if you don't have a real need for the advantages it offers (for example a mobile app whose client-side requests you can't easily change in sync with your API, or a data model that really is graph-like), it can be more cost than benefit.
React on the other hand: I've lost count of the number of small prototypes I've started in vanilla JS, thinking "this time I can just roll it myself", then ended up pulling in React. Even the simplest things are easier and simpler with it, and hugely complex applications are also easier and simpler with it. It has a fantastic API surface.
> towards unecessarily complicated development processes and architectures
Please note that what's "unnecessary" for you is not necessarily unnecessary for others.
This is an important point.
I guess what these projects need is a way to communicate explicitly the costs in maintenance and developer mind-share of adding more complex features to their product.
Especially for a small dev team, it is difficult to figure out these costs during a short evaluation of the software; it's not easy to pick the best tool for the job unless some members have experience with these tools from a previous job.
I can’t even begin to clearly express how much I agree with you. I work for a very large company, not a software company but one that produces a lot of internal applications. The infrastructure that we leverage to do even trivial things is mind blowing. Want to send an email to customers? Roll a series of APIs and fuck it let’s do some machine learning bc yolo, integrate all the things and don’t forget to run everything through APIC. It’s insane.
That said, I also see the opposite when this gets some traction with "normal users". The Abseil issue was fixed, and the TensorFlow 2.0 release did make TensorFlow more usable by smaller teams in many situations.
It seems to me the problem is that all of our software is built by and for megacorps. In the context of what they need it for, it's the best tool for the job, but it was never made for smaller orgs where 90% of the bells and whistles are not needed and end up as needless complexity.
I'm not sure how React fits in here. I have found it to be very simple to understand enough to be productive, and upgrading to new versions has required next to no changes to our large codebase. Perhaps you mean SPA React apps, which I agree are a bit more work than needed for a solo dev.
Docker seems to be optimal for medium-sized teams. Configuring a server the old way is easier than Docker when you have one server, but less so when you have ten, or CI/review apps.
React fits in because it's in the process of jumping the shark because FB needs it to. React is already hugely bloated for no discernible reason compared to Preact, but they're just going to keep bloating it with Suspense. Cf. https://crank.js.org/blog/introducing-crank