
I hear this from older developers a lot: that the problems we're solving today have already been solved. That may be true to some extent, but I think that mindset is dangerous; the ways the problems are solved again have different characteristics and advantages, and generally push the ball forward. I think of it like forest fires clearing out old growth for new, and I think it's best to have a zen-like mindset of being a constant beginner.


I completely agree. I'm not ranting about society re-inventing the wheel; I'm talking at a personal level. If I know language X but am trying to learn language Y, I have to learn how to do things that I already know how to do in X. It's a change that HAS to happen to learn, and I'll walk away a better coder, but that doesn't make the process of relearning any more enjoyable or productive.

I switched jobs recently and tried to adopt a zen-like attitude of beginner's mind. It definitely helped... but I'd be lying if I said I never hit frustration points where I'm trying to go from A to C but have to learn all about B, when I don't WANT to care about B; I want to get to C. A good attitude helps, but I think the underlying friction is still real, and it's part of why experienced devs aren't adopting new tech at the rate newer devs are.


I don't know. I've been writing software for over 24 years. I'm not that ancient, but learning C as a youngster helps pad the numbers.

There are an awful lot of cyclical patterns in development, and an awful lot of convergence. The HTML5 canvas, for example, is an awful lot like writing custom 2D graphics back in the day. Docker, Vagrant, etc. are neat, but there were chroot, zones, VMware images, Cygwin, etc. before them. A lot of hot new language features are just existing patterns from the functional discipline being tacked onto imperative languages, and vice versa.
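
To make that concrete, here's a rough Python sketch of the same isolation idea across two generations of tooling. The jail path and image name are made up for illustration, and the chroot half needs root:

    # Old way: confine a process to a prepared filesystem with chroot.
    import os, subprocess

    def run_in_chroot(rootfs, argv):
        pid = os.fork()
        if pid == 0:
            os.chroot(rootfs)          # e.g. rootfs = "/srv/jail" (hypothetical)
            os.chdir("/")
            os.execvp(argv[0], argv)   # argv[0] must exist inside the jail
        os.waitpid(pid, 0)

    # New spin on the same idea: run a container image via the docker CLI.
    def run_in_container(image, argv):
        subprocess.run(["docker", "run", "--rm", image] + argv, check=True)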

The fact that the constraints and reasons for adding generics to Java or lambdas to C/C++ are different from the constraints and reasons for adding templates to C++ or lambdas to Lisp doesn't mean you can't dive in if you understand one well and can identify the pitfalls or gotchas of the newer implementations.
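
For example (sketched in Python, since the pattern transfers): the "new" functional features are the same computation the imperative discipline always wrote, just with different gotchas:

    # Functional style, bolted onto an imperative language:
    from functools import reduce
    total = reduce(lambda acc, x: acc + x * x, range(10), 0)

    # The same sum of squares, written the way it always was:
    total2 = 0
    for x in range(10):
        total2 += x * x
    assert total == total2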

Is it really meaningful to go all in on learning how feature X from 2000, repurposed to solve problem Y, is all that fundamentally different? WebSockets are great, but so were long polling and Comet architectures; REST is great, but so was SOAP, and so were their non-web-based ancestors like COM messaging and plain cross-application communication.
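
A rough sketch of that equivalence, using a made-up endpoint and the third-party requests and websockets packages; both clients just sit and wait for the server to hand them events:

    # Long polling: hold an HTTP request open until the server has something.
    import requests

    def poll_forever(url):
        while True:
            try:
                resp = requests.get(url, timeout=30)  # server holds the request open
                if resp.ok:
                    print(resp.text)
            except requests.exceptions.Timeout:
                continue  # nothing arrived in time; immediately re-poll

    # WebSockets: one persistent connection, server pushes as events happen.
    import asyncio, websockets

    async def listen_forever(uri):
        async with websockets.connect(uri) as ws:
            async for message in ws:
                print(message)

    # poll_forever("https://example.com/events")           # the old way
    # asyncio.run(listen_forever("wss://example.com/ws"))  # the new way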

There are plenty of pitfalls and unique characteristics to new technologies, but there are far more new spins on old ideas, common underlying concepts, and reinventions of the wheel.


> so was SOAP

You had me until that (j/k)(mostly)

> Is it really meaningful to go all in on learning how ....is all that fundamentally different?

That'd be the diminishing returns I mentioned up-thread: each iteration of learning a new way to solve an old problem gives you less. I'd argue it's still more than zero, but we aren't comparing to zero; we're comparing to the opportunity cost.

That said, I think, as a generalization, experienced devs tend to be prematurely dismissive. That is, however, a personal opinion.


Hey, SOAP was great when the alternative was coming up with your own custom RPC protocol, at least if you were using Visual Studio's tooling and nullable types didn't show up to give you a headache. XML was great when the alternative was writing your own custom serialization format. I don't miss them, but I remember their being a step forward.

* stealth edit.


100%. I think this is the biggest trap that an experienced developer can fall into. "I know Tech X, I can still get lucrative jobs in Tech X, there's no reason for me to jump out of my comfort zone and learn Tech Y."

But if you don't do it, you're stuck on the local maximum of Hill X, and as it gradually erodes away, one day you realize that you don't have any lucrative jobs that you're qualified for, and you're so far away from what Tech Z is nowadays that you're effectively starting over from scratch, except that also you need to unlearn a bunch of stuff that's no longer true.

And this happens fast. If you try to coast for even, say, five years, so many of your skills will have turned to dust.

This is why the only real way to have a long-term career as a developer is to genuinely be interested in this stuff for its own sake. You need to want to go learn the latest new tech not just out of a cynical career calculus, but because you think it's genuinely cool and interesting and sounds fun. As long as that's true, I think you'll be able to be a valuable dev even as you get older; as soon as it stops being true, well, time to consider what's next for you.


There's definitely a balance. Switching languages doesn't always make you a better coder. There's the danger of shallowly scratching the surface of a bunch of different languages/frameworks/etc. without deep knowledge in any of them. That has by far been my biggest career challenge: trying to pick the right things to invest in.


It's also often just not true. A lot of the tools we use to solve problems today are miles better than what we used in the past. Imagine trying to process terabytes of data with a parallel compute engine before Spark: you would have had to write tons of custom scheduling and workload-balancing code, whereas today I can spin up a Databricks instance on Azure and start writing a massively parallel stream-processing demo in an hour.
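
For a sense of scale, here's roughly what that demo looks like today in PySpark; the bucket path and column name are made up:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("demo").getOrCreate()

    # Spark splits the read and the aggregation across the cluster for you;
    # no hand-rolled scheduling or workload balancing required.
    df = spark.read.json("s3://my-bucket/events/")   # hypothetical path
    df.groupBy("event_type").count().show()          # hypothetical column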

Now, a lot of companies do over-engineer, and cutting-edge tools aren't necessary in many cases, but that's a separate problem.



