For those who do not know, Adams was still putting up daily Dilbert strips, just for paid subs on Twitter instead of in a newspaper. I think it's impressive he didn't stop until the end, even though AIUI he was in serious pain for a while. (He did stop doing the art himself in Nov.)
Because when you don't do this, people get scammed out of money.
If there is a series of buttons you can press to circumvent the anti-scam measures, then the scammers simply walk you through pressing those buttons. If you cover them in giant warning labels, the scammers simply add explanations into their patter. For gullible people not to get scammed out of money, the buttons must physically not exist.
The next response will be 'well maybe we shouldn't accommodate them'. They vote, and there's more of them than you.
> Because when you don't do this, people get scammed out of money.
No, only when you don't do this and nothing else to improve security. You're presenting a false dichotomy.
> If there is a series of buttons you can press to circumvent the anti-scam measures, then the scammers simply walk you through pressing those buttons.
If the scammers can walk somebody through doing all that, why would they stop at just asking them to send money over to them "to safekeep it because of a compromised account" or whatever the social engineering scheme of the week is?
One of the benefits (or downsides, depending on who you ask) of a government is that it can help stop people from making bad decisions that hurt the people around them. Bad decisions rarely hurt only one person.
They represent more of the customer base, and a larger voting bloc, than tech nerds. You can offer your opinion of what society exists for, and the rest of society doesn't have to listen to it. The only actual leverage tech nerds who aren't billionaires have is when the particular ones who work for Google are asked to implement these features.
> Because when you don't do this, people get scammed out of money.
Bullshit. Big tech's war on general purpose computing hasn't stopped scams. It's a pretext for rent seeking and control and you know it. It's the reason we don't have a popular ecosystem of FOSS alternatives on mobile. It's the reason we can't run virtual machines on tablets when the hardware very much can.
If combating scams is a priority for big tech, I know where to start. Get rid of ads! That would actually be enormously effective, as it gets rid of the primary entry point for scams.
> If there is a series of buttons you can press to circumvent the anti-scam measures
So the best you can come up with is an imaginary button on phones that can magically circumvent checks that should be implemented server-side? Have you any idea how software works?
Or rig screens such that the buttons do not appear to be what they are. I've seen many an install-this-app ad where cancel isn't cancel.
The average user simply does not have the skill to tell real from fake, and any heuristics for doing so will be defeated by the scammers. You have to be able to understand what could be done with the access, not what the access is "intended" for.
> If there is a series of buttons you can press to circumvent the anti-scam measures, then the scammers simply walk you through pressing those buttons. If you cover them in giant warning labels the scammers simply add explanations into their patter. The buttons must physically not exist, for gullible people to not get scammed out of money.
We shouldn't be protecting someone that gullible at the expense of everyone else who is smart enough to actually read what's on the screen and not fall for such simple scams.
Not that long ago most of this forum was very much against giving up freedoms in favor of catering to the lowest common denominator. What happened?
People need to take responsibility for their own actions and educate themselves, not rely on a lack of freedom to protect them.
Two very similar things are presented as though they are different (go.mod and lockfiles, not go.sum) for the purpose of sneering at one of them, when both are essentially the same. 'Ignored by downstream dependents' is not any less true of go.mod than of lockfiles. In both cases a later version can be demanded, overriding the earlier version, potentially breaking your code.
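Concretely, with made-up module paths, a library's go.mod only states a floor, not a pin:

    // go.mod of a library (paths are hypothetical); the version is a minimum
    module example.com/mylib

    go 1.21

    require example.com/dep v1.2.0

If anything else in the build graph requires example.com/dep v1.5.0, Go's minimal version selection picks v1.5.0 for the whole build, so the version your library was tested against is overridden, exactly as a library's lockfile is ignored by its dependents.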
It should not be strange that a tool which is better in every way and makes your code less buggy by default has its praises sung by most of the people who use it. It would be odd to go around saying 'electric drills are strangely and disproportionately pushed at Home Depot over the good old hand auger', and even if I don't work at your contracting company I'd be slightly unnerved about you working on my house.
I’ve heard this analogy used to justify firing developers for not using GenAI: a cabinet maker who doesn’t use power tools shouldn’t be working as a cabinet maker.
If only programming languages (or GenAI) were tools like hammers and augers and drills.
Even then, the cabinets that come out of shops that only use hand tools are some of the most sturdy, beautiful, and long-lasting pieces, the ones that become the antiques. They use fewer cuts, less glue, and avoid nails and screws where a proper joint will do, etc.
Less glue and avoidance of nails and screws doesn't make it sturdier. Fastening things strongly makes your furniture sturdier than not doing so. Antiques suck as often as they don't, and moreover you are only seeing the ones that survived, without a base rate to compare against; they succeeded despite the lack of power tools, and power tools would have made the same object better.
Comparing it to AI makes no sense. Invoking it is supposed to bring to mind the fact that it's worse in well-known ways, but then the statement 'better in every way' no longer applies. Using Rust passively improves the engineering quality compared to using anything else, unlike AI which sacrifices engineering quality for iteration speed.
> Less glue and avoidance of nails and screws doesn't make it sturdier. Fastening things strongly makes your furniture sturdier than not doing so.
No disrespect intended, but your criticism of the analogy reveals that you are speaking from assumptions, but not knowledge, about furniture construction.
In fact, less glue and fewer fasteners (i.e. design that leverages the strength of the materials) are exactly how quality furniture is made sturdier.
There was an interesting video on YT where an engineer from a fastener company joined a carpenter to compare their products with traditional joints.
The traditional joints held up very well and even beat the engineered connectors in some cases. Additionally one must be careful with screws and fasteners: if they’re not used according to spec, they may be significantly weaker than expected. The presented screws had to be driven in diagonally from multiple positions to reach the specified strength; driving them straight in, as the average DIYer would, would have resulted in a weak joint.
Glue is typically used in traditional joinery, so less glue would actually have a negative effect.
> Glue is typically used in traditional joinery, so less glue would actually have a negative effect.
And a lot of traditional joinery is about keeping the carcase sufficiently together even after the hide glue completely breaks down so that it can be repaired.
Modern glues allow you to use a lot less complicated joinery.
Given the author's misunderstanding of what Rust provides, the most charitable interpretation is that they haven't updated the parts discussing Rust since 2017. If they had, it would reflect more poorly on them.
The most charitable interpretation of this is that the Reddit mods should stick to Reddit. If they had, this wouldn't have reflected so poorly on them.
The issue is that Rust proponents automatically assume that if you write enough C code, there will be memory-related bugs.
In reality, this is not the case. Bad code is the result of bad developers. I'd rather have someone writing C code who understands how memory bugs happen than a Rust developer thinking that the compiler is going to take care of everything for them.
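For reference, the class of bug both sides are arguing about, as a minimal sketch: the Rust compiler rejects this, while the equivalent C (free followed by use) compiles without complaint.

    fn main() {
        let s = String::from("hello");
        let r = &s;       // borrow s
        drop(s);          // error[E0505]: cannot move out of `s` because it is borrowed
        println!("{r}");  // the borrow is still live here, so the compiler rejects the drop
    }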
The topic seems to be native programming languages -- I don't think any of the languages concerned are "better in every way" for every possible coding problem. Many will rightfully choose Fortran over Rust for their application -- knowing full well their choice is far away from "better in every way".
When writing code meant to last, you need a language that’s portable across compilers and through time. C has demonstrated both. Fortran 77 and 90 were portable across compilers, but are now at risk from breaking changes, and later versions are not very portable across compilers.
If the alternative has drawbacks (they always do) or is not as well known by the team, it's perfectly fine to keep using the tool you know if it is working for you.
People who incessantly try to evangelise their tool/belief/preferences to others are often seen as unpleasant, to say the least, and they often achieve the opposite of what they seek.
The 'some degree' is pretty important, though. The Rust language undergoes backwards incompatible changes sometimes, but the Rust tools do not. The 2024 edition has breaking changes since the 2021 edition, but all compilers can compile all historical editions and will do so forever, and new language features are available in as many editions as possible, and editions can be mixed in a dependency tree, so you do not ever have to update your 2015 edition code for anything.
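Concretely, the edition is just a per-crate field in the manifest. A sketch with made-up crate names, showing two separate Cargo.toml files:

    # Cargo.toml of a dependency written a decade ago; it never has to change:
    [package]
    name = "old-crate"
    version = "1.0.0"
    edition = "2015"

    # Cargo.toml of your new binary, which can depend on it directly:
    [package]
    name = "new-app"
    version = "0.1.0"
    edition = "2024"

    [dependencies]
    old-crate = "1.0"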
No, you're misunderstanding the ecosystem. Rust 2024 code can call 2021 code without issue (and vice versa, I think, although I could be wrong on the vice versa). So you can progressively update individual components as you want, or not at all, and still keep using them just fine from new code on later editions. That's the very definition of back compat, something you really, really shouldn't attempt with C++ (every file should be compiled with the same language version target, although in practice that may be less of an issue depending on the specific stdlib implementation and whether it breaks ABI).
There’s also automated migration tools to convert 2021 code to 2024. It might fail on some translations but generally it’s pretty automatic.
So there's a huge difference both in the migration mechanism and the backwards compatibility story.
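That migration tooling is built into cargo itself; the usual flow is roughly:

    cargo fix --edition          # apply the machine-fixable changes for the next edition
    # then edit Cargo.toml by hand: edition = "2024"
    cargo build                  # confirm everything still compiles
    cargo fix --edition-idioms   # optional: adopt the new edition's idiomatic style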
Right. A good mental model would be to imagine every crate of Rust source has an associated Edition and there's a sort of "pre-processing" step where we translate that into the latest version of Rust seamlessly, preserving its exact meaning.
So e.g. the 2018 Edition said r# at the start of an identifier now marks a "raw" identifier. Keywords promise never to start this way, so r#foo is the same as foo, but r#foo even works if some lunatic makes foo a keyword, whereas plain foo would become a keyword if that happened. As a result, if you write
let async = 5;
... in Rust 1.0, that translator treats it exactly as though you'd written
let r#async = 5;
... in a modern Rust edition because these days the keyword async exists.
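Put together as a runnable snippet on a current compiler (any 2018-or-later edition):

    fn main() {
        // `async` has been a keyword since the 2018 edition, so the raw form is required:
        let r#async = 5;
        println!("{}", r#async);
    }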
The equivalent in python-metaphor-land would be that python files clearly designate whether they are py2 or py3, and a single python interpreter can run both py2 and py3 scripts, as well as cross-include files of each version without issue.
Rust editions only (and rarely!) break your code when you decide to upgrade your project's edition. Your public API stays the same as well (IIRC), so upgrading the edition doesn't break your dependents either, unless they don't have a new enough compiler to support the newer edition.
What would be the difference with a binary that has both a py2 and a py3 interpreter, where a flag --edition=2 or =3 sends the file to the corresponding interpreter?
If I have Rust code from 2021, can I add a feature from 2024 and run it with --edition=2021 or 2024? Wouldn't adding a 2024 feature then possibly break the 2021 code?
I think the fact that Rust is compiled has a big impact on backwards compatibility for dependencies. py2 must use py2 dependencies, but rust24 could use rust21 binaries as long as there were no API breaks; the code itself is already compiled away.
> what would be the difference with a binary that's has both a py2 and py3 interpreter and a flag --edition=2 or =3 redirects to either file?
The difference is that it's not the entire compiler. Rust's editions are only allowed to change the frontend. At the middle part of the compiler, it's all the same thing, and the differences are completely gone. This is the core of how the interop works, and it also means that you can't just change anything in an edition, only smaller things. But completely changing the language every few years wouldn't be good either, so it's fine in practice.
Rust is not dynamically linked. The whole tree is compiled from source every time. The same compiler compiles all editions of the language together, and it is exactly the same as py3 interpreting a py2 script and allowing a py3 script to call it, or vice versa.
Very few features are restricted to the 2024 edition; only those that actively introduce or leverage breaking changes. Most things released since 2024 are available in the 2015 edition. If you want to upgrade to the 2024 edition, that is a manifest flag; your code may break when you change it, and there are incompatibility lints available on the lower editions to tell you what will break when that happens and how to fix it.
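You can opt into those lints ahead of the upgrade. Assuming the lint group follows the naming of the earlier editions (rust_2018_compatibility, rust_2021_compatibility), it looks like:

    // At the crate root, while still on an older edition:
    #![warn(rust_2024_compatibility)]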
This is junk. Writing a type annotation takes basically zero time, then saves you time by preventing the runtime error because you forgot which variable was which, then saves you more time by autocompleting the correct list of valid methods when you hit dot.
Acting like Go is comparable to JS is ridiculous; Go's type system is the only kind of type system needed in Ruby. Rust is a staggering outlier in complexity. And the Turborepo port took a long time specifically because they tried to port one module at a time with C interop between the old and new codebases, which massively slows down development in any language, especially Go. This is just about the most dishonest 'example' you could have picked.
Either that, or you are saying 'weakly typed' to mean type inference in `var := value`, in which case (a) Rust has that too, and (b) that's not what the debate is about; nobody is against that.
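To make that concrete, a tiny sketch with made-up names: the annotation costs a few characters and turns a runtime surprise into a compile error.

    fn total_cents(prices: &[u32]) -> u32 {
        prices.iter().sum()
    }

    fn main() {
        let user_id = String::from("u-42");
        // total_cents(&user_id); // error[E0308]: mismatched types -- caught before running
        let prices: Vec<u32> = vec![199, 250];
        println!("{}", total_cents(&prices));
    }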
Making the type annotations pass restricts you to writing more bloated and verbose programs in general.
Stating that A is an integer isn't much of an issue, but once you get a reasonably complex program and A now has a compound type made of 5 parts, it really does slow you down and it really does make you write considerably worse programs for the sake of passing a type checker.
Any commercial code will need to be unit tested, so there is no time saving from finding runtime errors earlier, and any good IDE will detect the same errors and provide you with the same autocomplete automatically, for free, without any type annotations at all. These are problems which exist solely in your head.
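For context, this is the shape of the 5-part annotation being complained about, and the alias mechanism typed languages offer to contain it (names are made up):

    use std::collections::HashMap;

    // Written out everywhere, this is the annotation that slows you down:
    //   HashMap<String, Vec<(u32, Option<f64>)>>
    // Named once, it stops being a tax:
    type Readings = HashMap<String, Vec<(u32, Option<f64>)>>;

    fn count(readings: &Readings) -> usize {
        readings.values().map(|v| v.len()).sum()
    }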
1 developer vs a whole team of developers. I think you need to face the facts.
There are studies comparing old dynamically typed languages against statically typed languages. They always show approximately 1/3 as many lines of code and 3x faster development times in the dynamically typed languages. This isn't some new discovery.
Well, even Python is strongly typed, but for the sake of this we are discussing type complexity.
It seems like your main gripe is that writing the type annotations slows you down, so I'd be interested to know what you think of languages like OCaml, Elm, Gleam or Roc. These are languages which never (or almost never) require any type annotations because the compiler can always infer all the types. Most people using these languages tend to add type annotations to top-level functions anyway though.
It seems to me that this is equivalent to a language without a type checker that automatically generates a unit test for every line of your program that tests its type.
You're trying to minimize the power of the union by quoting dollar amounts, when the whole point of the union is to have power, and the whole point of unionization is to defeat superior dollar amounts by capturing the organizational memory that money cannot buy.
You cannot replace your entire gamedev team at once without destroying what makes your company, your company. You cannot respond to your entire gamedev team refusing to work other than by replacing them or by getting them to stop striking, either by aggressively union-busting or by negotiating with the union. That is the reason unions work at all.
It's not just about dollar amounts, it's about security and consequences. If a developer finds out that he got laid off, his life is completely upended. If the CEO of Microsoft finds out that a subsidiary of a subsidiary went under, his life doesn't change. One of those two people is in a position of power so much greater than the other's that they have absolutely nothing to fear from having to treat a small number of twice-removed employees a little more fairly.
The whole point of the union is to have any power at all and to try to improve their working conditions, not to overpower the giants who rule over them. No one joins a union because they want to put themselves out of a job.
> You cannot respond to your entire gamedev team refusing to work other than by replacing them or by getting them to stop striking.
Funny thing. Pay people fairly and don't abuse them, and they don't strike. If they are striking, I have a lot more suspicion towards management than the workers.
That is as true as saying "work hard and produce good value and you won't get fired; if you are fired, I have a lot more suspicion of the worker than the manager".
Sure most of the time people are fired for good reasons and most of the time people strike for good reasons, but not always.
You're pulling the old man card on CSS-in-JS? Putting your style logic in CSS is what CSS is for, CSS-in-JS is an annoying hack to make React work. What this is replacing is SCSS.
Maven was a great idea. Introduce a high barrier to entry for publishing packages. Paired with a search box operated by carrier pigeon, this effectively meant what you were looking for didn't exist. Every time you had any kind of quirky need, you had to write it out by hand, or find someone smarter than you to do it for you, or worst of all buy it from a vendor for lots of money at horrible quality. People recite 'DSA is bad for coding challenges, when will I need to write a hash map', but once upon a time you did have to write a hash map here and there. Supply chain vulnerability is a cost, but the product was worth the cost: you can just import the darn package!
I need a map of key ranges to values with intelligent range merging, it is right there on crates.io to import, it has been there since 2016, Maven Central didn't get one until 2018. In the olden days either it was in Apache Commons or it didn't exist. Halcyon days those.
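The comment doesn't name the crate, but the shape of the API (going from memory of crates.io's rangemap; treat the details as assumptions and check the crate docs) is roughly:

    use rangemap::RangeMap;

    fn main() {
        let mut m: RangeMap<u32, &str> = RangeMap::new();
        m.insert(0..5, "a");
        m.insert(5..10, "a");               // adjacent ranges with equal values coalesce
        assert_eq!(m.get(&7), Some(&"a"));  // one stored entry now covers 0..10
    }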