on_the_train's comments | Hacker News

My perspective might be equally naive, as I've rarely had contact with databases in my professional life, but 100 ms sounds like an absolutely mental timeframe (in a bad way).

It's ok to have a hobby. Not everything needs to be minmaxed to extract the maximum amount of money from the system.


The question is whether we as a society want to encourage activities that are for the benefit of everyone.

A sport may be a hobby; running a sports club is volunteer work. Writing code for fun is a hobby; publishing and maintaining it for others should be volunteer work.


It's been the go-to syntax for 15 years now


Go-to? I've never seen a project use it, I've only ever seen examples online.


It's still been the standard since C++11, and I've been using it ever since in all the teams I've worked in.


Same here


Now, I haven't touched C++ in probably 15 years, but the definition of main() looks confused:

> auto main() -> int

Isn't that declaring the return type twice, once as auto and the other as int?


No. The auto there is doing some lifting so that you can declare the type afterwards. The return type is only defined once.

There is, however, return-type auto-deduction in recent standards, IIRC, which is especially useful for lambdas.

https://en.cppreference.com/w/cpp/language/auto.html

auto f() -> int; // OK: f returns int

auto g() { return 0.0; } // OK since C++14: g returns double

auto h(); // OK since C++14: h’s return type will be deduced when it is defined


What about

auto g() -> auto { return 0.0; }


0.0 is a double, so I would assume the return type of g is deduced to be double, if that is what you're asking.


I was more pointing out that the syntax was dumb for that particular example!


I really wish they had used func instead; it would have avoided this confusion and allowed “auto type deduction” to be a smaller, more self-contained feature.


The C++ standards committee is extremely resistant to introducing new keywords such as "func", so as not to break reams of existing code.


Indeed. I am a frequent critic of the C++ committee’s direction and decisions. There’s no direction other than “new stuff”, and that new stuff pretty much has to be in the library, because otherwise it would require changes that might break existing code. That’s fine.

But on the flip side, there’s a theme of ignoring the actual state of the world to achieve the theoretical goals of a proposal when it suits. Modules are a perfect example of this: when I started programming professionally, modules were the solution to compile times and to symbol visibility. Now that they’re here, they are neither. But we got modules anyway. The version that was standardised refused to accept the existence of the toolchains and build tools that actually exist, and as such refused to place any constraints that might have made implementation viable or easier.

At the same time, we can’t standardise #pragma once because some compiler may treat network shares or symlinks differently.

There’s a clear indication that the committee doesn’t want to address this; epochs are a solution that has been rejected. It’s clear the only real plan is to shove awkward functional features into libraries using operator overloads, just like we all gave out to Qt for doing 30 years ago. But at least it’s standardised this time?


He rubs me the wrong way, too. Curl is overhyped and a pain to work with. And he's getting high on the "success" while crying about not being paid for something he offers for free. I think Americans have a nice phrase about having cake and eating it, too.


Not only is cURL not overhyped, it’s absolutely false that Daniel “[cries] about not being paid for something he offers for free”. He does get paid for cURL support.

https://daniel.haxx.se/job.html

He does criticise rich companies who don’t do anything to support cURL and demand preferential support, but that’s not the same thing (and does warrant criticism).


Daniel is also not an American.


Why does that matter?


It doesn't. I just read the GP comment wrong, my bad.


No. People who are loud do that because they want to be loud. They want to hurt people. And they get off to weaklings being polite. The law is too slow and too forgiving for these destructive forces. We need to bring violence back in a big way.


If we're going to bring violence back in a big way then those with the least consideration for others (the ones you say want to hurt people) will have an outsized advantage in its deployment.


But the others would have advantage in numbers and the law.


But you just said the law is too slow?


That's the reason why polymorphism is sometimes described as slow. It's not really slow... but it prevents inlining, so there's always a function call, as opposed to sometimes no function call. It's not that polymorphism is slow; it's that the alternatives can sometimes compile to zero.


Every 8 virtual methods knock out at least 1 cache line for you (on x64, at least). You're probably not calling 8 adjacent methods on the same exact type either; you're probably doing something with a larger blast radius, which means sacrificing even more of your caches. And this doesn't show up in the microbenchmarks people normally write, because there the vtables are hot in the cache.

So you're really banking on this not affecting your program. Which it doesn't, if you keep it in mind and use it sparingly. But if you start making everything virtual, it will hit you harder than merely making everything noinline.


On the other hand, if the compiler can prove at compile-time what type the object must have at run-time, it can eliminate the dynamic dispatch and effectively re-enable inlining.


Which is why runtime polymorphism in Rust is very hard to do. Its focus on zero-cost abstractions means that the natural way to write polymorphic code is compiled (and must be compiled) to static dispatch.


Compilers will also speculatively devirtualize under some circumstances.

https://hubicka.blogspot.com/2014/02/devirtualization-in-c-p...


Pedantic, but I assume you're referring to virtual methods?

Ad hoc polymorphism (C++ templates) and parametric polymorphism (Rust) can be inlined, although both are slow to compile, because they must be specialized for each set of generic arguments.


C++ compilers can also devirtualize when doing whole-program optimization, and tools like BOLT can promote indirect calls generated by any language.


Another one of these sickening pieces, framing opposition to an expensive tech that doesn't work as "anti". I tried letting the absolute newest models write C++ today again: GPT-5.1 and Opus 4.5. A single function with two or fewer input parameters, a nice return value, doing simple geometry with the glm library. Yes, the code worked. But I took as long fixing the weird parts as it would have taken me to write it myself. And I still don't trust the result, because reviewing is so much harder than writing.

There's still no point. ReSharper and clang-tidy still have more value than all LLMs. It's not just hype, it's a bloody cult, right beside those NFT and church-of-COVID people.


Did you try telling the model to write the unit tests first, watch them fail, then write a function that passes them?


Your comment sounds like John Glenn's quote about Katherine Johnson, "Get the girl to check the numbers… If she says they’re good, I’m ready to go.", asking her to double-check the calculations done by the first computers used by NASA. At that point in history it was probably accurate and the safest thing to do, but we all know how computers evolved from then: we no longer have human calculators, but rather humans checking the correctness of the written code that will do the actual calculations.

IMO the only rebuttal to this is that LLMs are almost at their peak and there will be no significant breakthrough or steady improvement in the next years, in which case they will never become "the new computers".


But LLMs aren't advertised as some future thing. They're advertised as being almighty and replacing devs in great numbers. And that's simply not true. It's a fad, like 3D movies.


I know they are pumped and overhyped to death, indeed they are. But that does not mean they have no use today, or that they can't improve in the future.

I'm skeptical about LLMs as well, but I also wanted to see what they are actually capable of doing, so I vibe-coded an Android app in Kotlin (from scratch) with Claude Code and Opus 4.5, and it basically worked. I'm pretty sure the code is horrible to the eyes of a Kotlin developer, because I added so many features by asking CC over the last 2-3 weeks that it already desperately needs a refactor.

But still, this is not something an autocomplete would be able to do for you.


> reviewing is so much harder than writing

This is what reams of AI proponents fail to understand. "Amazing, I don't have to write code, 'only' review AI slop" is sitting backwards on the horse. Who the heck wants to do that?


Fun story to add: I can't get my heart rate measured. I get so nervous about it that I immediately double my heart rate, and of course it's impossible to communicate that to doctors. One even equipped me with a 24h heart monitor, only to have my stupid brain go into overdrive and clock my heart at 120+ for the entire time, with zero sleep. I literally fainted when getting EKG cables put on me. I now have a heart condition on record without having one: I just get nervous from measurements, lol.


It's jokingly called 'white coat syndrome'. Any doctor who has a clue should understand this.


LCD is just the better technology. Beyond what this article is about, OLEDs have awful burn-in and longevity problems. They're great for watching movies in the first month (and that's what gets them sold), but really not much else.


That’s not my experience. I've had an OLED TV going on 7 years now and it still looks better than any of my LCD screens.

My PC monitors are my only remaining LCD screens largely due to the text fringing issues mentioned in this article and bezel size.


lol


Oh, another run of new small apps. Why not unleash these oh-so-powerful tools on a Jira ticket written two years ago, targeting 3 different repos in an old legacy moloch, like actual work?

It's always just the "Fibonacci" equivalent


Did some of that today: extracting logic from Helm templates that read like 2000s PHP and moving it to a nushell script rendering values. It took a lot of guidance, both in making it test its own code and in architectural/style decisions, and I was using Sonnet, but it got there.


