gpderetta's comments | Hacker News

Don't think so, this is the "Garante della Privacy"; two different institutions.

Different — but driven by the same mindset, the same nonsense, and a system run by recycled old-guard politicians.

Language design still has a huge impact on which optimizations are practically implementable.

The Mythical Sufficiently Smart Compiler is, in fact, still mythical.


Sure, but not all compilers are created equal: they won't all go to the same lengths of analysis to discover optimization opportunities, nor have the same quality of code generation, for that matter.

It might be interesting to compare LLVM-generated code (at the same/maximum optimization level) for Rust vs C, which would remove optimizer level of effort as a factor and better isolate the difficulties/opportunities caused by the respective languages.


Then again, often

  #pragma omp for 
is a very low mental-overhead way to speed up code.

Depends on the code.

OpenMP does nothing to prevent data races, and anything beyond simple for loops quickly becomes difficult to reason about.


No.

It is easy to divide the loop body into computation and shared-data update; the latter can be done under #pragma omp critical (label).
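For comparison, here is a minimal Rust analogue of that pattern, a sketch only (it assumes the rayon crate, and the data and computation are made up): do the per-iteration work in parallel, and keep just the shared update inside a lock, much like an omp critical section.

  use rayon::prelude::*;
  use std::sync::Mutex;

  fn main() {
      // Shared state, updated only inside the lock (the "critical section").
      let shared = Mutex::new(Vec::new());
      (0..1000u64).into_par_iter().for_each(|i| {
          let result = i * i;                  // independent computation, no lock held
          shared.lock().unwrap().push(result); // short shared-data update
      });
      println!("collected {} results", shared.lock().unwrap().len());
  }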


Yes! gcc/OpenMP in general solved a lot of the problems that are conveniently left out of the article.

Then we have the anecdotal "they failed Firefox layout in C++ twice, then did it in Rust": to this I sigh in chrome.


The Rust version of this is "turn .iter() into .par_iter()."

It's also true that for both, it's not always as easy as "just make the for loop parallel." Stylo is significantly more complex than that.
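A minimal sketch of that .iter() to .par_iter() change, assuming the rayon crate (the data and the computation here are made up):

  use rayon::prelude::*;

  fn main() {
      let inputs: Vec<u64> = (1..=100_000).collect();

      // Sequential pipeline.
      let seq: u64 = inputs.iter().map(|x| x * x % 1_000_003).sum();

      // Parallel pipeline: same code, one method swapped.
      let par: u64 = inputs.par_iter().map(|x| x * x % 1_000_003).sum();

      assert_eq!(seq, par);
  }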

> to this I sigh in chrome.

I'm actually a Chrome user. Does Chrome do what Stylo does? I didn't think it did, but I also haven't really paid attention to the internals of any browsers in the last few years.


And the C++ version is adding std::execution::par_unseq as a parameter to the ranges algorithm.

This has the same drawbacks as "#pragma omp for".

The hard part isn't splitting loop iterations between threads, but doing so _safely_.

Proving that an arbitrary loop's iterations are split in a memory-safe way is an NP-hard problem in C and C++, but it is the default behavior in Rust.


Well, if you are accessing global data with ranges, you are doing it wrong.

Naturally, nothing in C++ prevents someone from doing that, which is why PVS, Sonar and co. exist.

Just like some things aren't prevented by Rust itself, but rather by clippy.


Concurrency is easy by default. The hard part is when you are trying to be clever.

You write concurrent code in Rust pretty much in the same way as you would write it in OpenMP, but with some extra syntax. Rust catches some mistakes automatically, but it also forces you to do some extra work. For example, you often have to wrap shared data in Arc when you convert single-threaded code to use multiple threads. And some common patterns are not easily available due to the limited ownership model. For example, you can't get mutable references to items in a shared container by thread id or loop iteration.
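A minimal sketch of the Arc-wrapping step described above (the data and thread count are made up for illustration):

  use std::sync::Arc;
  use std::thread;

  fn main() {
      // Data that used to be a plain local now has to be wrapped in Arc
      // so every thread can hold a reference to it.
      let config = Arc::new(vec![1u32, 2, 3, 4]);

      let handles: Vec<_> = (0..4u32)
          .map(|id| {
              let config = Arc::clone(&config); // one extra clone per thread
              thread::spawn(move || config.iter().sum::<u32>() + id)
          })
          .collect();

      for h in handles {
          println!("{}", h.join().unwrap());
      }
  }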


> For example, you can't get mutable references to items in a shared container by thread id or loop iteration.

This would be a good candidate for a specialised container that internally uses unsafe. Well, for thread id at least: since the user of the API doesn't provide it, you could mark the API safe, as you wouldn't have to worry about incorrect inputs.

Loop iteration would be an input to the API, so you'd mark the API unsafe.
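One possible shape for such a container, sketched here with an unforgeable per-thread token handed out by the container itself instead of a raw thread id (the names and the design are illustrative, not the parent's proposal): the unsafe stays internal, and the public API can remain safe because each token maps to exactly one slot.

  use std::cell::UnsafeCell;
  use std::sync::atomic::{AtomicUsize, Ordering};

  pub struct PerThreadSlots<T> {
      slots: Vec<UnsafeCell<T>>,
      next: AtomicUsize,
  }

  // SAFETY (assumption of this sketch): every thread only touches the slot
  // behind its own token, so slots are never aliased across threads.
  unsafe impl<T: Send> Sync for PerThreadSlots<T> {}

  pub struct Token(usize); // field is private: tokens only come from register()

  impl<T> PerThreadSlots<T> {
      pub fn new(threads: usize, init: impl Fn() -> T) -> Self {
          Self {
              slots: (0..threads).map(|_| UnsafeCell::new(init())).collect(),
              next: AtomicUsize::new(0),
          }
      }

      // Called once per worker thread; panics if more threads register
      // than there are slots.
      pub fn register(&self) -> Token {
          let idx = self.next.fetch_add(1, Ordering::Relaxed);
          assert!(idx < self.slots.len(), "too many threads registered");
          Token(idx)
      }

      // Safe public API: the returned borrow keeps the token mutably
      // borrowed, so a thread cannot hold two live &mut to its slot.
      pub fn get_mut<'a>(&'a self, token: &'a mut Token) -> &'a mut T {
          unsafe { &mut *self.slots[token.0].get() }
      }
  }

Each worker would call register() once and then get_mut(&mut token) inside its loop; the loop-iteration variant would take the index from the caller, which is why the parent would mark that API unsafe.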


There’s split_at_mut to avoid writing unsafe yourself in this case.
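A minimal sketch of that approach (the buffer and the work are made up; std::thread::scope needs Rust 1.63+):

  use std::thread;

  fn main() {
      let mut data = vec![0u32; 8];
      // Two disjoint mutable halves: no unsafe needed in user code.
      let (left, right) = data.split_at_mut(4);
      thread::scope(|s| {
          s.spawn(|| left.iter_mut().for_each(|x| *x += 1));
          s.spawn(|| right.iter_mut().for_each(|x| *x += 2));
      });
      println!("{data:?}");
  }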

AFAIK it does all styling and layout on the main thread and offloads drawing instructions to other threads (CompositorTileWorker), and it works fine?

That does sound like Chrome has either failed to make styling multithreaded in C++ or hasn't attempted it, while it was achieved in Rust?

pro capita?

constraints breed creativity.

For the art I suppose.

The biggest difference between the UK and other constitutional countries is that Parliament's power is pretty much absolute and it is not bound by any document or pre-existing law.

In theory at least. In practice the courts have hinted that there are limits even for Parliament, and if it were to overstep some unwritten rules, it would cause a constitutional crisis.


> if it were to overstep some unwritten rules

What rules are those?


Boris Johnson asking the Queen to prorogue parliament during Brexit debates is a solid recent example.

> define "store ordering". Does it affect loads in any way? Or simply just stores

It affects the visible ordering of remote stores to normal memory, so loads are necessarily affected (it wouldn't make sense to guarantee a store order if it were unobservable).

Really, TSO is defined independently of x86, and in fact it took a while to actually prove that x86 was TSO. Concretely, how do architectures that claim (optional) TSO differ from each other, at least for access to normal memory?
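As an illustration of why loads are affected, here is the classic message-passing litmus test sketched with Rust atomics (illustration only: Relaxed is chosen because it maps to plain loads and stores, and compiler reordering is ignored, so the outcome depends on the hardware model):

  use std::sync::atomic::{AtomicU32, Ordering::Relaxed};
  use std::thread;

  static X: AtomicU32 = AtomicU32::new(0);
  static Y: AtomicU32 = AtomicU32::new(0);

  fn main() {
      let writer = thread::spawn(|| {
          X.store(1, Relaxed); // first store
          Y.store(1, Relaxed); // second store
      });
      let reader = thread::spawn(|| {
          let r1 = Y.load(Relaxed);
          let r2 = X.load(Relaxed);
          (r1, r2)
      });
      writer.join().unwrap();
      let (r1, r2) = reader.join().unwrap();
      // Under TSO stores become visible in program order and loads are not
      // reordered with each other, so r1 == 1 && r2 == 0 is impossible;
      // weaker models allow it unless fences are added.
      println!("r1 = {r1}, r2 = {r2}");
  }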


There is still interest in emulating x86 binaries on ARM. See the efforts from Valve, for example.

a tuple of closures closing over the same object ('let over lambda') is equivalent to an interface.
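A minimal Rust sketch of the idea (the counter is made up; any shared state would do): two closures capturing the same object behave like an object implementing a two-method interface.

  use std::cell::RefCell;
  use std::rc::Rc;

  fn make_counter() -> (Box<dyn FnMut() -> u32>, Box<dyn Fn() -> u32>) {
      // The "let" of let-over-lambda: state shared by both closures.
      let count = Rc::new(RefCell::new(0u32));
      let c = Rc::clone(&count);
      let inc: Box<dyn FnMut() -> u32> = Box::new(move || {
          *c.borrow_mut() += 1;
          *c.borrow()
      });
      let get: Box<dyn Fn() -> u32> = Box::new(move || *count.borrow());
      (inc, get)
  }

  fn main() {
      let (mut inc, get) = make_counter();
      inc();
      inc();
      assert_eq!(get(), 2); // both closures see the same underlying state
  }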
