Thanks for sharing! I had come across similar kinds of issues on my annual LeetCode prep and this very clear articulation is very helpful. Props to the author for making this so easy to visualize.
Finding issues in large, complex projects is generally easier than in smaller projects. More code, more bugs. But it's still difficult to find serious issues on the level of a sandbox escape in Chromium, simply because Google's long-running reward program means lots of people have spent lots of time looking, both manually and with automated fuzzing tools.
Back in ye olden days of 2014 I randomly stumbled upon a Chrome issue (wasn't trying to find bugs, was just writing some JavaScript code and noticed a problem) and reported it to Google and they paid me $1,500. Not bad for like half an hour's work to report the issue.
I feel like it's the opposite. In a huge project there are bound to be many weird interactions between components, and it's about picking the important, security-relevant ones and finding edge cases. In this case the focus was on the interaction between the renderer process and the broker. That forms a security boundary, so it makes sense to focus your efforts there: Google will pay for such exploits since they can in theory, when combined with other exploits in the renderer process, lead directly to exploits that can be triggered just by opening a web page. So, yes, Chrome is a huge project, but the list of security-relevant locations to probe isn't actually all that long. That's not to diminish the researcher's work; it still takes an insane amount of skill to find these issues.
Finding a problem that deserves a bug bounty reward is a very different beast to just finding quirks.
I read from one security researcher somewhere that professionals wouldn’t find bug-bounty-worthy problems frequently enough to pay their bills. So they’ll sometimes treat things like this more as a supplement that bolsters their CV than as a job in itself.
Spotlight was bad back in the day, so I installed Alfred and started using that. Then Spotlight suddenly improved a lot, enough that it was usable for me, and I deleted Alfred. Then about five years ago something happened internally at Apple to the Spotlight team and it just got worse and worse and more difficult to use, making me regret deleting Alfred.
I wish Apple would just fix Spotlight. They don't seem to think it's worth fixing.
That is a good question. I like my dock uncluttered. I have it placed vertically on the left side, with only the apps I use every single day: Alacritty, Brave, Cursor, and Zoom. With Finder and Launchpad included, that's only six docked apps. Everything else I use Spotlight to open, so I feel the pain when the usability gets degraded or buggy.
I made my own distributed render orchestrator that supports Cycles + custom plugins. It uses Modal’s cloud compute APIs to spawn jobs on up to 20 containers, each with an L40S GPU (roughly 80% as fast as a 4090, with far more VRAM). It ain’t cheap but it’s absurdly fast, and much easier in terms of cash flow than outright buying the equivalent GPUs.
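For anyone curious what that fan-out looks like in practice, here's a minimal sketch using Modal's Python SDK. This is not the author's actual orchestrator: the image setup, scene path, app name, and frame range are assumptions, and a real setup would need a CUDA/OptiX-enabled Blender build for Cycles to actually use the GPU.

```python
import pathlib
import subprocess

import modal

# Rough sketch only. Assumes the .blend file is baked into the image at
# /root/scene.blend; the stock apt Blender may lack GPU support, so a real
# image would install an official Blender build instead.
app = modal.App("blender-render-farm")
image = modal.Image.debian_slim().apt_install("blender")

@app.function(gpu="L40S", image=image, timeout=3600)
def render_frame(frame: int) -> bytes:
    # Render one frame with Cycles via Blender's CLI; "####" expands to the frame number.
    subprocess.run(
        ["blender", "-b", "/root/scene.blend", "-E", "CYCLES",
         "-o", "/tmp/frame_####", "-F", "PNG", "-f", str(frame)],
        check=True,
    )
    return pathlib.Path(f"/tmp/frame_{frame:04d}.png").read_bytes()

@app.local_entrypoint()
def main():
    # .map() fans the frames out across containers; the ~20-container cap
    # would be set via Modal's concurrency options (omitted here).
    for frame, png in zip(range(1, 21), render_frame.map(range(1, 21))):
        pathlib.Path(f"frame_{frame:04d}.png").write_bytes(png)
```

The nice part of this shape is that the per-frame function is embarrassingly parallel, so scaling is just a matter of how many containers you're willing to pay for at once.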
I have a couple of Roombas from that era. If I sit and watch them, their path planning makes no sense. But if I just put them on a schedule to clean once a day, and don’t think about them beyond emptying their bin, I have continuously clean floors. Which, for me, is all I care about.
Not GP, but I use a Roborock S8 MaxV Ultra, which was the top of the line model earlier this year. I also just set it to run at night when I’m not paying attention to it. It’s… fine, I guess.
But if there is anything at all on your floor that will get stuck in its rollers, it will get stuck on it. Like 100% of the time. I’ve seen everything. Charging cables, towels, kids’ toys, any small pieces of fabric, anything you can think of. It has a camera and is supposed to avoid all these things, but it straight up never works. I have a nightly routine where I clear everything I can from the floors to make room for it, and it manages to find the one thing I didn’t see. And looking at its history, it always ends up getting stuck in the first 5 minutes, which means the whole clean is a bust.
I would wager my overall success rate (nights where it does its whole job and doesn’t get stuck) is maybe 70%. Just good enough that it’s “worth it” but it’s so frustrating that it can’t simply steer around this stuff, especially when it’s advertised as being able to.
I could rant about the other stuff I hate about it, but suffice to say I still feel that good cleaning robots need another 5-10 years before I could fully recommend them.
I’m pretty sure mine doesn’t engage the rollers unless it’s on carpet, so that’s something.
Related, my second most hated aspect of it, is that it doesn’t empty its dust bin in mid-clean. Oh, it can empty its dust bin at the end, and it knows how to empty in mid-clean, because it empties it when it’s washing its mop (which it does know to do mid-clean, and you can even configure how many minutes it should go before re-washing.) But noooo, it has no idea that maybe its bin will get full and that it should empty it even without needing to wash the mop.
Because I have a German Shepherd and it can easily fill up its bin with dog hair after 10 minutes of carpet cleaning, and after that it’s just pushing clumps of dog hair around from one end of the room to another.
It’s so frustrating because the engineers did a great job of making the thing able to self-empty its bin in the first place, and thought enough to code for and allow configuration of mid-clean mop washing. But they didn’t connect the dots and consider that some people have large pets and may need the dust bin to get the same treatment as the mop.
> It has a camera and is supposed to avoid all these things, but it straight up never works. I have a nightly routine
Try running it in daylight. Mine from Eufy is similar, has a flashlight, but good ambient light is superior. Still, the cameras and image recognition are extremely flaky imo (the AI parts), whereas the LiDAR for navigation is absolutely spectacular. Even if you move furniture around and drop it randomly in a different room, it always finds its current location in less than a minute.
iirc, they basically avoided fancy routing algos and just let the robot haphazardly wander the space (and determined that the room was clean after a set number of activations for each bumper sensor)
Full disclosure: I left the company that became iRobot well before the Roomba, so I have zero insider knowledge.
But if you're familiar with Rod Brooks' public work on the "subsumption architecture", the Roomba algorithms are pretty obvious.
Early gen Roombas have 3 obvious behaviors:
1. Bounce randomly off walls.
2. Follow a wall briefly using the "edge" brush.
3. When heavy dirt is detected, go back and forth a bit to deep clean.
Clean floors are an emergent result of simple behaviors. But it fails above a certain floor size in open plan houses.
Later versions add an ultra-low-res visual sensor and appear to use some kind of "simultaneous localization and mapping" (SLAM) algorithm for very approximate mapping. This makes it work much better in large areas. But you used to be able to see the "maps" from each run and they were horribly bad—just good enough to build an incredibly rough floor plan. But if the Roomba gets sufficiently confused, it still has access to the old "emergent vacuuming" algorithm in some form or another.
The newest ones may be even smarter, and retain maps from one run to the next? But I've never watched them in action.
I really like the old "subsumption architecture" designs. You can get surprisingly rich emergent behavior out of four 1-bit sensors by linking different bit patterns to carefully chosen simple actions. There are a couple of very successful invertebrates which don't do much more.
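To make that concrete, here's a toy sketch of a subsumption-style loop. The behavior names and sensor bits are made up for illustration and are not iRobot's actual design: behaviors sit in a fixed priority order, and the first one whose trigger fires gets to drive that tick.

```python
import random

# Toy subsumption-style controller. Each behavior either claims the tick
# by returning an action, or defers to the next (lower-priority) one.

def spot_clean(sensors):
    if sensors["dirt"]:
        return "back-and-forth deep clean"

def wall_follow(sensors):
    if sensors["wall"]:
        return "hug the wall with the edge brush"

def bounce(sensors):
    if sensors["bump"]:
        return f"back up and turn {random.randint(90, 270)} degrees"

def cruise(sensors):
    return "drive forward"

BEHAVIORS = [spot_clean, wall_follow, bounce, cruise]  # highest priority first

def step(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

# One tick: bumper pressed, no dirt or wall sensed -> random bounce.
print(step({"dirt": False, "wall": False, "bump": True}))
```

No map, no planner: the "coverage" of the room is purely an emergent property of running this loop long enough, which is exactly why watching it feels so random.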
I can totally spend time watching them, rooting for them to randomly (well, one is the random pinball type, one has some kind of camera and makes nice straight lines) pick up a piece of dirt in the middle of the floor. It’s kind of the same feeling as watching a bunch of puppies play.
Is there any solid argument for the value of x86 in desktop computing? My watch, phone, laptop, and Mac Pro are all running ARM/RISC and I don’t feel like I’m missing out on anything.
I have a Ryzen workstation that I pull out to play Doom Eternal every now and then, but is there any significant value proposition besides compatibility?
Performance is often stated as an advantage of x86, but performance per what? Per Watt? Hour? Dollar? Chip size?
One of the things I appreciate about x86 isn’t the instruction set itself, but the relatively open PC platform that developed around it. I like being able to purchase processors, motherboards, RAM, and other components off the shelf, and I also like the relative openness of PCs. ARM’s ecosystem is rather fragmented, and there are many proprietary, un(der)documented systems. While I don’t mind letting go of the x86-64 instruction set, I would be saddened if our future choices for personal computers are less open. I wouldn’t mind using a powerful ARM or RISC-V workstation that is open.
PCs are all about customization/flexibility, control, performance, and value (perf/$).
On your watch, phone, or Mac desktop you generally have no choice of OS and not much control over RAM, storage, GPU performance, etc. You can't have ECC, you can't expand the RAM, can't have 4x M.2 drives, and often can't repair them. Sure, you can max out an M2 Ultra's RAM, but it's going to be pricey.
Do you want Linux (Asahi is trying, but currently only supports M1/M2)? FreeBSD? ECC memory? 5 disks of spinning rust for ZFS? How about a desktop with 96GB of RAM, a fast GPU with 16GB, and 12 fast cores (Zen 5) for $1500?
So far ARM desktops are either crazy expensive, very limited (Apple), or slow (Qualcomm SXE). If you want to move up to workstation/server class, the AMD Siena, Genoa, and Turin parts are pretty compelling compared to their ARM competitors. Say you need a ton of RAM or high memory bandwidth: you can get the Epyc 9115 for $750, motherboards are similarly priced, and you can have 12 64-bit-wide DDR5 DIMMs (actually 24 32-bit-wide memory channels) for whatever your memory-intensive needs are.
I'm all for ARM, and have wanted to buy a Mac Studio, but just couldn't justify it compared to a desktop PC that had better support for Linux, better support for numerous LLM stacks, more flexibility, and should be relatively easy to repair and keep running for a decade or so, like my last desktop.
That all sounds like effects caused by various companies' policies, not things caused by the ISA. I.e. it's Intel and AMD selling well-documented general-purpose parts to anyone vs. Arm and Qualcomm selling licences and undocumented, highly integrated parts to Samsung and Apple, not x86 vs. RISC.
Probably also IBM, for kicking off the PC platform in the first place, where anyone could produce compatible parts. If IBM had done that with a 68k instead, it would be 68k instead of x86.
I strongly agree that in theory these things are unrelated, and there's no hard reason someone couldn't make, say, a bunch of PC models and socketed ARM processors that all used nice standardized interfaces and had the same flexibility as x86... but in practice if you want an open platform today you almost certainly want x86. When some alternative gets its act together I'll be thrilled to use it, but we aren't there today.
While I don't think ARM = the Apple way, we see that nowadays no one would create a component ecosystem like the one that grew up around x86. This would be the death of personal computing; we would just own appliances.
> Is there any solid argument for the value of x86 in desktop computing?
You do not depend on one vendor (hello, Qualcomm), the architecture is a bit more open and standardised than ARM, and it is more expandable.
ARM seems to optimise for power consumption, x86 for speed.
And backward compatibility. Running x86 games (or CAD/CAE) on ARM is "challenging".
> but is there any significant value proposition besides compatibility
...Oh, is that all? Do you know how many things in computing have died because they didn't worry about compatibility? IMHO x86 could stay competitive even if it had nothing but compatibility to offer, though it also wins in other places.
> Performance is often stated as an advantage of x86, but performance per what? Per Watt? Hour? Dollar? Chip size?
Per machine it usually wins, especially on single-thread but also at least sometimes multi-thread, and if you're actually using it (i.e. not mostly sitting idle) then x86 does very well on perf per watt.
If we're ignoring compatibility issues, I suspect the quality of the integrated graphics may matter more to the average desktop user than the CPU architecture.
The biggest obstacle right now is that for any reasonably big benchmark, Nova will never finish, as the GC cannot run while JavaScript is running, and in a big benchmark JS is always running.
I've started a large-scale work to make the engine safe for interleaved garbage collection, but it's a ton of work and will take some time unfortunately. Once it is done, I will start doing benchmarks and seeing what takes time and where.
From small-scale benchmarks I already know that our JS Value comparisons take too much time, our object property lookups are really expensive on larger objects (as it's a simple linear search), and our String interning is very slow (as it too is a dumb-as-rocks linear search).
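For anyone unfamiliar with why that hurts, here's a toy illustration in Python of the interning cost (Nova itself is written in Rust, so this is purely to show the shape of the problem, not its code): a linear-scan intern table degrades to O(n) per call, while a hash-map-backed one stays roughly O(1).

```python
# Toy illustration only. The linear version scans every previously interned
# string on each call; the hashed version keeps a dict alongside the list
# so repeat lookups stay ~O(1).

class LinearInterner:
    def __init__(self):
        self.strings: list[str] = []

    def intern(self, s: str) -> int:
        for i, existing in enumerate(self.strings):  # O(n) scan per call
            if existing == s:
                return i
        self.strings.append(s)
        return len(self.strings) - 1

class HashInterner:
    def __init__(self):
        self.strings: list[str] = []
        self.index: dict[str, int] = {}

    def intern(self, s: str) -> int:
        i = self.index.get(s)  # O(1) expected lookup
        if i is None:
            i = len(self.strings)
            self.strings.append(s)
            self.index[s] = i
        return i
```

The linear-search property lookup described above has the same shape: every property access on a large object ends up walking the whole property list.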