densh's comments

Great summary and I think your argument is sound.


> As a blind person, AI has changed my life.

Something one doesn't see in news headlines. Happy to see this comment.


Like many others, I too would very much like to hear about this.

I taught our entry-level calculus course a few years ago and had two blind students in the class. The technology available for supporting them was abysmal then -- the toolchain for typesetting math for screen readers was unreliable (and anyway very slow), for braille was non-existent, and translating figures into braille involved sending material out to a vendor and waiting weeks. I would love to hear how we may better support our students in subjects like math, chemistry, physics, etc, that depend so much on visualization.


For a physical view on this see:

https://www.reddit.com/r/openscad/comments/1p6iv5y/christmas...

The creator, https://www.reddit.com/user/Mrblindguardian/ has asked for help a few times in the past (I provided feedback when I could), but hasn't needed to as often of late, presumably due to using one or more LLMs.


I did a maths undergrad degree and the way my blind, mostly deaf friend and I communicated was using a stylized version of TeX markup. I typed on a terminal and he read / wrote on his braille terminal. It worked really well.


Thanks! Did you communicate in "raw" TeX, or was it compiled / encoded for braille? Can you point me at the software you used?


Yes, mostly raw TeX, just plain ascii - not specially coded for Braille. This was quite a long time ago, mid 1980's, so not long after TeX had started to spread in computer science and maths communities. My friend was using a "Versa Braille" terminal hooked via a serial port to a BBC Micro running a terminal program that I'd written. I cannot completely remember how we came to an understanding of the syntax to use. We did shorten some items because the Versa Braille only had 20 chars per "line".
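
For a flavour of what actually went over the wire: formulas were just typed as plain-ASCII TeX markup, roughly like the lines below (an illustrative example, not necessarily our exact shorthand).

    % Plain-text TeX as typed at the terminal -- no Braille-specific encoding.
    % The quadratic formula:
    $$ x = {-b \pm \sqrt{b^2 - 4ac} \over 2a} $$
    % The derivative as a limit:
    $$ f'(x) = \lim_{h \to 0} {f(x+h) - f(x) \over h} $$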

He is still active and online and has a contact page see https://www.foneware.net. I have been a poor correspondent with him - he will not know my HN username. I will try to reach out to him.


Now that I've been recalling more memories of this, I do remember there being encoding or "escaped" character issues - particularly with brackets and parentheses.

There was another device between the BBC Micro and the "Versa Braille" unit. The interposing unit was a matrix switch that could multiplex between different serial devices - I now suspect it might also have been doing some character escaping / translation.

For those not familiar with Braille, it uses a 2x3 array (6 bits) to encode everything. The "standard" (ahem, by country) Braille encodings are super-sub-optimal for pretty much any programming language or mathematics.

After a bit of memory refreshing: in "standard" Braille you only get ( and ) - and they both encode to the same 2x3 pattern! So in Braille ()() and (()) would "read" as the same thing.
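
A tiny sketch of that ambiguity (assuming the pre-UEB English literary Braille convention as I remember it, where both parentheses map to the same dots-2356 cell):

    # Both parentheses share one 6-dot cell in (pre-UEB) English literary Braille.
    PAREN_CELL = frozenset({2, 3, 5, 6})  # dots 2-3-5-6
    to_braille = {"(": PAREN_CELL, ")": PAREN_CELL}

    def transcribe(text):
        return [to_braille[ch] for ch in text]

    # Two different expressions, one identical Braille transcription:
    print(transcribe("()()") == transcribe("(())"))  # True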

I now understand why you were asking about the software used. I do not recall how we completely worked this out. We had to have added some sort of convention for scoping.

I now also remember that the Braille terminal aggressively compressed whitespace. My friend liked to use (physical) touch to build a picture, but it was not easy to send spatial / line-by-line information to the Braille terminal.

Being able to rely on spatial information has always stuck with me. It is for this reason I've always had a bias against Python, it is one of the few languages that depends on precise whitespace for statement syntax / scope.


Thank you so much for all this detail. This is very interesting & quite helpful, and it's great you were able to communicate all this with your friend.

For anyone else interested: I wanted to be able to typeset mathematics (actual formulas) for the students that's as automated as possible. There are 1 or 2 commercial products that can typeset math in Braille (I can't remember the names but can look them up) but not priced for individual use. My university had a license to one of them but only for their own use (duh) and they did not have the staff to dedicate to my students (double duh).

My eventual solution was to compile latex to html, which the students could use with a screen reader. But screen readers were not fully reliable, and very, very slow to use (compared to Braille), making homework and exams take much longer than they need to. I also couldn't include figures this way. I looked around but did not find an easy open source solution for converting documents to Braille. It would be fantastic to be able to do this, formulas and figures included, but I would've been very happy with just the formulas. (This was single variable calculus; I shudder to think what teaching vector calc would have been like.)
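
(For concreteness, one way to do the LaTeX-to-HTML step nowadays -- I'm not claiming this is the exact toolchain I used -- is pandoc with MathML output, which screen readers generally voice better than images of formulas. A rough sketch, with notes.tex as a placeholder filename:)

    # Rough sketch: convert a LaTeX file to standalone HTML with MathML.
    # Assumes pandoc is installed; "notes.tex" is a placeholder filename.
    import subprocess

    subprocess.run(
        ["pandoc", "notes.tex", "--standalone", "--mathml", "-o", "notes.html"],
        check=True,
    )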

FYI Our external vendor was able to convert figures to printed Braille, but I imagine that's a labor intensive process.

Partway through the term we found funding for dedicated "learning assistants" (an undergraduate student who came to class and helped explain what's going on, and also met with the students outside of class). This, as much as or more than any tech, was probably the single most impactful thing.


+1 and I would be curious to read and learn more about it.


A blind comedian / TV personality in the UK has just done a TV show on this subject - I haven't seen it, but here's a recent article about it: https://www.theguardian.com/tv-and-radio/2025/nov/23/chris-m...


Chris McCausland is great. A fair bit of his material _does_ reference his visual impairment, but it's genuinely witty and sharp, and it never feels like he's leaning on it for laughs/relying on sympathy.

He did a great skit with Lee Mack at the BAFTAs 2022[0], riffing on the autocue the speakers use for announcing awards.

[0]: https://www.youtube.com/watch?v=CLhy0Zq95HU


Hilariously, he beat the other teams in the “Say What You See” round (yes, really) of last year’s Big Fat Quiz. No AI involved.

https://youtu.be/i5NvNXz2TSE?t=4732


Haha that's great!

I'm not a fan of his (nothing against him, just not my cup of tea when it comes to comedy and mostly not been interested in other stuff he's done), but the few times I have seen him as a guest on shows it's been clear that he's a generally clever person.


I remembered he was once a techie, and Wikipedia confirms that he (Chris McCausland) has a BSc Honours in Software Engineering.

https://en.wikipedia.org/wiki/Chris_McCausland


If you want to see more on this topic, check out (google) the podcast I co-host called Accessibility and Gen. AI.


Honestly, that’s such a great example of how to share what you do on the interwebs. Right timing, helpful and on topic. Since I’ve listened to several episodes of the podcast, I can confirm it definitely delivers.


Thanks for the recommendation, just downloaded a few episodes!


Same! @devinprater, have you written about your experiences? You have an eager audience...


I suppose I should write about them. A good few will be about issues with the mobile apps and websites for AI, like Claude not even letting me know a response is available to read, let alone sending it to the screen reader to be read. It's a mess, but if we blind people want it, we have to push through inaccessibility to get it.


`Something one doesn't see` - no pun intended


I must be wrong, but can’t help but harbor a mild suspicion that your use of sight metaphors is not coincidental.


I have to believe you used the word see twice ironically.


What other accessibility features do you wish existed in video AI models? Real-time vs post-processing?


Mainly realtime processing. I play video games, and would love to play something like Legend of Zelda and just have the AI going, then ask it to "read the menu options as I move between them," and have it speak each menu option as the cursor moves to it. Or when navigating a 3D environment, ask it to describe the surroundings, then ask it how to get to a place or object and have it guide me there. That could be useful in real-world scenarios too.


Weird question, but have you ever tried text adventures? It seems like it's inherently the ideal option, if you can get your screen reader going.


Yep, they're nice. There are even online versions.


> Something one doesn't see in news headlines.

I hope this wasn't a terrible pun


No pun intended but it's indeed an unfortunate choice of words on my part.


My blind friends have gotten used to it and hear/receive it not as a literal “see” any more. They would not feel offended by your usage.


Nah, best pun ever!


Have any studies been done on the use of newer or less popular programming languages in the era of LLMs? I'd guess that the relatively low number of examples and the smaller overall amount of publicly available code in a particular language mean that LLM output is less likely to be good.

If the hypothesis is correct, it sets an incredibly high bar for starting a new programming language today. Not only does one need to develop a compiler, runtime, libraries, and IDE support (a tall order by itself), but one must also provide enough data for LLMs to be trained on, or even a custom fine-tuned snapshot of one of the open models for the new language.


Research takes some time, both to do but also to publish. In my area (programming languages), we have 4 major conferences a year, each with like a 6-to-8-month lag-time between submission and publication, assuming the submission is accepted by a double-blind peer review process.

I don't work in this area (I have a very unfavorable view of LLMs broadly), but I have colleagues who are working on various aspects of what you ask about, e.g., developing testing frameworks to help ensure output is valid or having the LLMs generate easily-checkable tests for their own generated code, developing alternate means of constraining output (think of, like, a special kind of type system), using LLMs in a way similar to program synthesis, etc. If there is fruit to be borne from this, I would expect to start seeing more publications about it at high-profile venues in the next year or two (or next week, which is when ICFP and SPLASH and their colocated workshops will convene this year, but I haven't seen the publications list to know if there's anything LLM-related yet).


ICFP and SPLASH are this week, actually! Here's the program website for anyone interested: https://conf.researchr.org/program/icfp-splash-2025/program-...

(I have a pretty unfavorable view of LLMs myself, but) a quick search for "LLM" does find four sessions of the colocated LMPL workshop that are explicitly about LLMs and AI agents, plus a spread of other work across the schedule. ("LMPL" stands for "Language Models and Programming Languages", so I guess that's no surprise.)


Well, I did post my comment last week when "next week" was accurate. ;) But thanks for linking the program!


Oh! The thread must have been boosted on a resubmission, or something, because for me it shows your comment as having only been posted yesterday D:



Do most people consider it important for LLMs to be able to generate code for the language they use? I think I'd consider it a positive if they can't.


Just anecdotally, I'm more productive in languages that I know _and_ which have good LLM understanding, than in languages that I'm just experienced with.

As much as I dislike Go as a language, LLMs are very good at it. Java too somewhat, Python a fair amount but less (and LLMs write Python I don't like). Swift however, I love programming in, but LLMs are pretty bad at it. We also have an internal config language which our LLMs are trained on, but which is complex and not very ergonomic, and LLMs aren't good at it.


It's not only the amount of code but also the quality of the available code. If a language has a low barrier to entry (e.g. python, javascript), there will be a lot of beginner code. If a language has good static analysis and type checking, the available code is free of certain error classes (e.g. Rust, Scala, Haskell).

I see that difference in llm generated code when switching languages. Generated rust code has a much higher quality than python code for example.


> Not only does one need to develop compiler, runtime, libraries, and IDE support (which is a tall order by itself)

CC can do that by itself in a loop, in ~3mo apparently. https://cursed-lang.org/

I know it's a meme project, but it's still impressive. And CC is at the point where you can take the repo of that language, ask it to "make it support emoji variables", and $5 later it works. So yeah ... pretty impressive that we're already there.


Not really, I’m into learning new languages but couldn’t care less about LLMs or IDEs.

And 99% of the time tooling isn’t built by the same person that builds the language compiler


I just can't believe how dysfunctional something as basic as search over settings is now.


Specifically?


For example, let's say I want to get to display settings from search. I type 'monitor' into search since I've forgotten what it's called. The first results: Accessibility, Privacy & Security, Control Center, and only fourth comes Displays - it's the eighth entry if you count sub-categories.

I usually google where a particular setting lives now, since I don't use the exact same words and the settings search is very literal.


Some details on the Swiss side:

There are two variations of the B permit one can get. An unrestricted B permit isn't tied to a specific employer and provides a path toward permanent residence (C permit) within five years for EU citizens or ten years for non-EU citizens. Based on my experience, EU citizens almost always get an unrestricted permit and are treated relatively well by the immigration process: at their first application, they receive a five-year B permit, and at the first renewal five years later, they automatically get a C permit. As an EU citizen you just need to find a job, and your right to work is essentially unrestricted.

The non-EU path is quite different. A non-EU citizen only gets an unrestricted B permit if they prove they have special skills that are not currently available on the local job market. There is a yearly quota for such permits. One can also be unlucky and get an L permit, which is for temporary work only. Moreover, a restricted B permit requires yearly renewal, with a demonstration of ongoing employment at each renewal.

If you get a restricted B permit (or L), you don't have any direct path to a C permit, no matter how many years you've lived in Switzerland. You can complete your bachelor's, master's, and PhD degrees and continue working for a university as a contractor afterward, and still not be eligible for the path toward a C permit after over a decade of living in the country. To get a C permit, the last two years prior to the application must have been on an unrestricted B permit, working a full-time, unlimited-term job contract. The change to an unrestricted B permit requires you to have become a "special talent" during those prior years; otherwise, it won't be granted.


For anyone interested in playing with distributed systems, I'd really recommend getting a single machine with the latest 16-core CPU from AMD and just running 8 virtual machines on it: 4 hyperthreads pinned per VM, 1/8 of the total RAM per VM, and a virtual network between them within your virtualization software of choice (such as Proxmox).

And suddenly you can start playing with distributed software, even though it's running on a single machine. For resiliency tests you can unplug one machine at a time with a single click. It will annihilate a Pi cluster in Perf/W as well, and you don't have to assemble a complex web of components to make it work. Just a single CPU, motherboard, m.2 SSD, and two sticks of RAM.
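
As a rough sketch of the VM creation step (assuming Proxmox's qm CLI; the VM IDs, storage name, bridge, and disk size below are placeholders to adjust for your install):

    # Rough sketch: print `qm create` commands for 8 VMs, each with 4 cores
    # and 1/8 of the host RAM. Run the printed commands on the Proxmox host.
    # Storage ("local-lvm"), bridge ("vmbr0"), and disk size are placeholders.
    TOTAL_RAM_MB = 128 * 1024  # e.g. a 128 GB host
    N_VMS = 8

    for i in range(N_VMS):
        vmid = 101 + i
        print(
            f"qm create {vmid} --name node{i} "
            f"--cores 4 --memory {TOTAL_RAM_MB // N_VMS} "
            f"--net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32"
        )
    # Newer Proxmox releases also support an --affinity option if you want to
    # pin each VM to a specific set of host threads.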

Naturally, using a high-core-count machine without virtualization will get you the best overall Perf/W in most benchmarks. What's also important, but often not highlighted in benchmarks, is idle power draw, especially if you'd like to keep your cluster running and only use it occasionally.


I've been saying this for years. When the last Raspberry Pi shortage happened, people were scrambling to get them for building these toy clusters, and it's such a shame. The Pi was made for pedagogy, but I feel like most of them are wasted.

I run a K8s "cluster" on a single xcp-ng instance, but you don't even really have to go that far. Docker Machine could easily spin up docker hosts with a single command, but I see that project is dead now. Docker Swarm I think still lets you scale up/down services, no hypervisor required.


> I've been saying this for years. When the last Raspberry Pi shortage happened, people were scrambling to get them for building these toy clusters, and it's such a shame. The Pi was made for pedagogy, but I feel like most of them are wasted.

You're describing people using RPis to learn distributed systems, and you conclude that these RPis are wasted because RPis were made for pedagogy?

> I run a K8s "cluster" on a single xcp-ng instance, but you don't even really have to go that far.

That's perfectly fine. You do what works for you, just like everyone else. How would you handle someone else accusing your computer resources of being wasted?


I’ve learned so much setting up a pi cluster. There is something so cool about seeing code run across different pieces of hardware.


The point was you don't need to wait for 8 Pis to become available when most people can get going straight away with what they already have.

If you want to learn physical networking or really need to "see" things happening on physically separate machines just get a free old PC from gumtree or something.


> The point was you don't need to wait for 8 Pis to become available when most people can get going straight away with what they already have.

You also don't need RPis to learn anything about programming, networking, electronics, etc.

But people do it anyways.

I really don't see what point anyone thinks they are making regarding pedagogy. RPis are synonymous with tinkering, regardless of how you cut it. Distributed systems too.


I think you misread my comment; maybe it's clearer if I say "(admittedly) the Pi is meant for pedagogy (however) I feel like most of them are wasted".


No need for so much CPU power, any old quad core would work.


Old quad core won't have all the virtualisation extensions.


> Old quad core won't have all the virtualisation extensions.

Intel's first quad core was Kentsfield in 2006. It supports VT-x. AMD's first quad core likewise supports AMD-V. The newer virtualization extensions mostly just improve performance a little or do things you probably won't use anyway like SR-IOV.


Ivy Bridge is 13 years old today. You'd have to go out of your way to buy something older than that in 2025.


Virtualization existed long before virtualization instructions. Not strictly necessary.


An old Xeon then.


Aren’t newer CPUs especially AMDs more energy efficient?


Newer CPUs have significantly better performance per watt under load, essentially by being a lot faster while using a similar amount of power. Idle CPU power consumption hasn't changed much in 10+ years simply because by that point it was already a single digit number of watts.

The thing that matters more than the CPU for idle power consumption is how efficient the system's power supply is under light loads. The variance between them is large and newer power supplies aren't all inherently better at it.


Also worth noting, as this is a common point for the homelabbers out there, fans in surplus enterprise hardware can actually be a significant source of not just noise, but power usage, even at idle.

I remember back in the R710 days (circa 2008 and Nehalem/Westmere CPUs) that under roughly 30% CPU load, most of your power draw came from fans that you couldn't spin down below a certain threshold without a firmware/iDRAC script, as well as what you mentioned about those PSUs being optimized for high sustained loads and thus being inefficient at near-idle and low usage.

IIRC the system idle power profile on those was only about 15% CPU (combined for both CPUs), with the rest being fans, RAM, the various other vendor stuff (iDRAC, PERC, etc.), and low-load PSU inefficiencies.

Newer hardware has gotten better, but servers are still generally engineered for sustained loads above 50% rather than below, and those fans can still easily pull a dozen-plus watts each even at very low usage (depending on the exact model). So, point being, splitting hairs over a dozen watts or so between CPUs is a bit silly when your power floor from fans and PSU inefficiencies alone puts you at 80W+ draw anyway, not to mention the other components (NIC, drives, storage controller, OoB, RAM, etc.). This is primarily relevant for surplus servers, but a lot of people building systems at home for this use case often turn to, or are recommended, these servers, so I just wanted to add this food for thought.


Yeah, the server vendors give negative fucks about idle power consumption. I have a ~10 year old enterprise desktop quad core with a full-system AC power consumption of 6 watts while powered on and idle. I've seen enterprise servers of a similar vintage -- from the same vendor -- draw 40 watts when they're off.


If the point is a multi-tasking sandbox, not heavy/sustained data-crunching, those old CPUs with boosting turned off or a mild underclock/undervolt (or an L spec, which comes with that out of the box) really aren't any more power hungry than a newer Ryzen, unless you intend to run whatever you buy at high load for long periods. Yeah, on paper it could still be a double-digit percentage difference, but in reality we're talking a difference of 10W or 20W if you're not running stuff above 50% load for sustained periods.

Again, lots of variables there and it really depends on how heavily you intend to use/rely on that sandbox as to what's the better play. Regional pricing also comes into it.


Yeah, this is how I practiced Postgres hot standby and read replicas.

It was also how I learned to setup a Hadoop cluster, and a Cassandra cluster (this was 10 years ago when these technologies were hot)

Having knowledge of these systems and being able to talk about how I set them up and simulated recovery directly got me jobs that 2x'd and then 3x'd my salary. I would highly recommend that all medium-skilled developers set up systems like this and get practicing if they want to move up to the next level.


Honestly why do you need so much cpu power? You can play with distributed systems just by installing Erlang and running a couple of nodes on whatever potato-level linux box you have laying around, including a single raspberry pi.


Tangentially related: I really expected running old MPI programs on stuff like the AMD multi-chip workstation packages to become a bigger thing.


I actually worked with some MPI code way back. What MPI programs are you referring to?


I don't know, but when I was playing with finite difference code as an undergrad in Physics, all of the docs I could find (it was a while ago, though) assumed that I was going to use MPI to run a distributed workload across the university's supercomputer. My needs were less, so I just ran my Boost.Thread code on the four cores of one node.

What if you had a single server with a zillion cores in it? Maybe you could take some 15 year old MPI code and run it locally -- it'd be like a mini supercomputer with an impossibly fast network.


I’m not thinking of one code in particular. Just observing that with multi-chiplet designs, even inside a CPU package we’re already talking over a sort of little internal network anyway. Might as well use code that was designed to run on a network, right?


Yes, but this is boring. Saying this as the owner of a home server running Proxmox.


Hey, as someone who spent a few years reimplementing another language to decouple it from the JVM (Scala JVM -> Scala Native), here are some pitfalls to avoid:

- Don't try to provide a backwards-compatible subset of the JVM APIs. While this might seem like a tempting way to support very important library X with just a bit of work, I'd rather see new APIs that are only possible with your language / runtime. Otherwise you might end up stuck in a never-ending stream of requests to add one more JVM feature to get yet another library from the original JVM language running. Focus on providing your own unique APIs or bindings to native projects that might not be easy to do elsewhere.

- Don't implement your own GC, just use mmtk [1]. It takes a really long time to implement something competitive, and mmtk already has an extensible and pluggable GC design that gets some of the best performance available today [2] without much effort on your end.

- Don't underestimate the complexity and importance of multi-threading and concurrency. Try to think of supporting some form of it early, or you might get stuck in a single-threaded world forever (see CPython). Maybe you don't do shared-memory multi-threading, and then it could be quite easy to implement (as in Erlang). No shared memory also means no shared heap, which makes the GC's life much easier.

- Don't spend too much time benchmarking and optimizing single-threaded performance against the JVM as a baseline. If you don't have a compelling use case (usually due to unique libraries), the performance might not matter enough for users to migrate to your language. When you do optimize, I'd rather see fast startup and an interactive environment (think V8) over slow startup that only becomes efficient after a super long warmup (like the JVM).

I see that jank is already doing at least some of these things right based on the docs, so this message might be more of a dump of mistakes I've made previously in this space.

[1]: https://github.com/mmtk/mmtk-core

[2]: https://dl.acm.org/doi/pdf/10.1145/3519939.3523440


> Don't try to provide backwards compatible subset of JVM APIs.

Yeah, jank doesn't do much with JVM APIs or the JVM at all. We have our own implementation of the compiler and runtime. It has similarities to Clojure's design, only because the object model somewhat demands that.

> Don't implement your own GC, just use mmtk [1].

Yep, already the plan. Currently using Boehm, but MMTK is the next upgrade.

> Don't underestimate complexity and importance of multi-threading and concurrency.

Clojure aids this in having STM, immutable data structures, etc. However, there are some key synchronization points and I do need to audit all of them. jank doesn't have multi-threading support yet, but we will _not_ go the way of Python. jank is Clojure and Clojurists expect sane multi-threading.

> Don't spend too much time benchmarking and optimizing single threaded performance against JVM as performance baseline.

This year, not much optimization has been done at all. I did some necessary benchmarking early on, to aid in some design decisions, but I follow this mantra:

1. Make it work

2. Make it correct

3. Make it fast

I'm currently on step 2 for most of jank. Thanks for sharing the advice!


Very cool project and I think you are doing it right. Best of luck with getting it off the ground!


But is it illegal to provide tools for decompilation? As in shooting people is illegal, but selling guns is not.


Shooting people is not illegal in the US -- I'm not sure this is the best analogy or there will be huge limitations when discussing decompilation efforts.

Plenty of people are shot or killed lawfully with firearms.


I doubt that will happen for a tool with an arbitrary use case of assisting in research; however, some projects related to reverse engineering have been censored under the DMCA takedown regime.


I update largely based on non-performance criteria:

- new display tech

- better wireless connectivity

- updated protocols on ports (e.g., support for higher-resolution displays and newer DisplayPort/HDMI versions)

- better keyboard

- battery life

Once a few of those changes accumulate over 4+ generations of improvements that’s usually the time for me to upgrade.

My laptops so far: the 2008 plastic MacBook, a 2012 MacBook Pro, a 2015 MacBook Pro, and an M1 Pro 16 currently. I skipped the 2016-2020 generation, which was a massive step backwards on my upgrade criteria, and upgraded to the 2015 model in 2016 once I realized Apple had lost their marbles and had no near-term plans for making a usable laptop.

Also getting a maxed out configuration really helps the longevity.


You should check out The Three-Body Problem (the book, not the mediocre Netflix adaptation).


I love that book

