Hacker News | striking's comments

My guess is that the submitter is automated. It's not the first time their post title has been truncated by the text limit without their editing it.

A DS set to Auto mode will boot to the cartridge (and you can reflash the firmware to skip the health and safety screen). From there the OS is replaced with whatever is on the cart. A flashcart with the right shell will boot right into whatever app you want (and you can soft reset the console with a key combination to switch apps).

3DSes require a little more work and have a longer boot chain, but it's been thoroughly broken all the way to the bootstrapping process so you can use whichever firmware version and whatever patches you like with enough effort.


Once a DS has been flashed (which skips the health and safety screen), it also disables signature verification for DS download play, so you can beam homebrew directly to your DS's home screen with a wifi card. But this is an awkward process that most people don't actually do with their original DSes, as it requires putting tinfoil over a toothpick and jamming it into a hole next to the battery to close the flash write jumper. I think the DS's crypto has also been defeated, but I can't find any documentation of arbitrary download play on unflashed DSes. There also seem to be no .nds signing keys in the leaks, from what I can tell.

Thanks for this! I wish there were more cross-comparisons like this out there of what it is actually like to use some of these frameworks, the note on Django being a little less magic than Rails makes me genuinely interested in it.

if you want "less magic than rails" check out ecto, i would say it has less magic than django

It's not just brain atrophy, I think. I think part of it is that we're actively making a tradeoff to focus on learning how to use the model rather than learning how to use our own brains and work with each other.

This would be fine if not for one thing: the meta-skill of learning to use the LLM depreciates too. Today's LLM is gonna go away someday, the way you have to use it will change. You will be on a forever treadmill, always learning the vagaries of using the new shiny model (and paying for the privilege!)

I'm not going to make myself dependent, let myself atrophy, run on a treadmill forever, for something I happen to rent and can't keep. If I wanted a cheap high that I didn't mind being dependent on, there's more fun ones out there.


> let myself atrophy, run on a treadmill forever, for something

You're lucky to afford the luxury not to atrophy.

It's been almost 4 years since my last software job interview and I know the drill when it comes to preparing for one.

Long before LLMs, my skills were already naturally atrophying in my day job.

I remember the good old days of J2ME, writing everything from scratch. Or writing some graph editor for university, or some speculative Huffman coding algorithm.

That kept me sharp.

But today I feel like I'm living in that Netflix series about people being in Hell and the Devil tricking them into thinking they're in Heaven while tormenting them: how on planet Earth do I keep sharp with java, streams, virtual threads, rxjava, tuning the jvm, react, kafka, kafka streams, aws, k8s, helm, jenkins pipelines, CI-CD, ECR, istio issues, in-house service discovery, hierarchical multi-regions, metrics and monitoring, autoscaling, spot instances and multi-arch images, multi-az, reliable and scalable yet as cheap as possible, yet as cloud native as possible, hazelcast and distributed systems, low level postgresql performance tuning, apache iceberg, trino, various in-house frameworks and idioms over all of this? Oh, and let's not forget the business domain, coding standards, code reviews, mentorships and organizing technical events. Also, it's 2026 so nobody hires QA or scrum masters anymore so take on those hats as well.

So LLMs it is, the new reality.


This is a very good point. Years ago working in a LAMP stack, the term LAMP could fully describe your software engineering, database setup and infrastructure. I shudder to think of the acronyms for today's tech stacks.

And yet many of the same people who lament the tooling bloat of today will, in a heartbeat, make lame jokes about PHP. Most of them aren't even old enough to have ever done anything serious with it, or to have seen it in action beyond WordPress or some spaghetti-code one-pager they had to refactor at their first job. Then they show up on HN with a vibe-coded side project or a blog post about how they achieved a 15x performance boost by inventing server-side rendering.

Highly relevant username!

I try :)

Ya I agree it's totally crazy.... but, do most app deployments need even half that stuff? I feel like most apps at most companies can just build an app and deploy it using some modern paas-like thing.

> I feel like most apps at most companies can just build an app and deploy it using some modern paas-like thing.

Most companies (in the global, not SV sense) would be well served by an app that runs in a Docker container in a VPS somewhere and has PostgreSQL and maybe Garage, RabbitMQ and Redis if you wanna get fancy, behind Apache2/Nginx/Caddy.
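To make that concrete, the entire deployment can be one docker-compose.yml on that VPS. A minimal sketch (the image names, credentials and ports below are placeholders, not a recommendation for any specific setup):

    # docker-compose.yml - illustrative only
    services:
      app:
        image: registry.example.com/myapp:latest   # hypothetical app image
        environment:
          DATABASE_URL: postgres://app:secret@db:5432/app
        depends_on: [db, redis]
      db:
        image: postgres:16
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: secret
          POSTGRES_DB: app
        volumes: ["pgdata:/var/lib/postgresql/data"]
      redis:
        image: redis:7
      caddy:
        image: caddy:2
        ports: ["80:80", "443:443"]
        volumes: ["./Caddyfile:/etc/caddy/Caddyfile"]
    volumes:
      pgdata:

One "docker compose up -d" and a DNS record pointing at the box covers most of the "ops" such a company ever needs.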

But obviously that’s not Serious Business™ and won’t give you zero downtime and high availability.

Though tbh most mid-size companies would also be okay with Docker Swarm or Nomad and the same software clustered and running behind HAProxy.

But that wouldn’t pad your CV so yeah.


> Most companies (in the global, not SV sense) would be well served by an app that runs in a Docker container in a VPS somewhere and has PostgreSQL and maybe Garage, RabbitMQ and Redis if you wanna get fancy, behind Apache2/Nginx/Caddy.

That’s still too much complication. Most companies would be well served by a native .EXE file they could just run on their PC. How did we get to the point where applications by default came with all of this shit?


When I was in primary school, the librarian used a computer this way, and it worked fine. However, she had to back it up daily or weekly onto a stack of floppy disks, and if she wanted to serve the students from the other computer on the other side of the room, she had to restore the backup on there, and remember which computer had the latest data, and only use that one. When doing a stock-take (scanning every book on the shelves to identify lost books), she had to bring that specific computer around the room in a cart. Such inconveniences are not insurmountable, but they're nice to get rid of. You don't need to back up a cloud service and it's available everywhere, even on smaller devices like your phone.

There's an intermediate level of convenience. The school did have an IT staff (of one person) and a server and a network. It would be possible to run the library database locally in the school but remotely from the library terminals. It would then require the knowledge of the IT person to administer, but for the librarian it would be just as convenient as a cloud solution.


I think the 'more than one user' alternative to a 'single EXE on a single computer' isn't the multilayered pie of things that KronisLV mentioned, but a PHP script[0] on an apache server[0] you access via a web browser. You don't even need a dedicated DB server as SQLite will do perfectly fine.

[0] or similarly easy to get running equivalent
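To give a (made-up) sense of how little that has to be, the whole "library database" app could start as one file along these lines; the schema and form fields here are purely illustrative:

    <?php
    // books.php - a minimal sketch of the "PHP script + SQLite" approach;
    // table and field names are made up for illustration
    $db = new PDO('sqlite:' . __DIR__ . '/library.db');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $db->exec('CREATE TABLE IF NOT EXISTS books (
        id INTEGER PRIMARY KEY, title TEXT, author TEXT)');

    // add a book when the form is submitted
    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        $stmt = $db->prepare('INSERT INTO books (title, author) VALUES (?, ?)');
        $stmt->execute([$_POST['title'] ?? '', $_POST['author'] ?? '']);
    }

    // list everything
    foreach ($db->query('SELECT title, author FROM books ORDER BY title') as $book) {
        printf("<p>%s by %s</p>\n",
            htmlspecialchars($book['title']), htmlspecialchars($book['author']));
    }

Backups are one file (library.db) you can copy anywhere, which also covers the floppy-disk problem from the librarian story upthread.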


> but a PHP script[0] on an apache server[0] you access via a web browser

I've seen plenty of those as well - nobody knows exactly how things are set up, sometimes dependencies are quite outdated and people are afraid to touch the cPanel config (or however it's set up). Not that you can't do good engineering with enough discipline, it's just that Docker (or most methods of containerization) limits the blast radius when things inevitably go wrong and at least tries to give you some reproducibility.

At the same time, I think that PHP can be delightfully simple and I do use Apache2 myself (mod_php was actually okay, but PHP-FPM also isn't insanely hard to set up), it's just that most of my software lives in little Docker containers with a common base and a set of common tools, so they're decoupled from the updates and config of the underlying OS. I've moved the containers (well, data + images) across servers with no issues when needed and have also reinstalled OSes and spun everything right back up.

Kubernetes is where dragons be, though.


> That’s still too much complication. Most companies would be well served by a native .EXE file they could just run on their PC

I doubt that.

As software has grown from solving simple personal computing problems (write a document, create a spreadsheet) to solving organizational problems (sharing and communication within and without the organization), it has necessarily spread beyond the .exe file and local storage.

That doesn't give a pass to overly complex applications doing a simple thing - that's a real issue - but to think most modern company problems could be solved with just a local executable program seems off.


It can be like that, but then IT and users complain about having to update this .exe on each computer when you add new functionality or fix some errors. When you solve all major pain points with a simple app, "updating the app" becomes the top pain point, almost by definition.

> How did we get to the point where applications by default came with all of this shit?

Because when you give your clients instructions on how to set up the environment, they will ignore some of them, and then they install OracleJDK while you have tested everything under OpenJDK and you have no idea why the application is performing so much worse in their environment: https://blog.kronis.dev/blog/oracle-jdk-and-openjdk-compatib...

It's not always trivial to package your entire runtime environment unless you wanna push VM images (which is in many ways worse than Docker), so Docker is like the sweet spot for the real world that we live in - a bit more foolproof, the configuration can be ONE docker-compose.yml file, it lets you manage resource limits without having to think about cgroups, as well as storage and exposed ports, custom hosts records and all the other stuff the human factor in the process inevitably fucks up.

And in my experience, shipping a self-contained image that someone can just run with docker compose up is infinitely easier than trying to get a bunch of Ansible playbooks in place.
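As a sketch of what that one file can capture (the image name, limits and host entries below are made up for illustration, not from any real product):

    # docker-compose.yml handed to the client - illustrative only
    services:
      myapp:
        image: vendor/myapp:1.4.2        # hypothetical published image
        mem_limit: 1g                    # resource limits without touching cgroups yourself
        cpus: "2.0"
        ports:
          - "8080:8080"                  # the only port the client needs to know about
        volumes:
          - ./data:/var/lib/myapp        # persistent data sits next to the compose file
        extra_hosts:
          - "license-server.internal:10.0.0.12"   # custom hosts record, also made up

Everything the client's environment tends to get wrong (JDK flavour, ports, file paths, host entries) is pinned in the image or in this one file, which is the foolproofing mentioned above.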

If your app can be packaged as an AppImage or Flatpak, or even a fully self-contained .deb, then great... unless someone also wants to run it on Windows or vice versa, or in any other environment that you didn't anticipate, or it has more dependencies than would be "normal" to include in a single bundle, in which case Docker still works at least somewhat.

Software packaging and dependency management sucks, unless we all want to move over to statically compiled executables (which I'm all for). Desktop GUI software is another can of worms entirely, too.


When I come into a new project and I find all this... "stuff" in use, often what I later find is actually happening with a lot of it is:

- nobody remembers why they're using it

- a lot of it is pinned to old versions or the original configuration because the overhead of maintaining so much tooling is too much for the team and not worth the risk of breaking something

- new team members have a hard time getting the "complete picture" of how the software is built and how it deploys and where to look if something goes wrong.


That was on NBC.

Businesses too. For two years it's been "throw everything into AI." But now that shit is getting real, are they really feeling so coy about letting AI run ahead of their engineering team's ability to manage it? How long will it be until we start seeing outages that just don't get resolved because the engineers have lost the plot?

From what I am seeing, no one is feeling coy, simply because of the cost savings that management is able to show the higher-ups and shareholders. At that level there's very little understanding of anything technical, and outages or bugs will simply get a "we've asked our technical resources to work on it". But everyone understands that spending $50 when you were spending $100 is a great achievement. That is, if you stop there and don't think about any downsides. Said management will then take the bonuses and disappear before the explosions start, with their resumes glowing about all the cost savings and team leadership achievements. I've experienced this first hand very recently.

Of all the looming tipping points whereby humans could destroy the fabric of their existence, this one has to be the stupidest. And therefore the most likely.

There really ought to be a class of professionals, like forensic accountants, who can show up in a corrupted organization and do a post-mortem on their management of technical debt.

How long until “the LLM did it” is just as effective as “AWS is down, not my fault”?

Never because the only reason that works with Amazon is that everyone is down at the exact same time.

Everyone will suffer from slop code at the same time.

Yeah but that's very different from an AWS outage. Everyone's website being down for a day every year or 2 is something that it's very hard to take advantage of as a competitor. That's not true for software that is just terrible all the time.

This, to me, is the point: LLMs can't be responsible for things. Responsibility sits with a human.

Why can LLMs not be responsible for things? (genuine question - I'm not certain myself).

because it doesn't have any skin in the game and can't be punished, and can't be rewarded for succeeding. Its reputation, career, and dignity are nonexistent.

On the contrary - the LLM has had its own version of "skin in the game" through the whole of its training. Reinforcement learning is nothing but that. Why is that less real than putting a person in prison? Is it because of the LLM itself, or because you don't trust the people selling it to you?

Are you claiming that LLMs are... sentient? Bold claim, Taylor.

This doesn't seem to have stopped anyone before.

Stopped anyone from doing what? Assigning responsibility to someone with nothing to lose, no dignity or pride, and immune from financial or social injury?

If you’re just a gladhander for an algorithm, what are you really needed for?

> It's not just brain atrophy, I think. I think part of it is that we're actively making a tradeoff to focus on learning how to use the model rather than learning how to use our own brains and work with each other.

I agree with the sentiment but I would have framed it differently. The LLM is a tool, just like code completion or a code generator. Right now we focus mainly on how to use a tool, the coding agent, to achieve a goal. This takes place at a strategic level. Prior to the inception of LLMs, we focused mainly on how to write code to achieve a goal. This took place at a tactical level, and required making decisions and paying attention to a multitude of details. With LLMs our focus shifts to a higher-level abstraction. Also, operational concerns change. When writing and maintaining code yourself, you focus on architectures that help you simplify some classes of changes. When using LLMs, your focus shifts to building context and helping the model implement its changes effectively. The two goals seem related, but are radically different.

I think a fairer description is that with LLMs we stop exercising some skills that are only required or relevant if you are writing your code yourself. It's like driving with an automatic transmission vs manual transmission.


Previous tools have been deterministic and understandable. I write code with emacs and can at any point look at the source and tell you why it did what it did. But I could produce the same program with vi or vscode or whatever, at the cost of some frustration. But they all ultimately transform keystrokes to a text file in largely the same way, and the compiler I'm targeting changes that to asm and thence to binary in a predictable and visible way.

An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now). So teams start cargo culting ways to deal with specific LLMs' idiosyncrasies and your domain knowledge becomes about a specific product that someone else has control over. It's like learning a specific office suite or whatever.


> An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now)

So basically, like a co-worker.

That's why I keep insisting that anthropomorphising LLMs is to be embraced, not avoided, because it gives much better high-level, first-order intuition as to where they belong in a larger computing system, and where they shouldn't be put.


> So basically, like a co-worker.

Arguably, though I don't particularly need another co-worker. Also co-workers are not tools (except sometimes in the derogatory sense).


Sort of, except it seems that the more the co-worker does the job, the more my ability to understand atrophies. So soon we'll all be that annoyingly ignorant manager saying, "I don't know, I want the button to be bigger". Yay?

Only if we're lucky and the LLMs cease being replaced with improved models.

Claude has already shown us people who openly say "I don't code and yet I managed this"; right now the command line UI will scare off a lot of people, and people using the LLMs still benefit from technical knowledge and product design skills, if the tools don't improve we keep that advantage…

…but how long will it be before the annoyingly ignorant customer skips the expensive annoyingly ignorant manager along with all us expensive developers, and has one of the models write them a bespoke solution for less than the cost of off-the-shelf shrink-wrapped DVDs from a discount store?

Hopefully that extra stuff is further away than it seems, hopefully in a decade there will be an LLM version of this list: https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

But I don't trust to hope. It has forsaken these lands.


> using the LLMs still benefit from technical knowledge and product design skills, if the tools don't improve we keep that advantage…

I don't think we will, because many of us are already asking LLMs for help/advice on these, so we're already close to the point where LLMs will be able to use these capabilities directly, instead of just for helping us drive the process.


Indeed, but the output of LLMs today for these kinds of tasks is akin to that of a junior product designer, a junior project manager, a junior software architect, etc.

For those of us who are merely amateur at any given task, LLMs raising us to "junior" is absolutely an improvement. But just as it's possible to be a better coder than an LLM, if you're a good PM or QA or UI/UX designer, you're not obsolete yet.


> and can at any point look at the source and tell you why it did what it did

Even years later? Most people can't, unless there are good comments and design. Which AI can replicate, so if we need to do that anyway, how is AI especially worse than a human looking back at code written poorly years ago?


I mean, Emacs's oldest source files are like 40 years old at this point, and yes they are in fact legible? I'm not sure what you're asking -- you absolutely can (and if you use it long enough, will) read the source code of your text editor.

Well especially the lisp parts!

The little experience I have with LLMs confidently shows that they are much better at navigating and modifying a well-structured code base. And they struggle, sometimes to the point where they can't progress at all, if tasked to work on bad code. I mean, the kind of bad you always get after multiple rounds of unsupervised vibe coding.

> I happen to rent and can't keep

This is my fear - what happens if the AI companies can't find a path to profitability and shut down?


Don't threaten us with a good time.

That’s not a good time, I love these things. I’ve been able to indulge myself so much. Possibly good for job security but would suck in every other way.

This is why local models are so important. Even if the non-local ones shut down, and even if you can't run local ones on your own hardware, there will still be inference providers willing to serve your requests.

Recently I was thinking about how some (expensive) consumer electronics like the Mac Studio can run pretty powerful open source models with pretty efficient power consumption, which could easily run on private renewable energy, and which are on most (all?) fronts much more powerful than the original ChatGPT, especially if connected to a good knowledge base. Meaning that, aside from very extreme scenarios, I think it is safe to say that there will always be a way not to go back to how we used to code, as long as we can provide the right hardware and energy. Of course, personally I think we will never need to go to such extreme ends... despite knowing of people who seem to seriously think developed countries will run out of electricity one day, which, while I reckon there might be tensions, seems like a laughable idea IMHO.

> the meta-skill of learning to use the LLM depreciates too. Today's LLM is gonna go away someday, the way you have to use it will change. You will be on a forever treadmill, always learning the vagaries of using the new shiny model (and paying for the privilege!)

I haven’t found this to be true at all, at least so far.

As models improve I find that I can start dropping old tricks and techniques that were necessary to keep old models in line. Prompts get shorter with each new model improvement.

It’s not really a cycle where you’re re-learning all the time or the information becomes outdated. The same prompt structure techniques are usually portable across LLMs.


Interesting, I've experienced the opposite in certain contexts. CC is so hastily shipped that new versions often unbalance existing workflows. E.g. people were raving about the new user prompt tools that CC used to get more context, but they messed up my simple git slash commands.

I think you have to be aware of how you use any tool, but I don't think this is a forever treadmill. It's been pretty clear to me since early on that the goal is for you, the user, to not have to craft the perfect prompt. At least for my workflow it's pretty darn close to that already.

If it ever gets there, then anyone can use it and there's no "skill" to be learned at all.

Either it will continue to be this very flawed non-deterministic tool that requires a lot of effort to get useful code out of it, or it will be so good it'll just work.

That's why I'm not gonna heavily invest my time into it.


Good for you. Others like myself find the tools incredibly useful. I am able to knock out code at a higher cadence and it’s meeting a standard of quality our team finds acceptable.

Looking forward to those 10x improvements finally showing up somewhere. Any day now!

Jokes aside, I never said it's not useful, but most definitely it's not even close to all this hype.


> very flawed non-deterministic tool that requires a lot of effort to get useful code out of it

We are all different but I think most of us with open minds are the flaw in your statement.


I have deliberately moderated my use of AI in large part for this reason. For a solid two years now I've been constantly seeing claims of "this model/IDE/Agent/approach/etc is the future of writing code! It makes me 50x more productive, and will do the same for you!" And inevitably those have all fallen by the wayside and been replaced by some new shiny thing. As someone who doesn't get intrinsic joy out of chasing the latest tech fad, I usually move along and wait to see if whatever is being hyped really starts to take over the world.

This isn't to say LLMs won't change software development forever, I think they will. But I doubt anyone has any idea what kind of tools and approaches everyone will be using 5 or 10 years from now, except that I really doubt it will be whatever is being hyped up at this exact moment.


HN is where I keep hearing the “50× more productive” claims the most. I’ve been reading 2024 annual reports and 2025 quarterlies to see whether any of this shows up on the other side of the hype.

So far, the only company making loud, concrete claims backed by audited financials is Klarna, and once you dig in, their improved profitability lines up far more cleanly with layoffs, hiring freezes, business simplification, and a cyclical rebound than with Gen-AI magically multiplying output. AI helped support a smaller org that eliminated more complicated financial products with edge cases, but it didn't create a step-change in productivity.

If Gen-AI were making tech workers even 10× more productive at scale, you’d expect to see it reflected in revenue per employee, margins, or operating leverage across the sector.

We’re just not seeing that yet.


I have friends who make such 50x productivity claims. They are correct if we define productivity as creating untested apps and games and their features that will never ship --- or be purchased, even if they were to ship. Thus, “productivity” has become just another point of contention.

100% agree. There are far more half-baked, incomplete "products" and projects out there now that it is easier to generate code. Generously, that doesn't necessarily equate to productivity.

I agree that the last 10% of a project is the hardest part, and that's the part that Gen-AI sucks at (hell, maybe the last 30%).


> If Gen-AI were making tech workers even 10× more productive at scale, you’d expect to see it reflected in revenue per employee, margins, or operating leverage across the sector.

If we’re even just talking a 2x multiplier, it should show up in some externally verifiable numbers.


I agree, and we might be seeing this but there is so much noise, so many other factors, and we're in the midst of capital re-asserting control after a temporary loss of leverage which might also be part of a productivity boost (people are scared so they are working harder).

The issue is that I'm not a professional financial analyst and I can't spend all day on comps so I can't tell through the noise yet if we're seeing even 2x related to AI.

But, if we're seeing 10x, I'd be finding it in the financials. Hell, a blind squirrel would, and it's simply not there.


Yes, I think there are many issues in a big company that could hide a 2x productivity increase for a little while. But I'd expect it to be very visible in small companies and projects. Looking at things like the number of games released on Steam, new products launched on new-product sites, or issues fixed in popular open source repos, you'd expect a 2x bump to be visible.

In my experience all technology has been like this, though. We are on the treadmill of learning the new thing with or without LLMs. That's what makes tech work so fun and rewarding (for me anyway).

I assume you're living in a city. You're already renting a lot of things from others (security, electricity, water, food, shelter, transportation), so what is different with white collar work?

>the city gets destroyed

vs.

>a company goes bankrupt or pivots

I can see a few differences.


My apartment has been here for years and will be here for many more. I don't love paying rent on it but it certainly does get maintained without my having to do anything. And the rest of the infrastructure of my life is similarly banal. I ride Muni, eat food from Trader Joe's, and so on. These things are not going away and they don't require me to rewire my brain constantly in order to make use of them. The city infrastructure isn't stealing my ability to do my work, it just fills in some gaps that genuinely cannot be filled when working alone and I can trust it to keep doing that basically forever.


Asking Opus 4.5 "your gender and pronouns, please?" I received the following:

> I don't have a gender—I'm an AI, so I don't have a body, personal identity, or lived experience in the way humans do.

> As for pronouns, I'm comfortable with whatever feels natural to you. Most people use "it" or "you" when referring to me, but some use "he" or "they"—any of those work fine. There's no correct answer here, so feel free to go with what suits you.


Interesting that it didn’t mention “she”.


I tried using XML on a lark the other day and realized that XSDs are actually somewhat load bearing. It's difficult to map data in XML to objects in your favorite programming language without the schema being known beforehand as lists of a single element are hard to distinguish from just a property of the overall object.

Maybe this is okay if you know your schema beforehand and are willing to write an XSD. My use case relied on not knowing the schema. Despite my excitement to use a SAX-style parser, I tucked my tail between my legs and switched back to JSONL. Was I missing something?
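A concrete (made-up) example of the ambiguity:

    <library>
      <book><title>Only Entry</title></book>
    </library>

Without a schema telling you that book repeats (e.g. maxOccurs="unbounded" in an XSD), a generic mapper can't decide whether that should become {"book": {"title": "Only Entry"}} or {"book": [{"title": "Only Entry"}]}, so every element with a single child is a guess.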


You have to use the right tool for the job.

XML is extensible markup, i.e. it's like HTML that can be applied to tasks outside of representing web pages. It's designed to be written by hand. It has comments! A good use for XML would be declaring a native UI: it's not HTML but it's like HTML.

JSON is a plain text serialization format. It's designed to be generated and consumed by computers whilst being readable by humans.

Neither is a configuration language but both have been abused as one.


> It's designed to be written by hand.

This assertion is comically out of touch with reality, particularly when trying to describe JSON as something that is merely "readable by humans". You could not do anything at all with XML without having to employ half a dozen frameworks and tools and modules.


You can do everything you can do with JSON by just knowing the basic syntax (<element attribute=""></element>).

The complexity about XML comes from the many additional languages and tools built on top of it.

Many are too complex and bloated, but JSON has little to nothing comparable, so it's only simple because it doesn't support what XML does.


> The complexity about XML comes from the many additional languages and tools built on top of it.

It's not just that, is it? There are also attributes versus child elements, dealing with white space including the xml:space attribute, namespaces, schemas, integration of external document fragments with xinclude:include or &extern;. Each of these is a huge can of worms in its own right. There are probably more that I'm not even aware of right now.

A few years ago, I wrote a fully functional parser for JSON that is easy to verify for correctness and that isn't just lying around somewhere as a toy, but is actually used (by me) in various projects time and again. Overall, building this parser was almost trivial. With XML, I'm not even sure I would be able to write a correct and complete parser.

But I agree with you that XML-based languages and XML tools make things even worse. I had to work with XML a lot over ten years ago. I still get annoyed when I think about XSLT, or dealing with schemas, or the challenge of finding usable tools that are reasonably compliant with standards.

You can only have a positive view of XML when you think of something like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <booklist>
      <book>
        <title>Example Book</title>
        <author>Max Mustermann</author>
        <year>2025</year>
      </book>
      <book>
        <title>Second Book</title>
        <author>Erika Musterfrau</author>
        <year>2026</year>
      </book>
    </booklist>
And at that level, I have (almost) no problem with XML. But as soon as things get more demanding and you really take the various aspects of XML's value proposition seriously, you enter a world of pain and despair. At least, that's how it was for me back then. Maybe I would see things differently today, but I'm not really interested in finding out.

First, you're describing the parsing side, while the message I was replying to claimed that it can't be written by hand.

Anyhow, schemas, XInclude and even namespaces are what I was referring to as additional languages or tools.

In your application you use them if you want, they're not really part of XML.

Of course even a parser for plain XML is a lot more complex than one for JSON, but people usually use libraries for that...

In any case, in your application nothing prevents you from using a dumbed-down version of XML, without entities, white space handling, and even only looking at elements and attributes; there were some applications that did that.

That already gives you a format that's easier to read and write manually than JSON.

I had more to say about "attributes versus child elements", but it's taking me too much time, I'll probably do that tomorrow.


I think I understand your point. I only brought parsing into play to illustrate that XML is complicated, not because it's my general focus. I wouldn't classify namespaces, etc. as additional languages and tools, but that's beside the point.

> in your application nothing prevents you from using a dumbed-down version of XML

That's right. And if XML were exactly that, then there wouldn't be so many people frustrated with it. Unfortunately, in a professional work context, you don't always have control over whether it stays within this manageable subset. Sometimes the less pleasant aspects simply come into play, and then you have to deal with the whole complicated mess.


> It's designed to be written by hand

Are you sure about that? I've heard XML gurus say the exact opposite.

This is a very good example of why I detest the phrase “use the right tool for the job.” People say this as an appeal to reason, as if there weren't an obvious follow-up question that different people might answer very differently.


SGML was designed for documents, and it can be written by hand (or by a machine). HTML (another descendant of SGML) is in fact written by hand regularly. When you're using SGML descendants for what they were meant for (documents) they're pretty good for this purpose. Writing documents — not configuration files, not serialized data, not code — by hand.

XML can still be used as a very powerful generic document markup language that is more restricted (and thus easier to parse) than SGML. The problems started when people started using XML for other things, especially for configuration files, data interchange and even as a programming language.

So I don't think GP is wrong. The authors of the original XML spec probably envisioned people writing this by hand. But XML is very bad for writing by hand the things that it eventually got used for.


Perfectly sure. XML is eXtensible Markup Language, the generalized counterpart to Hypertext Markup Language.

XML, HTML, SGML are all designed to be written by hand.

You can generate XML, just like you can generate HTML, but the language wasn't designed to make that easy.

Computers don't need comments, matching </end> tags, or whitespace stripping.

There was a time, in the early-mid 2000s when XML was the hammer for every screw. But then JSON was invented and it took over most of those use cases. Perhaps those XML gurus are stuck in a time warp.

XML remains a good way to represent tree structures that need to be human editable.


XML was designed as a document format, not a data structure serialization format. You're supposed to parse it into a DOM or similar format, not a bunch of strongly-typed objects. You definitely need some extra tooling if you're trying to do the latter, and yes, that's one of XSD's purposes.

that's underselling xml. xml is explicitly meant for data serialization and exchange, xsd reflects that, and it's the reason for the JAXB Java XML binding tooling.

don't get me wrong: JSON is superior in many aspects, XML is utterly overengineered.

but xml absolutely was _meant_ for data exchange, machine to machine.


No. That use case was grafted onto it later. You can look at the original 1998 XML 1.0 spec first edition to see what people were saying at the time: https://www.w3.org/TR/1998/REC-xml-19980210#sec-origin-goals

Here's the bullet point from that verbatim:

  The design goals for XML are:

    XML shall be straightforwardly usable over the Internet.
    XML shall support a wide variety of applications.
    XML shall be compatible with SGML.
    It shall be easy to write programs which process XML documents.
    The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
    XML documents should be human-legible and reasonably clear.
    The XML design should be prepared quickly.
    The design of XML shall be formal and concise.
    XML documents shall be easy to create.
    Terseness in XML markup is of minimal importance.
Or heck, even more concisely from the abstract: "The Extensible Markup Language (XML) is a subset of SGML that is completely described in this document. Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with both SGML and HTML."

It's always talking about documents. It was a way to serve up marked-up documents that didn't depend on using the specific HTML tag vocabulary. Everything else happened to it later, and was a bad idea.


please bear with me...

data exchange was baked into xml from the get go, the following predate the 1.0 release and come from people involved in writing the standard:

"XML, Java, and the future of the Web" - Jon Bosak, *Sun Microsystems*, last revised *1997.03.10*

section on Database interchange: the universal hub

https://www.ibiblio.org/bosak/xml/why/xmlapps.htm

Guidelines for using XML for Electronic Data Interchange Version 0.04

*23rd December 1997*

https://xml.coverpages.org/xml-ediGuide971223.html

the origin of the latter, the edi/xml WG, was the successor of an edi/sgml WG which had started in the early 1990s, and was born out of the desire to get a "universal electronic data exchange" that would work cross-platform (vms, mainframes, unix and even DOS hehe), and to leverage the successful sgml doc book interoperability.

was it niche? yes. was it starting in sgml already? and baked into xml/xsd/xslt? I think so.


to be fair

>XML shall be straightforwardly usable over the Internet.

is machine to machine communication

to me, XML is an example of worse is better, or rather, better is worse. it would never have come out of Bell Labs in the early 70s. Neither would JSON for that matter.


And as for JAXB, it was released in 2003, well into XML's decadent period. The original Java APIs for XML parsing were SAX and DOM, both of which are tag and document oriented.

there were tools that could derive the schema from sample data

and relaxng is a human friendly schema syntax that has transformers from and to xsd.


> Critics say such restrictions, however, severely limit access for people in prison to reading materials since the offerings in prison libraries and on prison-issued tablets can be limited or outdated.

Sounds like there's a very easy solution to the problem

Yeah. Restoring their rights.

Their rights to an extremely generous and updated tablet-based ebook selection. The ROI on providing that would be very good for governments, you would assume. Recidivism down. Libraries could do other things with the book space.

The great thing about prison tablets is that they are cheap and offer a low-cost selection of reading materials, with no incentive to have some scummy third party company in the loop to profit off their monopoly.

The first thing prisoners will do is destroy the tablet. I'm okay with both restricting prisoners' abilities to get direct mail, and I'm okay with allowing subscriptions go directly to the prison library. I am not inclined to want to give prisoners more privileges and I am inclined to remove as many privileges as possible, especially for first time offenders so that they realize that the path they are on is the wrong one.

I’m not ok with this

As in completing the abolition of slavery.

I believe that criminals should have a terrible experience in jail, so that they don't want to return again. I don't think it should be comfortable at all.

This is such a garbage take because research unanimously shows that terrible imprisonment would only increase recidivism rather than reduce it.

You can continue to be wrong, or you can do even minimal research and revisit your biases. Choose.


> If the page is AI‑generated but the domain is mixed (not mostly AI), we flag the page as AI‑generated but do not downrank it.

> If a domain is found to be mostly AI‑generated (typically more than 80% across its pages), that domain is flagged as AI slop and downranked in web search results.

I think that's pretty clear, no? One AI item is merely AI generated, a trough of AI items is AI slop.

Edited as I think I misunderstood: there's more slop of the AI kind than of whatever other low-effort content, and I think Kagi is already doing a good job of keeping a neat little index that avoids content farms, AI or otherwise. AI slop just happens to be a little harder to evaluate than regular slop (and in my experience is now more pervasive because it's cheaper to produce).

