
From a web developer's perspective, I think the problem is that the people who built these systems were too nice.

Let me explain.

Web browsers could reject syntax errors in HTML and fail immediately, but they don't. Similarly, I think compilers could afford to be a bit slower, check a few more things, and refuse to produce output if a pointer could be null.

I'm just a code monkey, so I don't have all the answers, but is it possible to have a rule like "when in doubt, throw an error"? The code might be legal, but we aren't here to play code golf. Why not deny such "clever" code with a vengeance and let us mortals live our lives? Can compilers do that by default?
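
For concreteness, here's roughly the behaviour I'm asking for, sketched in Rust (which, as far as I know, already works this way by default): an ordinary reference can never be null, a maybe-missing value has to be an Option, and the build fails if you don't handle the empty case.

    // Illustrative sketch: a value that might be absent must be an Option,
    // and the compiler rejects any use of it that ignores the None case.
    fn shout(greeting: Option<&str>) -> String {
        match greeting {
            Some(s) => s.to_uppercase(),
            None => String::from("(nothing to say)"), // remove this arm and it won't compile
        }
    }

    fn main() {
        println!("{}", shout(Some("hello")));
        println!("{}", shout(None));
    }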



Spoken like a web developer who doesn't know the full history.

Netscape Navigator did, in fact, reject invalid HTML. Then along came Internet Explorer, which chose “render invalid HTML, do what I mean” as a strategy. People, my young naive self included, moaned about NN being too strict.

NN eventually switched to the tag soup approach.

XHTML 1.0 arrived in 2000, attempting to reform HTML by recasting it as an XML application. The idea was to impose XML’s strict parsing rules: well-formed documents only, close all your tags, lowercase element names, quote all attributes, and if the document is malformed, the parser must stop and display an error rather than guess. The W3C abandoned the XHTML 2.0 effort in 2009 and threw its weight behind HTML5 instead.
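
That draconian error handling still survives in any strict XML parser. A quick sketch in Rust using the xml-rs crate (my choice of parser, nothing XHTML-specific) shows what XHTML asked browsers to do: feed it tag soup and it stops with an error instead of guessing.

    // Strict, XHTML-style parsing: mis-nested tags are an error, not something
    // to be silently repaired. Uses the xml-rs crate (xml-rs = "0.8" in Cargo.toml).
    use xml::reader::{EventReader, XmlEvent};

    fn main() {
        let tag_soup = "<p><b>bold?</p></b>"; // closed in the wrong order
        for event in EventReader::new(tag_soup.as_bytes()) {
            match event {
                Ok(XmlEvent::StartElement { name, .. }) => println!("open  <{}>", name),
                Ok(XmlEvent::EndElement { name }) => println!("close </{}>", name),
                Err(e) => {
                    println!("parse error: {}", e); // stop and report, no recovery
                    break;
                }
                _ => {}
            }
        }
    }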

When HTML5 was being drafted from 2004 onwards, the WHATWG actually had to formally specify how browsers should handle malformed markup, essentially codifying IE’s error-recovery heuristics as the standard.

So your proposal has been attempted multiple times and rejected by the market each time (free market or otherwise, that’s a different debate!).


We do that, and then people complain the type checker is too hard to satisfy, and go back to dynamic languages.


This... is why Rust exists, yes.




