
> It's great, some of the time; the great draw of computing was that it would always catch the silly things we do as humans.

People are saying that you should write a thesis-length file of rules, yet they're the same people who balk at programming language syntax and formalism. Tools like linters, test runners, and compilers are reliable in the sense that you know exactly where the guardrails are and where to focus mentally to solve an issue.
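
To make the contrast concrete, here's a minimal sketch (Python, assuming mypy as the type checker; the function and filename are made up, not from any repo mentioned here). A deterministic tool points at the exact line and reason, every run; a prompt rule gives you no fixed place to look when it fails:

    # guardrail.py - a hypothetical example
    def get_url(path: str) -> str:
        # mypy rejects this deterministically, with file, line, and reason:
        # error: Incompatible return value type (got "None", expected "str")
        return None

    # Contrast: an "IMPORTANT: You must NEVER..." line in a prompt
    # carries no such guarantee, and no error location when ignored.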



This repo [1] is a brilliant illustration of the copium going into this.

Third line of the Claude prompt [2]:

> IMPORTANT: You must NEVER generate or guess URLs for the user

Who knew solving LLM hallucinations was just that easy?

> IMPORTANT: DO NOT ADD ***ANY*** COMMENTS unless asked

Guess we need triple bold to make it pay attention now?

It gets even more ludicrous when you see the recommendation that you should use an LLM to write this slop of a .cursorrules file for you.

[1]: https://github.com/x1xhlol/system-prompts-and-models-of-ai-t...

[2]: https://github.com/x1xhlol/system-prompts-and-models-of-ai-t...



