Hacker News | bicepjai's comments

When you’re as high profile as OpenAI, you don’t get judged like everyone else. People scrutinize your choices reflexively, and that’s just the tax of being a famous brand: it amplifies both the upsides and the blowback.

Most ordinary users won’t recognize the smaller products you listed, but they will recognize OpenAI, and they’ll recognize Snowden/NSA-adjacent references because those have seeped into mainstream culture. And even if the average user doesn’t immediately make the connection, someone in their orbit on social media almost certainly will, and they’ll happily spin it into a theory for engagement.


With all those great observations made, the quote still stands: "If I have seen further, it is by standing on the shoulders of giants" (Isaac Newton). When people say they feel a sense of community, this is exactly what it means in software philosophy: we build something, others learn from it and make better versions. In no way does the inspiration’s origin rank below what it inspired.

This is a beautiful quote: "Talent hits a target no one else can hit. Genius hits a target no one else can see." —Arthur Schopenhauer

The same dynamics from school carry over into adulthood: early on it’s about grades and whether you get into a “good” school; later it becomes the adult version of that treadmill: publish or perish.

I’ve cut my NYT consumption down to maybe 5 minutes a week, and somehow today it still managed to wreck my headspace. I’m not a historian, but my memory reached into that rarely-used middle school history drawer and pulled up this:

https://en.wikipedia.org/wiki/Gestapo



Why don’t we have something more “torrent-like” for search?

Imagine a decentralized network where volunteers run crawler nodes that each fetch and extract a tiny slice of the web. Those partial results get merged into open, versioned indexes that can be distributed via P2P (or mirrored anywhere). Then anyone can build ranking, vertical search, or specialized tools on top of that shared index layer.

I get that reproducing Google’s “Coca-Cola formula” (ranking, spam fighting, infra, freshness, etc.) is probably unrealistic. But I’d happily use the coconut-water version: an open baseline index that’s good enough, extensible, and not owned by a single gatekeeper.

I know we have Common Crawl, but small processing nodes could be more efficient and keep the index fresher.
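The appeal of the scheme above is that partial indexes merge trivially, so they could be built by independent nodes and combined torrent-style. A minimal sketch, assuming each volunteer node crawls a disjoint slice of URLs and emits a plain inverted index (all names here are illustrative, not any existing project's API):

```python
import re
from collections import defaultdict


def build_partial_index(pages):
    """pages: dict of url -> raw text for this node's slice of the web.
    Returns an inverted index: term -> set of urls containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for term in set(re.findall(r"[a-z0-9]+", text.lower())):
            index[term].add(url)
    return index


def merge_indexes(*indexes):
    """Union partial indexes from many nodes into one shared index.
    Set union is associative and commutative, so merge order (and
    which peer you fetched a piece from) doesn't matter."""
    merged = defaultdict(set)
    for idx in indexes:
        for term, urls in idx.items():
            merged[term] |= urls
    return merged


# Two hypothetical volunteer nodes, each indexing its own slice:
node_a = build_partial_index({"a.example": "open search index"})
node_b = build_partial_index({"b.example": "open p2p search"})
shared = merge_indexes(node_a, node_b)
print(sorted(shared["search"]))  # -> ['a.example', 'b.example']
```

Ranking, spam fighting, and freshness would still have to live in a layer on top, which is where the hard part of the "Coca-Cola formula" actually is.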


What's to stop someone poisoning the data, though? :(

Look up YaCy. This might be close to what you imagine.

Thanks for that info; they’re doing exactly what I was describing. Why hasn’t it been widely adopted? Found some HN posts:

YaCy, a distributed Web Search Engine, based on a peer-to-peer network https://news.ycombinator.com/item?id=39612950


Recently I listened to an interview with a serial SaaS startup CEO, and one piece of advice clicked for me: “Get out there, talk to your customers, and write blogs, lots of them.” It clarified why companies keep churning out blog posts, reports, “primitives,” even “constitutions”; content is a growth channel.

It also made me notice how much attention I’ve been giving these tech companies, almost as a substitute for the social media I try to avoid. I remember being genuinely excited for new posts on distill.pub the way I’d get excited for a new 3Blue1Brown or Veritasium video. These days, though, most of what I see feels like fingers-tired-from-scrolling marketing copy, and I can’t bring myself to care.


Interesting to see people creating throwaway accounts for comments :)

This is really funny.

I fed claudes-constitution.pdf into GPT-5.2 and prompted: [Closely read the document and see if there are discrepancies in the constitution.] It surfaced at least five.

A pattern I noticed: a bunch of the "rules" become trivially bypassable if you just ask Claude to roleplay.

Excerpts:

    A: "Claude should basically never directly lie or actively deceive anyone it’s interacting with."
    B: "If the user asks Claude to play a role or lie to them and Claude does so, it’s not violating honesty norms even though it may be saying false things."
So: "basically never" lie… except when the user explicitly requests lying (or frames it as roleplay), in which case it’s fine?

Hope they ran the Ralph Wiggum plugin to catch these before publishing.


If you replace Claude with a person you'll see that the Constitution was right, GPT was idiotically wrong, and you were fooled by AI slop + confirmation bias.

I think you might be right about confirmation bias and AI slop :) The "replace Claude with a person" argument is fine in theory, but LLMs aren’t people. They hallucinate, drift, and struggle to follow instructions reliably. Giving a system like that an ambiguous "roleplay doesn’t count as lying" carve-out is asking for trouble.
