
spacex is one thing, but xai accomplished what? the most racist, csam-prone llm?

I'm not aware of this - what's that?

Probably shouldn't speak to the brilliance of xAI engineers when you've never heard of their work

Is whatever that is their work?

Not just that, it's their one and only product, to my knowledge

Well, an LLM is a mirror, right? Maybe you were just using it wrong? Can you give any examples of what you used it for that led you to believe it's what you said?

I don't think your view is based on personal experience, but you get my point, yes?

The feeling I get here is that you simply dislike Musk and his companies and are enjoying seeing him get what he deserves, right? Which I think is the personal mirror of the "state feeling" behind the current official actions.

More broadly, your comments, and many others like them in these threads, identify a narrow band of content with the product as a whole. The implication is that if you disagree with the hatred against Musk / xAI, you must be a pervert, which is intended as a reputational threat to intimidate people out of voicing support.

But if an LLM is used to create bad content by some, does that mean the only content it can create is bad? Does that mean that every user is using it to create bad content?

If xAI has a problem with bad content, they need better controls. But I don't think these state efforts, or the discourse around them, are about the bad content. I think that issue is just a vector through which to assert pressure. I think it's because people in power want control over something that is, annoyingly to them, resisting control. And not in a way that's about "bad content", but in a way that's about inconvenient-to-them content.


I don't have an X account, so no, nothing I've said is based on personal experience. You should read the news. Google "xAI France".

Or just keep pretending I'm making things up, but I'm not.

My opinions here are not related to Musk, except insofar as he encouraged people to use his chatbot for disturbing, illegal ends.


No, I know that. I wasn't actually implying you were personally across this, just highlighting how personal experience differs. I don't think you're making it up at all; I just think there's a larger story, and more nuance to the product overall.

Fair enough on your Musk views - did he really encourage people to do disturbing stuff? Can you point to that? I have not seen it.


obviously you're not a devops eng. I think you're wildly underestimating how much business-critical code pre-ai is completely orphaned anyway.

the people who wrote it were contractors long gone, or employees who have since moved companies/departments/roles, or were on projects that were long since wrapped up, or got laid off, or simply barely understood it in the first place and certainly don't remember what they were thinking back then.

basically "what moron wrote this insane mess... oh me" is the default state of production code anyway. there's really no quality bar already.


I am a devops engineer and understand your point. But there's a huge difference: legacy code doesn't change. Yeah, occasionally something weird will happen and you've got to dig into it, but it's pretty rare, and usually something like an expired certificate, not a logic bug.

What we're entering, if this comes to fruition, is a whole new era where massive amounts of code changes that engineers are vaguely familiar with are going to be deployed at a much faster pace than anything we've ever seen before. That's a whole different ballgame than the management of a few legacy services.


after a decade of follow-the-sun deployments by php contractors from vietnam to costa rica where our only qa was keeping an eye on the 500s graph, ai can't scare me.

That's actually a good comparison. Though even then, I imagine you at least have the ability to get on the phone and ask what they just did. Whereas an LLM would just be like, "IDK, that was my twin brother. I'd ask him directly, but unfortunately he has been garbage collected. It was very sad. Would you like a cookie?"

I wonder if there's any value in some system that preserves the chat context of a coding agent and tags the commits with a reference to it, until the feature has been sufficiently battle tested. That way you can bring them back from the dead and interrogate them for insight if something goes wrong. Probably no more useful than just having a fresh agent look at the diff in most cases, but I can certainly imagine scenarios where it's like "Oh, duh, I meant to do X but looks like I accidentally did Y instead! Here's a fix." way faster than figuring it out from scratch. Especially if that whole process can be automated and fast, worst case you just waste a few tokens.
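
A rough sketch of the idea in Python, using git trailers - the transcript directory, trailer name, and function here are all made up for illustration, not any existing tool's convention:

  import hashlib
  import pathlib
  import subprocess

  def commit_with_agent_context(message: str, transcript: str) -> None:
      """Commit staged changes with a trailer pointing at the agent transcript."""
      # content-address the transcript so the reference survives history rewrites
      digest = hashlib.sha256(transcript.encode()).hexdigest()[:16]
      path = pathlib.Path(".agent-transcripts") / f"{digest}.txt"
      path.parent.mkdir(exist_ok=True)
      path.write_text(transcript)
      # the second -m becomes its own final paragraph, so git parses it as a trailer
      subprocess.run(
          ["git", "commit", "-m", message, "-m", f"Agent-Transcript: {path}"],
          check=True,
      )

Then something like `git log --grep "Agent-Transcript"`, or just reading the trailer off a suspect commit, gets you back to the conversation that produced the change.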

I'm genuinely curious though if there's anything you learned from those experiences that could be applied to agent driven dev processes too.


it was basically a mindless loop, ripe for being agent-driven:

  - observe error rate uptick
  - maybe dig in with apm tooling
  - read actual error messages
  - compare what apm and logs said to last commit/deploy
  - if they look even tangentially related, deploy the previous commit (aka revert)
  - if it's still not fixed, do a "debug push": basically stuff a bunch of print statements (or something better) around the problem to get more info

I won't say that solves every case but definitely 90% of them.
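
for what it's worth, here's that loop sketched in python - every name below is a stand-in for whatever your apm/deploy stack actually exposes, none of these are real apis:

  import time

  # all placeholders - wire these into your real metrics/deploy tooling
  def error_rate() -> float:
      return 0.0  # e.g. 5xx-per-minute from your apm backend

  def baseline() -> float:
      return 1.0  # normal error rate for this service

  def deployed_recently() -> bool:
      return False  # did a deploy land in the last few minutes?

  def revert_last_deploy() -> None:
      print("rolling back to the previous commit")

  def page_a_human() -> None:
      print("not deploy-correlated, needs a brain")

  def watchdog() -> None:
      while True:
          if error_rate() > 3 * baseline():  # arbitrary 3x threshold
              if deployed_recently():
                  revert_last_deploy()  # cheapest first move: undo the last change
              else:
                  page_a_human()  # the other 10% of cases
          time.sleep(60)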

I think your point about preserving some amount of intent/context is good, but also, what are most of us doing with agents if not "loop on error message until it goes away"?


it has been amazing to watch how much of agentic ai is driven by "can you write clear instructions to explain your goals and use cases" and "can you clearly define the rules of each step in your process."


People Make Games did a mini documentary on almost exactly what you're asking: https://www.youtube.com/watch?v=4PHT-zBxKQQ

It's three years old, so things have slightly matured since.


you could stuff the racks full of server-rack batteries (lfp now, na-ion maybe in a decade) and monetize the space and the high-capacity grid connection

most of the hvac would sit idle tho


everybody read 'bullshit jobs' and basically agreed

at first that just meant many of us adopted a middle-aged-coasting career strategy after covid and/or having kids

but now management is agreeing


Meanwhile, new grads can't even start their careers to begin with and are left scrambling to take even a step into adulthood. They missed the boat. Kids aren't even on the horizon. What does that say?


hmmm that's 200g in the wrong direction


wag that dog


almost, what you're seeing there is the too-cute-by-half smug-nugget-of-wisdom tone, which is really the trademark of the self-styled "writer". but because self-styled writers wrote most of the internet, that tone has carried over into becoming the trademark llm tone. there are still og hacks in the game though!


it was only ever a distraction/cover story for cutting medicaid

it worked

