
I don't see how Claude helped the debugging at all. It seemed like the author knew what to do and it was more telling Claude to think about that.

I've used Claude a bit and it never speaks to me like that either, "Holy Cow!" etc. It sounds more annoying than interacting with real people. Perhaps AIs are good at sensing personalities from input text and don't act this way with my terse prompts.



Even if the chatbot served only as a Rubber Ducky [1], that's already valuable.

I've used Claude for debugging system behavior, and I kind of agree with the author. While Claude isn't always directly helpful (hallucinations remain, or at least outdated information), it helps me 1) spell out my understanding of the system (see [1]) and 2) keep momentum by supplying tasks.

[1] https://en.wikipedia.org/wiki/Rubber_duck_debugging


A rubber ducky demands that you think about your own questions, rather than taking a mental back seat as you get pummeled with information that may or may not be relevant.


I assure you that if you rubber duck at another engineer that doesn't understand what you're doing, you will also be pummeled with information that may or may not be relevant. ;)


That isn't rubber duck debugging. It's just talking to someone about the problem.

The entire point of rubber duck debugging is that the other side literally cannot respond - it's an inanimate object, or even a literal duck/animal.


I don't think that's right. When you explain a technical problem to someone who isn't intimately familiar with it you're forced to think through the individual steps in quite a bit of detail. Of course that itself is an acquired skill but never mind that.

The point of rubber duck debugging, then, is to realize the benefit of verbally describing the problem without needing to interrupt your colleague and waste his time in order to do so. It's born of the recognition that often, midway through wasting your colleague's time, you'll trail off with an "oh ..." and exit the conversation. You've ended up figuring out the problem before ever actually receiving any feedback.

To that end an LLM works perfectly well, as long as you still need to walk through a full explanation of the problem (i.e. it starts with minimal relevant context). An added bonus is that the LLM offers at least some of the benefits of a live person, who can point out errors or alert you to new information as you go.

Basically my quibble is that to me the entire point of rubber duck debugging is "doesn't waste a real person's time" but it comes with the noticeable drawback of "plastic duck is incapable of contributing any useful insights".


> When you explain a technical problem to someone who isn't intimately familiar with it you're forced to think through the individual steps in quite a bit of detail.

The point of Rubber Ducking (or talking/praying to the Wooden Indian, to use an older phrase that is steeped in somewhat racist undertones so no longer generally used) is that it is an inanimate object that doesn't talk back. You still talk to it as if you were explaining to another person, so you force yourself to get your thoughts in order in a way that would make that possible. But actually talking to another person who is actively listening and asking questions is the next level.


I guess I can see where others are coming from (the LLM is different than a literal rubber duck) but I feel like the "can't reply" part was never more than an incidental consequence. To me the "why" of it was always that I need to solve my problem and I don't want to disturb my colleagues (or am unable to contact anyone in the first place for some reason).

So where others see "rubber ducking" as explaining to an object that is incapable of response, I've always seen it as explaining something without turning to others who are steeped in the problem. For example I would consider explaining something to a nontechnical friend to qualify as rubber ducking. The "WTF" interjections definitely make it more effective (the rubber duck consistently fails to notify me if I leave out key details).


To that end a notepad works just as well.


In reality vim is my usual approach. But I think LLMs are better in a lot of regards.


Oh it can definitely be a person. I've worked with a few!


Cue obligatory Ralph Fiennes "You're an inanimate fucking object".


Before you expand the definition to every object in the universe, maybe we could call it parrot debugging.


I'm not saying you should do this, but you can do this:

https://gist.github.com/shmup/100a7529724cedfcda1276a65664dc...


Amusingly that looks less like "rubber duck debugging" and more like "socratic questions". Which certainly isn't a bad thing.


That is so true that I wanted to "fix it". Granted, I'm not even using these at the moment, but I appreciated the idea:

https://github.com/shmup/metacog-skills/


Lol not bad


They also don’t waste electricity or water, drive up the prices of critical computer components, or DDoS websites to steal their content.


Not to defend the extravagant power use of the AI datacenters, but I invite you to look up the ecological footprint of a human being.


The human being in this scenario exists either way.

The AI does not.


> Even if the chatbot served only as a Rubber Ducky [1], that's already valuable.

I use the Other Voices for that. I can't entirely turn them off, I might as well make use of them!


Rubber Ducky is a terrific name for a GPT.

Also, always reminds me of Kermit singing "...you make bath time so much fun!..."


Maybe Kermit has sung it at some point, but that's Ernie's song usually


Ha. Well spotted. I totally hallucinated that.



You’re absolutely right!


Claude is much faster at extracting fields from a pcap and processing them with awk than I am!
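For reference, this kind of pipeline usually means tshark (Wireshark's CLI) dumping fields as tab-separated text, then awk aggregating them. A minimal sketch; the capture file, chosen fields, and addresses below are all hypothetical, and the tshark step is shown as a comment so the awk part stands alone:

```shell
# The real extraction step would be something like:
#   tshark -r capture.pcap -T fields -e ip.src -e tcp.dstport
# Here we simulate that tab-separated output with printf,
# then count packets per source IP with awk.
printf '10.0.0.1\t443\n10.0.0.1\t443\n10.0.0.2\t80\n' |
  awk -F'\t' '{count[$1]++} END {for (ip in count) print ip, count[ip]}' |
  sort
# prints:
# 10.0.0.1 2
# 10.0.0.2 1
```

The awk associative array does the grouping; swapping the `-e` fields in the tshark call changes what gets counted.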


Have you tried wireshark?


AIs are exceptional at sensing personalities from text. Claude nailed it here: the author felt so good about the "holy cow" comments that he even included them in the blog post. I'm not just poking fun; I'm saying that the bots are fantastic sycophants.


No they aren't. Current LLMs always have that annoying over-eager tone.

The comment about Claude being pumped was a joke.


It depends how much the LLM has been beaten into submission by the system prompt.


ChatGPT set to "terse and professional" personality mode is refreshingly sparse on the "you're absolutely right" bullshit


It's like I keep saying, it probably wasn't a good idea to give our development tools Genuine People Personalities...


I think the blog post itself was partially written with LLMs, which explains some of the odd narrative style.



