I don't see how Claude helped the debugging at all. It seemed like the author already knew what to do and was mostly telling Claude what to think about.
I've used Claude a bit and it never speaks to me like that either, "Holy Cow!" etc. It sounds more annoying than interacting with real people. Perhaps AIs are good at sensing personalities from input text and don't act this way with my terse prompts.
Even if the chatbot served only as a Rubber Ducky [1], that's already valuable.
I've used Claude for debugging system behavior, and I kind of agree with the author. While Claude isn't always directly helpful (it still hallucinates, or at least works from outdated information), it helps me 1) spell out my understanding of the system (see [1]) and 2) keep momentum by supplying tasks.
A rubber ducky demands that you think about your own questions, rather than taking a mental back seat as you get pummeled with information that may or may not be relevant.
I assure you that if you rubber duck at another engineer that doesn't understand what you're doing, you will also be pummeled with information that may or may not be relevant. ;)
I don't think that's right. When you explain a technical problem to someone who isn't intimately familiar with it, you're forced to think through the individual steps in quite a bit of detail. Of course that itself is an acquired skill, but never mind that.
The point of rubber duck debugging then is to realize the benefit of verbally describing the problem without needing to interrupt your colleague and waste his time in order to do so. It's born of the recognition that often, midway through wasting your colleague's time, you'll trail off with an "oh ..." and exit the conversation. You've ended up figuring out the problem before ever actually receiving any feedback.
To that end an LLM works perfectly well, as long as you still need to walk through a full explanation of the problem (i.e. it starts with minimal relevant context). An added bonus is that the LLM offers at least some of the benefits of a live person who can point out errors or alert you to new information as you go.
Basically my quibble is that to me the entire point of rubber duck debugging is "doesn't waste a real person's time" but it comes with the noticeable drawback of "plastic duck is incapable of contributing any useful insights".
> When you explain a technical problem to someone who isn't intimately familiar with it you're forced to think through the individual steps in quite a bit of detail.
The point of Rubber Ducking (or talking/praying to the Wooden Indian, to use an older phrase steeped in somewhat racist undertones and so no longer generally used) is that it is an inanimate object that doesn't talk back. You still talk to it as if you were explaining to another person, so you force yourself to get your thoughts in order in a way that would make that possible, but actually talking to another person who is actively listening and asking questions is the next level.
I guess I can see where others are coming from (the LLM is different than a literal rubber duck) but I feel like the "can't reply" part was never more than an incidental consequence. To me the "why" of it was always that I need to solve my problem and I don't want to disturb my colleagues (or am unable to contact anyone in the first place for some reason).
So where others see "rubber ducking" as explaining to an object that is incapable of response, I've always seen it as explaining something without turning to others who are steeped in the problem. For example I would consider explaining something to a nontechnical friend to qualify as rubber ducking. The "WTF" interjections definitely make it more effective (the rubber duck consistently fails to notify me if I leave out key details).
AIs are exceptional at sensing personalities from text. Claude nailed it here: the author felt so good about the "holy cow" comments that he even included them in the blog post. I'm not just poking fun at this, but saying that the bots are fantastic sycophants.