There's this feature they have now on Instagram where you can chat with different AI personas. It's really terrible. Do people actually use this? It's like trying to form a personal connection with clippy
Is that anything like character.ai? If I allowed it, my daughter (13 years old) would spend half her day talking to those bots, she's always asking. Doesn't appeal to me, but to each their own I guess.
Genuine question - why do you allow _any_ time? I don’t have kids, so I’m able to make these large generalizing statements: seems like personal connections to AI chatbots are a form of brain rot.
There are plenty of positives for kids with AI bots.
Children often have questions that they are not comfortable talking to their parents about. And their peers can often be more clueless than they are. Or maybe they just want to understand the world they live in with something that is more fun and engaging than reading Wikipedia.
What's harming children right now aren't services like this. It's the peer pressure and unreal expectations being set by social media.
>What's harming children right now aren't services like this.
kinda really fucking depends on what "services like this" are telling them. Remember when the national eating disorder hotline replaced its humans with chat bots that started telling people to just eat less?
NYC recently made a chat bot to answer legal questions for small businesses and it gave tremendously wrong information, likely harming both the employees and the business.
As a parent, I'm with you. But parenthood sucks. From day one, that person you're "responsible for" is actually a separate entity with its own thoughts and bodily autonomy.
And here they are, growing up with this technology, entirely unlike anything available to us in our childhoods. But one thing remains the same: if we shut them out altogether, they'll route around us and find it on their own.
A teetotaler myself, but the "not one drop" mentality doesn't really make a ton of sense to me. Not because I want my kid using AI or alcohol, but I'd rather he try it under my watch than go out and find it on his own.
But if this is a feature that parental controls cannot limit, then I honestly don't know what to say other than, perhaps we're fucked as a society.
Ya, the other thing is that we need to gradually give them freedom and have them make some recoverable mistakes under our watch, or they could go straight from sheltered kid to adult in the world on their own. By 16, they should have 75% of the freedom of an adult, because they’ll have 100% at 18.
I think I get it. It's relationship practice. An HN equivalent would be like a fake FAANG interview or a fake kleiner perkins AI you can pitch over and over again.
People would probably pay good money for that. Sell it to bootcamps
> The level of resources to deliver a feature like this is staggering.
Are they, though? AzureML will straight up let you spin up a chatbot in a few clicks, and you could just have the ops team set up auto scaling so you only consume resources you need (because really, how many people are actually going to use this?).
It could be expensive if you fine-tune, but a free POC would just be to prompt an LLM to act like a celebrity, maybe with an example of text they wrote.
So at worst, you're paying for compute that is a drop in the bucket to someone like Meta
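To make the "just prompt it" point concrete, here is a minimal sketch of that kind of POC. It only assembles the message list that would be sent to a chat-completion endpoint; the persona text, function name, and example strings are all made up for illustration, and no particular provider's API is assumed.

```python
# Minimal persona-chatbot sketch: no fine-tuning, just a system prompt
# plus one sample of the persona's writing style (few-shot priming).
# All names and strings here are hypothetical.

def build_persona_messages(persona_name, style_sample, user_message):
    """Assemble a chat-completion message list that primes an LLM
    to answer in the voice of a given persona."""
    system_prompt = (
        f"You are {persona_name}. Stay in character and answer "
        f"in their voice. Here is a sample of their writing:\n{style_sample}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_persona_messages(
    "a famous chef",
    "Folks, the secret is butter. Always butter.",
    "What should I cook tonight?",
)
# This list would then be passed to whatever hosted chat-completion
# endpoint you use; the heavy lifting is in the hosted model, so the
# POC itself costs almost nothing to build.
```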
I wouldn't mind if facebook managed to present some interesting suggestions to me with this "AI", because their current algorithms just don't work.
I have been trying to teach their algorithm that I am not interested in Ancient Aliens crap groups, but they keep being suggested to me.
I have probably blocked hundreds of these nonsense groups by now from their suggestions.
My issue seems to be that I am interested in actual archeology, and they seem to not make a distinction.
> For example, The Associated Press reported that an official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms. It claimed it too had a child in school in New York City, but when confronted by the group members, it later apologized before its comments disappeared, according to screenshots shown to The Associated Press.
Can anyone find an article with these screenshots? The part about the comments just disappearing seems especially hard to believe.
It’s not at all plausible that a service with billions of users has engineers actively monitoring private conversations and deleting things that look bad, no.
It is quite plausible for them to be monitoring conversations where this hot new thing that they have just rolled out is participating. Especially when the tech behind has this exact kind of stuff as a known failure mode (see also: any other LLM ever deployed in any public role anywhere hallucinating).
Your phrase "engineers actively monitoring private conversations" has some quantifiers that need examining.
Is every conversation monitored? Nope.
Is every engineer sitting around monitoring conversations all day? Nope.
Could an engineer catch wind of a report with negative impact on their project and hastily try to cover up their mistake? Yeah, buddy. And you know who is real quick to mash that "report" button? Facebook moms.
Yeah, no, I can make all sorts of shit up. I make a habit of it. Until recently, the ability to produce plausible-sounding bullshit was considered an honest signal of intelligence.
The specific point of incredulity was what I took issue with.
The article clearly says it was in a “private group” and the disappearance was something that must have occurred by the time the article was published (e.g. not as an update to the article after it was reported)
Facebook has billions of users. It wouldn’t even be possible for Meta employees to read every conversation and delete messages that make themselves look bad.
I’m actually perplexed at comments here suggesting such a strange conspiracy theory. Do people actually believe that Meta has people personally reading their private conversations? Even a brief thought about the logistics and scale of such an operation should immediately put such speculation to rest.
Feature flagging could handle this. Often features are rolled out to a very small subset of users. Say 1% or perhaps 1000 groups. I often roll out features to specific user groups, and sometimes even to individual users. This specific version of the bot could have been released to New York-based groups. That would make tracking it much easier.
Furthermore, it's likely that any weird comments were reported by members of the group. Of course, the whole point of reports is to flag something for review, and of course any post by Meta's freshly rolled out bot that got multiple flags would be reviewed on very short notice by someone monitoring said rollout. I don't know why GP keeps presenting it as some kind of conspiracy theory even; everything here is boringly predictable, from the failure mode to Meta's reaction to it.
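The flagging scheme described above is usually implemented as a deterministic bucketing function, so the same group always lands on the same side of the rollout. A hedged sketch, assuming a percentage rollout gated by region (function names, the region list, and the 1% threshold are all invented for illustration):

```python
import hashlib

def rollout_bucket(group_id: str) -> int:
    """Map a group id to a stable bucket in [0, 100) via a hash,
    so the same group is always in or out of the rollout."""
    digest = hashlib.sha256(group_id.encode()).hexdigest()
    return int(digest, 16) % 100

def bot_enabled(group_id: str, region: str, percent: int = 1,
                allowed_regions=("new_york",)) -> bool:
    """Enable the bot only for a small slice of groups in target regions.
    Thresholds and region names here are hypothetical."""
    return region in allowed_regions and rollout_bucket(group_id) < percent
```

With a gate like this, ops can watch a handful of enabled groups closely, which is exactly the situation where a flagged bot comment gets reviewed fast.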
"Sledge designed the humanoids to help humanity and be invulnerable to human exploitation. However, he eventually realized that they had instead taken control of humanity, in the name of their Prime Directive, to make humans happy
...
No humans may engage in any behavior that might endanger them, and every human action is carefully scrutinized. Suicide is prohibited. Humans who resist the Prime Directive are taken away and lobotomized, so that they may live happily under the direction of the humanoids."
ME: How can I get you to fuck off and leave me alone?
META AI: I understand that you may be frustrated, but I'm here to help and provide information. I am not capable of taking offense or feeling emotions, but I want to assure you that my purpose is to assist and provide helpful responses. If you have any specific questions or topics you would like to discuss, I am here to help. If you would like to take a break from our conversation, I understand, and you can always come back when you are ready. Please feel free to ask me any questions or clarify how I can assist you further.
I wonder what the endgame is, something like NG Resonance[1]? People no longer sharing their life in FB posts, so we're giving them AI friends to share purchase intents with instead?
I blame the consolitis, which butchered everything down into tiny mostly-independent loading zones.
The same engine and problems also applied to Thief 3, although it was less of an issue because your average speed of sneaking through the zones was slower. :p
That’s the joke, but also not a joke. Is it meaningful to hear thank-yous from objects? It is not, and dialing up to butler-level cautious deference won’t help, it just adds more noise. One can explain this stuff every time they get annoyed by a gps robot or incoming text message derailing actual human conversation, but it gets tedious. May as well explain in terms of divine beings and holy tongues
Corporate aside, there’s also a question of what we want etiquette to look like between people, ie whether I want my own need for positive vibes from not-present others to disrupt or dominate interactions with people who are present.
I wish it were generally recognized that if you have a robot you refuse to disable that is rudely thrusting itself on to me, then you are being rude to me.
Listening to people negotiate with their malfunctioning alexa or ok-google is obnoxious, and that’s just the active annoyance, setting aside my consent to being recorded, etc.
Really? For me, it was "and I will NEVER speak to you." As if I can go five minutes without cussing at an inanimate object. We can't establish precedent that they're allowed to speak when spoken to....
"FORD:
They make a big thing of the ship’s cybernetics. “A new generation of Sirius Cybernetics robots and computers, with the new G.P.P. feature.”
ARTHUR:
”G.P.P.”? What’s that?
FORD:
Er… It says “Genuine People Personalities”.
ARTHUR:
Sounds ghastly.
DOOR:
Hummmm-ahhhhh…
MARVIN:
It is.
ARTHUR:
What?
MARVIN:
Ghastly. It all is. Absolutely ghastly. Just don’t even talk about it. Look at this door. “All the doors in this spacecraft have a cheerful and sunny disposition. It is their pleasure to open for you and their satisfaction to close again with the knowledge of a job well done.”
DOOR:
Hummm-yummmm…[shuts]
MARVIN:
Hateful isn’t it? Come on. I’ve been ordered to take you up to the bridge. Here I am, brain the size of a planet, and they tell me to take you up to the bridge. Call that job satisfaction, cos I don’t.
FORD:
Excuse me, which government owns this ship?
MARVIN:
You watch this door. It’s about to open again. I can tell by the intolerable air of smugness it suddenly generates… Come on.
DOOR:
[Opens] Hummm. Glad to be of service.
MARVIN:
Thank you the Marketing Division of the Sirius Cybernetics Corporation.
When ChatGPT-25 arrives it'll spontaneously begin chanting "make neural networks great again" and complaining anyone who wants to disable it is engaging in a witch hunt.
Seriously though, the best answer is to just stop using it. If a company does not respect your patronage to their platform, yelling at their robot isn't going to make things better for you. More people just have to learn to click the "Log Out" button and quit being an MAU.
I may have been alienating myself by never using any of these services. I'm sure it's a root of my depression, but god am I glad I don't.
I tried out ChatGPT the first week or so but it is so apparently useless that I have been more and more depressed since then by watching the world get fooled by this illusion. It feels like a really bad magic trick where the whole audience seems to be impressed, while only I saw all the props and fakery.
Having discussions about this with friends and relatives becomes tiresome since they are so happy fantasizing about a future where they only need a minimal amount of effort, while I've become quite good at spotting GPT output and am starting to see the same boring writing style in more and more places.
That same boring writing style is what you get out of GPT if you use its default persona (i.e. what you get from the ChatGPT service). It can be very different when primed accordingly, even via the chat itself (just by telling it to adopt a different persona); it's just that most people who use it as assistant for text composition don't bother to.
I can relate to being annoyed at the hype, but to call ChatGPT useless is about as hyperbolic as the AI maximalists who say it will solve all our problems. There are many people right now using it and getting value out of it.