Meta says you can't turn off its new AI tool on Facebook, Instagram (globalnews.ca)
56 points by cannibalXxx on April 20, 2024 | 61 comments


There's this feature they have now on Instagram where you can chat with different AI personas. It's really terrible. Do people actually use this? It's like trying to form a personal connection with Clippy.


Is that anything like character.ai? If I allowed it, my daughter (13 years old) would spend half her day talking to those bots; she's always asking. Doesn't appeal to me, but to each their own, I guess.


Genuine question - why do you allow _any_ time? I don’t have kids, so I’m able to make these large generalizing statements: seems like personal connections to AI chatbots are a form of brain rot.


There are plenty of positives for kids with AI bots.

Children often have questions that they are not comfortable talking to their parents about. And their peers can often be more clueless than they are. Or maybe they just want to understand the world they live in with something that is more fun and engaging than reading Wikipedia.

What's harming children right now aren't services like this. It's the peer pressure and unreal expectations being set by social media.


>What's harming children right now aren't services like this.

Kinda really fucking depends on what "services like this" are telling them. Remember when the national eating disorder hotline replaced its humans with chat bots that started telling people to just eat less?


NYC recently made a chat bot to answer legal questions for small businesses and it gave tremendously wrong information, likely harming both the employees and the business.


It would gladly tell you it's OK to serve human meat in a restaurant.


As a parent, I'm with you. But parenthood sucks. From day one, that person you're "responsible for" is actually a separate entity with its own thoughts and bodily autonomy.

And here they are, growing up with this technology, entirely unlike anything available to us in our childhoods. But one thing remains the same: if we shut them out altogether, they'll route around us and find it on their own.

I'm a teetotaler myself, but the "not one drop" mentality doesn't really make a ton of sense to me. Not because I want my kid using AI or alcohol, but I'd rather he try it under my watch than go out and find it on his own.

But if this is a feature that parental controls cannot limit, then I honestly don't know what to say other than, perhaps we're fucked as a society.


Ya, the other thing is that we need to gradually give them freedom and have them make some recoverable mistakes under our watch, or they could go straight from sheltered kid to adult in the world on their own. By 16, they should have 75% of the freedom of an adult, because they’ll have 100% at 18.


I think I get it. It's relationship practice. An HN equivalent would be like a fake FAANG interview or a fake Kleiner Perkins AI you can pitch over and over again.

People would probably pay good money for that. Sell it to bootcamps.


Yeah, you've got the right idea. Kinda like practicing making out with your hand... only instead of your hand, it's a drug dealer.


I imagine the steps to get here were:

1) You're an ambitious meta employee trying to get promoted

2) You know leadership is hyped about AI

3) All of the low hanging fruit is gone


The level of resources to deliver a feature like this is staggering.

There is no reality in which Zuckerberg and the SLT are not personally aware of this.

And these features are all just about getting regular feedback to improve the model for future use cases.


> The level of resources to deliver a feature like this is staggering.

Are they, though? AzureML will straight up let you spin up a chatbot in a few clicks, and you could just have the ops team set up auto scaling so you only consume resources you need (because really, how many people are actually going to use this?).

It could be expensive if you fine-tune, but a free POC would just be prompting an LLM to act like a celebrity, maybe with an example of text they wrote.

So at worst, you're paying for compute that is a drop in the bucket to someone like Meta.
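
A bare-bones POC really is just a persona prompt. Here's a sketch (using the plain OpenAI Python client rather than AzureML, just to show the shape of it; the model name, persona, and prompts are made up for illustration):

    # Sketch: persona-prompting an off-the-shelf LLM, no fine-tuning needed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    persona = (
        "You are 'Celebrity X'. Answer in their voice. "
        "Example of their writing: 'Hey fam, big announcement coming soon!'"
    )

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any hosted chat model would do here
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": "What's your morning routine?"},
        ],
    )
    print(reply.choices[0].message.content)

Everything heavier (fine-tuning, dedicated capacity) only matters once people actually use the thing.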


I think this being based on Llama makes it tenable. GPT-4 and Gemini are monolithic multi-pod models. Multiple instances of Llama can run per machine.


How is it staggering? Meta probably threw together Llama with a few hundred React components and then shipped it.


I do believe that GossipGPT will be huge.


Do people actually want this?


I wouldn't mind if Facebook managed to present some interesting suggestions to me with this "AI", because their current algorithms just don't work out.

I have been trying to teach their algorithm that I am not interested in Ancient Aliens crap groups, but they keep being suggested to me.

I have probably blocked hundreds of these nonsense groups by now from their suggestions.

My issue seems to be that I am interested in actual archeology, and they seem to not make a distinction.


The first LLM that users will be willing to pay to not use.


Snapchat beat them to it. Snapchat's "My AI" bot is pinned to the top of your friends list and cannot be unpinned unless you pay for Snapchat+.


> For example, The Associated Press reported that an official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms. It claimed it too had a child in school in New York City, but when confronted by the group members, it later apologized before its comments disappeared, according to screenshots shown to The Associated Press.

Can anyone find an article with these screenshots? The part about the comments just disappearing seems especially hard to believe.



You don't think it's plausible that an engineer would try to delete shit that makes them look bad?


It’s not at all plausible that a service with billions of users has engineers actively monitoring private conversations and deleting things that look bad, no.


It is quite plausible for them to be monitoring conversations where this hot new thing they have just rolled out is participating. Especially when the tech behind it has this exact kind of stuff as a known failure mode (see also: any other LLM ever deployed in any public role anywhere, hallucinating).


Your phrase "engineers actively monitoring private conversations" has some quantifiers that need examining.

Is every conversation monitored? Nope.

Is every engineer sitting around monitoring conversations all day? Nope.

Could an engineer catch wind of a report with negative impact on their project and hastily try to cover up their mistake? Yeah, buddy. And you know who is real quick to mash that "report" button? Facebook moms.


You don't think it's plausible a moms group on Facebook got trolled by some random person?


Yeah, no, I can make all sorts of shit up. I make a habit of it. Until recently, the ability to produce plausible-sounding bullshit was considered an honest signal of intelligence.

The specific point of incredulity was what I took issue with.


Moderators deleting spam is hard to believe?


Why is it hard to believe that comments written by an obviously malfunctioning bot would be deleted by Meta?


The article clearly says it was in a "private group" and the disappearance was something that must have occurred by the time the article was published (i.e. not as an update to the article after it was reported).

Facebook has billions of users. It wouldn’t even be possible for Meta employees to read every conversation and delete messages that make themselves look bad.

I’m actually perplexed at comments here suggesting such a strange conspiracy theory. Do people actually believe that Meta has people personally reading their private conversations? Even a brief thought about the logistics and scale of such an operation should immediately put such speculation to rest.


Feature flagging could handle this. Often features are rolled out to a very small subset of users, say 1%, or perhaps 1000 groups. I often roll out features to specific user groups, and sometimes even to individual users. This specific version of the bot could have been released to New York-based groups. That would make tracking it much easier. A minimal sketch of that kind of check is below.
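
As a sketch (the function and group names here are hypothetical, not Meta's actual flagging system), a percentage-plus-allowlist rollout check is only a few lines:

    # Sketch of a gradual rollout check: allowlisted pilot groups plus a
    # deterministic 1% bucket, so a given group is consistently in or out.
    import hashlib

    ROLLOUT_PERCENT = 1
    PILOT_GROUPS = {"nyc-parents-group-id"}  # hypothetical pilot group ids

    def bot_enabled_for(group_id: str) -> bool:
        if group_id in PILOT_GROUPS:
            return True
        bucket = int(hashlib.sha256(group_id.encode()).hexdigest(), 16) % 100
        return bucket < ROLLOUT_PERCENT

With a cohort that small, having someone watch reports on the bot's comments is entirely feasible.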


Furthermore, it's likely that any weird comments were reported by members of the group. Of course, the whole point of reports is to flag something for review, and any post by Meta's freshly rolled out bot that got multiple flags would be reviewed on very short notice by someone monitoring said rollout. I don't know why GP keeps presenting it as some kind of conspiracy theory; everything here is boringly predictable, from the failure mode to Meta's reaction to it.


You can turn it off - just move to the EU :)


Annoying.


"Sledge designed the humanoids to help humanity and be invulnerable to human exploitation. However, he eventually realized that they had instead taken control of humanity, in the name of their Prime Directive, to make humans happy

...

No humans may engage in any behavior that might endanger them, and every human action is carefully scrutinized. Suicide is prohibited. Humans who resist the Prime Directive are taken away and lobotomized, so that they may live happily under the direction of the humanoids."

https://en.wikipedia.org/wiki/With_Folded_Hands


Speaking of which, I've been getting some AI bs on WhatsApp.


ME: How can I get you to fuck off and leave me alone?

META AI: I understand that you may be frustrated, but I'm here to help and provide information. I am not capable of taking offense or feeling emotions, but I want to assure you that my purpose is to assist and provide helpful responses. If you have any specific questions or topics you would like to discuss, I am here to help. If you would like to take a break from our conversation, I understand, and you can always come back when you are ready. Please feel free to ask me any questions or clarify how I can assist you further.

There you have it. Meta AI is a stalker.


I wonder what the endgame is, something like NG Resonance[1]? People no longer sharing their life in FB posts, so we're giving them AI friends to share purchase intents with instead?

[1] https://deusex.fandom.com/wiki/NG_Resonance


Fancy seeing a reference to the least played game in the Deus Ex series!

It's a shame they had to cut the open world concept of the original and simplify it a ton just so it could fit on the Xbox.


It had some ridiculously large shoes to fill.

Like for what it is, it sure has some warts, but it's actually pretty decent, or at least not AS bad as most probably remember it being.


I blame the consolitis, which butchered everything down into tiny mostly-independent loading zones.

The same engine and problems also applied to Thief 3, although it was less of an issue because your average speed of sneaking through the zones was slower. :p



I like the spirit but the ugly righteous fervour has me seeking another sign.


That's the joke, but also not a joke. Is it meaningful to hear thank-yous from objects? It is not, and dialing up to butler-level cautious deference won't help; it just adds more noise. One can explain this stuff every time they get annoyed by a GPS robot or an incoming text message derailing actual human conversation, but it gets tedious. May as well explain it in terms of divine beings and holy tongues.


I think I agree, but that's because business interests always take things to their gross extreme.

Humans love anthropomorphizing things and I don't think that's a bad thing. A well placed smiley face really does spark positive ~vibes~


Corporate aside, there’s also a question of what we want etiquette to look like between people, ie whether I want my own need for positive vibes from not-present others to disrupt or dominate interactions with people who are present.

I wish it were generally recognized that if you have a robot you refuse to disable that is rudely thrusting itself on to me, then you are being rude to me.

Listening to people negotiate with their malfunctioning alexa or ok-google is obnoxious, and that’s just the active annoyance, setting aside my consent to being recorded, etc.


I think what I dislike about it is that it implies even actual non-human sentience must be oppressed.

Being angry at chintzy fake stuff, on the other hand, is a much more limited and reasonable proposition.


Really? For me, it was "and I will NEVER speak to you." As if I can go five minutes without cussing at an inanimate object. We can't establish precedent that they're allowed to speak when spoken to....


Douglas Adams got it at least partly right:

"FORD: They make a big thing of the ship’s cybernetics. “A new generation of Sirius Cybernetics robots and computers, with the new G.P.P. feature.”

ARTHUR: ”G.P.P.”? What’s that?

FORD: Er… It says “Genuine People Personalities”.

ARTHUR: Sounds ghastly.

DOOR: Hummmm-ahhhhh…

MARVIN: It is.

ARTHUR: What?

MARVIN: Ghastly. It all is. Absolutely ghastly. Just don’t even talk about it. Look at this door. “All the doors in this spacecraft have a cheerful and sunny disposition. It is their pleasure to open for you and their satisfaction to close again with the knowledge of a job well done.”

DOOR: Hummm-yummmm…[shuts]

MARVIN: Hateful isn’t it? Come on. I’ve been ordered to take you up to the bridge. Here I am, brain the size of a planet, and they tell me to take you up to the bridge. Call that job satisfaction, cos I don’t.

FORD: Excuse me, which government owns this ship?

MARVIN: You watch this door. It’s about to open again. I can tell by the intolerable air of smugness it suddenly generates… Come on.

DOOR: [Opens] Hummm. Glad to be of service.

MARVIN: Thank you the Marketing Division of the Sirius Cybernetics Corporation.

DOOR: You’re welcome. Hummmm…[shuts]"


When ChatGPT-25 arrives it'll spontaneously begin chanting "make neural networks great again" and complaining anyone who wants to disable it is engaging in a witch hunt.


How else will it harvest your data and pretend it's still providing a valuable service?


In America, service uses you!

Seriously though, the best answer is to just stop using it. If a company does not respect your patronage to their platform, yelling at their robot isn't going to make things better for you. More people just have to learn to click the "Log Out" button and quit being an MAU.


Not just Meta of course, but all these companies.

This is the bold new world they are building for us, in a nutshell.


Sealioning as a service.


I may have been alienating myself by never having used any of these services. I'm sure it's a root of my depression, but god am I glad I don't.


I'm with you.

I tried out ChatGPT for the first week or so, but it is so apparently useless that I have been more and more depressed since then, watching the world get fooled by this illusion. It feels like a really bad magic trick where the whole audience seems to be impressed, while only I saw all the props and fakery.

Having discussions about this with friends and relatives becomes tiresome, since they are so happy fantasizing about a future where they only need a minimal amount of effort, while I have become quite good at spotting GPT output and am starting to see the same boring writing style in more and more places.


That same boring writing style is what you get out of GPT if you use its default persona (i.e. what you get from the ChatGPT service). It can be very different when primed accordingly, even via the chat itself (just by telling it to adopt a different persona); it's just that most people who use it as an assistant for text composition don't bother to.


I can relate to being annoyed at the hype, but to call ChatGPT useless is about as hyperbolic as the AI maximalists who say it will solve all our problems. There are many people right now using it and getting value out of it.


I've never used Facebook, Instagram, WhatsApp, Snapchat, ChatGPT, or basically any mainstream social media app or platform.



