Hacker News

What's protecting smaller online spaces from AI?


Nothing is bulletproof, but more hands-on moderation tends to be better at making pragmatic judgement calls when someone is being disruptive without breaking the letter of the law, or breaks the rules in ways that take non-trivial effort to prove. That approach can only scale so far though.


Essentially, gatekeeping. Places that are hard to access without insider knowledge or special software, places that are invite-only, places that need special hardware...


Or places with a terminally uncool reputation. I'm still on Tumblr, and it's actually quite nice these days, mostly because "everyone knows" that Tumblr is passé, so all the clout-chasers, spammers and angry political discoursers abandoned it. It's nice living under the radar.


Another important factor is whether the place is monetizable. Places where you can't make money are less likely to be infested with AI.


Or a place that can influence a captive audience. Bots have been known to play a part in convincing people of one thing over another via the comments section. There's no direct money to be made there, but shifting opinions can lead to sales eventually. Or prevent sales for your competitors.


Not enough financial upside for it to be worth the trouble.


The fact it's text-only means we only get AI text and not images, I suppose. lmao.


Economics. Slop will only live where there are enough eyeballs and ad revenue to earn a profit from it.




