Have you pondered the moral implications of your startup?
27 points by ecommercematt on Jan 18, 2008 | 34 comments
I'm working on a start-up that, if successful, has the potential to become a major reference for human knowledge (think Wikipedia in terms of vastness and impact, although it isn't particularly similar to Wikipedia). We won't be able to control who uses our resource, and as it is community-driven, the specific direction our site will take is unknowable.

Call us naive optimists, but we're convinced that, for the most part, our startup will be used for good purposes. We're strongly against censorship, and we don't think censorship really works on the web anyway, so we'll be leaving this up to chance.

An example of a startup that might be slipping to the dark side is Reddit. I know Reddit was backed by YC, so anything perceived as a bash might not be well-received here. Nonetheless, I'm certainly not alone in thinking that the Reddit community has taken a turn for the worse. My biggest problem with Reddit today is its lameness. However, Reddit is not just lame; it is increasingly used to propagate toxic misinformation and hate speech. I wouldn't be so troubled if I didn't see so many members of the Reddit community embracing those sentiments.

I'm pretty sure that the Reddit team doesn't condone hate speech, and I'm not proposing they censor the site (it wouldn't really be possible, anyway), but it makes me wonder:

What do they think about this segment of the community they built? Do they wish they'd done anything differently? Is there anything my team can or should do to minimize our site's use for what we consider to be immoral purposes? Has anybody here spent much time thinking about this issue? Have you come to any conclusions?



I'm surprised how often this comes up in YC startups, actually. I'm not sure if this is true of founders in general (it might be) but practically everyone we've funded not only wants to not be evil, but actively wants to make the world better.

This was certainly true of the Reddits. And in fact I think they succeeded. If you make a site where everyone can say what they want, some people are going to say things other people don't like. But isn't this a net improvement over the preceding model, where there were a few narrow channels for the distribution of news, and the companies that controlled them controlled the news? I'm not sure what you mean by "hate speech," and I doubt you are either, but I think we're net ahead if we have a world in which it's harder for the powerful to suppress news, even if a few people take advantage of this new openness to say things that offend others. In fact, some of the best ideas started out that way.


I chose to join my current startup precisely because of the moral implications. It's a "change the world" (i.e., the real world) kind of idea, the linchpin of which is technology. I wish I could say more about it than that, but we're hush-hush about it for the time being.

It's interesting that I'm not alone here, that other founders here (YC-funded or otherwise -- we're not) also aim to improve the world somehow with their companies.

I saw an article recently in the WSJ [1] about how young people now are more philanthropic than previous generations. Whereas in the past, philanthropy was the domain of the rich (think Andrew Carnegie), the Internet now allows individuals, even children, to each contribute effectively in small ways. Combined together in large numbers, these small contributions can be significant.

Of course, I'm not only talking about money here. By "contribute", I also mean knowledge (e.g., Wikipedia), resources (e.g., OLPC's "buy one get one"), and so on. The "new philanthropy" is all about the sum of small, individual contributions. I think the startups around here are part of this trend.

[1]: http://online.wsj.com/article/SB118765256378003494.html?mod=...


It doesn't surprise me that this is a common concern in YC startups. It also doesn't surprise me that the Reddit guys gave the issue consideration.

I agree that limiting the ability to suppress news, even if some people take advantage of increasing openness to advance harmful agendas, is a positive trend. I used to regularly derive enjoyment from Reddit, and occasionally, I still do. Nonetheless, I think Reddit's struggle with various forms of abuse (by idiots, racists, spammers, etc.) is relevant to those of us working on community-driven, web-based projects (even if "community-driven" and "web-based" are the only traits my startup shares with Reddit). If I were one of Reddit's founders, I'd be troubled by its current state, and I want to do whatever I can to prevent similar problems from arising on our site.

As to what I meant by hate speech, I'll lean on Justice Potter Stewart's words regarding the definition of pornography:

"I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it..."

http://en.wikipedia.org/wiki/I_know_it_when_I_see_it

There is plenty of hate speech on Reddit, and I know it when I see it.

The first time I discovered your (pg's) existence was when I received a link to your essay "What You Can't Say."

http://paulgraham.com/say.html

You said in that essay, "Like every other era in history, our moral map almost certainly contains a few mistakes." I agree with you. Nonetheless, I'm sure that it isn't a mistake to consider white supremacist propaganda, for example, to be harmful to society.

I'd be surprised if you haven't encountered hate speech on Reddit, but perhaps you haven't. I imagine it wouldn't take you long to find, if you tried, as I have stumbled across it quite regularly by accident. Regardless, the particulars of my critique of Reddit aren't relevant. Figuring out how to make a community good, and keep it good, is my goal. The consensus here appears to be that an ounce of bad sprinkled in with tons of good is still worth it. I agree, but I'm still going to think about how to reduce that ounce to a gram, or less. Any ideas would be appreciated.


Nonetheless, I'm sure that it isn't a mistake...

If you'd read the essay carefully, you'd understand that in every era "right-thinking people" are sure that the things they want to ban are bad. In 1700, after explaining how broad-minded you were, you'd be saying "Nevertheless, I'm sure it isn't a mistake to consider atheistic propaganda to be harmful to society." And you'd be wrong.

The whole point of the essay is that you have to step out of yourself to have any hope of seeing beyond the prejudices of your time, and that this is extremely hard. Your casual use of blanket labels for forbidden ideas is a sign you don't appreciate the difficulty of the problem here.

You'll notice I have never said what kinds of speech I think should be banned. That's because I've seen enough to know that that second clause following "I'm pretty open-minded, but..." is very likely to be mistaken. Like someone saying that some open mathematical problem will never be solved, you're setting yourself up to look like a fool to future generations.

So your use of "sure" to me is very convincing evidence that your filters will generate a lot of false positives. I spent a whole month thinking about this problem. WYCS took the longest of any essay I've written. And I would be very reluctant to use that word "sure" in this kind of situation. So either you understand this stuff so much better than me that you've passed through uncertainty and back into certainty, or you simply have the confidence in your opinions that everyone is born with.


You're up against hindsight bias here, PG. (http://www.overcomingbias.com/2007/08/hindsight-deval.html) People don't realize how absurd the future looks when you have to predict it in advance. (http://www.overcomingbias.com/2007/09/stranger-than-h.html) 500 BCE seems much stranger than 2008 CE, which seems very normal by comparison - so people look back and see a steady progression toward normality, things getting less absurd over time, and they expect this trend to continue. (http://www.overcomingbias.com/2007/09/why-is-the-futu.html)

People don't realize how counterintuitive moral changes look when you have no advance idea of where you're heading. (http://www.overcomingbias.com/2007/03/archimedess_chr.html) So they don't use the kind of cognitive strategies that would have been necessary for, say, Archimedes of Syracuse to question slavery. (http://www.overcomingbias.com/2007/03/chronophone_mot.html)


I enjoyed the hell out of your WYCS essay, and the care with which you wrote it shows. I'm not contesting the arguments made in that essay. If, in this thread, however, you're contending that all ideas are completely relative, and that we don't have any right to pass judgments on them, then I have to disagree. If you're arguing that people should be allowed to speak freely in society at large, no matter how strange or offensive their ideas might be, then I agree. I don't think speech of any kind should be banned, and I fully appreciate how difficult it is to come to genuine truth. Aside from the existence of the self, what is truly knowable, anyway?

Be that as it may, in order to assess the quality of a community-driven site, one has to place value judgements on its content. Sometimes these value judgements fall short of perfection, but they're necessary and unavoidable.

It is interesting that you raised the prospect of filters yielding false positives. If you were talking about my judgement as an individual, then I'll admit that I'm blinded by the human condition, but I'd like to point out that I relied on blanket labels to avoid getting into a discussion of the minutiae of specific posts on Reddit. If you were referring to filters integrated into the software we'll use for our startup, our approach to avoiding the "garbage" (we can agree that there is garbage out there, and that avoiding it is a good thing, right?) is to narrow the focus of the site, rather than to build in karma-based restrictions and enhancements of individual users' influence.

Hopefully I'll be posting a link here fairly soon, so all of my vague mumbo-jumbo will make more sense, and it'll take less imagination to see how our site could be used for negative purposes and how that might be limited.


Yes, I have pondered the moral implications of my startup.

Cryptography comes with a price: The bad guys get to use it too. Where my startup is concerned, this means that organized crime would have access to TLA-quality secure backups (and by "backing up" data from one system and "restoring" it to another, secure communication, too).

In the end I decided that since (a) criminals would need to pay for the service (which makes tracking them down easier), and (b) I'm going to be logging IP addresses, the extent to which this would assist criminals is fairly limited; and that the legitimate needs people have for secure backups far outweigh the potential for abuse.
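For what it's worth, the property under discussion here — that the service never sees plaintext — comes from encrypting on the client before anything is uploaded. A minimal Python sketch, purely hypothetical (the Fernet recipe just stands in for whatever scheme the real service uses; this is not the startup's actual code):

    from cryptography.fernet import Fernet

    # The key is generated and kept on the client; the storage service
    # (and anyone who subpoenas it) only ever sees ciphertext.
    key = Fernet.generate_key()   # losing this key loses the backup
    box = Fernet(key)

    plaintext = b"contents of the backup archive"
    ciphertext = box.encrypt(plaintext)   # this is what gets uploaded

    # "Restoring" to another machine is decrypting with the same key.
    assert Fernet(key).decrypt(ciphertext) == plaintext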


Er, not to mention that anyone who wants secure comms can just use GnuPG and friends.


You must also consider the potential for abuse by governments or other individuals against your users. This is particularly relevant to online backups. You aren't facing a big ethical problem at all, if you ask me. Privacy is a basis of US law, and all that.


You can't make people good.

You can make yourself a better person and lead by example, but you can't stop people from doing something like spreading misinformation and hate speech. It has been going on since the beginning of civilization. If your startup has the right moral and socially beneficial intentions, then that is really the most you can do. If someone comes along and uses your tool for evil, that doesn't mean you shouldn't have created your tool.

In scientific and technological research, there are often amazing uses that benefit humanity. For example, splitting the atom resulted in nuclear power, which is arguably a great invention for mankind. However, due to human nature, it also created the greatest weapon mankind has seen. You cannot blame the researcher for facilitating this; you can only blame human nature for abusing the technology.


> created the greatest weapon mankind has seen

Stretching it a bit... I think the guy who made machine guns happen slept worse at night. You must realize, though, that both were obvious inventions that were going to happen no matter what.



Stop worrying. Now.

Every nanosecond you worry about this is a resource permanently lost from where it belongs: your startup.

What if Henry Ford had hesitated because he worried about drunk drivers? Or Edison and Bell had hesitated because they worried about drug dealers and criminals? We'd still have automobiles, lights, and phones. But you would never have heard of any of them.

You have little or no control over this. Have a little faith in others; stepping forward to confront evil is their thing. In the meantime, do yours.


I think that if there is something you can do to prevent or discourage the use of your creation for evil purposes, then you should do everything in your power to do so.

Sometimes doing so is also critical to keeping your users, especially if they entrust you with something useful but dangerous.

At Loopt I think about things like this a lot, and I hope that doing so has made the service safer and harder to subvert for common questionable or evil purposes.

Simply ignoring the issues would have been negligence.


Thinking about how to nurture your community is not a waste of time. The right community is vitally important to many startups, certainly not less important than the technology (Reddit and Wikipedia are both good examples).

He has control over the community in the way he implements its mechanisms of interaction. Crash course here:

http://www.joelonsoftware.com/articles/BuildingCommunitieswi...


Thanks for your feedback. I agree that protracted hand-wringing over the potential outcomes of my startup would be a waste of time. Nonetheless, I think it is worth pondering the issue.

In a sense, the specific problems I raised with Reddit are just a subset of a larger issue: community-driven sites rotting at their core. Sometimes they rot because racists (and other types of bigots and hate-mongers) spout off, sometimes because idiots are being idiots, and sometimes because of spammers.

I raised the topic because I've heard plenty of ideas on how to reduce the number/influence of idiots and spammers on community-driven sites, but I have heard far less about how to prevent hate from taking hold. This is important not just because racism and other forms of bigotry harm society, but because prominently featured prejudice-based hatred diminishes the credibility of the information a given community creates/organizes just as much as idiots and spammers do.

Our concept is not really like Reddit (or Wikipedia), and we think it is less vulnerable to the rot-inducing influences I referenced above.


The hottest fires in hell are reserved for those who remain neutral in times of moral crisis.


How is building something people can use (or possibly misuse) the same as remaining neutral?


It's a moot point anyway, because most of the time, when you make a moral stand in your product, you end up making the experience better for its audience. The tricky part is knowing which types of people you'll lose or gain based on those moral stands.


Long time lurker, finally registered to respond to this.

On controversial matters like race, religion or abortion, no matter what your position is and how sure you are about it, there almost certainly exists someone on the other side who's better educated/informed and just as sure as you. My ideal community site would allow the most extreme opinions, but not let any group jam the signal for others. For example, the Wikipedia page "Race and intelligence" would be much improved if it were split into two pages side by side, each editable by only one of the factions.
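To make the split-page idea concrete, here's a toy Python sketch. The whole model, the class, and the names are invented for illustration (Wikipedia works nothing like this); it just shows one topic with two side-by-side versions, each editable only by members of its own faction:

    class SplitPage:
        """One topic, two side-by-side versions, each faction editing its own."""

        def __init__(self, topic, factions):
            self.topic = topic
            self.members = {f: set() for f in factions}  # faction -> users
            self.text = {f: "" for f in factions}        # faction -> content

        def join(self, user, faction):
            self.members[faction].add(user)

        def edit(self, user, faction, new_text):
            # Edits to a faction's version are gated on membership.
            if user not in self.members[faction]:
                raise PermissionError(f"{user} is not in {faction}")
            self.text[faction] = new_text

    page = SplitPage("Race and intelligence", ["side_a", "side_b"])
    page.join("alice", "side_a")
    page.edit("alice", "side_a", "Side A's version of the article...")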


That's an interesting perspective. The problem that initially jumps out at me is what to do when there aren't two clear sides to an issue. Any ideas on handling complicated, muddled, and contentious issues with multiple sides?



Yes, there are things you can do to drive the community in the way you want, without resorting to censorship.

There are softer measures you can take: in Reddit's case, for example, make the votes of trusted editors (and of people who vote like them) count for more in the ranking algorithm. Is this ethical? Well, you have to decide for yourself.
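A minimal Python sketch of that trust-weighted ranking, to make it concrete. This is hypothetical, not Reddit's actual algorithm; the trust scores, user names, and data model are invented for illustration, and weighting "people who vote like them" would further require propagating trust by vote similarity.

    from dataclasses import dataclass, field

    @dataclass
    class Story:
        title: str
        votes: list = field(default_factory=list)  # (user, +1/-1) pairs

    def score(story, trust):
        # Each vote is scaled by the voter's trust score (default 1.0),
        # so trusted editors move rankings more than brand-new accounts.
        return sum(direction * trust.get(user, 1.0)
                   for user, direction in story.votes)

    def rank(stories, trust):
        return sorted(stories, key=lambda s: score(s, trust), reverse=True)

    # One upvote from a trusted editor outweighs two from unknown accounts.
    trust = {"alice": 5.0}
    stories = [Story("A", [("bob", 1), ("carol", 1)]),
               Story("B", [("alice", 1)])]
    print([s.title for s in rank(stories, trust)])  # ['B', 'A']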

A few examples to think of might help:

- Western civilization generally spreads its values along with its technology. Is this OK?

- Is it ethical to support democracy in cultures that don't grow it themselves?

- If you are the government of some country and have a tribe practicing female genital mutilation and considering it normal (including the women), what is the ethical thing to do?

- Is Scientology ethical? Is it ethical to constrain them?


I'm not proposing they censor the site (it wouldn't really be possible, anyway)

Sure it is. The problem is that censors are always idiots: They define things as "good" and "evil" rather than "true" and "false". This tends to leave them and their compadres reading nothing but falsehood.


Censorship is usually not about filtering true vs. false information. From Wikipedia:

Censorship is the suppression or deletion of material, which may be considered objectionable, harmful or sensitive, as determined by a censor.

"Censorship" implies a value judgement, not just filtering out content that is false (although even then, it can be hard to determine what is unquestionably untrue). That's why it's so subjective and difficult.


I think defining things as "true" and "false" is just as shackling though. You end up generalizing something that shouldn't be reduced to Boolean logic. You can't understand Wikipedia by looking at its bytecode, just as you can't understand a human by reading their genome. They're more than the sum of their parts.


Regardless of the criteria for censorship, it'd be ineffective on a community like Reddit.

Wikipedia's process for ensuring quality of content is admirable, if imperfect. They benefit from the community's clearly defined constraints and overall mission.


Just curious, can you give some examples of hate speech on Reddit? I'm thinking of the recent James Watson controversy, but that seems pretty fuzzy. Any clarifications are appreciated.


You can't throw out accusations of hate speech at the Reddit community without some specifics, which I've yet to see you mention in this post.


That which lasts is self-correcting.


If I understand it correctly, the idea with Reddit is that the community can decide what news it wants to see, by posting links and then filtering them. If the community wants hate speech (or more of those Ron Paul links) and Reddit serves up hate speech, then isn't the algorithm working?

I suppose you could actively ban users who skew the community in that direction -- though you'd have an endless cavalcade of sock puppets to deal with.


My current toy idea is a real-time captcha trading market. On the one hand, spammers might pay to use it. On the other hand, it will inspire websites to come up with something better than captchas.

In any case, making a real-time market is too fun to worry about implications, don't you think? :)
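Since half the fun is the market itself, here's a toy Python sketch of the matching core such a market might need. Everything is invented for illustration (the prices, the "cents per solved captcha" unit, and the omission of price-time priority); it just shows a trade clearing whenever the best bid meets the best ask.

    import heapq

    bids, asks = [], []  # bids stored negated so heapq acts as a max-heap

    def place(side, price):
        if side == "buy":
            heapq.heappush(bids, -price)
        else:
            heapq.heappush(asks, price)
        # Match while the highest bid covers the lowest ask.
        while bids and asks and -bids[0] >= asks[0]:
            bid, ask = -heapq.heappop(bids), heapq.heappop(asks)
            print(f"trade: solver asked {ask}c, buyer bid {bid}c")

    place("sell", 3)  # a solver offers to solve captchas at 3c each
    place("buy", 2)   # no match: best bid 2c < best ask 3c
    place("buy", 4)   # crosses the book, so a trade clears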


"Things happen, what the hell!" - Terry Pratchet, The Hogfather.


Bah, humbug. ;-)



