> I think startups could certainly offer bug bounties. It is a nice gesture, and more importantly, platforms like HackerOne provide researchers a legal path to the disclosure of vulnerabilities.
Unfortunately most startups are obsessed with growth and don't really care about security, privacy, etc.
I disagree for a different reason: I don't think a bug bounty program is the best fit for most startups. A few reasons why:
* HackerOne is very expensive for most startups; IIRC it's around 30k USD/year just to be hosted on their platform (though pricing probably varies by company).
* If you don't want to use a platform like HackerOne, you're in tax hell making payments to people all over the world.
* The bounties themselves can get massively expensive if you want to pay competitive rates.
* The time it takes to handle all the reports is massive. You might imagine you'll get cool and interesting research, but 99.9% of reports are copy-pasted output from automated scanners, out-of-scope findings, low-severity issues like a missing X-Frame-Options header, or straight-up false positives. It's frustrating, demotivating, and time-consuming.
I think a self-hosted responsible disclosure program is better and more sustainable for most startups. Add a security.txt [0] to the website. You might still get a bunch of low-quality reports, but at least you give people a structured way of disclosing findings.
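For anyone unfamiliar with the format, here's a minimal sketch of such a file, served from /.well-known/security.txt. The field names follow the securitytxt.org spec; the example.com addresses, URLs, and date are placeholders you'd replace with your own:

```
# Placeholder values; replace with your own contact address and policy URL.
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Contact and Expires are the essential fields; the rest are optional extras that help researchers figure out what you want reported and how.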
I actually don't know the exact details (I don't work in finance), but that's one of the problems we identified when looking into it.
I think it has to do with how bug bounties generally work. Since bug hunters aren't sending an invoice from a registered company, you have to pay employer tax or something similar. But as I mentioned, I'm not entirely sure, since I don't work with that part. It might also differ depending on where the company is registered.
Users broadly have no way of analyzing security and privacy. Even most software developers don't have the time or expertise to reverse engineer and analyze even one app, let alone every release of every app they use. They just have to take it on trust. For things like cars we have mandatory standards for vehicle design, crash-testing requirements, fuel-efficiency and emissions standards, etc., to try to make sure that people can expect a certain level of performance and safety. For software there's nothing like that.
I think users have no idea what’s involved in software development, but they expect any company to take care of the important things in order to bring a product to market, and that includes security and privacy. The “users don’t care” argument, I believe, is a cop-out to make ourselves feel good.
I believe users won't care until repercussions from not caring become common. If not caring meant my odds of having my savings account stolen were 20% a year, then people would start caring. As it is, there are few directly attributable repercussions, just vague worries about future problems.
As it is, what are the odds of any one user running into issues because of a lack of security or privacy? They seem fairly low.
It seems like an opportunity for Apple (assuming they're actually better) to run some scare ads, e.g. "99% of people scammed via their computers were running Windows/Android", if that's true. If it were true, could a good ad campaign get people to care?
For internet stuff, how about "98% of the people who got their bank accounts hacked were compromised through data leaked from Facebook"? (Probably not true, and not provable.)
Maybe we need security insurers who audit software and only insure customers that use certified software? They'd have an incentive for their audits to be good, because they pay out if the software turns out not to be secure. And if their market were big enough, software creators would want to be certified.
It's even possible some standard sandboxes could help make certification easy. Add this sandbox to your app and you're certified? Maybe OSes that already have sandboxes would automatically qualify, but server-side you'd need audits?
> The “users don’t care” argument, I believe, is a cop-out to make ourselves feel good.
Strongly disagree. It's just the simple reality that most users don't care about security. The vast majority of potential consumers in the world don't choose digital products based on security. I always see this security angle touted on Hacker News, but I'm quite frankly shocked that people here don't have the self-awareness to realize that we live in an uber-tech geek's echo chamber.
Have you ever met an "average" Facebook user? They really, truly, do not understand or care about security. I'm very confident that even if you sat one down and walked them through all of the implications of what poor security even means, they would walk away and not change their behavior whatsoever.
The whole "users don't care" is really ignoring consumer's cognitive dissonance on security.
Adopting the stance that the "vast majority of potential consumers in the world don't choose digital products based on security" bites organizations in the ass time and time again when there's a breach.
> bites organizations in the ass time and time again when there's a breach.
The bite isn't very hard, though. The largest data breach of the 21st century in terms of users affected was Adobe's, and it cost them just $2 million in legal fees.
The only data breach I can think of that was financially painful is Equifax's. Everyone else just sent out a "reset your password" email, paid for a couple of lawyers and PR people, and went on with their business.
Can you name a company killed by a data breach? I can't think of one.
> but they expect any company to take care of the important things in order to bring a product to market.
They would show that expectation by withholding their money or abandoning the product when something bad happens. I don't see them doing much of that when companies have data breaches.
Agreed. There's just no way consumers can be expected to usefully verify things like privacy. In the real world, "it should just work" expectations get baked into things like building codes, the Uniform Commercial Code, commoditization rules, and food safety standards.
If we don't come up with something like that as an industry, eventually somebody else is going to do it for us. And we won't like that one bit.
There already are companies that advertise privacy as an edge, but either they are sexy enough for consumers or they aren't. DuckDuckGo isn't sexy. Firefox isn't sexy. Linux-based desktops aren't sexy.
If these things caught a sparkle in customers' eyes, we'd all know it by now.
I think this is an important point. Startups trying to make privacy ‘cool’ need to pay more attention to branding. While I love DDG, the name and the whole flavor of their site will necessarily limit their reach to technical people.
In terms of popular media, I thought Mr. Robot did a pretty good job of making privacy/security tech seem sexy, even if the main character is usually bypassing it.
I would agree with you, except the number of people still using Facebook after Cambridge Analytica kinda proves that people don't actually care, even the ones who say they do.
If a massive privacy breach that potentially shaped history was not sufficient, and all the follow-up stories on it were not sufficient, then customers have basically decided that, for practical purposes, they don't care.
What percentage of the general public do you imagine could give a coherent summary of the Cambridge Analytica scandal? I'd be amazed if it were as high as 5%. I get that a lot of people here know about it, but it had no direct impact on most users. It's not surprising that they don't understand this any better than a host of other subtle but important things.
People can't all devote time and energy to everything that's important. It's too big a world. We shouldn't conclude that those things don't matter to them. Our current emergency is a fine example: until a pandemic happened, few knew enough to worry about it. But that doesn't mean we were indifferent to the outcome.
> People can't all devote time and energy to everything that's important.
Well, if users can't spend any time on it, it isn't important enough for the business to care about either; users will be briefly miffed and then move on to what they consider important.
User unhappiness matters only so far as it changes behavior.
Industrial food companies were taking significant liberties with food quality and safety. And it was apparently fine! People appeared not to care. Then Upton Sinclair's book The Jungle happened to catch the popular imagination, leading to a wave of outrage and investigations. People still "didn't care" in the sense of, say, not buying industrially produced food. But they did very much care, leading to a regulatory regime that has lasted more than a century.
Users do care, but they tend to guess at the quality of the entire product, including its security, from the slick look of the product or from marketing pages full of fancy words. In reality, all of these are hard to verify even for tech-savvy users.
Amazon/MS/Google should do a better job of making it hard to leave things unprotected. They are no longer new services, and they no longer have the excuse of having to avoid friction.
I think users care, given the outrage when leaks or “creepy” ads occur. But the problem is that users have an impossible time reasonably evaluating apps for security and privacy, so there’s very little market incentive for app makers to make secure and private apps.
I work at a startup, and unfortunately we don't have cash lying around to offer a reasonably priced bug bounty. And I think that probably applies to most startups.