"What these companies are doing is illegal in Europe but they do not care," said Ms Eckert, adding that the research had kicked off a debate in Germany about how to curb the data gathering habits of the firms.
I think it’s important to be skeptical of legislation as a solution to these things. The EU/UK cookie law is a cautionary tale, for example. After all that talk we ended up with a law that (effectively) mandates boilerplate nag screens and no change in behaviour. Even if it had clearer language to distinguish allowable from illegal cookie use, it would still be very difficult to enforce.
I don’t mean to say legislation has no part to play. I’m just saying that the politician-outrage-to-legislation sausage factory has produced some duds in this area. I wouldn’t count on a solution coming from this direction.
Speaking of enforcement… Most countries have an advertising standards authority. They create the rules and such. If an ad is (for example) a blatant lie, they can call up the Press/TV/Radio station and get the ad removed. Online, it’s not obvious what authority they have, or how they would enforce that authority at all.
Where advertising standards are still not broken is regulated industries. If a locally regulated bank advertises “one weird trick to double your savings,” the advertising standards people can go to the regulator. They have a number to call and genuine threats to make... enough to promote self-policing.
Online, even reputable newspapers allow shockingly crappy ads. Sleazy data collection, snake oil, fake products, click farms, scams, even fake news (ironically). Real shyster stuff.
This is the visible end of the online advertising stick: the ad content itself. Here we already have legislation and established rules, yet enforcement is nonexistent. Dealing with the unseen data-collection end of this stick is even harder.
I think it's important to recognise how good - and effective - data protection laws are in some countries. The biggest challenge is US companies flagrantly ignoring them. In other words, the main thing holding Europe back from protecting data is that the USA is so lax. I think in that regard, more legislation could be massively beneficial.
Err, it's not clear that EU law applies to US companies online, nor should it. There are many things illegal in the EU that are quite allowable in the US including a lot of freedom of speech issues.
One of the issues anytime we have legislation on the internet is how it applies. If XYZ company is based out of the US, markets itself to the US, but has a generally available website, does EU law apply to it at all? Even if an EU citizen uses it?
Alternatively, can every country force their laws on websites based out of the EU?
I agree, but that is going to change. There is new European legislation coming up (GDPR, effective May 2018) which will be a lot more restrictive than the current privacy laws. It will apply to any organization that monitors the online behavior of people in Europe, even if the company itself is in the US or somewhere else. With fines up to EUR 20M or 4% of worldwide turnover there will be a large incentive to comply. Already you can see big companies like Google and Mailchimp adapting to it.
I think it's fairly clear. If there are two parties to a transaction in two different countries, you can generally apply the laws of either country. That's the nature of international commerce.
The US for one feels entitled to go after gambling companies incorporated abroad.
If an American company doesn't like it, it can simply stop doing business in the world's largest market.
> Err, it's not clear that EU law applies to US companies online, nor should it. There are many things illegal in the EU that are quite allowable in the US including a lot of freedom of speech issues.
> The regulation applies if the data controller (organization that collects data from EU residents) or processor (organization that processes data on behalf of data controller e.g. cloud service providers) or the data subject (person) is based in the EU.
It's fairly clear that, absent a world government, national laws apply to companies outside of the nation based on the combination of will and ability of the particular nation to enforce penalties on foreign violations.
The ability side tends to be very high if the company at issue has any substantial assets held in the country trying to enforce the rules, or in another country with an interest in keeping that country happy.
In the end, it's pretty simple: online companies that make money in the EU (even just by selling ads) have something to lose and can be forced to comply. Others can't.
In some cases, the effect was even the opposite. I used to use self-destructing cookies but stopped, because so many websites required cookies just to stop showing the consent message. I know there are extensions for removing those banners, but the point is that the law made it harder to avoid the very thing it was meant to protect against.
Makes me think that laws like this should be more like programming (for humans), easily reverted when they are found to cause bugs in behavior, but somehow laws have to be fixed with new bills on top of the old.
The law books are the biggest bunch of out-of-date legacy code that nobody understands apart from a few overpaid experts, and they are in need of a massive refactoring.
So what else do you propose we should do? Demand that the companies self-police out of their deep commitment to ethics and communal wellbeing - and then be shocked and outraged when against all odds they don't?
I don't actually accept "the current state of affairs is bad" as a justification for new regulations.
The entire point of e.g. the cookie example is that often regulation is not only annoying but ineffective. You don't get to say "this is bad, self-enforcement doesn't work, therefore we have an obligation to pass a law that also won't work".
I agree that self-policing won't work, but it's entirely fair to say "all of the options I see are unacceptably bad, we need to talk about finding a new path instead of choosing any of these".
> I don't actually accept "the current state of affairs is bad" as a justification for new regulations.
And I don't accept "a previous regulation turned out to be pointless and annoying" as an argument against trying new regulations to fix "the current bad state of affairs." I don't think there's some magical third way to resolve issues with the failure of self-policing that doesn't involve regulation. Such arguments tend to have little positive outcome and usually just serve to discourage any attempts to actually fix problems.
Law, regulation, and enforcement are the solutions to reconciling the mismatch between selfish bad behavior and the public good. However, these regulations will undoubtedly have to be iterated on until a workable and effective solution is found.
> I don't think there's some magical third way to resolve issues with the failure of self-policing that doesn't involve regulation.
I'm still hopeful about finding a magical third way, but I admit I'm short on answers. "Find a tech solution" is the common one, but it feels like a coinflip at best - with smart people putting their talents on both sides of anonymity, there's no real reason to count on an eventual victory for privacy. (And, troublingly, each retrenchment seems to raise the skill requirement for privacy - it's entirely out of reach of most average web users.)
> Law, regulation, and enforcement are the solutions to reconciling the mismatch between selfish bad behavior and the public good. However, these regulations will undoubtedly have to be iterated on until a workable and effective solution is found.
And here's where we differ, I guess. I don't think that the cookies regulation is a one-off case of "pointless and annoying", nor do I think it will be iterated on until an effective solution is found. I think "iteration until success" is entirely absent from the history of tech regulation.
I think that regulators are currently too slow, uninformed, and biased to effectively address any but the most egregious technological issues - unless the right answer is "outlaw any behavior even vaguely resembling this" the task is simply beyond the bodies involved. The number of well-crafted, impactful, unsubverted laws protecting computer privacy and security is approximately zero. I think that this state of affairs can't be substantially changed with outreach or lobbying, but will remain a basically fundamental aspect of first-world governments for at least a decade.
---
So... I guess my answer for now is hopelessness. I agree that the current state of affairs is bad, self-policing is hopeless, and no third solution is forthcoming. But I also think regulation is generally both destructive and useless, which is marginally worse than nothing at all. I guess that too is "discouraging any attempts to actually fix problems", but I'll be damned if I see a way out.
In the spirit of finding new paths, here's an idea (and I'm not at all sure that this is a practical solution):
fail2ban---a tool commonly used in servers to automatically block remote attackers such as spammers---can be configured to automatically send an attack report to some e-mail address when it detects an attack.
A browser extension, such as PrivacyBadger, could do a similar thing when it detects a tracker. Let the report be sent to a law enforcement agency that is set up to deal with this kind of problem. If this agency collects enough data, it can do some data mining to find, for example, the most aggressive trackers. If that works, maybe such a tool could give law enforcement some actual power to punish these companies.
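As a rough sketch of the extension-side aggregation (the report schema, domains, and any receiving endpoint here are entirely made up for illustration, not a real PrivacyBadger API):

```python
import json
from collections import Counter

def build_report(detections):
    """Aggregate (tracker_domain, page_url) detections into a report
    suitable for forwarding to a hypothetical enforcement agency."""
    counts = Counter(domain for domain, _ in detections)
    return {
        "schema": "tracker-report/v0",        # invented schema tag
        "trackers": [
            {"domain": d, "hits": n}
            for d, n in counts.most_common()  # worst offenders first
        ],
    }

# Simulated detections, as a tracker-blocking extension might log them.
detections = [
    ("tracker.example", "https://news.example/a"),
    ("tracker.example", "https://news.example/b"),
    ("beacon.example", "https://blog.example/"),
]

report = build_report(detections)
print(json.dumps(report, indent=2))
```

The point is just that the reports arrive pre-aggregated and sorted, so the agency's "data mining" step starts with the most frequently seen trackers at the top.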
I'm not sure how practical the "to law enforcement" step is here, for two reasons. First, lots of these trackers are based in places where their data collection and sales are legal - the EU makes decent efforts to pursue foreign actors, but it's a serious limitation nonetheless. Second, it's not clear how often illegal action can be demonstrated. The authors of the DefCon presentation here only identified one of the ten extensions they suspected, because the rest couldn't be identified beyond reasonable doubt; with data being gathered by so many actors, pinning down any one of them is hard.
Even so, producing a bank of highly-suspicious extensions could be truly valuable. Law enforcement (sometimes) has the resources to pursue these things beyond a DefCon presentation - honeypotting might make it possible to advance suspicion to proof by providing fake data to specific trackers. And publicizing high-likelihood suspicions might produce publicity against bad actors and moves to transparency and guarantees by better actors.
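The honeypotting idea could work like a canary token: feed each suspected tracker a unique fake identifier, then watch leaked or resold datasets for it. A minimal sketch (the secret key, domains, and address format are all invented for illustration):

```python
import hashlib
import hmac

SECRET = b"example-secret"  # assumption: a key held only by the investigator

def canary_email(tracker_domain):
    """Deterministic fake address seeded to exactly one tracker. If the
    address later surfaces in a sold/leaked dataset, it points back at
    the tracker it was planted with."""
    tag = hmac.new(SECRET, tracker_domain.encode(), hashlib.sha256).hexdigest()[:10]
    return f"user-{tag}@honeypot.example"

def identify_leak(leaked_address, suspects):
    """Return the suspect tracker(s) whose canary matches the leak."""
    return [d for d in suspects if canary_email(d) == leaked_address]

suspects = ["tracker.example", "beacon.example"]
leak = canary_email("beacon.example")  # simulate the address turning up in sold data
print(identify_leak(leak, suspects))   # → ['beacon.example']
```

Because the canary is derived with an HMAC rather than stored per tracker, the investigator only has to keep one secret, and a match is strong evidence that the specific tracker passed the data on.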
Sometimes you have to accept that legislation isn't going to fix a problem. Piracy, spamming, and so on aren't going to go away, even if we catch people periodically.
You can certainly get the largest firms to comply, but the internet is full of people trying to trick you with fraud and viruses. You're not going to make those people care.
> And sometimes you have to accept that there will be zero change until legislation hits the big companies with an even bigger stick.
Were the problem parties in this article the "big companies"? Nope. They're already flouting existing laws. Are new laws going to stop them when the existing ones don't? Obviously not.
Regulations can't defend hordes of gullible suckers.
It's a two-part problem: consumers need to be equipped to defend themselves with basic critical thinking, in addition to regulations that punish abusive entities.
I haven't read it (or the underlying legislation being amended), so I don't know.
The eprivacy directive amendment (the "cookie law" I was referring to as a dud) didn't show its flaws until it was implemented by sites in what now appears to be the recommended manner^. Basically, it allowed a generic (and IMO useless) interpretation of informed consent.
Most of the discussion (parliamentary and otherwise) was around tracking and privacy. What seemed to have fueled it was retargeting. To the best of my knowledge, neither Google Analytics, FB advertising, nor any of the other major ad networks has made meaningful changes to its systems as a result of the legislation.
I think a change to browser defaults could have had (and still can have) a more meaningful effect than this legislation and all its regional children.
The overview you link to describes the legislation's intent. I agree with the intent. The problems are not there; the problems are with effectiveness in practice. As I said, I don't know the details of this amendment, so I don't have those details for you.
In the legislation this amends, many of the problems arise from the "user initiated" approach. You have a right to see what data FB has on you, but you need to know FB has info on you before you can ask about it.
It also (as this article suggests) needs to deal in a lot of grey area around anonymous data.
I'm not against laws. I'm not a hater. I'm just skeptical that these issues can be resolved solely or even primarily via legislation.
I think it's appropriate to be skeptical of any particular piece of legislation in the same way that it's reasonable to be skeptical of a proof of a centuries old conjecture. It doesn't follow that you should be skeptical of mathematics as a system.