Fuck Nuance [pdf] (kieranhealy.org)
129 points by networked on March 27, 2016 | 68 comments


I call this the "abstraction paradox".

There is no way an abstraction can fully capture what it abstracts. "Apple" could never equal a real apple, but the abstraction allows us to think about real apples. In science, all theories and models are abstractions. Hence, they are always approximations, and they can always be invalidated with edge cases that are genuinely backed by evidence.

But the opposite is also true. If a theory is based on evidence, then it will always be an approximation of the truth to some degree. Hence, there will also always be hard evidence that validates the theory -- ideally the evidence the theory is based on.

The ultimate test is application. If a theory or model has purpose, as in, has a use, then the value of that theory can be measured by its consequences. Otherwise, frankly, academics arguing over what has no consequence will have no consequence outside the circles of those who argue over it. I sense "fuck nuance" as a message to them.

We still use Newtonian mechanics where objects are big and slow enough. But we also use general relativity, quantum mechanics, quantum field theory, etc, where more accuracy is required. So we already know theories have limits and degrade with constraints, and we switch theories.

This is true of EVERY THEORY. And that is the abstraction paradox.

If you're arguing over someone just to prove them wrong, congratulations, you've just proved you're completely redundant. So fuck you. (in the spirit of the author)


>Otherwise, frankly, academics arguing over what has no consequence will have no consequence outside the circles of those who argue over it.

Sociology is mostly a series of status games played by academics. It's part of the humanities, which means the methods of play are peer group signalling and persuasive story-telling, not reality testing.

It's not engineering, where models actually have to predict reality with at least a minimal baseline of accuracy and reliability.

Engineers don't have a problem with nuance, and are suspicious of toy models that don't include details - because they're just not useful.

If you model flight characteristics for an airliner but don't deal with edge cases, the plane can fall out of the sky.

If you offer a grand social theory of X but get the details wrong, no one really cares, because sociologists have very little influence on practical policy, and the influence they do have is carefully chosen for political applicability by non-sociologists.

Occasionally useful insights fall out of social psychology and sociology. But they have limited applicability because our understanding of social psychology is pre-Newtonian - some things seem to work, but no one is really sure why - and our understanding of politics and economics is barbaric and barely better than medieval, on a good day.


Your entire post is a series of sociological assertions about academics and engineers without scientific evidence. I'm sure the irony was intentional.


That entire post can be reduced to two core assertions:

1) Details matter in engineering because products will actually be manufactured according to those details and if the details are wrong then planes would fall out of the sky. (Which we know they rarely do.)

2) Details don't matter in social theory because practitioners don't use academic theories to make policy. The theories can't be tested because they aren't used. Most social science "theories" are therefore a hypothesis without an experiment.

The only statement about people or social behavior is that engineers put their theories into practice with more frequency and rigor than sociologists, which is empirically true.


I see this problem as follows: sciences beyond physics have phenomena dependent on many more variables, many of which are unknown.

In physics, beyond the obvious variables, the rest can be dismissed as noise (you won't have an exact measure of anything; sometimes the "noise" is a new phenomenon yet to be discovered, or something known but overlooked in the situation, but usually it really is just noise).

Now, when you depend on more variables, your first guess might only predict the result 60% of the time (as an example). That's better than random choice, but still low predictability. Then you add more "obvious" variables and might get to, say, 80% prediction, which is good, but not as good as you can get in physics.

This applies to the social sciences, and even to harder sciences like biology or medicine.

In the end it's an issue of overfitting/underfitting.
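A minimal sketch of that tradeoff, with synthetic data and polynomial degrees that are my own illustrative choices (Python/numpy; nothing here comes from the thread):

    # A toy sketch of the underfitting/overfitting tradeoff: synthetic data,
    # numpy only; the polynomial degrees are illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 30)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.shape)  # signal + noise

    x_new = np.linspace(0, 1, 100)
    y_true = np.sin(2 * np.pi * x_new)  # the underlying "law"

    for degree in (1, 4, 25):
        coeffs = np.polyfit(x, y, degree)  # degree+1 free parameters
        train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
        gen_err = np.mean((np.polyval(coeffs, x_new) - y_true) ** 2)
        print(f"degree {degree:2d}: train MSE {train_err:.3f}, "
              f"out-of-sample MSE {gen_err:.3f}")

    # Degree 1 underfits (misses the signal entirely), degree 25 chases the
    # noise (tiny training error, poor generalization); a middle degree wins.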


This. It's absolutely about overfitting/underfitting. Just as our eyes see a lot more but can only focus on one thing, abstractions are just that: they focus on specific things. But the problem with physical reality is that it does not end, and everything is connected. Every time we make an abstraction we are severing the subject from its surroundings, which physically are inseparable. They are only theoretically separable. Hence scientific research begins with closing an otherwise open system.

Even when we weigh an apple, its weight is influenced by the gravitational pull of the sun and everything else. Hence what matters depends on application. If you're buying apples by weight, we can safely exclude these influences as inconsequential to the price of your apples. Good enough is what fits the purpose, and such approximations make good abstractions and good science.

It's easy to lose track of utility in social sciences, because history only happens once, and theories often appear to have no predictive power or future use. But if anything, social sciences provide us with introspection. They tell us who we are and how we've been. If you already practice this on a personal level, then you already know how valuable this is at any scale. If not, you should begin to practice it on yourself first, because the value of things is most easily appreciated through experience.


Abstractions need not be approximations. Schroedinger's equation is an abstraction, but as far as we know it's exact.


Schrodinger's equation doesn't even include special relativistic effects. It's not exact (at least not if you mean it exactly describes nature in general). Neither is Dirac's equation; it includes special relativistic effects for a single particle, but not general relativistic effects or multi-particle interactions (at least as far as I understand it).


The general form using the unspecified Hamiltonian operator is compatible with relativity.


The moment the theory is the reality, it's exact. Like cartoons. Like definitions. Like math. But physics and science is about reality. It's about the physical apple, not the word apple. A real apple is physical. There is no amount of theory or symbolism that will produce a real apple in the minds and systems that simulate it. That's the paradox.


>A real apple is physical. There is no amount of theory or symbolism that will produce a real apple in the minds and systems that simulate it. That's the paradox.

Actually, I see two problems with this notion.

First, there's no "paradox". That "no amount of theory or symbolism will produce a real apple" is not paradoxical. If anything, it's common sense.

Second, this common sense might very well be wrong (and that would be a real paradox this time).

In fact, what you call "real apple" might itself be a simulation (along with all the universe we know) -- created by higher order beings.

Still, you might say, our theories, us being inhabitants of the (simulated) universe, won't produce a real apple for us.

But who's to say that? It depends on the rules of the simulated universe. And we can't say with 100% certainty we know the rules of ours (if it's one such).

E.g. the rules could very well allow the thoughts and symbolism of its inhabitants to create new objects in the universe.

In that sense, the universe would be like Lisp (where data are code), or like using "reflection".


What is Schroedinger's equation an abstraction of? What details does it abstract from? How can it be that the omission of those details does not result in an inexact representation of the unabstracted, more complete, situation?

BTW, there are no exact solutions to the Schroedinger equation for most interesting systems: we only get numerical approximations to the values of interest from it.


https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method

Some of those numerical solutions were really important historically and led to the further development of numerical techniques.

Back to the main thread: I'm thinking of Lakatos' Proofs and Refutations, itself a play on Popper's Conjectures and Refutations, which was designed to escape from "induction"-based thinking, e.g. Hans Reichenbach's.

Long time since I've thought of those, thanks.


Schroedinger's equation isn't a theory. It's a law. And like other laws (F = ma), it doesn't account for a bunch of other things. So yes, it still is an approximation.


Abstractions of social sciences do need to be approximations :-)


Well, science doesn't have your "abstraction paradox".

Science uses a notion called the domain of validity for its abstractions. The domain of validity gives the conditions under which there is actually no ambiguity.

There are domains where science knows it knows nothing. These are called transitions.

In science you spend more time learning these domains of validity than the actual laws. In fact, you have to learn to calculate them by yourself.

When does Einstein's relativity apply? When you go too fast.

When does a photon behave like a corpuscle? When you try to count them. When does a photon behave like a wave? When you try to measure the interferences.

When do deterministic laws fail? When you have too many independent entities; then you use statistical physics. Depending on whether entities can share the same energy level, you then choose your distribution to integrate over all the states...

Science is not about knowing formulas, it is about knowing when you can apply them.


The abstraction paradox is with regard to any abstraction. Theories are abstractions. But science is not just theories. It's application as you say. Arguments should be over accuracy and application. To simply prove a theory wrong is redundant because it will always be possible. It's what can be referred to as a "dick move".

Your tone was misleading, but everything you said backs the theory. I must thank you.


    A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.
    — Alfred Korzybski, Science and Sanity (1933, p. 58)


> Verisimilitude is a philosophical concept that distinguishes between the relative and apparent (or seemingly so) truth and falsity of assertions and hypotheses. The problem of verisimilitude is the problem of articulating what it takes for one false theory to be closer to the truth than another false theory.

https://en.wikipedia.org/wiki/Verisimilitude


I realized as I was reading this that I know very little about sociology as a discipline. In particular, I'm unaware of any of its "Great Theories", a la General Relativity for physics, or Darwinian Evolution for biology.

Any candidates? I'm not trying to be snide, I really would like to learn more about this. The Wikipedia page for sociology describes "theories" that sound more like schools of thought.


I would posit the social signalling theories of Robin Hanson as perhaps a theory of this magnitude.

The problem is that, though most of Hanson's theories have plausible explanations and "feel" right, there is very little experimental evidence and very few studies are directed at that area. In fact, you might even say this is a symptom of that type of signalling itself: as homo hypocritus, we don't want signalling theories to be proven true, so we make up reasons for deriding signalling explanations as not rigorous enough to even merit being properly tested or studied. Of course, this too is no better, because then: do we actually have no evidence to believe in signalling explanations, or are we using signalling excuses to suppress the meaning of that evidence? It's somewhat self-defeating.

The next most important theory I would posit is Prospect Theory by Kahneman and Tversky. But there again, many aspects of it have not been successfully replicated, and too often limited experiments are generalized too much.

One set of social theories that I would love to see given more quantitative foundations is the research from the book Moral Mazes. Robert Jackall carried out some lengthy longitudinal studies in which he collected anecdotal narrative and interview data from employees at all seniority levels through a couple different firms. He was able to see people rise up the corporate ladder, people get fired, executives fired and replaced, large layoffs, etc., and conduct informal interviews with employees all up and down the ladder.

Based on his interviews, he put together a series of theories about how morality and ethical identity form within bureaucracy, and particularly how managers develop an understanding of ethical obligations to subordinates, and how certain factions of an organization (notably HR) come to function when there are competing concerns between ethical expectations and bureaucratic mandates.

In every job I've had, it's been a frustrating experience of basically saying, "Yep, I know this chapter from Moral Mazes ..." because companies operate almost as if they've read the book as a field guide.

Yet at the same time, it's still just a qualitative study that is scaffolded by academic sociology. If it were actually possible to collect that kind of data from corporations, I'd love to see some theories from the book explored more quantitatively.


Great comment. I'm saving it for when I look this stuff up. :)


The thing about sociology is that there simply aren't definitive theories akin to General Relativity. At best there are broad methodological schools and approaches (such as Structuralism, Utilitarianism, etc.). The difference is that none of these schools and approaches have been, or could (within present constraints) be, established by experiments as definitive as Newtonian physics, General Relativity, or the atomic theory of matter.

I'm not saying sociology is useless - rather it seems to work by combining a broad approach with specific statistical analysis to get some idea that the specific analysis has some merit.

The non-definitive quality of social science at this level is why sociologists talk about nuance. Just finding a correlation tends to be really sketchy evidence for causation - having a reason why the correlation would be causation is better than nothing, but it's still a thin reed.


The other thing that makes sociology different from those theories, is that for anything more specific/advanced, it needs a system of values.

When you measure magnetic forces or examine the behavior of gravity, you don't need that.

But when you study society, you need to stand on some foundations of what's desired, good etc -- at least in the very basic sense.

And even something as simple as "good is what's useful for society" doesn't answer that, because we can then ask "useful towards what end?", etc. Is something like murder justified? What about war? etc.

Those questions are not (and will never be) settled. At the most basic, the different schools come from different such understandings.


Latour's "Reassembling the social" is an interesting, non-ancient perspective.

I mean, people in the social sciences tend to tell you to read Durkheim and Weber and Marx, while no one would tell someone to learn calculus from Newton, Cauchy, or even Landau.


It seems like memetics wants to be for sociology what genetics is for evolution and biology, but as far as I know they haven't been able to trace "memes" back to their neurological roots. It's an interesting idea but won't dislodge prevailing schools of thought until/unless it has its own Watson and Crick moment.


Or, as the old chestnut goes, the old guard dies off.


Historical materialism



> I realized as I was reading this that I know very little about sociology as a discipline.

...You wrote in a text field on a website with other readers who will be able to upvote or downvote your comment.

Sociology tends to have less of an emphasis on theory than some other fields, but that doesn't mean there isn't a ton of research that's useful and relevant.


You want to know more about physics? You wrote that while sitting at a desk from which things will fall if you push them off.


My point was that there is a ton of research relevant to each component of that interaction... E.g. how text affects us emotionally, how peer feedback impacts motivation, the properties of synchronous vs asynchronous communication, etc.

I was merely trying to evoke "the sociological imagination".


Well, I'm sure there is, but it doesn't really give him any idea how to find it.


This article is a truly fantastic critique of how modern social sciences (besides economics) are carried out and publicized in modern society. Any critique one dislikes - typically "your theory predicts X which is false" - can be dismissed by "nuance", "things are more complicated", "that's different in unspecified way", etc.

In statistical terms, these dismissals are little more than a call for overfitting - if I can arbitrarily add more dimensions to a theory, I can make it fit any data set.
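In code, that overfitting point looks like this (a toy sketch with random data; nothing here is from the paper): with as many free parameters as data points, a "theory" fits even pure noise exactly.

    # Toy sketch: n free parameters fit n arbitrary points perfectly.
    # The "data" is random noise, so the perfect fit explains nothing.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    x = np.arange(n, dtype=float)
    y = rng.normal(size=n)                # arbitrary "observations"

    coeffs = np.polyfit(x, y, deg=n - 1)  # n coefficients for n points
    residual = np.max(np.abs(np.polyval(coeffs, x) - y))
    print(f"max residual: {residual:.1e}")  # ~0: a perfect, meaningless fit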

This is a fantastic article that can be applied in so many places.

In applied social sciences this critique is far easier to work around. "Yeah, my theory is pretty one-dimensional and does lack nuance. My code is in $REPO - feel free to run an experiment and see if your multidimensional theory generates any alpha over mine."


Why do you exclude economics? I see economics as practically leading the way with regard to dismissals like you suggest.

One example I have first-hand experience with is the low-volatility anomaly. CAPM-like theories predict a roughly linear, positive-sloping relationship between risk and return.

Yet decades' worth of various empirical measurements of the riskiness of assets and portfolios unequivocally demonstrates that it is a negative-sloping relationship (i.e., for your willingness to bear an additional unit of risk, you actually receive less return).

But everyone says basically the same thing. "Things are more complicated in real markets."

It would be reasonable to say that CAPM is a mathematical result, and the evidence suggests the mathematical model required for CAPM simply fails to correspond to reality. But many finance folks, especially in academia, won't admit it. For them, CAPM must also be a descriptive theory.
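For reference, CAPM's prediction fits in a few lines; the rates below are invented purely for illustration:

    # Sketch of the CAPM prediction: expected return is linear and *increasing*
    # in beta: E[R_i] = R_f + beta_i * (E[R_m] - R_f). Inputs are invented.
    def capm_expected_return(beta, risk_free=0.02, market_return=0.08):
        return risk_free + beta * (market_return - risk_free)

    for beta in (0.5, 1.0, 1.5, 2.0):
        print(f"beta = {beta:.1f} -> E[R] = {capm_expected_return(beta):.1%}")

    # CAPM's line slopes up with risk; the low-volatility anomaly is that
    # measured returns slope the other way.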

Two sources that illustrate what I'm talking about:

[0] Baker, Bradley, and Wurgler, "Benchmarks as Limits to Arbitrage: Understanding the Low-Volatility Anomaly" FAJ Volume 61, 2011. < http://www.cfapubs.org/doi/pdf/10.2469/faj.v67.n1.4 >

[1] Baker, Malcolm P. and Bradley, Brendan and Taliaferro, Ryan, The Low Risk Anomaly: A Decomposition into Micro and Macro Effects (September 13, 2013). Financial Analysts Journal. < http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2210003 >

*disclaimer: The authors of these two papers were my bosses in a previous role, and I wrote the code and generated the results for the second paper.


What you are describing in economics - while perhaps a failure mode - is the exact opposite of what this article discusses. The critique of this article is that valid abstractions and theories are rejected due to a lack of "nuance", rather than a failure to explain and predict reality:

Second is the ever more extensive expansion of some theoretical system in a way that effectively closes it off from rebuttal or disconfirmation by anything in the world... It is an evasion of the demand that a theory be refutable.

Your critique of some economic theories is a totally different one - that they make specific predictions which don't come true.


Yes, but as far as the economists are concerned, asking for someone to reconcile CAPM with the low-volatility anomaly is just nuance.

The author of this piece is saying, Let's not just throw unconstrained nuance at a theory: let's look at a holistic and sort of complexity-adjusted assessment of how well it's doing before we start asking it to behave like a Swiss Army knife, with a special mode to satisfy every empirical whim.

I'm saying that this -- the argument against nuance of this type -- is used to defend otherwise flatly wrong theories. (And I misread your comment to be saying that about other fields of social science too.)

In other words, when someone says, "But wait a minute, can you augment CAPM to handle the particular messy real-world case that I care about?" the response is: sorry, but we have to Occam's Razor away your request, we can't tailor it to everything.

The problem is that that messy real-world case is not some bespoke, uncommon corner, but is the most up-front qualitative property of the whole theory.

I guess what I'm saying is that the author says we should Occam's Razor-ify requests for nuance, yet in economics people masquerade as if they are doing legit Occam's Razoring (we can't tailor our analysis to every whim) when really they are just protecting pet theories that simply are bad.

It's actually a caution against the way the OP's advice about constraining nuance can be misused and politically abused to defend models that are simply wrong.


>It would be reasonable to say that CAPM is a mathematical result, and the evidence suggests the mathematical model required for CAPM simply fails to correspond to reality. But many finance folks, especially in academia, won't admit it.

In other words, "Things are more complicated in real markets". I don't see any problem with wanting descriptive theories. Economists are interested in building models whose parameters can be tweaked in ways that correspond in a clear, meaningful way to possible actions (e.g. change government expenditure on X by Y). If you try to just maximize predictive power you're missing the point. Economics can be used to predict the future, but I would say the main goal of the field is to produce theory that will provide valid methods of altering the future.


So you're saying that we need wrong theories that can guide policymakers in the right direction? That sounds like a bad idea. It's basically "I want you to do this thing. Let me invent a theory that makes me sound more persuasive. Don't worry about the details!"

Yes, we need theories that are easy enough to reason about that they provide useful insight, but I don't think this is even close to being too complicated.


This doesn't make a lot of sense in this example with CAPM. The economic theory provides something that is abjectly wrong. It is literally claiming there is a positive relationship between two quantities when the evidence suggests the relationship is negative.

The theory is grossly qualitatively wrong, not merely just on the edge of being quantitatively wrong in a few cases or something.


At first "overfitting" was the concept that leaped to mind as I read it, but overfitting is sort of something different. You have a complicated data set, and when you overfit, you keep adding dimensions of freedom, and then setting those dimensions of freedom to produce the original values that were passed in, such that it fails to correctly predict any other instance drawn from some conceptual "same distribution".

In this case, the theories do the "adding many dimensions of freedom" part, but they don't really fix the values; they probably haven't got data to fix in the first place, and it's not mathematical enough to be that precise about the fitting.

I think it's more like these theories are often not so much hypotheses themselves, but the descriptions of hypothesis spaces, and the addition of "nuance" means that in a lot of cases, what sounds like a "theory" is really just the description of a space that happens to contain all or most of the possibility space in question. As much fun as it may be to do that, and as much as it may even be a necessary first step to a real theory, in practice that is not generally the difficult part of the process. (There are exceptions to that which may leap to mind, but I submit they would be the exceptions, rather than the rule.) When one is working with fuzzy words (even if by necessity!) and not math, the fuzzy words make it very easy to make this mistake, I think. English does not have a strong distinction between a hypothesis space and a specific hypothesis in that space, as evidenced by the very fact I have to dip into jargon to even make that point!

From that perspective, the solution that leaps to my mind is to implement the observation that in many ways, the important thing about a theory is not what it describes, but what it excludes. It is too easy and cognitively tempting to try to create a theory that, when you fiddle with the parameters, describes everything you see, but the real questions are: what is excluded by your theory? Can I fiddle with the knobs in your theory to describe something that you say ought to exist, but does not seem to? Is your theory even well-enough specified that these are meaningful questions to ask about it?

I don't know the answers to these questions for sociology. I have to admit I've tried to engage with it a couple of times, because what fan of Isaac Asimov doesn't want to learn about some real-life psychohistory, but I've always found it difficult to get traction with my engineer mind. I can engage from a philosophical point of view, but that is not generally what I was looking for; I've got, well, philosophy itself for that. (Repeat previous sentence for history, psychology, and economics.)

To p4wnc6's question, I'd submit that as dismal as the Dismal Science may sometimes be, it does often meet this bar. I fear economists themselves may often be quite sluggish at recognizing when their own theories are contradicted by the evidence because the theories are saying that what is currently happening (in the general sense, not the literal March 27, 2016 sense) is excluded by the theory, but that is a human problem, not an economic theory problem. Economic theories do exclude things; if supply rises and demand holds steady, the vast majority of possibilities where price goes up are excluded, and if price were to go up, the theories would be useful in investigating how such a thing actually happened by examining what such theories would still permit (e.g., perhaps supply isn't really rising because the new supply is actually a higher quantity of lower quality, and too much of the demand is for a higher quality than the new supply is supplying, which means "real" supply is dropping, or something like that). They certainly aren't perfect and a given theory may even be wrong, but, well, the very fact that a theory can be wrong is evidence that it is falsifiable, and thus not "nuanced" into infinite pseudo-explanatory power.
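A toy illustration of that exclusion, with a linear supply/demand model and entirely invented parameters: an outward supply shift with fixed demand can only lower the equilibrium price here, so a price rise under those conditions is excluded.

    # Toy linear supply/demand comparative statics; all parameters invented.
    # Demand: Qd = a - b*P    Supply: Qs = c + d*P    Equilibrium: Qd = Qs.
    def equilibrium_price(a, b, c, d):
        return (a - c) / (b + d)

    p_before = equilibrium_price(a=100, b=2, c=10, d=3)  # baseline supply
    p_after = equilibrium_price(a=100, b=2, c=30, d=3)   # supply shifts out
    print(p_before, p_after)  # 18.0 -> 14.0: the price falls, as predicted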


It is a fantastic article, but I don't think excessive, obsessive nuance is what plagues most people here. In any event, don't take it too far. In dynamical systems, your theory can be simple and predict observations 100% of the time yet be completely wrong -- not just a little wrong -- and fail to model any of the actual dynamics (e.g. x = 0 and x = t are both completely predictive, yet entirely false, models of x'' = 0). It is those simplified theories that are no more than empirical observation, and suffer from another kind of overfitting.

> In statistical terms, these dismissals are little more than a call for overfitting - if I can arbitrarily add more dimensions to a theory, I can make it fit any data set.

In order to fit any social data set, you may need dozens if not hundreds of variables. If your data set contains many millions of data points, telling you that there may be four variables instead of two is certainly not "a call for overfitting"; it is no more excessive nuance than the entirety of special+general relativity. Also, don't use this as an excuse to stop questioning your theory once it predicts behavior in one setting. You may have completely missed the mark by misjudging the order of the equation. And when your theory doesn't even fit the data we have -- just the data you have -- it's nothing more than amateurism; don't confuse amateur smart-assedness with any sort of profundity; it is often wrong in the most boring way: uninformed wrong.

Intuition that comes from years of study of human behavior may help a lot, but some believe they can come up with theories without spending the time to earn their intuition first. Whether overly nuanced or over-simplified, I think the chances that any of their theories are correct -- especially if they put little effort in forming them -- are low.

And one thing is certain: the less you know, the far more likely it is that you'll err on the side of over-simplification than over-nuance. This paper is a warning to experts, who know enough to err with over-care. Amateurs (of the non-humble kind) are indeed completely protected from this particular kind of error by their ignorance, especially if they don't have the basic tools to even assess the system's level of complexity.

This paper is a call against obsessive, exaggerated nuance. Not a call for oversimplification and sloppiness. And while one can err on both sides, historically, errors of over-simplification (when it comes to social theories) have been much more detrimental to the human race than errors of nuance. The latter kind may cause boring science; the former has led to atrocities. I think that many of the calls for more nuance are just warnings against careless errors of the first kind, simply because they are far more dangerous than errors of the second kind.


> In any event, don't take it too far. In dynamical systems, your theory can be simple and predict observations 100% of the time yet be completely wrong -- not just a little wrong -- and fail to model any of the actual dynamics (e.g. x = 0 and x = t are both completely predictive, yet entirely false, models of x'' = 0). It is those simplified theories that are no more than empirical observation, and suffer from another kind of overfitting.

No, this is not overfitting at all. This is failing to fit the data. Based on your true model of x''=0, x=0.5t and x=-t are also valid data sets which can be observed but which are not predicted.
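Spelling the example out (the trajectories are chosen for illustration): every member of the two-parameter family x(t) = x0 + v0*t satisfies x'' = 0, but a model like "x = 0" predicts only one of them.

    # The dynamics x'' = 0 admit the family x(t) = x0 + v0*t. A model like
    # "x = 0" matches one trajectory (x0 = v0 = 0) and fails on the rest.
    import numpy as np

    t = np.linspace(0, 5, 6)

    for label, (x0, v0) in {"x = 0": (0, 0), "x = t": (0, 1),
                            "x = 0.5t": (0, 0.5), "x = -t": (0, -1)}.items():
        x = x0 + v0 * t
        accel = np.diff(x, 2)  # discrete second derivative: zero for all four
        print(f"{label:8s} satisfies x'' = 0: {np.allclose(accel, 0)}")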

> And one thing is certain: the less you know, the far more likely it is that you'll err on the side of over-simplification than over-nuance.

When this is the case, there is only one valid response: "your theory predicts X but experiment shows Y."

"You don't have enough expertise", "you haven't been hazed with years of study", "you are an amateur", etc, are simply ad hominem attacks. So is blaming scientists with simple theories for "atrocities".

The fact is that lots of highly successful social theories have very few variables - supply & demand, the EMH, psychometrics (g) are all great examples. They make testable predictions that accurately describe the world. Lack of nuance only becomes an issue when you want to go from 85% to 95% accurate.


> This is failing to fit the data.

This is assuming that the non-fitting data is not ignored, if it is known to the theorist at all. If anything, the bigger problem among social science amateurs is obliviousness to data or lack of intuition to understand how biased collected data is simply because they're unaware of historical data. Hence, they come up with theories like x = t.

> there is only one valid response: "your theory predicts X but experiment shows Y."

Sure, except that the falsifying experiment took place 890 years ago, you likely don't know about it, and if you did, you'd find excuses (possibly good ones) for dismissing it as irrelevant. The problem is, of course, that experiments are very hard to conduct, and all-but-impossible to conduct under lab conditions.

The reason intuition is so important is that we know with absolute certainty that some things that seem so natural and stable now, used to be diametrically different, and they changed because people changed them. If you know from experience that 80% of coefficients are known to vary considerably over time and as a result of human action, you tend to be very skeptical when someone claims they have found a "natural law" of human society.

> So is blaming scientists with simple theories for "atrocities".

Perhaps it's ad hominem, but it really happened and more than once (except the atrocities were real and required no quotation marks). I don't know what else to tell you.

> They make testable predictions which accurately predict the world

Maybe, and maybe they theorize that x = t when, in fact, x'' = 0. Showing that x = -t may also be a valid solution requires making huge changes, among them changing the perception that the theories are correct, which in itself perpetuates the path of the system. The only thing you can do is point to historical trends, but historians, most of them ever so careful, just sigh in exasperation, while the masses believe the "science". Just today I read something wonderful about perception of the sexes[1], yet I believe many might find simple models that perfectly predict the world they see around them, not realizing it is no more than the result of people who made it this way so-and-so years ago, and they can make it different yet again.

It's like having a Cassandra complex, only in reverse, and it can be just as frustrating.

Do you have any doubts that Netflix's prediction engine shapes people's preferences more than it passively predicts them? I am not saying that it doesn't suggest movies people enjoy, only that it could have made completely different suggestions that people may have enjoyed even more. How do you measure the effectiveness of this algorithm against all others (and there are way more possible algorithms than can be tested with the sample they have)? Or do you just declare it successful and accurate?

[1]: https://www.reddit.com/r/AskHistorians/comments/4c5l63/in_th...


> In statistical terms, these dismissals are little more than a call for overfitting - if I can arbitrarily add more dimensions to a theory, I can make it fit any data set.

Sure. However, this leads me to believe it is statistics that are the problem, and not nuance. You are in my opinion admitting that nuance invalidates most statistics, or at least allows for heavy manipulation of the statistics to fit any narrative. This is precisely why public policy should not be made using statistics. Society should not be likened to a Turing machine. People, history and so many topics of discussion are full of nuance, detail and perspective; and disregarding nuance (or perspectives) is insulting to the truth.


No, on the contrary I'm saying that calls for nuance are often excuses to do bad statistics and bad science.

Rather than throwing away our theories because they predict X and the world does Y, they instead say "actually the theory only applies under vague unspecified circumstances which you'd understand if you were as nuanced as I". Whether or not the theory fits the data, they cry "intersectionality" or "humans are so complex".

In any case, your post begs the question. If we should ignore statistics, how should we make decisions? Emotionally pleasing verbal narratives?

See also the related essay, "Pragmatism is Poison". It argues that the word "pragmatism" is used in software a lot to shut down discussion of bad choices, much the same way "nuance" is used to shut down discussion of good theories. https://einarwh.wordpress.com/2016/03/10/pragmatism-is-poiso...


No true Scotsman will allow their theory to be critiqued. A true Scotsman is more nuanced.


> I'm saying that calls for nuance are often excuses to do bad statistics and bad science.

Absolutely. But at least as often, calls for nuance are calls for better statistics and better science. How do you know which is which? Judgment. You can only say "fuck nuance" once you've demonstrated good judgment.


I would distill my understanding of this paper as "Strong Opinions, Weakly Held"

http://www.saffo.com/02008/07/26/strong-opinions-weakly-held...


... with Weird capitalization.


  By calling for a theory to be more comprehensive, or for an explanation to
  include additional dimensions, or a concept to become more flexible and
  multifaceted, we paradoxically end up with less clarity. A further odd
  consequence is that the apparent scope of theories increases even
  as the range of their explanatory application narrows.
This seems analogous to poor imperative programming to me: as your code becomes more complicated, the range of its application narrows.
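A toy illustration of that analogy (all names and cases invented): each special case makes the function look more comprehensive while narrowing where it actually applies.

    # Toy contrast: a simple rule vs. one encrusted with one-off cases.
    def shipping_cost_simple(weight_kg, rate_per_kg=2.0):
        # One-dimensional "theory": cost scales with weight. Broadly applicable.
        return weight_kg * rate_per_kg

    def shipping_cost_nuanced(weight_kg, country=None, is_holiday=False,
                              customer_tier=None, warehouse="east"):
        # Each added "dimension" covers one observed case and nothing else.
        if country == "IS" and is_holiday:
            return 0.0                  # a one-time promotion
        if customer_tier == "gold" and warehouse == "west":
            return weight_kg * 1.5      # a special deal from years ago
        if weight_kg > 30 and country == "CH":
            return 95.0                 # a single negotiated contract
        return weight_kg * 2.0          # everything else falls through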


I think this is better understood as a consequence of Bonini's paradox (https://en.wikipedia.org/wiki/Bonini%27s_paradox).


I thought of the map-territory jawn, as well.


I think when debate comes down to differences in ideology and idiosyncrasy, the debate is at its end, because these differences are impossible to compromise on (they are all legitimate).

And the world is at a point where everybody can believe in everything, and all these beliefs are grounded and justifiable (using the available tools and knowledge we've accumulated).

To deal with the somewhat disturbing thought that the world doesn't need to be saved, I resort to the mantra that, eventually, the reason we read, the reason we write, the choices we make, every day, is just because we feel like it.


I assume simple theories are very good for advancing the science itself. But I don't think they are as actionable as complex ones, because reality is complex, and you need to account for that when creating policy.


Nuance is lethal for scientists, but lack of it is lethal to practitioners.

And practical experience consists mostly of knowing what kinds of nuance are important for what kinds of problems.


I thought this was going to be a rant about voice recognition


Can we un-capitalize "Nuance"? For a second I thought it referred to the speech technologies company.


Same. I actually clicked hoping to read a rant against my least favorite speech company and their bad habit of buying up competition for the patents and then sitting on the technology. They own the rights to the speech synthesizer I use every day, Eloquence, which is no longer developed and will probably never improve, but is easily the most intelligible synthesizer at high speed.


Yep, same here ;-) They squeezed us with their pricing model for years. I was nodding as I was reading the title, then found something else. Oh well.


This is possibly the most hacker-news-ish comment I've ever read.



http://www.dailywritingtips.com/rules-for-capitalization-in-...

I think the better and more useful idea would be to stop naming companies and products after very general and widely used words, no matter how tempting it may be to decorate stuff with the associations those words evoke. That never had merit to begin with, and I certainly don't see why general good faith use of language should be restricted by such shenanigans.


Titles can't all be unambiguous, and it seems unfair to let air out of this one's balloon.


Agreed, it borders on clickbait.


It's a title.


It's a clickbaity title for the paper.


The amount of nuance in this article is overwhelming.

(The abstract/text ratio is ridiculous)



