"glivenko cantelli on steroids", good. Sounds like they actually did something.
Yes, I'm torqued by the new learning labels on old bottles of pure/applied math, but that is not in my way.
> The lingo differs here and there: estimating parameters becomes learning parameters, etc.
Rubs my fur the wrong way.
If they have some stuff beyond borrowing from Breiman, okay.
What's "in my way" now is my startup: I've got the math derived and typed into TeX and the 80,000 lines of typing for the code, with the code running, intended for production, and in alpha test, so just for now I no longer have any pressing math problems to solve.
But, in time, I may return to my math and tweak it a little to try to get some variance reduction. Maybe some of the better machine learning literature would help, or maybe I'll just derive it myself again.
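To pin down what I mean by variance reduction, here is a minimal sketch of one standard trick, control variates, in Python; the estimand and every name in it are illustrative toys, not my actual math.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    u = rng.uniform(size=n)

    # Toy estimand: E[exp(U)] for U ~ Uniform(0, 1); true value is e - 1.
    f = np.exp(u)

    # Control variate g(U) = U with known mean E[U] = 0.5.
    g = u
    # Near-optimal coefficient b = Cov(f, g) / Var(g).
    b = np.cov(f, g)[0, 1] / np.var(g)

    plain = f.mean()
    adjusted = (f - b * (g - 0.5)).mean()

    print("plain estimate:  ", plain, " var:", f.var() / n)
    print("control variates:", adjusted,
          " var:", (f - b * (g - 0.5)).var() / n)

Both estimators target e - 1; the adjusted one just has much smaller variance because g soaks up most of the fluctuation in f (estimating b from the same sample adds a bias that is negligible at this n).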
Function space geometry is about where much of my core math is.
Happy to hear back from you. I'm actually glad that your anomaly detection work has been getting some interest on HN lately. Hope something comes of it. I am slowly coming to the conclusion that pushing better methods onto an existing stack would be really hard: too much friction, too much politics. Perhaps the way is to build your own, better cloud of servers, but that is really big-league stuff. Not sure I have the stomach for that.
Curious whether you have given any thought to the choice of the metric space in which you define your statistics. From what I have seen, that can play an important role. There might be an interesting manifold story there.
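To make that concrete, here is a tiny sketch (my own toy illustration, not your pipeline) of how the metric changes which points an anomaly score flags: on correlated data, a point can sit close to the bulk in the Euclidean metric yet far away in the Mahalanobis metric, which is just the Euclidean metric after whitening.

    import numpy as np

    rng = np.random.default_rng(1)

    # A strongly correlated 2-D Gaussian bulk of "normal" data.
    cov = np.array([[1.0, 0.95],
                    [0.95, 1.0]])
    X = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

    mu = X.mean(axis=0)
    P = np.linalg.inv(np.cov(X.T))  # estimated precision matrix

    def euclidean_score(x):
        # Distance to the bulk's center in the raw coordinates.
        return float(np.linalg.norm(x - mu))

    def mahalanobis_score(x):
        # Same distance, measured in the metric induced by the data.
        d = x - mu
        return float(np.sqrt(d @ P @ d))

    # A point off the correlation axis: modest Euclidean distance,
    # large Mahalanobis distance.
    x = np.array([1.0, -1.0])
    print("euclidean:  ", euclidean_score(x))
    print("mahalanobis:", mahalanobis_score(x))

Same point, two metrics, two different verdicts on how anomalous it is.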
The big spoilsports are non-stationarities, and even bigger are the fat tails. If only everything had a moment generating function.
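And that wish has teeth: a moment generating function buys you Chernoff-style exponential tail bounds, and a genuinely fat tail rules the MGF out entirely. A one-line LaTeX reminder, taking a Pareto-type tail as the example:

    % With a power-law tail P(X > x) \sim c\, x^{-\alpha}, the MGF
    % diverges for every positive argument:
    \[
      M_X(t) \;=\; \mathbb{E}\!\left[ e^{tX} \right] \;=\; \infty
      \qquad \text{for all } t > 0,
    \]
    % so no Chernoff bound P(X > a) \le e^{-ta} M_X(t) is available,
    % and only polynomial (Markov/Chebyshev-type) tail bounds remain.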
I see that you have been pointed to Abu-Mostafa; he is definitely a good source. Not that Andrew Ng is unaware of the stuff, far from it; he is fighting a different battle: to make parts of ML a commodity.
If you have time, you can browse the proceedings of COLT (Conference on Learning Theory) and ICML.
> or maybe I'll just derive it myself again.
You almost always have to derive it yourself anyway, even after you have seen somebody else's derivation.
"glivenko cantelli on steroids", good. Sounds like they actually did something.
Yes, I'm torqued by the new learning labels on old bottles of pure/applied math, but that is not in my way.
> The lingo differs here and there: estimating parameters become learning parameters etc.
Rubs my fur the wrong way.
If they have some stuff beyond borrowing from Breiman, okay.
What's "in my way" now is my startup: I've got the math derived and typed into TeX and the 80,000 lines of typing for the code, with the code running, intended for production, and in alpha test, so just for now I no longer have any pressing math problems to solve.
But, in time, I may return to my math and tweak it a little to try to get some variance reduction. Maybe some of the better machine learning literature would help, or maybe I'll just derive it myself again.
Function space geometry is about where much of my core math is.
Thanks.