Episode 27 - Big Algorithm, Fat Tails, and Converging Priors
Today we dive into the current Bayesian flame wars on Twitter. Do Bayesian priors converge? As Nassim Taleb (@nntaleb) points out, not necessarily under a fat-tailed or power-law distribution. We'll talk about what that means, and the wonders worked by Bayes' rule even under some seemingly preposterous priors.
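A minimal sketch of that convergence claim, assuming thin-tailed (Bernoulli) data; the code is my own illustration, not from the episode. Two observers start from wildly different, even preposterous, Beta priors on a coin's bias, and after a few thousand flips their posterior means nearly agree:

    import numpy as np

    rng = np.random.default_rng(1)
    true_p = 0.3
    flips = rng.random(5000) < true_p        # 5,000 Bernoulli(0.3) observations

    # Two observers with very different -- even preposterous -- Beta priors on p.
    priors = {"optimist": (50.0, 1.0),       # prior mean ~0.98
              "skeptic":  (1.0, 50.0)}       # prior mean ~0.02

    heads = int(flips.sum())
    tails = len(flips) - heads
    for name, (a, b) in priors.items():
        # Conjugate update: Beta(a, b) prior + binomial data -> Beta(a + heads, b + tails)
        post_mean = (a + heads) / (a + b + heads + tails)
        print(f"{name:>8}: prior mean = {a / (a + b):.2f}, posterior mean = {post_mean:.3f}")

With enough thin-tailed data the likelihood swamps the prior, which is the usual argument that a "wrong" prior is harmless.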
Also: the military wants to do machine learning with less data. Is the era of big data over, giving way to the era of the big algorithm? Plus the results of the Twitter Shadow Ban poll, QA bias, the Streisand effect, and the Alex Jones ban.
News Stories
Military looking for algorithms that require less data
Where’s Waldo Finding Robot
Twitter is the only place that hasn’t removed Alex Jones
Oops! This one went out of date in about a day. We’ll follow up next week!
Books Mentioned
The Black Swan by Nassim Taleb
Antifragile by Nassim Taleb
Enlightenment Now by Steven Pinker
Nassim Taleb
Criticism of P-Values: A paper, and a blog post/video explaining the paper
Tweet on Bayesian Priors that don’t have convergent posteriors
The idea behind “Bayesian” approaches is that if 2 pple have different priors, they will eventually converge to the same estimation, via updating. A “wrong” prior is therefore OK.
Under fat tails, if you have the wrong prior, you never get there. (Taleb & Pilpel, 2004)
Gabish?
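A toy illustration of the fat-tail caveat, again my own sketch rather than Taleb's formal argument: under a Pareto data-generating process with tail exponent 1.1 (finite mean, infinite variance), the running sample mean, which a posterior mean under a diffuse prior roughly tracks, is still far from the true value long after the thin-tailed Gaussian case has settled, so observers updating from different priors can stay apart for a very long time.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Thin-tailed case: standard normal; the running mean settles quickly.
    thin = rng.normal(loc=0.0, scale=1.0, size=n)

    # Fat-tailed case: Pareto with tail exponent alpha = 1.1
    # (finite mean, infinite variance), so averages converge painfully slowly.
    alpha = 1.1
    fat = rng.pareto(alpha, size=n) + 1.0    # Pareto with x_min = 1
    true_fat_mean = alpha / (alpha - 1.0)    # = 11.0

    for k in (100, 1_000, 10_000, 100_000):
        print(f"n={k:>7}  thin mean={thin[:k].mean():+.3f}  "
              f"fat mean={fat[:k].mean():8.2f}  (true fat mean={true_fat_mean:.1f})")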
Video on Problems with Probability
Steven Pinker
Video that mentions all the stats about the world getting better
Previous Episodes
Episode 9 discusses another idea in Taleb’s writings, Lindy’s Law.
Episode 0 is where I define and explain Bayes' Rule.
Episode 21 is where I go into more depth on the justification of Bayesian inference.