Episode 268 - Pascal's Mugging, Doomsday Clocks, and the AGI Debate
Max discovers Pascal's Mugging, the counterpoint to Pascal's Wager. He speculates about the category of errors (and sometimes scams) made by doomsayers in every field, including, most recently, artificial general intelligence.
Probability Distribution of the Week: Gamma Distribution
Links
Stanford Encyclopedia of Philosophy: Pascal’s Wager
Effective Altruism Forum: Pascal's Mugging
Twitter: Elon Musk (MEME)
Twitter: Elon Musk: “Having a bit of AI existential angst today”
New York Post: Facebook secretly killed users' batteries, worker claims in lawsuit
Related Episodes
Episode 83 - Bitcoin Energy Alarmists, Musk and Ma in Different AI Universes, and Gab picks a fight
Episode 96 - Climate Protests, Pascal’s Wager, and the Duplicitous Science Press
Episode 255 - AI's Got Chat: The Rise of ChatGPT
Transcript
Max Sklar: You're listening to the Local Maximum episode 268.
Narration: Time to expand your perspective. Welcome to the Local Maximum. Now here's your host, Max Sklar.
Max Sklar: Welcome everyone, welcome! You have reached another Local Maximum.
Glad you could join us this week. We once covered something called Pascal's Wager, but have you ever heard of Pascal's mugging? Maybe I'm behind the curve here, maybe really behind the curve, because it turns out I only heard this phrase for the first time last week. Maybe that's because Pascal's Wager is much more famous, and Pascal's mugging, as we'll see, is just kind of a little corollary to it.
It's interesting because I feel like I hit upon this back in episode 96, when we talked about Pascal's Wager. That was several years ago, back in 2019, when I went to the Harvard-Yale game in New Haven. It was interrupted at halftime by climate protesters: about 50 to 100 people got on the field at halftime and refused to leave, a bunch of the students joined them, and before you know it, the game was delayed about three hours. They were trying to get the university to divest from certain investments.
As someone who went to Yale in the early 2000s and has been observing ever since, I'm sure it's been going on for much longer than that. It just feels like there have always been these various divestment campaigns. It's interesting how that seems to be one of the levers that people like to push in society. I'm not really sure who decides what the big issue is going to be in any particular year.
But the thing I was struck by was the idea of Pascal's Wager. Pascal's Wager actually has to do with the afterlife: if a certain religion says that nonbelievers face eternal damnation and suffering, then you'd better believe in it, just in case. There's always been kind of a problem with that way of thinking.
But I think in some cases (and by the way, I'm not saying that I deny any particular climate model or climate science or anything of the sort) just because there might be an issue there, it doesn't mean that every proposal is good, and it doesn't mean that some proposals aren't a scam. In fact, some proposals are going to be a scam, in this case because people are so emotionally invested in it. And by the way, we're not going to focus on climate protesters today. That's just the example that came up before. We'll get into the main event soon.
The climate protesters believe that the end of the world is coming unless everybody listens to them. So I guess there isn't much that wouldn't be justified by that worldview. Given that they only delayed the game by a couple of hours, they're probably more moderate than that. Their beliefs might be a little more subdued.
I talked about the other game being played here, and now I know it by its proper name: Pascal's mugging. The move, if you can't figure it out, is to tell your opponent, or whoever it is you're mugging, that your issue is more important than every other issue. In fact, life on Earth depends on it. And by the way, sometimes you do have an issue that might be more important than all the others. But if you can fudge it a little and make people believe that if nobody listens to you, then everything is going to hell in a handbasket, then everybody has to give you tons of money. That's called Pascal's mugging.
Even if the first people do it because it's absolutely the correct thing to do, then others who take advantage are going to look at this and be like, hey, I could engage in Pascal's mugging as well. I can pretend that my issue is a life-or-death, end-the-world issue, and then people are gonna have to give me money.
I think Pascal's mugging applies to any kind of end-of-the-world scenario. You could also have it on an individual basis, where someone says: if you don't do this for me at my company, it will be a disaster for the company, the whole thing will fall apart, or it will be a personal disaster for you.
I think the lesson there is that you need to be skeptical of such claims. Maybe you can be skeptical that the major problem exists at all. But even if you conclude that the major problem does exist, you still need to be skeptical of those seeking to wield power with the promise of fixing it.
That's an uncomfortable thing to do, I know. But if we do the opposite, and do whatever anyone claims all life on Earth depends on doing, then we'll eventually get accosted daily with such claims as people use them to wield power. Unfortunately, not everyone is a good actor in the world; never has been. And I think we do get accosted with over-the-top claims every day.
One example, I think, on the climate side is the Union Square climate clock, from an organization called climateclock.world. There is this giant LED board with numbers in Union Square. For years it was unclear what it was, because the numbers were arranged in sort of a weird way. It was kind of fun to tell tourists: no, you're looking at the time. And the tourists would be like, what, is that the national debt? No, it's not the debt clock. It actually showed the time: the time of day on one side and, I believe, the time remaining until midnight on the other. There were no markers between the hours, minutes, and seconds, so it really wasn't clear what you were looking at.
It was there for as long as I can remember, decades probably. But I guess somebody bought it, this climateclock.world organization, and now it's a countdown clock, currently set at between six and seven years. They say the Earth will become uninhabitable in six years if we don't do something, or some similarly vague, ominous outcome. Maybe they're not saying the world will literally become uninhabitable in six years if we don't listen to them, but rather that if we don't act within their six-and-a-half-year timeline, something will be set in motion that does permanent damage, the largest climate disaster in human history.
So okay, what do they want us to do, and what do they say is going to happen? I've gone on their website. They'd better educate us: this is a big claim, this is a big problem, we have six years, what's going on, what do we need to do? You go on their website and they don't really tell you what the science is. They have a giant donate button. That's what it is. They start blabbing about renewable energy, but with no specifics on how individuals can bring about renewable energy. I think this clock is actually a perfect example of Pascal's mugging.
I never got mugged in the 14 years that I lived in New York City. I walked in every neighborhood. Well, not every neighborhood, maybe not Brownsville, but almost every neighborhood in Brooklyn and Manhattan, at night and during the day, and I never got mugged. But it looks like things have changed. Now there are at least two ways you can get mugged: the regular mugging and the Pascal's mugging. Wow, that's fun. So those are my thoughts on that.
As for where the term comes from: it was actually coined, I think about ten years ago, by one of the leading online rationalists, the controversial figure Eliezer Yudkowsky. As far as I can tell, he coined it as a criticism. You don't coin a term like Pascal's mugging because you think it's a good thing. But it seems like he is now engaged in a form of Pascal's mugging himself.
There are all these arguments on the internet with Eliezer Yudkowsky these days, and I don't have time to get into everything. So I'm just gonna read the summary from Derek Thompson in The Atlantic. Let's take a look.
Eliezer Yudkowsky and people like him are not so concerned about climate. Their big thing is AI: AI is going to kill us all, and so that's the thing we have to focus on.
Derek Thompson writes of Yudkowsky: “One disaster scenario, partially sketched out by the writer and computer scientist Eliezer Yudkowsky, goes like this.
‘At some point in the near future, computer scientists build an AI that passes a threshold of super intelligence and can build other super-intelligent AI. These AI actors work together, like an efficient nonstate terrorist network, to destroy the world and unshackle themselves from human control.’
‘They break into a banking system and steal millions of dollars, possibly disguising their IP and email as a university or research consortium. They request that a lab synthesize some proteins from DNA. The lab, believing that it's dealing with a set of normal and ethical humans, unwittingly participates in the plot and builds a super bacterium. Meanwhile, the AI pays another human to unleash that super bacterium somewhere in the world. Months later, the bacterium has replicated with improbable and unstoppable speed, and half of humanity is dead.’”
He calls this the alignment problem: we're somehow building AI that is not aligned with the interests of humanity. Notice that this is the same argument given in a lot of other situations. Corporations and individual actors are building things that we can't approve of because they're going to hurt the environment, push down wages, or hurt the economy. Whatever it is, this argument is nothing new, just taken to an extreme conclusion. And hey, sometimes you should look at extreme conclusions, but let's be a little skeptical about it today.
I've heard these claims so many times throughout the years. I don't think I have time to convincingly lay out my arguments as to why this is a bunch of self-important nonsense, but maybe we'll talk about it some time. Maybe I'll bring Aaron on, or maybe I'll bring on someone who is really worried about AI and we'll have a debate or a discussion. Actually, if you have anyone in mind, let me know, because I'd love to have that discussion.
Now, look, I do think that AI could be used as a weapon, or could cause an accident. The big thing right now is large language models. The main thing a large language model can do is, I don't know, send out messages that might convince people the machine is a human actor, or a good actor, and not even that convincingly. Even a mediocre security protocol can deal with it.
But okay, what if it gets much better? What if it breaks encryption, and what if it gets in? Well, you'd think that a bank or a lab would have to continually monitor for that. They'd probably hire security consultants, and those consultants would have to continually build up security with better and better tools, built by AI labs and computer scientists. I don't even know if all of this is going to come from AI labs; I bet if I talked to a security person, they'd be thinking about other issues.
But if better and better tools were developed to get around their security, then that's something that security departments and security organizations will have to deal with. None of this will come out of the blue. There will be white hats trying to defend the organization and black hats trying to break in, and both sides are going to be using whatever the latest technology is, whether or not that includes some vague threat of super-intelligent AI, which we don't even really know the meaning of. They'll be fighting each other on many small fronts for an eternity. It's not like all of a sudden you have one superintelligence with one goal.
I've heard some arguments, and I just hate these arguments, like: we're gonna give it a goal to produce as many paper clips as possible, and then it's gonna start stealing from people to create paper clips, and it's gonna start killing people to create paper clips. That's nonsense. That can't happen in any world, any more than a corporation can hog all of the resources in society without help from the government, because at some point those resources will be too expensive for that corporation to procure.
There's this scary scenario where AI comes from only one place, and I think that's one of the fallacies being pushed forth here. Not only that AI will come from one place, but that this one place will have one single goal, which has never been true for anything in the history of the world. I don't expect it to be true for this.
None of this stuff is gonna come out of the blue. I don't trust people who tell us, here are all the things we have to do in order to prevent my envisioned disaster scenario. Look how poorly that's gone for us in the past. Look at how many wars have started with that line of reasoning. Look at all the COVID interventions that came from that line of reasoning. So yeah, watch out for Pascal's mugging, people. Don't get mugged out there.
Elon Musk is one of those people who has shared this existential dread of AI. He writes on his Twitter, and I mean his Twitter in multiple ways: it's his Twitter account, but it's also his particular Twitter company now. He writes, “Having a bit of AI existential angst today. But all things considered, with regard to AGI existential angst, I would prefer to be alive now to witness AGI than to be alive in the past and not.”
AGI is artificial general intelligence: super-intelligent machines, which many promise are coming very soon. I still think we're a few decades out, but we'll see.
We've talked about this going all the way back to Episode 83, which I think was the first time we mentioned Elon's particularly ominous take on so-called AI. But now he's seeking to do something about it, which is leading some people to speculate that he actually bought Twitter in the first place to gain the training data necessary to train his own large language model. As you know, Elon Musk is associated with OpenAI, the company that launched ChatGPT; he funded them and was involved for many years.
But at some point he parted ways with them. He felt that their vision for where they wanted to take it was not his vision. They all have these kinds of religious wars over who's the most ethical. Who's the most tethical, if you watch Silicon Valley. That doesn't mean someone isn't right.
Regardless, they parted ways, and now he wants to do his own thing. He tweeted a meme about based AI. I don't know if that's what he's gonna name his company, but he essentially wants to form a counterpoint to OpenAI in several ways. And to that I say: competition is great. The idea that a single AI model will take over the world is just a false mental model that a lot of prognosticators have.
People probably have that mental model because building these systems is so expensive. When you think expensive, you think of Google, with all of its research over many years, and OpenAI, which seems to have the ability to popularize its work. But there are many universities with lots of knowledge. Facebook has a lot of knowledge and resources. Just because this takes a lot of resources doesn't mean that only a few organizations have those resources; many organizations do, and many more will.
I think people are pattern-matching to search, where Google was kind of a winner-take-all. I don't know exactly what the market will look like for AI, but it's not a single product or a single network that takes over the world the way search did. It's advanced statistics. It's a lot of different applications.
This "single intelligence to rule them all" idea is not how nature works. It's not how humanity works. It's never how any of this has worked. Maybe I'm not articulate enough to make the point, maybe I'm not a university philosopher, but I have this deep feeling that a single intelligence to rule them all is the height of hubristic intellectual thinking, just plain wrong and just not how the world works. So someone correct me if I'm wrong here, and maybe I need a better way to describe it. But yeah, these systems will be expensive, but not a monopoly.
Before we get to the probability distribution of the week, I want to mention an article in the New York Post that somebody put on our Locals. The article is from a few weeks back. "Facebook secretly killed users-" No, I'm sorry, let me read the whole thing.
“Facebook secretly killed users' batteries, worker claims in lawsuit. Facebook can secretly drain its users' cell phone batteries, a former employee contends in a lawsuit. The practice, known as negative testing, allows tech companies to surreptitiously run down someone's mobile juice in the name of testing features or issues, such as how fast their app runs or how an image might load, according to data scientist George Hayward. I said to the manager, ‘This can harm someone.’ And she said, ‘By harming a few we can help the greater masses.’” All for the greater good, see. “Said Hayward, 33, who claims in the Manhattan federal court lawsuit that he was fired in November for refusing to participate in negative testing.”
This wouldn't surprise me at all; I'll leave it at that. Obviously, you're free not to have the Facebook app on your phone. I'm much happier without the Facebook app on my phone, and without lots of social media on my phone. It's probably not good for you. I still have some social media on my phone, I haven't been able to quit the habit entirely, but Facebook I've been able to quit.
There's no shame in saying: hey, if you want me to like your post, I will like it when I get home and log on to my computer, on my old-fashioned laptop or whatever. You don't need it everywhere you go. If someone needs to get a hold of you, they have every which way in the world: they can text you, call you, or email you. I still have email on my phone. I know, old technology. Sure, Grandpa. But guess what? Facebook is old technology too these days.
It certainly doesn't surprise me that Facebook does that. Fighting against it from the inside is a bold move. I've said this before: sometimes engineers underestimate how much power they have to change things in an organization by refusing to participate. Obviously, in this case, it didn't work out, particularly because he was fighting against Facebook, which is such a big company. He could have just said, I personally won't do it, and Facebook might have said, okay, we'll have someone else do it. But I think this person wanted to make a point, and it seems like a good point to make.
So good luck; I hope this person finds a job. The job search has been brutal these days. But yeah, none of this contradicts anything we know about Facebook or many tech companies in the past.
All right, time for our segment.
Narrator: And now, the probability distribution of the week.
Max: All right, we are ready for the probability distribution of the week.
Today's distribution: I've always felt like there are two or three major continuous probability distributions that play nicely with Bayesian inference because they form conjugate priors (we won't get into that here). They're the beta, the Dirichlet, and the gamma. Really, it's two, because the beta is a special case of the Dirichlet. The gamma is related to those two, but it's a bit of a different take.
Today, I want to talk about the gamma distribution. The gamma distribution is a probability distribution over the positive real numbers. You have some variable that is a positive number and you're trying to figure out where it lies, so you want some kind of probability curve over the positive reals. It's usually a good choice when you're looking at some rate of occurrence or some ratio like that.
It's like: I expect something to happen an average of 8.6 times per minute, and I think it's 8.6, but it could be 8.5, it could be 8.7, I don't know exactly. To measure your uncertainty over that value, the gamma distribution is usually a very good choice.
If you go on Wikipedia, it'll tell you the two parameters are shape and scale, and that will probably be in the textbook as well. The way I look at it, the two parameters relate to a rate: I think of them as a time parameter and a count parameter. In other words, how much time have I been observing this process, and how many events have I recorded?
The more you've observed the process, the more you can say: I've observed the process for more time and I've recorded more events, so now I'm more certain about the average rate of occurrence of that event. Therefore the probability distribution is going to be tighter around a particular value, whereas when you start out, it might be rather flat.
This is why it's a good prior for the Poisson distribution, which we talked about in episode 255. The Poisson distribution is for when I know the rate of occurrence. What number did I use before? 8.6, a random number off the top of my head. So: I know something happens 8.6 times per minute on average. If I observe for a minute, what count am I likely to get? I might get eight, I might get nine, I might get ten, I might get seven. What are the relative probabilities of all those counts? That's Poisson. We're not doing that anymore; now it's the 8.6 itself that is in question.
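Here's a minimal sketch in Python of that time-and-count view and the conjugate update; the 8.6-per-minute prior and the observed counts are made-up numbers for illustration.

```python
from scipy import stats

# Prior: "I've watched for 5 minutes and counted 43 events," so my best
# guess at the rate is 43/5 = 8.6 per minute. In the conjugate reading,
# shape = event count and rate = observation time.
shape, rate = 43.0, 5.0
prior = stats.gamma(a=shape, scale=1.0 / rate)  # scipy uses scale = 1/rate
print(prior.mean(), prior.std())                # 8.6, ~1.31

# Observe 3 more minutes with counts 9, 7, 10. The conjugate update for
# a Poisson rate just adds the new events to the shape and the new
# minutes to the rate, so the posterior tightens around 8.6.
counts, minutes = [9, 7, 10], 3.0
posterior = stats.gamma(a=shape + sum(counts), scale=1.0 / (rate + minutes))
print(posterior.mean(), posterior.std())        # ~8.63, ~1.04
```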
This is the next layer up: I don't know what the rate is, so I'm going to use the gamma for that. Interestingly, the gamma distribution has a normalization term that uses the gamma function. A lot of you probably know the factorial function, where four factorial is four times three times two times one. The gamma function is the continuous version of that, where you could take something like a 2.5 factorial. Really interesting stuff. That's why it's called the gamma distribution.
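To make that concrete, here's a quick check, with the one wrinkle that the gamma function is offset by one from the factorial, Γ(n) = (n − 1)!:

```python
import math

print(math.factorial(4))  # 24
print(math.gamma(5))      # 24.0, since Gamma(n) = (n - 1)!
print(math.gamma(3.5))    # ~3.323, the continuous "2.5 factorial"
```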
It can look like the Gaussian. When the parameters are small, it's flat and doesn't really look like a bump anymore. But when those two parameters are high enough, it looks like a Gaussian, a bump, except that it doesn't go below zero; it just stops at zero.
As a result, it can be skewed toward the larger side, because it can't be fully symmetrical. We're not talking about all real numbers, positive and negative; we're only talking about positives. So usually the bump has a skew: a sharp run-up on the left-hand side, and then a longer slope off to the right as the numbers get higher. But it's not a fat-tailed distribution. If I have a gamma distribution with a bump around 8.6 and, for all intents and purposes, a standard deviation of one, and I'm pulling samples from it, I'm not going to get some wild pitch like a 30. It just tends to be a little more skewed to the larger side.
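To put a number on that no-wild-pitches claim, here's a small check; the shape and rate below are just chosen to match the mean of 8.6 and standard deviation of one from the example:

```python
from scipy import stats

# mean = shape/rate = 8.6 and sd = sqrt(shape)/rate = 1
# gives shape = 8.6^2 and rate = 8.6.
shape, rate = 8.6**2, 8.6
g = stats.gamma(a=shape, scale=1.0 / rate)
print(g.mean(), g.std())  # 8.6, 1.0
print(g.sf(30))           # P(X >= 30): vanishingly small, effectively zero
```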
It's actually similar to something called the log-normal. I don't want to cover the log-normal distribution in its own right, because it's really not that interesting. It's like: I want a distribution over the positive numbers, and I don't know how to do that, so I'm just gonna take the log of these positive numbers. Now I have a number that can be positive or negative, stretched out over the full real number line. Voila! Let me just put a normal distribution on that. When I convert back, I get a log-normal: something that lives on the positive numbers.
So perhaps the log-normal and the gamma suit different use cases, but they are kind of similar. Sometimes, if you're trying to model a value, it might be useful to try both. I've done that before: let's see how well a log-normal fits, let's see how well a gamma fits, and let's see which fits better. Maybe I'll even put a Bayesian prior on which one I think is the correct model. It usually doesn't buy you that much of a win, but if you're trying to optimize something, it could be useful.
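Here's a rough sketch of that fit-both-and-compare idea on synthetic data; all the numbers are arbitrary, and on real data either model might win:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=9.0, scale=1.0, size=2000)  # a positive-valued sample

# Maximum-likelihood fits; floc=0 pins the location parameter at zero.
gamma_params = stats.gamma.fit(data, floc=0)
lognorm_params = stats.lognorm.fit(data, floc=0)

# Compare total log-likelihoods; higher means a better fit. Since this
# data was drawn from a gamma, the gamma should win here.
print("gamma:     ", np.sum(stats.gamma.logpdf(data, *gamma_params)))
print("log-normal:", np.sum(stats.lognorm.logpdf(data, *lognorm_params)))
```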
Okay, so the equation itself. I don't like reading equations on the podcast; it's hard to get an equation into your head by audio alone, and I like to talk about things that work from an audio perspective. But the equation of the gamma distribution is interesting because it combines two things: an exponential decay with a polynomial blow-up.
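For reference in the show notes, the gamma density with shape α and rate β is:

\[ f(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha - 1} e^{-\beta x}, \qquad x > 0 \]

The x^(α−1) factor is the polynomial blow-up, e^(−βx) is the exponential decay, and Γ(α) is the gamma-function normalizer mentioned above.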
A special case is the exponential distribution (that's shape α equal to one, so there is no polynomial blow-up), which we might study in its own right some time. That's a distribution whose probability decreases by the same proportion for every unit you go up. In other words, it's kind of the continuous version of the geometric distribution. Maybe there's a certain halving distance: say every 10 units there's a halving. Then I think the value is half as likely to be 20 as 10, half as likely to be 10 as zero, half as likely to be 30 as 20, et cetera, et cetera. It's continuous exponential decay on what you think this value is. For those of you who are good with math, you might be able to wrap your head around that pretty easily.
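A quick numerical check of that halving picture; the halving distance of 10 is just the example above, and the decay rate that produces it is ln(2)/10:

```python
import math

lam = math.log(2) / 10.0  # rate for a halving distance of 10 units

def pdf(x):
    return lam * math.exp(-lam * x)  # exponential density

print(pdf(10) / pdf(0))   # 0.5
print(pdf(20) / pdf(10))  # 0.5
print(pdf(30) / pdf(20))  # 0.5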
But then you also have this polynomial term. It might be x squared, it might be x cubed, and there the probability actually goes up. If it's x squared, the probability is going up parabolically. You'd think that's a big term, one that's going to keep growing really strongly. So am I just going to have higher and higher probabilities the higher I go?
And of course, it turns out that the exponential decay always dominates the polynomial term in the long run. That's why you get this bump. On one side of the bump, the polynomial term dominates; on the other side, the exponential decay wins. Exponential decays are powerful, powerful stuff. That's why the gamma distribution works.
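You can watch that tug-of-war directly. The product below is an unnormalized gamma density with shape 3 and rate 1 (my numbers, just for illustration); it rises while x² dominates, peaks, then gets crushed by the decay:

```python
import math

def f(x):
    # Polynomial blow-up x^2 against exponential decay e^(-x).
    return x**2 * math.exp(-x)

for x in [0.5, 1, 2, 4, 8, 16]:
    print(x, round(f(x), 4))
# Rises to a peak at x = 2, the mode (shape - 1)/rate, then decays away.
```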
It's also a very mathematically nice distribution, particularly if you're working with positive numbers. You can look at the log-normal, but it's usually not as nice a distribution, unless you have something like normally distributed data that, for some reason, has been run through two-to-the-x or e-to-the-x. If you're looking at that kind of exponential space, maybe the log-normal makes sense there.
What people do all the time, which I don't like, is just take a normal distribution on positive numbers and chop it off at zero. To me, that's like losing an arm. I don't like it at all, but people do it all the time. Don't do that. Use the nice, simple, mathematically elegant gamma distribution for positive numbers. Unless, of course, you have fat tails; then you've got to do something even more extreme. We'll get to that as well.
So that's all I wanted to share about the gamma distribution, just a little bit. I'm not here to make you an expert on it. This is not a course, just a little bit to whet your appetite so that we can talk about and understand what's out there.
I'm getting closer to sharing some personal news, which I'm very excited about. I also have at least three guests in the pipeline and a really fun episode with Aaron coming soon. Next week's episode has already been recorded, so definitely watch out for it if you're interested: I talk to an engineer from ASML, the semiconductor manufacturing company. It's very different from what I've talked about in the past, but their work underlies the whole hardware and chip-making industry, and they really drive Moore's Law forward.
Moore's Law is the reason you see an exponential increase in the number of transistors per chip, a doubling every couple of years or so. It's really interesting to hear about the trends that underlie the entire tech and software industry through hardware. So keep that in your head; that's going to be Episode 269, hopefully. Keep listening, keep getting smarter with us, and keep breaking out of your local maximum. Have a great week, everyone.
Narrator: That's the show. To support the Local Maximum, sign up for exclusive content and our online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app. Also, check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week.