Episode 134 - AI Breakthrough & Understanding “Understanding”
Tech-savvy people worldwide are excited about GPT-3, the latest AI system from OpenAI. This groundbreaking development pushes AI further than it has ever been and could have a tremendous impact on predictive technologies around the globe.
In this episode, Max Sklar talks about OpenAI’s GPT-3 and how we can wrap our heads around natural language understanding. There is no doubt artificial intelligence is slowly changing the world, and Max is here to unravel what that means for humanity. He also gives us an update on other news in tech and our society.
Tune in to the episode to discover the latest breakthrough in AI and more!
Here are three reasons why you should listen to the full episode:
Learn more about the latest in AI development: GPT-3 by OpenAI.
Max shares his insights on artificial intelligence and relates it to natural language understanding.
Discover the latest news in technology and our society.
Resources
Try the text simplifier: simplify.so
Wall Street Journal: An AI Breaks the Writing Barrier by David A. Price
BD Tech Talks: The untold story of GPT-3 is the transformation of OpenAI by Ben Dickson
MIT Technology Review: OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless by Will Douglas Heaven
GPT-3 paper on arXiv: Language Models are Few-Shot Learners by OpenAI
Engadget: Watch a Toyota-backed flying car’s first public, piloted test flight by Jon Fingas
New York Post: NYC is dead forever by James Altucher
New York Times: Jerry Seinfeld: So You Think New York Is ‘Dead’
New York Post: No, New York City isn’t ‘dead forever’ — here’s why by Stephen P. Diamond
New York Post: Sorry, Seinfeld: Your love of NYC won’t change the facts about its crisis by James Altucher
Related Episodes
Episode 4 on Language Models and Code Breaking
Episode 16 on Overfitting and Underfitting
Episode 27 on Big Data vs. Big Algorithm
Episode 132 on the future of NYC
Episode 133 on Topology
Episode Highlights
The Latest Language Model GPT-3
GPT-3 is a new language model developed by OpenAI. It stands for Generative Pre-trained Transformer 3.
It uses deep learning and computational power to predict what words or phrases are more likely to appear together.
This model is not just about understanding language and grammar rules, but also about understanding the world.
Beta testers cited by The Wall Street Journal reported that GPT-3 could accomplish human tasks such as completing investment memos and generating business ideas.
OpenAI will release GPT-3 as a commercial product.
Testing GPT-3
Max tried the technology on simplify.so, a free GPT-3-powered site.
He shared the text he tried to input and how the website simplified and interpreted it.
The result he got was unusual: it made the original text longer.
Usually, GPT-3 makes texts significantly shorter.
Thoughts on GPT-3
After testing it with some hastily written sentences, Max thinks it is good at wordsmithing.
It can help in writing and editing, and it offers more than grammatical checks.
He talks about the possible users of GPT-3 once it launches commercially.
When it comes to new AI models, ask yourself, is this a parlor trick, or is this real intelligence?
Natural Language Processing vs. Natural Language Understanding
Max uses the concept of a chair as an example of how humans translate their understanding of different ideas and things through language.
Humans create mental models of different things and concepts, and we can relate concepts to one another.
He shares his insights on why AI can't yet replace the human brain, and on whether GPT-3 is really up to the challenge of replicating it.
Current News About Tech & the World
SkyDrive performed a public test flight for its flying car.
New York City has become a hot topic after the New York Post published an opinion piece arguing that the city is dead forever.
For Max, the claims about New York City are debatable. There are still reasons to stay in New York City.
5 Powerful Quotes from This Episode
“In order to actually build a really good language model, like the one that is embedded in our brains, you actually have to understand the world.”
“One of the things to think of when you look at an article like this is you kind of ask, is this a parlor trick, or is this real intelligence?”
“Sometimes you understand the world through formal definitions. But that's rare in life. That's usually not how people learn concepts. That's not how people learn language. We don't sit toddlers down and say, ‘Okay, it's time for definition 1.1.’ … That’s not how they learn.”
“What's interesting about humans is we get good at seeing examples and drawing the appropriate boundaries from the examples.”
“Try to focus on how to get the most intelligence out of the data you have… They seem to validate the idea that humans don’t need a lot of examples in order to get something right.”
Enjoy the Podcast?
Are you hungry to learn more about the latest news in tech and our world? Subscribe to this podcast to learn more about AI, technology, and society.
Leave us a review! If you loved this episode, we want to hear from you! Help us reach more audiences to bring them fresh perspectives on society and technology.
Do you want more people to understand GPT-3 and natural language understanding? You can do it by simply sharing the takeaways you've learned from this episode on social media!
You can tune in to the show on Apple Podcasts, Soundcloud, and Stitcher. If you want to get in touch, visit the website, or find me on Twitter.
To expanding perspectives,
Max
Transcript
Max Sklar: You're listening to The Local Maximum Episode 134.
Time to expand your perspective. Welcome to The Local Maximum. Now here's your host, Max Sklar.
Max: Welcome, everyone. You have reached another Local Maximum. Today is...what is this? This is like the last day of August, right? No, second to last day of August. Okay, last episode before Labor Day. It's kind of a quiet week, but we're now heading toward the last stretch of summer and, hopefully, the last stretch of 2020, as hard as that is to believe. I didn't hear a lot from you about last week's topology episode. I really enjoyed doing that, and I know that a lot of you listened to it, so that feels really good.
A few follow-ups after that episode. Just one point about neighborhoods in a topological space: if a point has a whole bunch of neighborhoods, the idea is that if a neighborhood completely surrounds the point, then to approach that point, you have to enter every single neighborhood at some point. That's a more succinct way of making the point I was trying to get across. Aaron was asking me a lot about some of the applications of topology, so I looked some of it up. It turns out there are plenty of applications, but they're usually in the hard sciences and not quite in the area that I'm in. For example, knot theory is definitely used in theoretical physics, like I said, but it's also used in things like biology and chemistry: the shapes of DNA and the different ways that these molecules can arrange themselves. I thought that was interesting. I hadn't looked into that stuff before, so yes, that is definitely an application.
And has there been some talk of topology in machine learning? The answer is, there absolutely has. I found papers and such on it, but nothing popped out at me as being too compelling, nothing like, “Oh, wow, this group is really using topology to change the game in machine learning.” Just some interesting stuff. But who knows, maybe one day that will change.
Okay, so a few notes before I get into the topic of today, which is an AI update. There's all this talk now about GPT-3, this new AI system that everyone's excited about. So I want to talk about that a little bit and what we can say about it.
Before I begin, there is a new section on the Local Maximum website at localmaxradio.com. You can go to localmaxradio.com/questions. Essentially, it's like a blog, but what I'm going to be doing is going through a lot of the concepts that we talk about on The Local Maximum and making a kind of landing page to discuss those concepts. That way, you can look them up directly without having to find a specific episode that covers them. So, for example, I have, “What is a local maximum?” “What is underfitting?” “What is overfitting?” “What is hill climbing?” “What is topology?” And I'm just going to add more and more. I think it'll be a helpful resource. It'll also be helpful for SEO from my perspective. So I am looking forward to that, and to putting together more concepts and ideas in our toolbox that we can use, learn from, and refer back to. That's localmaxradio.com/questions. You'll notice these are more like definitional questions, but if you have any questions of your own that you want to suggest we write an article for, send me an email at localmaxradio@gmail.com.
Alright. So, GPT-3. What is this? There's been a lot of talk about this system. It stands for Generative Pre-trained Transformer 3. It's a deep neural net, and it is essentially a language model. We've talked about language models a lot on the program; you can go back to Episode 4, when we first talked about language models and actually implemented one to crack some codes that Aaron sent me. I was able to crack the codes because I could automatically detect which text is more likely than other text, and that is essentially what a language model does: it tells you which phrases and words are more likely to appear together than other phrases or words. In the case I was working on with Aaron, it was which letters were likely to appear together. I was trying to decipher some gobbledygook he gave me that was encoded with a substitution cipher, where each letter was swapped for a different letter; Z might stand for A, for example. What happened was, the algorithm would try lots of different decryptions and look for results that were more likely than other results. And the way it defined more likely and less likely was by building a language model. The language model that I built was pretty simple. I actually don't remember exactly what it was; it might have been a logistic regression model, but it was probably a bigram model, which is a simple one.
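To make that concrete, here is a minimal sketch of the kind of bigram scoring a cipher cracker can use. The toy corpus, function names, and add-one smoothing are my own illustrative choices, not details from the episode.

```python
import math
from collections import Counter

def train_bigram_model(text):
    """Count letter pairs and single letters in a reference corpus."""
    pairs = Counter(zip(text, text[1:]))
    singles = Counter(text)
    return pairs, singles

def log_likelihood(text, pairs, singles, vocab_size=27):
    """Score a candidate decryption: higher means more English-like.
    Add-one smoothing keeps unseen pairs from zeroing out the score."""
    score = 0.0
    for a, b in zip(text, text[1:]):
        score += math.log((pairs[(a, b)] + 1) / (singles[a] + vocab_size))
    return score

# A cracker tries many candidate substitutions and keeps the decryption
# whose score under the model is highest.
corpus = "the quick brown fox jumps over the lazy dog " * 100
pairs, singles = train_bigram_model(corpus)
print(log_likelihood("the lazy fox", pairs, singles))   # relatively high
print(log_likelihood("xqj vkw zfpt", pairs, singles))   # relatively low
```

The cracker's search loop, trying swaps in the substitution key and keeping changes that raise this score, is a simple hill-climbing procedure, which fits the show's recurring theme of local maxima.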
But GPT-3 takes it to the next level. It uses deep learning and a lot of computational power to do that. So, what does it take to figure out which text is more likely than other text? You maybe don't need a very intelligent system to figure out what the words are; you could just feed it a dictionary and have it memorize the dictionary. It maybe doesn't take that intelligent a system to get the grammar right either: you could feed it grammar rules, or you could have it learn grammar rules. It's probably better to use machine learning to learn the grammar rules and to learn which words can be which parts of speech, and so on. But you really do need high-level intelligence to do this extremely well, because, yes, we can program vocabulary and grammar, but after that, you really need to learn about the world. For example, trying to think of one on the fly here: “I was looking out my window, and I saw a building” makes more sense than “I was looking out my window, and I saw an idea.” You would say you “had an idea”; an idea isn't a physical object you can see. So “saw an idea” would probably pass the grammatical test, but it might not pass the understanding test. I could probably think of a better example than “idea,” something you would never see out your window but that still makes a grammatically correct sentence, but I'm sure you can think of one.
So, in order to actually build a really good language model, like the one that maybe is embedded in our brains, you actually have to understand the world. And I want to talk about that a little bit: what does it mean to understand the world? Let's go back to GPT-3. It's not just a language model that passively assigns probabilities to text. It also generates text, as a lot of these models can do. It edits text by taking one text and transforming it into something that has similar meaning but a better score in some other dimension: in this case simplicity, or perhaps succinctness or readability. So here's the hype that we get from the Wall Street Journal, and people are on the edge here about whether this is hype or the real deal. First of all, let me talk about who built this. It was developed by an organization called OpenAI. They are a mix between a non-profit and a for-profit: it looks like they were a non-profit, but now they're going for-profit after developing this, funded in part by Elon Musk and Microsoft. The model has 175 billion parameters, orders of magnitude larger than the previous system, GPT-2. So if I were getting millions and millions of dollars from these people, this is maybe what I would want to build: I'd try to create the first human-level intelligence system.
So reading here from the Wall Street Journal: “Beta testers, in recent weeks, have found that it [GPT-3] can complete a half-written investment memo, produce stories and letters written in the style of famous people, generate business ideas and even write certain kinds of software code based on a plain English description of the desired software. OpenAI has announced that after the test period, GPT-3 will be released as a commercial product.” So now they're going profitable.
You can actually test it out for yourself. I've done this by going to the site simplify, I gotta pronounce this right, simplify.so. You can get this in the show notes at localmaxradio.com/134. Let me try that one more time: simplify.so. Okay, there we go. Basically, it tries to simplify your text using GPT-3's language models. I chose to input text from my website, localmaxradio.com. Let me read my starting text: “I chose the name, I chose to, see, I can't even read my own text, I chose to name my podcast The Local Maximum because of the triple meaning. I study and practice machine learning and software design, for which the local maximum, as described below, is a very important concept. It also contains the word ‘local,’ which matches my career-long interest in location data and local search. Finally, it also contains my name, Max.” So that's an explanation of the podcast name.
Now, I got something really weird from the simplified text. Are you ready for me to read this? Okay. “The local maximum is a concept in mathematics. It is the highest point of a hill, but there is another hill right next to it that is even higher, so you can be at the highest point, but it's not the highest point. A local maximum is a high point, but there are other points nearby that are higher. For example, if you are on top of a hill, and you look around, you might see another hill that's even higher than the one you're on. You can be at a local maximum, but there can be other places that are even better. The local maximum is the highest point in your area, but there could be something better just outside your area. For example, if you live in a small town and work in a factory, your job might be the best job in town, but if you were to look around in other towns nearby...” and then it just abruptly stops after that dangling sentence, “if you were to look around in other towns nearby.” I think I know where it was going with that.
So it's interesting. It's impressive that it seems to understand what a local maximum is. It's impressive that it came up with its own analogies, which I kind of need; I sort of need someone to help me come up with analogies, as you guys know. But why do I get the feeling that it's just copying this from a larger database? Or does that even matter? I mean, is that sort of what our brains are like? We have a database of analogies that we can use. Apparently, what I got is very uncommon: it made my text longer, and usually it's supposed to make texts shorter. According to The Wall Street Journal, it took the first paragraph of George Washington's farewell address and shortened it to “I am not going to run for president.” And the original is this long thing, apparently written by Alexander Hamilton. So that's interesting.
Another thing I tried: I put some other text from my website in there too, and it improved things a little bit. Another example that they gave was translating text into equations, and it could actually learn from humans feeding it a few examples: if I say “five minus three,” but I type out “minus,” then that should give me a minus sign. It's able to translate that into an equation. That could actually be useful for me, although once you learn LaTeX, maybe that's not that hard to do. It also seems to be good at wordsmithing, because I put in a few of my sentences that were written hastily, as they often are in some portions of my website, I'm sorry to say, and it is good. It will rearrange things so the words flow better, which I think is really helpful. So I think a system like this could improve writing, which could be useful for editing, although I wouldn't use it as an editor by itself; you almost have to re-edit its output. Some people say it can be used to generate text initially, which you then edit. I don't know whether that would actually work well or not; it seems like you really have to seed it with some ideas first, or it's not going to come up with anything original. But it is interesting as a way to improve your writing automatically, to go beyond the grammatical check, to pick out awkward sentences and make them simpler and make them flow better. That's really interesting to me.
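For those curious what that kind of few-shot setup looks like in practice, here is a rough sketch against the beta-era OpenAI completions API. The demonstrations, the “davinci” engine name, and the parameter values are my own illustrative choices, not details confirmed in the episode.

```python
import openai  # beta-era OpenAI Python client

openai.api_key = "YOUR_API_KEY"  # placeholder; the beta required an access key

# Few-shot prompt: show the model two worked examples, then ask for a third.
prompt = """Translate words into an equation.
words: five minus three
equation: 5 - 3
words: two plus seven
equation: 2 + 7
words: nine times four
equation:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=8,
    temperature=0,  # deterministic output suits a mechanical task
    stop="\n",      # stop at the end of the equation line
)
print(response.choices[0].text.strip())  # hopefully: 9 * 4
```

The point is that nothing here is task-specific training; the “learning” happens entirely inside the prompt, which is exactly the few-shot behavior the GPT-3 paper emphasizes.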
So, commercial interests. Well, they're going to make this available commercially. I'm not sure what the biggest business is going to be here. You know, some people say, “Well, you know, lawyers and writers,” but I don't know. I don't know if those groups are going to use it that much. Is it going to be used by big tech companies to mine their corpuses of data? Probably more likely from a commercial perspective, but we'll see.
So, one of the things to think about when you look at an article like this is to ask, “Is this a parlor trick? Or is this real intelligence?” And it's hard to say. It's something we'll have to keep thinking about as AI moves forward. It does seem to be smart in some areas and dumb in other areas, as computers always are. I looked at their paper a little bit, and it seems like they really are trying to generalize their intelligence as much as possible. They don't want to build a model that's just good for a specific task; they want to build one model that does it all. For example, at Foursquare, when I built the sentiment analysis model or the noun phrase model, I would build one model that's good for a specific task, good for the specific things that people write at Foursquare, which made sense for Foursquare because that's what we were aiming to do. But they're trying to solve this problem generally for everybody, and I think ultimately it would save time and money across the economy if everybody had easy access to something like this.
Okay, another quote from their paper, the OpenAI paper on arXiv: “humans do not require large supervised datasets to learn most language tasks – a brief directive in natural language ([for example] ‘please tell me if this sentence describes something happy or something sad’) or at most a tiny number of demonstrations (e.g. ‘here are two examples of people acting brave; please give a third example of bravery’) is often enough.”
This refers back to a question I've had about AI and machine learning throughout the course of talking to you on this podcast. In 2010, they said big data is all that matters: the bigger the dataset you have, the more intelligent you are. But I thought maybe we should focus on big algorithm instead; most data is junk anyway, so maybe try to focus on how to get the most intelligence out of the data you have. Working in that direction could be a good strategy as well. And they seem to validate the idea that humans don't need a lot of examples in order to get something right. Humans are very good at taking a couple of examples and finding the boundary between what is an example of the thing you're talking about and what isn't. If I give you three examples of cats, you get the idea; you know what a cat is, and you have an idea, maybe not a perfect one, of when something is not a cat anymore, which is interesting. Machines need quite a bit more input to figure that out.
So I want to talk about a more general question today: what is the difference between natural language processing, just plain old statistics on natural language, and natural language understanding? What does it mean to understand language? That's a really deep philosophical problem. When you feel like you understand something, you usually have some concept in your mind. Let's think of an example, maybe a chair. I know the chair is overused in philosophy, but I need to work on my examples; if anyone has an online course in thinking of good examples, let me know, because I need better ones. But let's suppose I say the word “chair.” Does an image pop up in your mind? Perhaps it's an image of a chair, perhaps the chair in your office, perhaps a simple wooden chair with four legs. And then I ask you to think about, “Okay, what is a chair?” For me, when you say “chair,” I have a single chair in my mind, but when you ask me to think more deeply about what a chair is, I think of all the different types of chairs. And when I thought about it, it's really not just the concept of the chair that's important; it's the connections and relationships with other concepts that are important.
So it looks like the concept space in language, in natural language understanding, is actually a topological problem. For example, a chair is something that someone usually sits on. It has certain properties; maybe it's solid. Does a chair have to have a back or not? I feel like it usually does, but it doesn't have to. Is a stool a special type of chair, or is a stool something separate? There are gray areas there. And I'm not always concerned about the gray areas, but I feel like even though there are gray areas in our language, we get the idea that we know what something is based on a few properties that come together, and we see that pattern of properties again and again. For example, if you think of the general outline of a chair, you see that pattern over and over again, you see the thing being built, you can maybe build one yourself, and so you have a concept of it in your mind. That's your mental model.
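To make the “connections between concepts” idea slightly more concrete, here is a toy semantic network. The specific entries and relations are invented for this sketch; it is not a real ontology or anything from the episode.

```python
# A tiny hand-built concept graph: each concept points to its properties
# and to related concepts. Everything here is an invented example.
concepts = {
    "chair": {"used_for": "sitting", "usually_has": ["legs", "back"], "related": ["stool", "sofa"]},
    "stool": {"used_for": "sitting", "usually_has": ["legs"], "related": ["chair"]},
    "book":  {"used_for": "reading", "usually_has": ["pages", "cover"], "related": ["magazine"]},
}

def neighbors(concept):
    """Concepts reachable in one hop: a crude stand-in for relatedness."""
    return concepts.get(concept, {}).get("related", [])

print(neighbors("chair"))  # ['stool', 'sofa']
```

Real systems use far richer structures (embeddings, knowledge graphs), but the point stands: understanding a word means knowing where it sits among other concepts, not just its spelling.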
And so I feel like the mental model, which is how a concept relates to other concepts out there, means that you have a better understanding of the language you're using and a better understanding of the world. Now, sometimes you understand the world through formal definitions, but that's rare in life. That's usually not how people learn concepts. That's not how people learn language. We don't sit toddlers down and say, “Okay, it's time for definition 1.1. The definition of a book: a solid object is called a book if and only if it includes one or more pages of text bound together between two covers,” blah, blah, blah. No, that's not how they learn. They see a bunch of these objects that have similar properties, and they see them being pulled off the shelf or being shown to them. They're shown this thing called a “book.” And then they generalize the concept: “A book usually has a cover. It usually has pages. I see people reading from it. Okay, so all of these things appear to be books.”
And again, what's interesting about humans is we get good at seeing examples and drawing the appropriate boundaries from the examples. Not always, maybe, but a toddler might know the difference between a book and, let's say, a placemat with writing on it, just because they know, “Okay, those are two different things.” And sometimes, like we said in a previous episode, toddlers tend to overfit. Where is our overfitting and underfitting episode? Let me look that up for a second. Yeah, Episode 16 on overfitting and underfitting. Right. Humans tend to overfit, that is, overgeneralize, or underfit, undergeneralize. But the thing is, I think we are very good at figuring out where the boundary is. Another interesting one is color. If I show you a car and say this is the color blue, people have a really good sense of where blue ends and where another color begins. Maybe it's biological, maybe it's just conditioning, but we tend to get that from very few examples, which to me is really, really interesting.
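As a toy illustration of drawing a boundary from a handful of examples, here is a tiny nearest-neighbor classifier over RGB colors. The example colors, labels, and the choice of squared distance are invented for this sketch, not taken from the episode.

```python
# A few labeled examples, as a toddler might get: "this one is blue, that one is red."
examples = {
    (0, 0, 255): "blue",
    (100, 150, 255): "blue",
    (0, 200, 0): "green",
    (255, 0, 0): "red",
}

def classify(rgb):
    """Label a new color with the label of the closest known example."""
    nearest = min(examples, key=lambda e: sum((a - b) ** 2 for a, b in zip(e, rgb)))
    return examples[nearest]

print(classify((30, 60, 220)))   # -> blue: closest to the blue examples
print(classify((220, 40, 30)))   # -> red: closest to the red example
```

The implicit boundaries between labels fall wherever a new point becomes closer to one example than another, so a few well-chosen examples carve up the whole space, roughly the human trick Max is describing, though real perception is of course far more subtle.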
So that's what I want to say about GPT-3. Let me know if you have any comments on it: localmaxradio@gmail.com. Again, it's a deep learning model; it's not something simple enough that I can go through and explain everything on the show directly. But it's cool to see this kind of AI research happening in the wild, and to see these kinds of large breakthroughs happening in 2020, when it feels like, for a lot of us, not a whole lot is getting done.
Okay, a few more news items that I want to get through, and then we'll call it a day; it'll be a pretty quick episode today. The first one is going around the Internet: here's an article from Engadget about the SkyDrive flying car. Talk about emerging technology. “Toyota-backed SkyDrive has finally conducted a public, crewed test flight for its flying car after years of work. The startup flew its SD-03 vehicle around the Toyota Test Field in the city of Toyota with a pilot at the helm. While it wasn’t autonomous, as you might have guessed, it showed that the aircraft could work as promised in the field.” These kinds of spectacular tests often make for a big press event, and I'm sure it drives the technology forward, in knowledge and learning, for Toyota and whoever else is funding this. But there isn't really a huge push for this right now in terms of overall funding. It's not like self-driving cars, where there is this huge, unprecedented push and arms race among all the tech companies and the major car manufacturers to get to autonomous features, especially Levels 3 and 4, first.
So it's interesting. I'll continue to follow these things, but I don't think we're going to be flying to and from work, Jetsons-style, anytime soon. That's clearly not in the cards. But hey, just like any tech like this, there could be some niche uses. For example, in areas where there aren't roads, you could take one of these small flying vehicles, and maybe it would be a lot cheaper than taking a helicopter to get from point A to point B, though obviously a lot more expensive than a car.
Okay. And finally, I want to talk about something that has been lighting up the podaverse recently; everybody's talking about it. And because it's related to stuff I've been covering, I definitely want to weigh in. It's the James Altucher article in the New York Post that says, “New York City is dead forever — here’s why.” Obviously, living in New York City, we have a lot of problems right now: the coronavirus is a problem, the response to it is a problem, the high crime rate is a problem, all sorts of problems happening in the city. But I think the reason this article inflamed people was the headline, basically, because the headline told you there was no hope. And the argument is that now we have high bandwidth, so there's no reason for people to be in the city anymore, no reason for people to live in the city. He says there is a bunch of research saying that people are better off working from home, that they're more productive working from home, even if they don't enjoy it.
I would like to double-check the research he's relying on there, because I am a little bit skeptical. I absolutely think some things work better from home: the podcast, for example. Although it would be nice if I had a studio to go to, where Aaron was there, even though he's in a different state right now, and we got to say hi and do the podcast and have the guests come in. Maybe he's right that remote work is more productive for some things. But to me, it seems almost impossible that losing the creative things, people sharing ideas, randomly bumping into each other in the hallways, having random conversations, is going to be a boon for an innovative company. That just doesn't seem possible.
So anyway, there were all these responses. Jerry Seinfeld responded; people are just angry, like, “How dare you.” Whatever. I don't want to get into that. I just want to talk about the issue: okay, let's say this is a problem. Could the city be turned around? What does it mean for the city to be turned around? Obviously, you want New York City to be a center of culture and business and industry and entertainment, and if cities are doomed across the board because of this trend, then that's a problem. But I actually don't think that's the case. Back in 2018, when we were doing our predictions episode, we talked about the rise of exurbs and outer suburbs because of, one day, self-driving cars, and because of bandwidth and working from home. So we talked about this trend even before all of this happened. And now this trend is being sped up, but I don't think that means the city is over, for the reasons I stated.
I think there's a certain amount of creativity and cross-pollination that still only happens in person. Yes, you have enough bandwidth to do video chats whenever you want, and yes, you can get all the entertainment you want in your own home, but I think people are going to find there's something missing, and there are going to be people flocking to the cities. Now, there are economic problems in the city, and they are severe, but they could be turned around with good leadership. I think that for a time, rents and property values will have to fall, but that will attract lots of people to come into the city, particularly young people. And if you can keep crime low, which there's no reason you can't under good leadership if you make it a priority (they just haven't made it a priority), then there's no reason you can't reinvigorate the city.
And I actually think there is an opportunity for new companies, start-up companies, to crop up and take advantage of what is happening in the cities. I also think that in the future, if the density drops and fewer people are using transportation, it'll actually be easier to come in and out of the city. And if it becomes easier to get in and out of the city, then more people could come in from the suburbs, if that makes sense. Maybe I'm not being specific enough, but there are companies that could take advantage of this. I think it has to be an area where people need to be creative; it has to be in product development pursuits, where people are bouncing ideas off of each other constantly. Those types of companies, I believe, will thrive in the cities in the future, if the cities have smart leadership and plan appropriately for the future. We don't have that right now, but there's no reason why we can't have it in the future.
Okay. Sorry, that wasn't as succinct and clear as I usually am, but those are a few of my thoughts off the top of my head. Maybe we'll talk about this more later. If you want to weigh in: localmaxradio@gmail.com. Alright, pretty short episode today. Have a great week, everyone.
Max Sklar: That's the show. Remember to check out the website at localmaxradio.com. If you want to contact me, the host, or ask a question that I can answer on the show, send an email to localmaxradio@gmail.com. The show is available on iTunes, SoundCloud, Stitcher, and more. If you want to keep up, remember to subscribe to The Local Maximum on one of these platforms and to follow my Twitter account @maxsklar. Have a great week.