Episode 306 - Spotify Shrinks, OpenAI Returns, and Household Robots Emerge
In today's December news update, Max talks about industry events like the layoffs at Spotify and the recent turmoil with OpenAI's management. He also talks about the NYU/Meta collaboration on household robot technology called Dobb-E.
Links
CNBC - Spotify Lays off 17% including engineers (1500 jobs)
Wireless Generation: MClass Montage 2008
Fox Business - Sam Altman has officially been reinstated atop OpenAI as CEO
Fox Business - OpenAI brings back Sam Altman as CEO, establishes new board days after co-founder's ouster
Fox Business - Can OpenAI survive the turmoil?
Business Insider - We now have more details on what the heck happened at OpenAI
MIT Technology Review - Unpacking the hype around OpenAI’s rumored new Q* model
X - Pedro Domingos take on OpenAI history in a nutshell
Medium - Q* and the Future of Artificial General Intelligence
Blockbuster Blueprint - Top AI Experts Predict Artificial Superintelligence In 3-5 Years. Now What?
X - Brett Adcock on AI x Robotics
Marktechpost - Researchers from NYU and Meta Introduce Dobb-E
X - Mahi Shafiullah announcing Dobb·E
BBC - Why has the Gaza ceasefire come to an end?
Washington Post - Here are the hostages released by Hamas and those remaining in Gaza
YouTube - Origins of the Palestinian-Israeli Conflict Part I: to 1949
Related Episodes
Episode 213 - Artificial Consciousness Controversy
Episode 75 - The Market Loves You: Jeffrey Tucker on the nature of Innovation
Episode 272 - Data Science History with Chris Wiggins and Matthew Jones
Transcript
Max Sklar: You're listening to Local Maximum, Episode 306.
Narration: Time to expand your perspective. Welcome to the Local Maximum. Now here's your host, Max Sklar.
Max: Welcome everyone, welcome. You have reached another Local Maximum. How're you doing today? December! We are finally getting around to basically the end of the year. So I thought I'd give you a little December news update to start with, for our industry.
First of all, in the software industry today, the news is not so great. Well, the news might be great for shareholders of Spotify, whose stock price has gone up after this news, and not so great for the employees: Spotify has laid off 17% of its workforce, including engineers. That's 1,500 jobs. Of course, the CEO used the term "right-sizing," which is what you say when you don't want to say downsizing. Nobody ever says right-sizing when they want to hire more people because they think their size is too small. They only use the term when they think their size is too big.
What's going on here? It seems like, unfortunately, the crash in engineering jobs has continued. I don't think it continues indefinitely. But I think what has happened over the last decade is that so many of these companies were once startups. I mean, wasn't Spotify? Spotify was a pretty small startup 10 years ago. Let me actually look up (I did not look this up before the show) when Spotify was founded.
Spotify was founded in 2006, so around 2013 it was still a medium-ish, smallish company. Over the 2010s, as you know, the ideological driver for these companies was: hire more and more, get more and more engineers in there, build more and more. Unfortunately, at some point that reached its limit. And I think that's what's happened now. Remember, Meta laid off many of its employees? Google did too; not an order of magnitude of the company, but a certain percentage.
And so, yeah, I think the market is readjusting and reallocating engineers, product designers, product professionals. And for those of us who are engineers, the interesting thing about being an engineer is that you have to go into business as well. I mean, that's why I got my degree in Information Systems from NYU, which combined computer science and business, because you also have to figure out where your skills will be most valued.
I haven't been very good at it, either. But this is something to watch. It looks like big tech is not the place to be right now; it's probably more like small tech. Unless, of course, you get to work on AI research, in which case I think it is the place to be. Now, talking about how tech jobs have changed: somebody posted a video that I made back in 2008, and I'll repost it.
It was a video I made when I was working at Wireless Generation, back in 2008, for the company talent show. The idea was that we were going to make a product in five minutes through a montage, and I had kind of funny takes on what everyone's job was. We had the sync team all talking in sync, all talking at the same time.
It was funny, because that was programming for the Palm Pilot, and there was a whole team involved in figuring out how to sync the data from the Palm Pilot onto the data on your desktop computer. Come to think of it, we were using Windows XP, and we had desktop computers on our desks there, so I literally had to go into work to get any work done.
So it was really cool to see that, and I'm glad that someone reposted it. Because, A: I thought the video was going to be really embarrassing, but it actually turned out to be really funny. And it reminded me, hey, it was kind of fun to be in an office that you had to be in. It wasn't work from home, there were friendly faces there, and you got to learn a little bit about what everyone was doing, which is very different from the world of the 2020s.
So I don't know, maybe people find inspiration in it. I found inspiration from it because I remembered how creative I used to be. I still am creative, but I do different things now. And so I kind of remembered: oh, I can make videos. I take myself too seriously now; maybe I shouldn't take myself so seriously anymore. I think that would be a fun thing.
There's a really funny scene in there where one of our database administrators was haranguing me about the SQL I wrote: you know, what are you doing to our database? It's pretty funny. But, yeah, definitely check that out: localmaxradio.com/306. I'll put up that Wireless Generation talent show video from 2008. Someone said, look, it's baby Max. It's me right out of college.
All right. Now we're starting to get a little bit of a picture of what happened at OpenAI, though we still don't know for sure. But since the last time I gave my news update, Sam Altman, co-founder and CEO of OpenAI, has been reinstated as CEO. According to Fox Business, Sam Altman confirmed on Wednesday that he was returning to the helm at OpenAI as CEO nearly two weeks after he was ousted from the company he co-founded.
In a blog post on OpenAI's site, Altman said he was returning, that his replacement would resume her previous role, and that the new initial board would consist of Bret Taylor as chair, Larry Summers, and Adam D'Angelo.
“I've never been more excited about the future,” Altman said. “I am extremely grateful for everyone's hard work in an unclear and unprecedented situation.” Talking about unprecedented, well, we'll get to that in a minute. “And I believe our resilience and spirit sets us apart in the industry. I feel so, so good about our probability of success for achieving our mission.”
Well, you know, what an absolute roller coaster ride for these guys. This company is worth many, many billions. It's not just the next unicorn; a Silicon Valley unicorn is something worth a billion dollars. This company is already worth many tens of billions of dollars. And the employees know that this has the opportunity to be one of the next trillion-dollar companies, with a great shot at it.
Will it own AI entirely? Probably not. Its market share remains to be seen. How does the market shake out over the next couple of years? Do other companies catch up? It does seem like they have a pretty strong lead with their initial product.
Sometimes initial products falter and lose to the follow-ons. But people are really getting addicted to ChatGPT. People are using their API, which is not so much a consumer addiction, but once you use someone's API, you really don't want to switch; you need a very good reason to switch.
So they are getting their tentacles into the internet, so to speak. And that's very good for them. Lots of great applications of AI, I think it can spawn smaller companies as well. So it looks like Mr. Altman is firmly in control of the future of OpenAI here.
How did he do it? It says OpenAI was in turmoil following Altman's ouster, as nearly all of the company's 770 or so employees signed a letter threatening to join Altman at Microsoft unless he was reinstated, and the board stepped down.
I've never seen anything like this. So that's pretty incredible. You know, it doesn't matter what you think of Altman, it's pretty incredible that all of the employees banded together and not in like a unionization sort of a way, not in like a “We want this and this and this.”
I mean, these employees are likely for the most part already very wealthy, at least on paper. Not liquid, but on paper. But, you know, for them, it really, really matters. They almost have a bigger stake in the company than the board does, it seems.
So it's amazing to see how all the employees can get together and make a threatening letter. I don't want to say make a threat, like it's some kind of an illegal thing. But you know, say hey, we will take this action and join this other company unless you make X, Y and Z personnel changes or executive changes and they won. So pretty incredible to look at the power dynamics here. We're still putting together the pieces of what happened. There was a lot of speculation: was it corporate politics?
You know, it looks like they're all playing politics internally with the board of directors, Sam and the others alike. I'm really bad at this personally, I think: playing politics with business and work. I don't know, maybe I could do it if I were put in the situation, but I would at least try to be consistent and ethical.
From what the article says, it doesn't sound like Sam did anything that crossed a red line, but it certainly rubbed people the wrong way. From Business Insider: “Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall, he confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology at Georgetown University, for co-writing a paper that seemed to criticize OpenAI for stoking the flames of AI hype.” Something she's not alone in; a lot of people feel that way.
“Toner had defended herself, though she later apologized to the board for not anticipating how the paper might be perceived. Altman began approaching other board members individually about replacing her. When these board members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner's removal. He'd play them off against each other by lying about what other people thought.”
I mean, that is something that rubs me the wrong way at work: when someone says, oh, so-and-so is saying that you need to do this, and then you talk to so-and-so, and so-and-so says, what? I never said that. They give the false idea that everyone's on the same page. And there are operators, a lot of middle managers, like that.
So I don't know if Sam Altman was doing this purposefully, but I can see why people don't like that. Maybe that wasn't a good reason to remove the CEO, but I can see why they don't like it.
Okay. I've been thinking about what my takeaway is with that, if it's true. I think this pales in comparison with, for example, some of the stuff Steve Jobs used to do. On one hand, we've had a string of well-known frauds in the tech industry, and across industry in general. Sam Bankman-Fried comes to mind; Elizabeth Holmes comes to mind.
Not just Sam Bankman-Fried, but a lot of the crypto people, and Elizabeth Holmes. If I could trade those for the so-called slippery operators, I would. But I wonder if there's a way to avoid either. In any case, Sam was obviously one of the people who got the company to where it is. He can share that achievement with others, but he has absolutely earned his role, as far as I'm concerned.
According to this, the new board will be more friendly towards their investor, Microsoft, and also contains Larry Summers, who's somehow involved with everything. No more Ilya. But I hope Sam and Ilya don't become enemies, because they created this together. Ilya, remember, is probably the most well-known engineer at the company, and he probably had the biggest hand in developing this technology.
He was the one who said, if you remember from a bunch of episodes ago, and I should look this up... he said, “Hey, maybe our code is a little bit conscious.” February 2022, the AI Consciousness Controversy episode. Ilya Sutskever wrote a tweet saying, “What we're working on is a little bit conscious.” And of course, I think that's ridiculous.
But maybe that was one of the examples of stoking AI hype. It doesn't mean that Ilya Sutskever doesn't know what he's doing in creating this technology. He obviously has done some incredible work.
Another rumor that came out of this, which seems to have died down and does not appear to be the case, was that they were working on this Q* system, the Q* model, and that this was going to solve artificial general intelligence, the first superintelligent machine, and that it was going to change the world. And the board was like, oh my God, oh my God, oh my God, and the employees were like, oh my God, oh my God. Okay.
And then they had a fight over what to do about it, and that led to this whole kerfuffle. It was crazy how that rumor spread. So Q*, from what I understand, and I'm getting this from technologyreview.com, is their model for working on mathematics. A very interesting model: it touches on the field of formal methods and automated proofs.
I took an automated deduction class at NYU; I've always been very interested in automated deduction. Can you do math in an automated way? It turns out that a lot of mathematics you can do through an algorithm, in an automated way.
But when it comes to the highest level of mathematical proofs, and trying to understand the mathematical truths of the universe... I didn't even want to say “the universe,” because usually the universe means physical truths; mathematics is the base layer, the logical truths. Algorithms and randomly searching for correct statements can only take you so far. It requires creativity, it requires planning, it requires intuition.
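To give a flavor of what "doing math in an automated way" means at its most basic, here is a toy sketch of forward chaining over Horn clauses, one of the simplest automated deduction procedures. This is only an illustration of the idea; it is not any particular prover, and the rules below are made up for the example.

```python
def forward_chain(facts, rules):
    """Derive everything provable from the starting facts.
    Each rule is a pair (premises, conclusion): if every premise is
    known, the conclusion becomes known. Repeat until nothing changes."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Made-up rules: 0 is a natural number, and successors of naturals are natural.
rules = [
    ({"nat(0)"}, "nat(s(0))"),
    ({"nat(s(0))"}, "nat(s(s(0)))"),
]
derived = forward_chain({"nat(0)"}, rules)
print("nat(s(s(0)))" in derived)  # provable by two mechanical rule applications
```

Mechanical search like this covers a surprising amount of routine deduction, which is exactly why the frontier of the field is the creative, planning-heavy part that brute force can't reach.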
So I think now, with AI at the level it is... automated provers have been around for quite a while, and a very interesting, very high level of abstract thinking has to go into that research. It stands to reason that machines, even large language models, would be great collaborators in this space of automating mathematics. But I should say it's hardly Artificial General Intelligence.
Some of our toughest math riddles today are not solvable at all. But some of them will be solved by AI, ones that can't be solved by humans, and I believe we will see a string of those. Probably very soon, though very soon in mathematics is 20 years. Mathematics isn't necessarily a slow-moving field, but if you're talking about mathematics from 100 years ago, that's still, like, super advanced. The stuff that you learned in high school is hundreds and hundreds of years old.
I think this technology is going to do that. But most of these won't have practical applications, let alone superintelligence, for quite a while. I'm trying to think if any of them would have practical applications. What if we solve the Riemann hypothesis? Well, that certainly would be a mathematical breakthrough.
I wish I knew more about the practical applicability of that. Hopefully not break encryption, hopefully not that. Because then we've got a problem on our hands, a bit of a conundrum in terms of both banking and all security on the internet and international security as well. So we'll see about that.
But I think that's the premise of the finale of the HBO show Silicon Valley. I probably shouldn't give too many spoilers, but that's not so bad: I think the premise was that their AI broke encryption, and they were trying to figure out what to do.
So to make a long story short, Pedro Domingos, a famed AI researcher and the author of The Master Algorithm, posted on X: “OpenAI history in a nutshell: more and more hype about less and less, culminating in ‘Q* is AGI.’” Q* is Artificial General Intelligence, which he says it is not, despite the hype. Still, it strikes me that these guys have put together an incredible product with incredible technology.
So that can't be discounted, even if there's the urge to embellish, the urge to hype up. I think that always happens. Alright, InnovateForge on Medium wrote a little bit more about Q* if you're interested.
“While the development of Q* is a notable example of the progress being made in the field of Artificial General Intelligence, it has also raised concerns about its potential impact on humanity. Some researchers have warned the board of directors of OpenAI that the breakthrough could threaten humanity if not properly managed.”
Now, notice this was written as part of the speculative bubble while all the corporate turmoil was taking place. He writes, “The exact safety concerns noted in the letter are not specified. But there has been an ongoing discussion among computer scientists about the danger posed by highly intelligent machines.” Apparently it has in it a lot of learning, planning and reasoning, which of course you need for automated deduction.
So that's just a piece. But what will the world look like in 2030, with all of this technology having matured and being at the forefront? We're starting to catch a glimpse of it, and it's something that I want to explore over the coming weeks and months here on the Local Maximum.
A really, really interesting piece of technology, or announcement I guess, came out over the last few days that all of us should be interested in for the future: researchers from NYU and Meta introduce Dobb-E. I wonder if they're getting the name from Dobby, the Harry Potter elf, and DALL-E, which was the image model. But anyway.
Dobb-E is an open-source and general framework for learning household robotic manipulation. This is a home robot system, maybe finally an upgrade to that Roomba. I don't know how many of you have a Roomba. It kind of aimlessly wanders around your floor bumping into things; it doesn't seem very smart. It came out probably 20 years ago now and hasn't improved that much.
It came out 20 years ago and you were like, oh, it's dumb, but it's 2003; maybe by 2013 it'll be super smart, start sweeping the floor in all the right places, and you'll be able to put everything in the trash can. Well, 2013 rolled around and the Roombas were pretty much exactly the same. 2023 rolled around, and the Roombas were pretty much exactly the same.
But Dobb-E is different. It actually stands up. It manipulates things. Going to the Marktechpost article, for example: “The study recognizes recent strides in amassing extensive robotics datasets, emphasizing the uniqueness of their dataset centered on household and first-person robotic interactions, leveraging iPhone capabilities. The dataset provides high-quality action and rare depth information compared to existing datasets, incorporating manipulation-focused representation models into pre-training for generalizable representations.”
Okay, I'll go through that. “The full work addresses challenges in creating a comprehensive home assistant, advocating a shift from controlled environments to real homes.” You can see the videos of this thing doing tasks in real homes, like unplugging things, plugging things back in, and taking things in and out of the laundry.
“Efficiency, safety and user comfort are stressed, introducing Dobb-E as a framework for embodying these principles. In conclusion, Dobb-E is a cost-effective and versatile robotic manipulation system tested in various home environments with an impressive 81% success rate. The system's software stack, model data and hardware designs have been generously open-sourced by the Dobb-E team to advance home robot research and promote the widespread adoption of robot butlers.”
“The success of Dobb-E can be attributed to its powerful yet simple methods, including behavior cloning and a two-layer neural network for action prediction.” That two-layer neural network is not that deep, but it's just for action prediction. Like, I plug something in, it'll do that, I guess. “The experiments also provided insights into the challenges of lighting conditions and shadows affecting task execution.”
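To picture how small "two layers" is: a two-layer network is just observation, hidden layer, output. Here's a minimal sketch of that shape, mapping an observation embedding to an action vector. All the dimensions, names, and the specific framing here are my own assumptions for illustration; this is not the actual Dobb-E architecture or its trained weights.

```python
import random

def relu(v):
    # Elementwise rectifier: zero out negative entries.
    return [x if x > 0.0 else 0.0 for x in v]

def linear(weights, bias, v):
    # One fully connected layer: weights is rows x len(v), bias is per-row.
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def two_layer_policy(obs, w1, b1, w2, b2):
    """Map an observation embedding to an action vector
    (e.g. hypothetical gripper deltas): linear -> ReLU -> linear."""
    hidden = relu(linear(w1, b1, obs))
    return linear(w2, b2, hidden)

# Tiny demo with made-up sizes: 4-dim observation -> 8 hidden -> 3-dim action.
random.seed(0)
w1 = [[random.uniform(-0.1, 0.1) for _ in range(4)] for _ in range(8)]
b1 = [0.0] * 8
w2 = [[random.uniform(-0.1, 0.1) for _ in range(8)] for _ in range(3)]
b2 = [0.0] * 3

action = two_layer_policy([0.5, -0.2, 0.1, 0.9], w1, b1, w2, b2)
print(len(action))  # one number per action dimension
```

Behavior cloning then just means training weights like these by supervised learning on recorded human demonstrations, which is why such a shallow network can work when the input representation is already good.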
So, unlike self-driving cars. Those of you in New York know what I'm talking about. Self-driving cars were first introduced in Arizona, and now they're being introduced in California. New Yorkers must feel like self-driving cars are going to come to New York City dead last: everywhere else in the country, and then New York City. I don't know if we can handle a self-driving car; maybe we can.
But this household robot thing is starting in New York, a New York City-first technology. So that's kind of cool. And it's exciting to see that this idea of the robot butler is going to be kind of a fast follow, I think, to the self-driving car. Because the self-driving car unlocks so much more economic potential, but then it's like, well, where's my robot cleaner? Where's my robot chef? Where's my robot butler to kind of tidy things up?
Perhaps the Jetsons world that Jeffrey Tucker once predicted, but then lamented, after the reality of big tech set in is, in fact, coming to fruition very soon. I spoke to Jeff Tucker, on episode 75 of the Local Maximum. So maybe you should check that out.
All right. So I hope to have Aaron on soon, and I have several interviews in the can that I want to share with you. Another one is about the Constitution, and this morning I had a really interesting conversation with Michael Callahan of the Where We Go Next podcast.
I was on his show, and then he came on my show. Our discussion kind of focused on the intersection of technology and identity, and how we relate to people. I thought it was a fascinating discussion, a really great one to listen to. So look out for that one soon.
I'm also going to throw back to another recent episode, based on something that I went to tonight. Earlier this year, I did Episode 272 with Chris Wiggins and Matthew Jones on the history of data science, and they were actually at Betaworks today, discussing that book yet again. And we got into the ethics of data.
Someone asked a really interesting question, someone who was like an outsider to data science, saying: are these models, particularly AI models, going to be seen as the truth? As in, okay, this gives us an answer, so this is going to be our truth. And is that a problem?
And as Chris put it (I mean, I knew this, but I think he said it very well), the people on the inside building these models all know that a lot of subjective decisions went into them. The problem is that people on the outside, often the very people using the models, see them as some kind of magic box.
So I thought it was a very interesting discussion. Kind of makes me want to go back to Episode 272. So definitely check it out. All right. Let's see. So also look out for all of my discussions in the future.
One thing to end on, and maybe it's not a very positive note. Obviously in the news, we've been watching these almost surreal negotiations between Israel and Hamas, with the prisoner swaps and the hostages, random people taken from Israel into the Gaza Strip. This has got to be probably the toughest conflict to watch of any that I've lived through, especially with the whole situation with hostages and the fighting in urban areas.
I worry about what the outcome will be. I hope there's some resolution at the end of this, coming soon. And, man, I hope the conversation we're having here in America and around the world improves.
If you want to listen to something on the history of the conflict, I've listened to a lot of material on that recently. Some of it is kind of incredibly biased and advocacy-based; some of it's good, some of it's not so good.
I really enjoyed the one that Henry Abramson put out recently, so I'm going to post that as well. All right, that's it. We'll get to the interview soon. And hopefully we'll have Aaron on soon. Have a great week, everyone.
That's the show. To support the Local Maximum, sign up for exclusive content at our online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app. Also, check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week.