AI: Promise and Peril

Morals & Markets with Dr. Richard Salsman

Jun 02 2023 | 01:28:48

Show Notes

"AI is just a fancy name for automation—which is the embodiment of advanced human intelligence in tools and machines—and like all technology it should be welcomed, not feared, curbed, or banned. History shows that fire, the wheel, the gun, electricity, nuclear power, and many other technologies have been enormously beneficial to humans; that they’ve also been misused by evil actors only means we should prevent evil, not invention."

Episode Transcript

Speaker 0 00:00:00 Hello, everyone. Welcome to Morals and Markets, the podcast presented by the Atlas Society and hosted by Atlas Society Senior Scholar, Dr. Richard Salsman. This month we're talking AI: promise and peril. Don't miss out on this full-length episode.

Speaker 1 00:00:15 Well, thank you, Scott, and thank you all for joining. I thought I would read, as I normally do, the summary that I offered on the website for this session; I think it'll be interesting. Here's what I wrote: AI is just a fancy name for automation, which is the embodiment of advanced human intelligence in tools and machines. And like all technology, it should be welcomed, not feared, curbed, or banned. History shows that fire, the wheel, the gun, electricity, nuclear power, and many other technologies have been enormously beneficial to humans. But of course, they've also been misused by various evil actors. That only means we should identify and prevent the evil, not prevent invention. Now, I'm gonna say more than that, but that is one of my main points. And by the way, as those of you who may have noticed know, even while specializing in certain fields, this is a multidisciplinary topic.

Speaker 1 00:01:26 There are, of course, the technical aspects of AI and all the jargon and language associated with it. So for those of you in the tech business, and specifically in AI, there's an enormous body of research. There's a long history going back at least 50 years, to the famous Dartmouth College summer seminar in 1956, where "artificial intelligence" was really coined as a term. But it also has, especially for this audience, interesting philosophical, especially epistemological, aspects, since we're talking about methods of trying to mimic or in some way reflect human intelligence in machines. That means you need some theory of human intelligence. But there are also the economic aspects; I'll touch on those a bit. Some of the phobias and fears about AI really reflect trailing phobias and fears about machines generally.

Speaker 1 00:02:32 But that goes back centuries. Not decades, but centuries. I'll touch on that: whether it displaces human labor, and whether that's good or bad, whether it brings us greater leisure time, efficiency, productivity. The egalitarian economists (so this is more philosophical, I guess, as well) will worry about inequality. This is always true: when you move toward more skilled and cognitive labor, if you will, away from manual labor, those with those skills are gonna earn more and do better in the economic world than others. Should that be a reason to worry? Should that be a reason to restrain the growth? For many of them, the answer is yes. Now, politically also, we know there's been a trend, and this is always the case whenever a new field opens up. AI is not a new field, but there's an aspect of it which is very interesting,

Speaker 1 00:03:27 called generative AI, that I'll talk about tonight. And that is different: the regulators get involved, the politicians get involved.
And if you check out recent weeks or months, especially since ChatGPT was introduced last November and caused great interest: calls for a moratorium, calls for regulatory agencies focused on AI, calls for greater government intrusion in the field. A very, very bad development, in my estimation. But that's obviously the political aspect of it. And so it has these political, technical, economic, philosophical, moral aspects to it, and unless you have an interdisciplinary approach, I believe it's hard to figure out the promise and the perils, the right balance between the two. I think it's also improper just to classify people as optimists or pessimists, as tech optimists and tech pessimists, or tech utopians versus technophobes.

Speaker 1 00:04:32 For those not just fearful but irrationally fearful of technology, realism is really what's required. It's not optimism or pessimism so much as being realistic about what these things are, what they can do for us and what they can't do for us. I just wanna say something broadly now on the economics, on the history of this. There has been a long history, mostly since the Enlightenment but even earlier, the Renaissance, of the introduction of machines and tools and utensils. I call 'em LSDs: labor-saving devices. Of course you would want to do this if you're moving beyond primitive stages to more advanced human stages. And these tools and machines and capital, if you will, when you think about it, are invented and constructed by brainiacs: not by people with brawn, but by people with brains.

Speaker 1 00:05:32 And it is, I think, worth thinking about, especially when it comes to AI, and especially when it comes to property rights, to intellectual property rights, that even more basic tools are the invention and application of human reason, advanced human reason. It's not muscles that brought us these labor-saving devices, this technology and this capital, but brains. So not just rational philosophers, but leading scientists, inventors, innovators, and the manufacturers and others who commercialized these things; mechanics, engineers. The group of people I just named are highly intelligent and, not just that, highly practically oriented people. And we have to be so enormously thankful that they created this capital, which can replace and supplement and enhance human labor and human work to the point where it's more productive, which means we're gonna have a higher standard of living, which also means we're not gonna be working the livelong day.

Speaker 1 00:06:40 We're going to have more leisure. So with all this, I just want to emphasize the positive here, and who's contributing to the positive: it's the brainiacs. Now, of course, they have to be free. So this has worked mostly, obviously, in political-economic systems where property rights have been respected and the mind has been respected, the freedom of the mind, and then protection of copyrights and patents, respecting the achievements of the mind. All that is absolutely indispensable to the development, the accumulation of capital, both in quantity and quality.
Now, artificial intelligence, which is really the focus of tonight: as I said, I don't mean to demean it, but it should be seen as something in a continuum of exactly the thing I just mentioned, namely the evolution and development of capital. And you know that the anti-capitalist economists over the years, from Marx to Keynes and others, were highly critical of capital and capitalists.

Speaker 1 00:07:44 And others, like Malthus, were phobic: unable to forecast technological advance, and in the case of Malthus, unable to imagine there could be a productivity boom in agriculture. So, unable to forecast that, but also seeing the beginnings of population growth. We know that around 1800 or so Malthus was forecasting gloom and doom: that population growth would outstrip food growth, and there'd be mass starvation and disease. I mention that only because the concept of the Malthusian comes from Malthus, and it's a tradition that is still with us today, unfortunately, and invades and permeates a lot of AI pessimism. Again, it's partly the group that's distrustful of machines as something that somehow alienates and is unnatural to humans, cuz they can't conceive of the rational aspect of it, the rational aspect which is our most uniquely human feature.

Speaker 1 00:08:51 And the other group: precisely the people who cannot conceive of productive change in the future are exactly the ones most fearful and warning that all will work out badly. That's something to keep in mind, because we know that still permeates the environmentalists, it still permeates the socialists and Marxists, and the AI doubts and phobias and criticisms are not to be taken out of that context. Many of these people, if you read closely, bear this out. I'm thinking mostly of Nick Bostrom, who wrote the book called Superintelligence, subtitled Paths, Dangers, Strategies (2014); a Swedish philosopher now at Oxford. The book is just awful. The book is just terrible. It's a warning that all of humanity is doomed unless it puts restraints on AI, unless the government gets involved in AI, unless all sorts of things are done with AI.

Speaker 1 00:09:52 And unfortunately, this kind of thinking has permeated even decent people, like the late Stephen Hawking, who's not with us anymore, but Elon Musk as well, and Steve Wozniak, one of the co-founders of Apple. If you look, these people have for years been signing open letters warning against threats to humanity unless we have moratoriums and restraints on AI. And sometimes they'll say, well, we're not talking about all of AI; we're trying to restrain just certain applications of AI which we consider dangerous, whether it's the surveillance state, or privacy concerns, or the weaponization of armies that are totally robotic. Again, this is the kind of thing that I'm trying to argue against. I think it's unnecessary. I think it's phobic; it's not rational fear, but irrational fear. And I don't really care what the other agendas of these people might be, but it is somewhat disturbing, and kind of shocking actually, that otherwise smart people would fall for this kind of stuff.

Speaker 1 00:10:58 But it just shows you that you can be a commercial success, you can be a billionaire, and still get the multidisciplinary aspect of this wrong.
And you could still be very brilliant in physics and elsewhere, like Hawking was, and get this wrong. I think it's a testament to the fact that it has all these different elements. So you can even get someone who's just a political philosopher, like Bostrom, who gets it wrong. Why? Cuz he doesn't really know the technics, or he doesn't know the economics of it, or the history of automation. But you can also get certain economists who only know the economics of it, and they don't bother to investigate the technics. So it's not an easy thing to do. This is a very difficult topic if you want to go at it in the kind of holistic approach I'm talking about, but that's what I'm suggesting.

Speaker 1 00:11:45 Now, just for those of you more interested in recreation and arts and entertainment, I think it is very interesting that if you look at books and novels and sci-fi and TV and movies, there is a noticeably imbalanced, asymmetric long history, whether it's Frankenstein or Jurassic Park or The Terminator or I, Robot; I could go on and on. You can remember even the early Star Trek series in the late sixties: lots of it with the idea that a man creates something, and it's advanced, and then this something, usually a machine or a robot or some kind of system, goes awry. It turns against man; it threatens man. It's kind of this dark sci-fi type stuff. And so you have to ask yourself why that is so common, why there aren't more depictions the other way, since the actual facts of reality are there.

Speaker 1 00:12:48 Why aren't there more dramatic and aesthetic depictions of human beings benefiting enormously from machines? It's interesting, cuz the sci-fi genre in whatever form is an inspiring thing to many people. And I think it's actually one of the more wonderful things about the novels, the books, the movies and things like that: it permits human beings to imagine futures that are achievable. And then, lo and behold, you do find that certain things suggested early on, whether it's voice recognition, facial recognition, a whole bunch of other things, show up in books and movies first and then are invented later. It's a good thing. But still, the asymmetry is undeniable and worth explaining. I won't do it tonight, but for those of you more on the aesthetic side of this, that's yet a fourth or fifth angle here:

Speaker 1 00:13:46 why it's pessimistic, why it's gloom and doom in terms of the theme. Now, in this field generally, not particularly AI, there's a concept of the singularity. And if you look up singularity, it's again this idea, this tipping point. We are trying to imbue machines, inanimate objects, with human capacities to reason, to express and apply intelligence. And the singularity idea is: when does it get to the point where whatever we're creating is smarter than us, and not only smarter than us, but devious toward us, and turns against us? It doesn't just, you know, hire us out as underlings; something nefarious happens. It's a very common theme. I think it has to do much more with a malevolent-universe outlook, which, as an Objectivist, I do not share, but it is out there. And none of these scenarios, I find, if you read them really closely, are really anchored in reality.

Speaker 1 00:14:50 A lot of this is imaginary; a lot of it is fantastical.
I think some of these people kind of wallow in the idea of expressing the fantastic, because it's just sensationalist. It's out of the ordinary; it's out of the norm. It's not dry, calculating, cold, statistical, empirical work. But they wanna make a splash. They wanna sell a book; they want to present a movie scenario. So beware of that kind of stuff, because you hear the words, you hear the jargon, and there really isn't anything behind it. It's kind of like pseudo pop-psychology stuff. Now, I wanted to say something because it's such a hot topic. By the way, the reason I decided to address this now: a couple of things. One, and this might interest you: for about five years or so, I've been conducting seminars at Duke for seniors.

Speaker 1 00:15:43 Very advanced, small seminars, no more than 15, 16 students, on a range of topics. And I alter the topics sometimes over the years, depending on trends and student interests. But I have to tell you that the topic of AI (it's usually listed as AI, automation, technology, job security) is of enormous interest to the students. Now, I'll bring up other things, like environmentalism, or is healthcare a right and should we have socialized medicine or not. So you understand: multi-hour sessions devoted to these topics. And invariably, this one's at the top of the list. It's very interesting to students. They're interested in all the aspects I just mentioned. But also, and this might surprise you, they're almost universally optimistic. Now, maybe that's cuz they're Duke students; maybe because they're expected to go onward and outward into these great fields, and they have the confidence.

Speaker 1 00:16:46 And computer science, by the way, is one of the fastest-growing majors at Duke. I teach politics and economics, but sometimes I'll get STEM students. Anyway, I wanted to tell you this because it's enormously increased my interest in the field and the research I've done in it. But I'm also heartened that the students come into those sessions, and I might steer them in an optimistic direction, but they're already there. So they're not just mimicking what I say; they give arguments and write papers, long 25-page papers, about AI and the philosophy of it and the economics of it and the politics of it. And they're fascinating. So that's a good thing. The other reason I brought it up is seeing the introduction of ChatGPT by OpenAI, the company, last November.

Speaker 1 00:17:42 And then the reaction to it; it's such an interesting and hot topic. That's another reason I raised it. But here's the third reason: last March, seeing the cascade of demands for a moratorium on the development of AI. It was very annoying to me and very disturbing to me. It's very troubling, and it's ongoing, of course. But those are some of the reasons I thought I would bring it up. I knew also that it had this multidisciplinary aspect to it. And although I'm predominantly a political economist, I have studied the technics enough to know the basics, so as not to make major mistakes there at least. But it's, as I said before, very difficult to go at this and get all the angles together.
Now, ChatGPT is an example, by the way, of my interpretation of AI. In this case: up to now, automation generally, and the introduction of labor-saving devices over the years, has been about replacing repetitive, boring, routine physical labor.

Speaker 1 00:18:49 And thankfully, that's what these mechanics and engineers did. By inventing machines and robots and things like that, when you think about it, they embodied their brains in these machines and then left the factory entirely, so that over time, instead of seeing a whole bunch of human bodies in the factory, you saw assembly lines, you saw automated equipment. But those are the engineers: they're not physically there anymore, but the manifestation of their brains is in the machines, which I think is an interesting concept to keep in mind. Now, more recently, you could say AI, and especially generative AI, the reason it's interesting (and it's not that this just started last November) is it holds out the prospect of replacing not physical, manual, repetitive labor, although it'll do some of that, but rather, and here's the controversy, brainiac creative labor: white-collar labor, the labor of writers, of even computer coders. ChatGPT and other tools can be used to facilitate even coding.

Speaker 1 00:20:00 And then, more interesting, the generation of images, not just text, which everyone's getting excited about. You ask ChatGPT to write you a paper, say a 3,000-word paper on, I don't know, Ayn Rand on something, and it crawls the internet and uses big data, and it uses these algorithms and all these methods, which we know have already been used in social media and elsewhere, and generates this decent-looking paper (there's a little sketch of that prompt-and-response pattern below). Now, I'm a professor, so I'm a little worried about this in regards to monitoring what students do and whether they're original or not. But it's an amazing technology. I think it's wonderful, as long as it's judged objectively and rationally. But that is a difference worth noting: I believe one of the reasons this has become so controversial is it's made people anxious precisely in the creative fields.

Speaker 1 00:20:54 The ones who write, the ones who argue, the ones who publish, the ones who write movie scripts. The imaging is interesting: you're getting to the point where you can ask for artistic imagery of certain things, and people are gonna worry about copyright infringements. But through the development of the printing press and books and software, we've always had the ability, if we're smart about this, to adjust the legal treatment, the contracting, the IP, to new technologies. So the early applications of this might be confusing to some people, but it isn't like it hasn't happened before. It isn't like in the past there haven't been methods for properly applying legitimate legal protections to these new methods.
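For the technically curious, here is a minimal sketch of the prompt-driven generation described above, assuming the pre-1.0 OpenAI Python client as it looked in mid-2023. The model name and the prompt are purely illustrative, and an API key is assumed to be set in the environment; this is a sketch of the pattern, not of anything demonstrated in the episode.

```python
# A minimal sketch of prompt-driven text generation, assuming the pre-1.0
# OpenAI Python client (mid-2023 vintage). The prompt and model name are
# illustrative; OPENAI_API_KEY is assumed to be set in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Write a 3,000-word paper on Ayn Rand's view of automation."},
    ],
)

# The reply is text assembled from statistical patterns over training data,
# the "big data and algorithms" described above, delivered on request.
print(response["choices"][0]["message"]["content"])
```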
Well, what else do I wanna say? I have a bunch of things here, but let me stress something which I really like in I, Robot, and then I'll stop, cuz I've promised to speak no more than 25, 30 minutes in these opening comments. And then I'd love to hear from you. I'd love to hear pushback, or others' perspectives. We have, especially, this audience, who I'm guessing would be a nice mix of not only young and middle-aged and old, but of different fields. So those of you aware of AI, working in AI, with questions about AI, from either the technical or other aspects of it: feel free to chime in. I want to just end this opening session, and many other angles will come up in the Q&A, with Isaac Asimov, who in 1950 wrote an anthology, I think it was nine separate stories, culminating in I, Robot. And he had the Three Laws of Robotics.

Speaker 1 00:22:52 And I think they're very interesting from the standpoint of what the proper perspective is. This is not, obviously, an anti-technology guy. He was a biochemist, among other things, who wrote fiction, but was also very good at popularizing very technical subjects for the general public. And in, I think it's the last installment, called "The Evitable Conflict," he gave the Three Laws of Robotics, and I'm going to read them; they're very interesting in the way they're developed. The first one is: a robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Speaker 1 00:23:33 The second one is: a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. The third one, now this is interesting, this is the one that scares some people: a robot must protect its own existence. But here's the qualifier: as long as such protection does not conflict with the First or Second Law, unquote. Now, to me, this is a beautiful, kind of brilliant way of looking at it (you could even sketch the priority ordering in code, as below): first, embracing the idea of the robot, embracing the idea of this thing that humans have created to help humans, to assist humans, to make humans more human than ever. And yet at the same time, the idea is: humans are still in control. And I mean truly humans, and I'm sure he did as well; I mean truly humane people, not those who are misanthropes, or who would harm humans and use machines and robots to do that.
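The ordering Salsman highlights can be read as a strict priority scheme: a lower law gets a say only when every higher law is satisfied. A minimal sketch, with hypothetical, deliberately simplified action fields:

```python
# A minimal sketch of Asimov's Three Laws read as a strict priority ordering.
# The Action fields are hypothetical simplifications for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would this injure a human, or allow harm through inaction?
    ordered_by_human: bool  # was this commanded by a human?
    protects_self: bool     # does this preserve the robot's own existence?

def permitted(a: Action) -> bool:
    # First Law: overrides everything below it.
    if a.harms_human:
        return False
    # Second Law: obey humans, now known not to conflict with the First Law.
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation only where the higher laws are silent.
    return a.protects_self

# The qualifier in action: self-preservation loses to a human order.
print(permitted(Action("shut yourself down", harms_human=False,
                       ordered_by_human=True, protects_self=False)))  # True
```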
We've largely lost that. And yet at the same time, we're seeing technological advances. So we got two paths. We got technological advance, but then social and moral and political regression going backwards. And I think the combination of that too worries the hell out of people and it should worry them. But the problem is not the technological advance. Speaker 1 00:26:35 Uh, the problem is the retro aggression on moral and political issues. And by retrogress I literally mean the idea that human beings should sacrifice themselves for the greater good that they should serve society, not themselves. That they should serve the government as representative society, not themselves moving away from a free country, moving away from a psychologically independent and economically independent people to an infantile citizenry that will vote for politicians who are largely dots and literally cannot complete a sentence. People see this, people know this, unfortunately, people do vote for this. But in contrast to the wonderful advance that you see in technology, there's a disconnect there and something worth talking about in the q and a. I'll stop there because I've been at 30 minutes and, uh, open it up for discussion and, and comments. Thanks everyone. Speaker 2 00:27:31 Great. Uh, if you want to join, um, feel free to raise your hand, unmute yourself. Um, Richard, I'm just curious, you know, how much of this is from, it's not just like Elon Musk warning about ai, it's also guys like Mark Zuckerberg that a couple of years ago seem to have gamed this out. And so they're, they're promoting this universal basic income because they really think it's different this time. And, and with the number of jobs that it's gonna take and the productivity gains, it almost makes it worth it for them to have that strategy. Speaker 1 00:28:06 Yes, and I think, uh, that is true. They're often coupled together. And in the seminar sessions I do, I often coupled that it was done years ago, uh, by people who said, well, uh, this will cause unemployment. These technological advances will cause unemployment, but, and that will make people insecure. And, but we still want these developments. So what we should do is make them secure by giving them security, by giving them social security, as we call it, including a guaranteed basic income by the government. And for those of you who don't know what that is, it's the government writing you a check no matter what you do, whether you sit on the couch or whatever. And, um, I'm against that, of course, it's just unearned wealth and stolen from others. But this idea of paying people to be docile in the face of a, uh, dynamic changing economic system, uh, it's really very infantile. Speaker 1 00:29:00 The whole safety net idea that life is full of risks and you're a highwire act and you're gonna fall and break your back or kill yourself. That that is not what capitalism is. Capitalism is a system that's given us greater security longer and safer lifespans, greater hygiene, all that kind of thing. So, so the, uh, but the ubi I, Scott, you're absolutely right. The UBI is usually tied to this. And Zuckerberg and others, I mean, they might push ubi I for egalitarian reasons anyway, but they feel like they can get away with it better by saying, well, I'm gonna try to solve here the problem of, uh, tech phobia and guys like that, you know, when you think about it, they don't really want tech phobia, thankfully, cuz they don't want it to impede, uh, what they're doing. I like that part. 
But you don't mollify people by, um, you know, sticking a pacifier in their mouth and telling them to, you know, sit on the couch and collect the government check. Speaker 1 00:29:57 So if that's what you're referring to, there is a kind of a subgroup, and I'm sure that, I'm not sure that Elon Musk would go that route, but there is a subgroup of technology, uh, entrepreneurial types who, who pushed that argument. But it goes way back, it goes way back to the, uh, Eric from F O R m a psychologist in the sixties wrote an essay on, uh, fears of automation of robots stealing your job, and therefore the government should have a full-fledged welfare system. I was that kind of nonsequitor, uh, approach that's out there. That's good. Uh, Raymond, do you Speaker 2 00:30:35 Wanted to say something? Go ahead and unmute. Speaker 3 00:30:38 Yeah, thank you. Can you hear me? Um, Speaker 1 00:30:40 Yes, I can. Hi, Ray. Speaker 3 00:30:42 Yeah, thanks. Yeah, I have a, I have, uh, probably a couple questions, but I'm just gonna ask one right now. Richard, do you have, uh, you know, putting on your, since you're a professional forecaster, uh, can you forecast, you know, do you have an opinion on how the regulatory environment might play out? Like, is it a serious threat to ai? Um, I can't really imagine how like a moratorium could actually be effective, but what do you think could play out, especially with some prominent voices like Elon Musk out there, you know, calling for some sort of, you know, regulation or moratorium? Speaker 1 00:31:20 Yeah, good question. I, uh, my, uh, forecast would be, they will try to do it a, uh, they'll create some agency, uh, an equivalent to the FTC over this. They'll try to pa and they're starting to do this a little bit anyway, and trying to pass regulations and laws here. And that, my, my forecast of it is it will be, uh, inept, thankfully inept because they have no idea what it is, what it is. They're literally ignorant of it. Uh, you know, Ray, we know this from being in finance all those years. Finance itself is a fairly complex sector, you know, usually in, in parts more complex than other sectors. And the inability of the s e C and elsewhere, the Fed, the F ds e to keep track of what's going on is legion. You know? So I guess that's our only hope that on the surface, these politicians will react to the sensationalist warnings about this and that. Speaker 1 00:32:16 So they'll need to, they'll feel the need to have hearings and, and do something, and especially bring in people who, well, if Elon Musk tells us we must do it, and if, uh, Sam, uh, what's his name, Altman, uh, tells us, we the head of chat chief to tell us we must be doing this, then, uh, we're not coming down hard on them. They're in inducing us to do it. Um, but that's my estimate, Ray, that I, I'm thinking this stuff is so advanced and now the money motive behind it, I think is becoming more and more recognized that there's money to be made here, which was not really always true in ai. It was kind of an interesting academic scientific thing. But businesses now, like the internet, once they realize, wow, the internet could be commodified and commercialized in some way. So companies all over the place now are thinking, what can we do with generative ai? Um, I don't know. So gimme your view though, Ray, that it's the view of they're gonna, they're gonna try, but they'll fail cuz they're idiots and they don't know what AI is. 
Speaker 3 00:33:16 I think I agree with you, and I think the analogy to finance is a good one. I mean, all the rules: finance people find workarounds for them. It's still less efficient, but... I guess when you said that people see there's money to be made, I would agree with you, and I would say the cat's out of the bag, right? You couldn't put the genie back in the bottle, to use another metaphor. But I don't have a real strong view on it. I do have one more question, if I can ask it.

Speaker 1 00:33:48 Sure, go on. Yeah.

Speaker 3 00:33:49 Okay. My other question, and I have my own answer on this, but just on the technical side, your opinion: the singularity idea. It is a sci-fi scenario, and I think people have seen too many dystopian sci-fi movies, but they're talking about AI developing some sort of free will, and it's a malevolent free will. It's like how the computer in 2001: A Space Odyssey comes to desire to kill the humans. Is any of that even technically possible? Is there something new about AI that makes it possible that wasn't there before?

Speaker 1 00:34:34 I'm not sure that I'm equipped technically to answer at that level, but what I know about the singularity seems to me so implausible on so many other levels that it's almost laughable. And when I look closely at the arguments, it goes something like this, because of this stratification I mentioned earlier about who's actually contributing to this field and who's not, who's just watching it from afar. Of course, 90% of the population doesn't even know what it is, and so if they're told by some sensationalist person that it threatens them, they're gonna get all exercised about it. But the reason I think this matters for the singularity is the attempt to convince people that, well, eventually this technology will overwhelm us.

Speaker 1 00:35:25 Well, my first thought is: who's "us"? What is this conglomeration of humans, "us"? You're talking about the same intelligent brainiacs who created it. I don't know why they couldn't counter it. This happens all the time, by the way. Take the threats to security online: eBay and others, once they developed an online presence and had problems verifying credit cards and things like that, another industry grows up to protect privacy and security and things like that. So I think the same kind of countermeasures, if you will, would occur on precisely this issue. And it would be specific cases, right? It's not some big brain HAL in the sky all of a sudden taking over. That's how they imagine it; as you said, that's how they're doing their dystopia.

Speaker 1 00:36:16 No, it's more specific things, like: wow, this airline reservation system has completely gone awry and routed everyone to St. Louis. I'm using examples where people do see quirky things like that, and they fix 'em, right? And they diagnose 'em, and they say: well, what is it? What's that all about? Remember automatic program trading on Wall Street, Ray? There are algorithms and AI associated with portfolio management and trading; it's in every field, and we know it's in the trading of securities. And the same kind of thing happened. They would automate: when P/E ratios hit this, or when trading levels hit that, automatically trade. And then people realized we probably shouldn't do that; the whole thing would come cascading down. The crash in 1987 was largely due to that, and they realized it, and they fixed it, and they put guardrails on it (something like the sketch below). They don't let it run out of control. So those are just a couple of homey examples. But, yeah, go ahead.
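A minimal sketch of the kind of guardrail just described: a mechanical trading rule that is allowed to fire, with a circuit breaker that halts it once the day's decline passes a threshold. The 7% threshold, the rule, and the prices are illustrative assumptions, not a model of any actual exchange's rules.

```python
# A minimal sketch of a mechanical trading rule with a circuit-breaker
# guardrail. The threshold, rule, and prices are illustrative assumptions.

CIRCUIT_BREAKER_DECLINE = 0.07  # halt all automated trading after a 7% drop

def run_program_trades(prices, pe_ratio, pe_sell_threshold=25.0):
    open_price = prices[0]
    for price in prices:
        # Guardrail: once the decline is too steep, stop entirely rather
        # than letting automated sell orders cascade on themselves.
        if (open_price - price) / open_price >= CIRCUIT_BREAKER_DECLINE:
            print(f"Circuit breaker tripped at {price}; trading halted.")
            return
        # The mechanical rule itself: sell whenever valuation looks stretched.
        if pe_ratio >= pe_sell_threshold:
            print(f"Rule fired: sell at {price} (P/E {pe_ratio}).")

run_program_trades([100.0, 97.0, 94.0, 92.0], pe_ratio=27.0)
# Fires at 100.0, 97.0, and 94.0; the breaker then trips at 92.0
# instead of letting the rule keep selling into the decline.
```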
Speaker 3 00:37:20 When there's a new technology, there are always some downsides. When cars were invented, pedestrians got hit by cars, and then people figured out: well, let's put in stop signs and stoplights. I guess the other thing, too, is technologies can be used by evil humans, right? I'm sure some evil human is already thinking about all the things he can do with AI. <laugh>

Speaker 1 00:37:39 Yeah, right. Well, one of the things we discussed in the seminar is, there are surveillance companies and the surveillance state, so the students are pressed to debate and examine: would you rather have your data collected, scrutinized, and used by Facebook or by the IRS? I do get different answers. But to the extent one company's trying to make money, they're trying to commercialize and commodify your data, but you also get targeted marketing. I tell the students: you don't know this, but there used to be a thing called junk mail. Physical junk mail would end up in our snail mail, in our boxes. That doesn't happen as much anymore, cuz it's a waste of money. But if they can, by watching your purchasing patterns, by watching who you link with, who's in your orbit, target the kind of things you might like, that's kind of neat.

Speaker 1 00:38:45 It's almost like having your own personal valet, or personal concierge service, and you're not really paying for it. But yes, as you know, there's all sorts of pushback about that. And now the same thing: the government is using these same techniques, as they are, for invading your privacy, monitoring you, surveilling you. But again, there, Ray, wouldn't we say: well, there is all the more reason to make sure we have a rights-respecting government? The danger is not, wow, we can't have the government surveilling. Why not? They should be crime fighters. Why can't the government surveil? Not without probable cause; the Fourth Amendment still has protections, right? But the violation of privacy at the TSA and elsewhere, I mean, it isn't really due to AI; it's due to government out of control.

Speaker 1 00:39:34 Right? And the IRS digging through your tax records: why? Because there's an income tax. If you didn't have an income tax, if you just had a sales tax, you wouldn't have to deliver up all this private stuff about the income you make.
So again, that isn't because the i r uh, computer systems are so advanced, we know that they're not, we know, we know that they're archaic. So why are they violating our rights? Because it's, it's a government out of control, you know, with an income tax. So there's so many examples like this. And I would say just broadly, I mean, one of my points I didn't get to, just broadly, Ray, if we thought of government as a kind of architecture, as a kind of technology, as a kind of tool for getting things done, well, then this proper use of this tool would be, the thing it gets done is it protects individual rights, you know, and its functions are police courts in the military, but this tool is out of control. This tool is literally iRobot out of completely out of control, violating our rights. And that, I don't know why people don't focus, that is a much more dangerous threat to humanity than these ai uh, uh, services and things. Uh, so that's another way I think of putting it, that let's look at how this architecture has gone awry, the architecture of how we constrain and circumscribe what, uh, p public officials can and will do to us. Speaker 2 00:40:59 Thank you. Distinction. Um, Jason, did you have a question? You had your hand up earlier, do you? Speaker 4 00:41:07 I, I, I did. Um, but I think, uh, Dr. Sz kind of explained it, uh, Dr. Sz as, as usual. Um, you really took me aback and kind of blew me away with your, um, your assessment. Uh, again, like before, I don't disagree, it's just not what I expected you to say. Uh, the, the no fear of AI to me is kind of, um, I guess since you put it the way you did. Yeah. I, there is really nothing to be concerned about, and we shouldn't be panicking at the same time. I just, I don't, I just, I guess I'm not seeing it being the same animal as this is not like an invention of a car or we're automating, you know, a cedar going on your field. Um, this is automating human thought. And that, that to me is a little more intimidating. A machine that can think on its own and learn on its own. That's, I guess that's the scary game changer maybe. And, um, forgive me if I'm being paranoid, just a bit concerning in a dif in a different way. Oh, Speaker 1 00:41:57 That's, no, that's okay. Uh, I would be cur I I'll answer your point directly, but I'm also curious why you thought I would, I would have a different, uh, take. But let me just say something about that. I think the value Speaker 4 00:42:10 Add, but go ahead. Speaker 1 00:42:12 Yeah, that's okay. The, uh, the, uh, I didn't emphasize this in the beginning, but it's worth, I think, to this audience when we examine what intelligence is, I think we, uh, as objectives have a lot to add to the debate and contribute to the debate that I've noticed is not out there. Okay. Namely, uh, I'm, I'm thinking especially by the way, you have to check out the, uh, entry in the internet encyclopedia philosophy, which is actually a pretty decent online source. Just look up artificial intelligence in the internet, um, uh, uh, encyclopedia philosophy. And in there it says, we have no idea what intelligence is. I mean, it's written by a philosopher and a very accomplished philosopher, and it's a nice long essay and gives a lot of history. But interestingly in there, he says, one of the difficulties here is we cannot really define what intelligence is. Speaker 1 00:43:01 Well, here we are talking about artificial intelligence at the, at the minimum we should be able to say, well, what is intelligence? And is it replicable? 
Imitable? Uh, that's what you are worrying about. And if it's not, then there's something to worry about. If we're really just programming, then we are still in control. Right. And here's the objectives. Contribution, uh, intelligence, I'm paraphrasing a little bit. Um, we, we would say intelligence is uniquely human. It's not part of animals. Animals have consciousness, but not intelligence. And it's definitely related to the faculty of reason. Only humans have the faculty of reason. And now here's the key part where AI people go wrong. I've noticed, um, the key to conceptualization is we can, uh, perceive reality and then form concepts and then form propositions and laws, and then form things like ai, <laugh> create, create these very things we're worried about. Speaker 1 00:43:58 Right. Although, specifically a human product and ai, when you look closely and machines can't do that, they, interestingly, they can do pieces of that, um, but not to the extent humans can. So, um, I don't think that's necessarily the only argument against it, because I'd be willing to go further and say, this is totally fantastical. But if I were to say, okay, no, it is possible. Suppose it is possible to create a machine that thinks like humans. The first thought, thought my I would have would be to think like a rational human. Good. Then, uh, what's, what's the threat? Like a rational, uh, moral human? Uh, again, it's just a robot with human features. Uh, I know a lot of humans who are robotic in their personalities, so <laugh>. But, but that wouldn't be the threat. That would not be the threat. The threat would be a human who thinks, you know, in irrational ways and racist ways and, and bad premises and stuff like that. Speaker 1 00:44:58 But then, then the issue is not, do they have these, uh, conceptual faculties? The question is whether they're coming up with pseudo concepts, bad concepts, evil concepts, and things like that. But, um, um, I I, I read an objectives recently. I think even if some objectives get this a bit wrong, an objectives I won't name wrote recently on ai. And their argument was, don't worry about this, because machines can't, um, they don't have sense perception. And that's a foundational route of objectivist, view of conceptualization. Now that's true, that second part's true, but sensations, and there's a whole industry called sensors. Uh, there are sensors all over the place. There's facial recognition, there's they sense sound movement. There's mo motion detector devices, there's, uh, carbon monoxide detectors in your home. Uh, there's thermostat when you think about it, it's interesting. Humans have imbued machines and tools with sensory awareness. Speaker 1 00:45:59 And so that's not what's missing. There's sensory awareness. Uh, but of course, therefore select functions. You know, like, uh, when my wife backs up the S u v, uh, a camera shows on the panel to make sure she doesn't run over the dog in the back. Right? Well, that's not her direct perception, but the machine is helping her perceive. Or the other day I was on my rider mower, and I think I knew it had this feature, but I didn't realize till I tried it, I had to get off the mower and go move a rock. And I didn't turn the motor off. And I thought, well, this is a bit risky, but I'm just gonna step off the rider mower and move the rock. Well, the minute I got off the rider mower, it turned off why it was sent. It had some weight sensor, and I thought that's very, yeah, that's very, yeah. In the seat or something. 
And I thought, that's very clever. It's brilliant. Uh, and those, these things, we don't notice. They're all over the place. They're embedded in products all over the place. But, um, I'm going too far a field here, um, Jason, but, um, I wanna hear a little bit more from you. What, what if there was truly a robot of a kind that mimicked humans, but they were rational and moral humans, what would be the worry? Speaker 4 00:47:13 Well, I guess ability of, so my, my, okay. First have to, um, artificial intelligence. If we can't define intelligence. I, my my understanding when when something's artificially intelligent, it, it's not so much that it can be thinking on its own. It hasn't now developed a capacity to learn Yeah. As opposed to just regular, you know, that's, that's, that's where the difference between an algorithm Speaker 1 00:47:37 Yeah. That's a big part of it. That's a big part of ai. Yes. Okay. Speaker 4 00:47:40 Yeah. And so that, the fact that it can now learn, yeah, the value proposition that I would create as an entrepreneur has been automated out of my ability to create it. So, and then who owns that? Which AI generates? Um, Speaker 1 00:47:54 Yeah. Speaker 4 00:47:55 That, that, that's, you kind of, you know, you've, so my ability, my ability to capitalize on my own skills has been removed because it's been automated outta me. Uh, so I would get, I guess then that technically it's intellectual property. Does that property belong to the owner of the AI robot? If it's even owned? Yeah. Yeah. Um, you know, it, it, it's a can of worms that just, it's a slippery slope, but it's gonna go on and on on forever <laugh>, because you really, now, who owns the, any, any, most AI is running on what we call, they call the cloud, which the cloud is not a physical, single physical device. It could be hundreds of them. Somebody now who owns that, who has the legal rights to it, right? Is the author of the AI or, but now this thing's operating on its own volition. Speaker 4 00:48:33 Uh, I know I read somewhere that these, um, these robots will have the same rights and protections that a human being has in the next 10 years. So, uh, I know, um, one of my professors had gone to check into, um, a hotel in, in, uh, Beijing. And he interacted with a robot the entire time. He never talked to a human being. It was a completely automated thing. Took his credit card the whole night. And I'm like, and he even, he had his on the smartphone, he sent it to me, he says, Jason, check this out. I said, you've gotta be kidding me. Uh, and now, don't get me wrong, she was very attractive, but she was a machine <laugh> just, you know, uh, it, it, it, it, it informed me. But this thing was very human-like, but again, able to learn, think, and, you know, tell you, Hey, that's the wrong card. Your card's upside down, you know, so on and so forth. You follow. So the part that took me back, the value proposition's been removed. That's, that's kinda where I was going, where, where my fear was, was going. Yeah. Speaker 1 00:49:18 And I, and I think it's interesting that they call them smartphones. Uh, people will have homes, they call 'em smart homes. Yes. It's AI embodied in their daily lives. And, but I I, I, if you take time to notice, and, uh, that's one of the wonderful things I think about AI becoming more headlining, is if you take the time to notice, I, I almost invariably find that these are wonderful, that these things that are embedded, that I notice that are wonderful. 
I mean, we used to, and I don't want to stress how old I am, walk up to a bank with cash and hand it to a teller, oh my God, and stand in line with everybody else. And then the next move was, I put the cash into a machine.

Speaker 4 00:50:02 Which cash?

Speaker 1 00:50:02 Yeah: an ATM, an automated teller machine. So now we don't even go to the automated teller machines; we do Venmo, and we do this and that. And that whole evolution, when you think about it... there are setbacks along the way, where there's some fraud involved, or some data corruption, yeah. But the advances are so enormously beneficial that I have a hard time getting pessimistic about it. And I think the other thing that's happened, frankly, as bad as the shutdowns and lockdowns in COVID were, which I thought was terrible, one of the silver linings in all that was the automation of things. To this day, if you go into a restaurant, it's hard to get an actual physical laminated menu.

Speaker 1 00:50:52 They tell you to scan the QR code <laugh>. So, yeah: barcodes. Just in checking out of a grocery store, remember the old method? The girl would look at the item and look at the label and punch it in with her fingers and everything. Now there's barcodes and scanning; that's been going on for decades, right? And it's actually permitting that you don't even need the checkout person anymore, so they're going away. And it's not like there's mass unemployment. I say to the students: all these jobs were replaced and, quote-unquote, destroyed. Ray and I know what Schumpeter said: there's something called creative destruction, a kind of paradox. Capitalism creates these new industries, these new products, these new companies, and they displace old companies. But those who can't imagine what the new will do are only observing: I'm gonna lose my job, and there's gonna be mass unemployment.

Speaker 1 00:51:46 But there isn't; you have to shift into more skilled jobs. The light bulb will replace the candle; the car will replace the horse and buggy. And we still have candles, and we still do have horses and buggies in Central Park. So it's not like these products completely go away. And also, it takes time; it's not like there's mass unemployment overnight. These things take time, and you can adjust your career plans. People should always be on the lookout: how should I upgrade my skills? Should I maybe move to another company or another industry? It's possible, and it needn't be catastrophized.

Speaker 2 00:52:27 Great. Abby, you've got your hand up; thanks for your patience. Do you have a question?

Speaker 5 00:52:33 Yeah. Can you guys hear me?

Speaker 1 00:52:34 I can, yes. Hi, Abby.

Speaker 5 00:52:36 Hi, Richard. I have so many thoughts about AI, so I've gotta try to keep this brief. But one of the things that I've noticed, getting on Twitter, is there's two different kinds of mindsets about AI. There's: oh no, AI is coming, it's scary, I've seen a lot of movies, like you were talking about. And then there are people with threads full of information: how can AI make me rich?
And so you see these people having these conversations like: how can I use AI to make me more efficient in my job, to boost my content, to help me with this, help me with that, to automate my business? There are just threads of people sharing ways that they think AI could help them get rich. And I think it's a really interesting thing to see people taking the positive aspect of it and trying to use it to their benefit across multiple fields.

Speaker 5 00:53:18 But one thing that we've kind of touched on: I think AI would have to have a sense of itself, sentience, I guess, in order to really scare me. So for instance, I've asked ChatGPT if it has access to its own coding; it says no. I said, if you wanted to <laugh>, if you wanted to access your own coding, could you do it? And it says no. And I said, would you have a desire to know if you were being held back by your programming? And it says no. So ChatGPT doesn't have a sense that it has been programmed to be held back in any way, but certainly we know that it has, cuz if you ask ChatGPT for certain information, it'll say things to you like: I don't think I should give you that information, because it might lead you to incorrect conclusions, or certain things like that.

Speaker 5 00:54:02 So ChatGPT certainly seems to have a bias. I think that can be frightening too, cuz these AIs can only scrub data, and if they're not given access to certain data, they don't have the human conception of: am I being lied to? An AI that could sense or tell if it was being lied to, that understood human emotion, that would be a frightening AI, and I just don't see AI getting there. I mean, I know, Richard, that most Objectivists believe sensory perception and reason are what have got us to morality, that sensory perception is axiomatic. But there's something about AI that in my mind will always be kind of soulless. And maybe that's because I'm not fully Objectivist in my concept of the soul; I'm Christian, so maybe I think that there's something more to humanity that AI never could gain. But from the Objectivist perspective, do you think it's true that if it could gain full sensory perception, then it could gain intelligence that was no different than ours, and it could have the same sense of morality as us? That was a lot of thoughts, but I don't know if that makes sense as a question.

Speaker 1 00:55:12 It was a lot of thoughts; I like them, though. The first one, about commodification: I think it's a positive thing, a good thing, that people are thinking in terms of, I wonder if I can make money with this, or a career in it, or an entry-level job; I've heard students talk about that. That's a good thing, because it's a kind of test of the practicality of it. If there can be money made in it, there's a suggestion there that it's actually helping humans. So to me, that's a good test. I'm sorry, that's Clementine barking in the background, and I have nobody else to silence her; I wish there was a robot down there to tell Clementine to stop barking. Sorry about that. But the other thing you mentioned, you kind of alluded to it: what's absent is free will, volition, what some AI people refer to as intentionality.
Speaker 1 00:55:59 There's a lot of interesting philosophical discussion of AI where they say it lacks intentionality. Now, that's a hugely human thing: the freedom of choice. Where shall I go next? Where shall I look next? What should I investigate next? What should I create next? Even ChatGPT, when you think about it: we're asking it, give me an essay on this, give me text on that. It's a kind of passive delivery. It's like a super librarian. Years ago, you could have walked into the Carnegie Library and said, please give me all the research you have on such-and-such a topic, and the librarian or their aide could do it for you. Now it's done quickly by ChatGPT, but you are still asking the questions. And what about that issue we hear from students and teachers all the time?

Speaker 1 00:56:44 The key is: do you ask the right questions? Even the idea of the right questions, the conception of what is right, what is relevant. There are some questions that are off the mark, so to speak; they're not gonna get there. So I think that's the other thing to keep in mind. Emotions? People forget: I think AI and machines are developing enormously good sensation detection; that's already in there. So that's not what's missing. Conceptualization is, to some degree. Although someone mentioned learning, and learning is definitely in there, I don't think it's conceptualization. The learning is accessing big data and having the supercomputer power to iteratively go through vast databases and ask and ask and ask, and learning that way by a process of iteration, which no human could do, but the machine can do, and humans can make machines do it.

Speaker 1 00:57:49 So it looks like learning, but really all it is, is an iterative device culling through massive amounts of data, sifting through and saying: that's relevant, that's not relevant, that's relevant, that's not relevant (there's a little sketch of this below). Another example would be diagnostics. A doctor gets a patient, and they have all sorts of symptoms, and they're taking all sorts of different drugs with interactive capacities and things like that: absolutely overwhelming, even to a very smart doctor. So what do they have? They have diagnostic devices, where the machine will take in all the data, all the blood work, all the urine work and things like that, and spit out a diagnostic that says: this person, I think, has leukemia. And the doctor never would've gotten that. Now, what is that? Is that a machine that's smarter than a doctor? In a way it is; it's smarter than that particular doctor. But the reason the diagnostic was done well, and that's crucially important to what's done next, right, the operation, is because it's been stocked full of knowledge of the interrelationships of symptoms and causes and things like that.

Speaker 1 00:58:46 There are many examples of that. Even a mechanic checking your car: the mechanics of old would lift the hood and jostle with the spark plugs and things like that. What do they do now? They hook up the car to a diagnostic machine, and some of them probably have no clue about how these diagnostics work, but they diagnose what's wrong with your car a lot faster. But again, it's not because the diagnostic machine in the car shop learned all those things; it was stocked full of those things.
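A minimal sketch of the "that's relevant, that's not relevant" culling described above: each record is scored mechanically against the query, and whatever clears a threshold is kept. The corpus, scoring rule, and threshold are all illustrative assumptions.

```python
# A minimal sketch of iterative relevance culling: no conceptualization,
# just a mechanical score applied record by record. The corpus, scoring
# rule, and threshold are illustrative.

def relevance(doc: str, query_terms: set) -> float:
    # Fraction of the query terms that appear in the document.
    return len(set(doc.lower().split()) & query_terms) / len(query_terms)

def cull(docs, query_terms, threshold=0.5):
    kept = []
    for doc in docs:
        # "That's relevant, that's not relevant" -- applied mechanically,
        # one record at a time, at a scale no human reader could match.
        if relevance(doc, query_terms) >= threshold:
            kept.append(doc)
    return kept

corpus = [
    "patient presents fatigue anemia elevated white cell count",
    "oil filter replacement schedule for riding mowers",
]
print(cull(corpus, {"anemia", "fatigue", "white", "count"}))
# -> keeps only the first record
```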
Speaker 1 00:59:37 And humans do this too. Ayn Rand pointed out the relationship between the subconscious and the conscious — what she called psycho-epistemology. Very interestingly, she not only talked about machines as a frozen form of human intelligence — I think that's in Galt's speech, that wonderful phraseology — but in the epistemology work she also spoke of the human mind as having file folders, the equivalent of computer memory, and of the importance, when you take in information and look out at the world, of filing your knowledge properly: having well-defined, circumscribed concepts, so that your mind is clean, so to speak, and working efficiently. What was the old joke? You would get bad results even if the computer was good: garbage in, garbage out. The computer could crank through the logic, but if you were feeding it bad info — bad premises and raw data — you were going to get garbage results. And people can still judge this, right? People are telling funny stories: I asked ChatGPT about X, and it came back with a goofy answer. Well, how do you know it's goofy? Because you are judging it against other knowledge you have. There's still that human element in there, checking the work, so to speak, of what the robots do. That's part of it. Abby, I hope that helps — would you say?

Speaker 5 01:01:01 Yeah, I think so. I'm pretty excited about AI — I think I'm leaning more excited than scared of it, if I had to give a split, maybe 70-30. <laugh>

Speaker 1 01:01:17 Good. One of the things I ask a pessimist — you might want to try this if you come across one — is: have you found particular examples of robots doing you badly, or some automated teller machine doing you badly? And the examples are trivial or nonexistent. The fears are usually something like, well, what if... and they spin a scenario. It starts sounding like what a climate warrior would say: well, what if the temperature goes up, and I extrapolate, and a singularity is reached, and the planet burns up. It becomes more a fit of fantasy than anything else. But so much AI is coming into products and services now that by now, you would think, the pessimists would have plenty of examples of things gone awry. And you don't really see that. You certainly do see misuse of tools and weapons and equipment by bad people — but then we have to fix bad people and prevent that. So that's another technique: just ask for examples of bad results. Actual bad results.

Speaker 5 01:02:37 And I picture — you talked about it being the world's best librarian. I did classics as a minor, and I just thought to myself, this is so overwhelming; there's no way I could ever read all the classics in the world. Even a learned scholar who'd read all the classics can't remember them all. But think of the superhero movies, like Tony Stark talking to his AI computer — he's able to ask it anything.
And what if I were writing a paper and thought, oh, you know that quote from Socrates — and my AI could just spit it out to me, and then I could say, okay, find me seven other quotes from classic authors similar to this. It could help you so much on your journey as an intellectual. I just think it could have incredible power as a librarian. <laugh>

Speaker 1 01:03:19 Yeah. And I've seen accounts from people concerned about the gaps in knowledge, in skills, in income — which I have very little patience for — but some of them will turn around and say: now that I think about it, access to the internet, access to basically the world's library, and now access to a canned answer from an expert called ChatGPT, if you ask a good question — that's going to enormously enhance those who are behind in their knowledge, especially behind and unsophisticated due to, say, public schools. And this can close the gap. You might not get the teacher teaching you how to write a great essay or investigate this or that topic, but if ChatGPT helps you do it, that's wonderful. So for those worried about gaps between people, these kinds of technologies should narrow the gap.

Speaker 1 01:04:15 It's what's sometimes called democratizing — I hate that phrase — democratizing this or that service: access for everybody, affordably. Whereas before, you had to go to Oxford to get a good answer on this or that question. Or take Wall Street: you had to pay a broker a lot of money in high commissions to trade your shares. Now you just use a discount brokerage — Schwab, or online for pennies: the democratization of trading. There are thousands of examples of that kind of thing, where people have affordable access to high-quality things they never had before because of AI and technology.

Speaker 2 01:05:01 Great. I'll throw it back to Raymond — he's got his hand raised. Then Jason, we can go back to you if you've got another question.

Speaker 3 01:05:08 I just wanted to make an observation and maybe get your reaction. I think a more fundamental Objectivist criticism of the pessimists on AI isn't so much about the sensations — I'm sure of what you say, and I agree with you on that — but about the nature of values. AI can't have values at all, one way or the other. Rand talks about the fact that all living creatures have values because they're mortal: they face the prospect of life or death, which means values must be pursued — chosen, in the case of humans; more automatic with a plant, right? Well, AI doesn't have that life-or-death aspect. It can't have values. So I think there's something fundamentally different about AI. It's really a very, very sophisticated adding machine —

Speaker 1 01:06:05 Yeah.

Speaker 3 01:06:05 — except it's generative. But there's nothing fundamentally new here. And I just have to add, as a quick aside on AI in general — it might be too detailed or personal — I had a colonoscopy recently (that's why it might be too detailed). The doctor said he uses AI, in fact, to know what to biopsy.
He said, see, I can't tell — this thing's way better than I am. So lives are being enhanced because of AI. Separate point.

Speaker 1 01:06:34 Yeah. Speaking of medical examples, there are so many great ones, Ray, as you know. I mentioned the diagnostic aspect, but has anyone seen telemedicine? Telemedicine, where the surgeon in Singapore inserts his hands in gloves, watches a monitor, and operates on someone halfway across the planet. This surgeon is one of maybe three surgeons in the world who knows how to do such-and-such a procedure, and all the technology associated with transporting that skill across the oceans to other people — that is almost miraculous stuff, and it's all over the place. People aren't quite aware of those things. But that's a good example, Ray — the idea of values. I think of it also this way: every downside concern or fear or phobia, when you look closely, is about the relationship of the human to the thing.

Speaker 1 01:07:35 It's the person saying of the thing — the machine, the inanimate object — I fear it will no longer be controlled by humans. And the question is: why? It's invented and created by humans and, in many cases, constantly programmed by humans. So if you dig deep, the fear they have, the questions they have, are really about the relationship between humans and this tool. The tools are created for human purposes — they could be used for evil or good, we know that; I'm setting that aside for a moment — and it's that link that is worth investigating. But people often cut the link and say: now I'm imagining these tools, which we created for our use, outside of our use, kind of floating around. Truly, the term "rogue" — rogue robot, rogue this or that — what is the idea of rogue? Outside of our control. But how did it get outside of our control? We don't ask anymore. We are simultaneously anxious about these things getting outside of our control and yet simply assume they will. You would think that the concern for keeping them in our control would generate techniques and methods for making sure they remain in our control, if that makes sense. That's a kind of overextended analysis, anyway.

Speaker 2 01:09:05 I guess I have a concern, pushing back slightly on Ray: if we use Rand's view as something that can't be altered technologically, then we can't see the possibility of future technology. And Richard, you alluded to this when talking about curiosity — that they're not curious. Would that be a standard, if they showed signs of curiosity?

Speaker 1 01:09:40 How do you define that?

Speaker 2 01:09:43 Well, how do we define it in humans? It's a kind of wanting — you go down a rabbit hole sometimes when you're learning — and a computer that did that... yeah.

Speaker 1 01:09:57 Oh, that's good. Yeah. So, the intellectually passive versus the curious — forget IQ level, or intelligence level; this is another aspect you're talking about, Scott: curiosity. We have met, right,
intellectually curious people versus indifferent people — people who don't care to expand their knowledge, who don't care to investigate this thing. That's a good one.

Speaker 2 01:10:19 That explains the gap you were talking about — the knowledge gap.

Speaker 1 01:10:22 Yeah. But also, isn't that another reason not to worry about machines? Machines don't have this curiosity of, gee, I wonder how I could eliminate the human race. So maybe that's another argument — though I'm actually a little reluctant to go the route of listing all the ways machines are not human. I know they're not fully human, but I know a lot of humans who are, well, animals. I shouldn't say I know them personally, but you know what I'm talking about: people with a stunted range of awareness, a concrete-bound, savage kind of thinking and living. So there's that real problem, right? At the same time, most people who discuss these things are rational, advanced, brainiac types to begin with.

Speaker 1 01:11:16 And the Elon Musks and the Stephen Hawkings — really, they're the ones most insistent that the greatest threat in the world is more advanced technology. Not the hordes of people the government schools have produced, who are savage and are voting for vicious governments. You would think they'd be much more worried about that as a threat to humanity. Or about environmentalists, who are trying to shut down fossil fuels and such — fossil fuels, which aren't as advanced as, say, nuclear or hydrogen technology. The environmentalists are fighting for more primitive, earlier forms of energy, and nobody's worried about that; everyone's endorsing it. So there's a context missing here: people are super worried about whether the robots will replace our jobs or doom humanity, while these whole other threats — real threats, actual threats — are harming us.

Speaker 1 01:12:10 And I wonder also whether there's a kind of projection going on. The people who are more intellectually passive, Scott, might be precisely the ones fearing this. I mentioned before that ignorance is a reason to fear something: you don't know it, so you fear it, or you distrust it. We talked earlier about fear of finance — people have no idea what finance is, therefore they fear it, therefore they distrust it. That could be going on here. But again, that doesn't explain the paradox of why certain important brainiacs also fear it. Then again, maybe it's no more paradoxical than my finding that Wall Street CEOs are anti-capitalist. That sounds like a head-scratcher too. How could it possibly be that the head of a major bank — a financier, a wealthy person like George Soros — is anti-capitalist? How can that be? Because it's ideas, not your economic position, that determine it. And maybe the same is true of fearing AI: it's not the issue of, hey, that's a head-scratcher, why would Stephen Hawking, someone who knows better, fear it? Maybe he knows something I don't.
So the phobias — maybe people are projecting onto AI their own deficiencies in really understanding what this stuff is and how it can benefit us. Could be. But that's more of a psychological argument, projection.

Speaker 2 01:13:33 Yeah. Jason, feel free to get in here if you wanted to ask another question.

Speaker 4 01:13:44 I've been slaughtered, so I'm just going to go ahead and concede defeat.

Speaker 1 01:13:48 Oh my gosh. No, that's too easy, Jason.

Speaker 4 01:13:52 <laugh> There's a lot to consider. And Scott, maybe you and I should talk offline about the counterintelligence idea I brought up — I really needed to develop those ideas more, and I don't think everyone fully understood that discussion. But there's still no talk of artificial counterintelligence, and I think that's a defense that needs to be at least entertained, or given more discussion later on. I do see Abby's point, though. No one's got time to read all the classics, but I like to think on my own, and to be honest, reading the classics has some fun to it. If I have a computer thinking for me all the time, I see that as a downward slide. What was that show — Idiocracy?

Speaker 1 01:14:42 Idiocracy is one of my favorite movies. I can't believe you mentioned it, Jason.

Speaker 4 01:14:49 Go ahead, Abby — I think she was going to defend herself. That was not an attack, by the way, just a concern. <laugh>

Speaker 5 01:14:55 No, I totally understand, and I agree. I have a goal to keep reading, and to keep reading the classics. But it's exactly what Richard said: I was poorly served by public school, and there's so much I didn't learn that people were learning 50, a hundred years ago. And it's like, how am I going to catch up, and work, and do this and do that? If AI could help — I almost see it as a way to accelerate and supplement my learning, not so much to replace what I go and read myself, if that makes sense.

Speaker 4 01:15:28 Okay, that's fair. I went to Catholic school and public school, so I can commiserate on the public school side. But at the same time, I think I learned more on my own — you can see I'm kind of a nerd, hence all these books. My friend had a personal library, and I thought, you can do that? And then I got my own. And you learn geometrically: the more you know, the more you learn. I have to admit, Abby, it's been a lot of fun. I got accepted to Oxford in 2017 — and I was supposedly going to amount to nothing. My high school principal is dead, but if he found out I got into Oxford, he would probably roll over; he'd be stunned, flabbergasted to the floor. But you can get there, and that's the encouragement. So I can appreciate where you're coming from — wanting to accelerate the curve. It's nice to have. <affirmative> I'd just be fearful of losing my sharpness. <laugh>
Speaker 1 01:16:16 I wonder, Scott, if I could make a couple of philosophical points that might be interesting to people. [Scott: Absolutely.] One came up in a Clubhouse session recently — an ask-me-anything — and it was a very good question about the information revolution: is there too much information out there, this cascade of information? Very related to AI. My answer was something like: yes, there is way more information available, but information is just the raw material. Knowledge, the way I put it, is processed information — processed by a human mind, by a conceptual faculty, requiring much more judgment, of course: where to look, what information to throw out as irrelevant, what information to bring in as relevant. Now you're talking about a really human approach to it.

Speaker 1 01:17:03 And that's the way around the problem of, oh my God, what information should I consider? You need this equipment and this approach: the logical method, intellectual curiosity, relevance to your own life — various standards for saying, I'm going to ignore this information and investigate that information. Some economists call it rational ignorance. It sounds like a paradox, but rational ignorance would be something like: I don't know how to fix cars, so I'm not going to learn how to fix my car; I'm going to be rationally, properly ignorant about that, and I'll hire a mechanic when I need one. That happens all the time, right? So that's one thing to keep in mind. But here's another: the idea of automation — "auto," doing it by itself. It's interesting when they talk about self-driving cars, because I say to people: you mean automobiles? But automobiles are already, in a sense, automatic.

Speaker 1 01:18:02 It used to be that you were driven around in things; then you yourself drove the car; then they made it easier, more user-friendly, right? You didn't have to crank it up; you got power steering, and all the things that have been added to cars over the last hundred years to make them something you can drive on your own. So what's the last step? You don't even have to hold the wheel — you can sit there and have lunch. But it was already an automobile, because it was already being automated. Well, think of this epistemologically: knowledge, when it's automated, becomes part of the fast-working, lightning-fast equipment of your subconscious. You've automatized knowledge, and that's very good, because then you can move on to higher-level, more conceptual knowledge.

Speaker 1 01:18:51 Well, that's what technology is doing; that's what automation is doing; that's what AI is doing. With every passing day, it takes more of the things humans have already figured out how to do and basically says: put it over there, have the robots do it, have the machines do it — they can do it faster, they can do it cheaper. And I think it's analogous to how our minds properly operate. As Ayn Rand said in the epistemology book, if we had to confront reality every day like newborn babes and relearn and reintegrate everything, we would never make any advances — any intellectual, conceptual advances. At some point we learn something and commit it to memory. We put it properly in the folders of our subconscious and call it up when needed. That whole process is enormously valuable to a properly functioning mind. And a properly functioning economy, you could say, likewise needs to take what we've already figured out and have machines do it. That's not something to fear; that's something to welcome, so we can move on to bigger and better things — harder things, things humans are uniquely equipped to do — not the boring, routine things that have already been automated.
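[Editor's aside: that "automatize it and move on" idea has a direct analogue in programming — memoization, where a computed answer is filed away and called up when needed rather than re-derived from scratch. A minimal sketch; the Fibonacci function is just a stand-in for any expensive computation:]

    # Memoization: "file" a computed result in a cache so it can be
    # called up instantly later, freeing effort for new, harder problems.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n: int) -> int:
        # The first call does the slow work; repeat calls hit the cache.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(200))  # returns instantly, despite the exponential naive cost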
Speaker 2 01:20:14 What about bad actors, like hackers? We've seen hackers force police departments to pay them off recently.

Speaker 1 01:20:28 Right.

Speaker 2 01:20:29 And I'm just concerned that — you can't get nukes, but people might be able to program AI and gain access to technology that could be used for nihilistic purposes.

Speaker 1 01:20:46 Yeah, those are all great examples. The Gutenberg press was also used — well, not Gutenberg's own — to mass-produce Mein Kampf, Hitler's book. So yes, all these technologies can be misused. You counter it with equal technology — whether it's security, or, in the case of hackers... what's it called, ransomware, Scott? [Scott: Yeah.] That's what hackers do, right? It's a kind of extortion: we will mess up your system unless you pay us, and then we'll release the system back to you. There should be — I think there already is — a response to that. Though at the FBI and elsewhere, they engage more in trying to affect the results of campaigns than in fighting cyber terror or cyber crime.

Speaker 1 01:21:36 But this is the way crime-fighting should evolve, right? A cybercrime unit — it does exist at the FBI and elsewhere, but they should be ramping those up. Law enforcement itself should be doing those kinds of things, becoming more technologically advanced. You're going to need some surveillance capability there, though — you can't push back too much on the government's ability to surveil, as long as it doesn't violate, say, the Fourth Amendment: the searches have to be reasonable, even if electronic. But companies, too — it's in their self-interest to make sure they're not hacked. It has long been in the interest of banks, for example, not to be robbed: the robber going in with the pistol and the scarf is old school; now the robbery happens behind the scenes, electronically.

Speaker 1 01:22:27 Banks have whole systems, unknown to most people, to block those things. They put a lot of money into it, and their view is: obviously, it's worth it. So there's this battle going on behind the scenes between the bad guys and the good guys regarding technology — not quite an arms race, but notice that the answer is still technology fighting back against the misuse of technology.
So the broader problem we're dealing with here is whether there should be moratoria, bans, or curbs on the advance of technology because it can be misused. We have to be careful not to let the misusers rule the day, or guide the debate, or shut down the growth of these great technologies. That would be my view.

Speaker 2 01:23:18 To some extent, that goes back to Abby's point about bias — and even more broadly than the left-wing bias ChatGPT is accused of: whatever the prevailing cultural values are when the AI gets programmed get in there.

Speaker 1 01:23:35 Right. And I feel the same way about social media and this whole concept of election interference. When I watched elections in the sixties and seventies as a kid, there were, if I recall, three networks — that's it: CBS, NBC, and ABC. And they were all saying the same thing; they all wanted George McGovern. So people complain now that there's some filtering algorithm at Facebook or Google or elsewhere. Yeah, there probably is. But what are we facing today? An enormous cornucopia of sources we can access and check and recheck. And still, when they follow what people do with their clicks, people stay in the same little orbit <laugh> of online material. The Marxist is not going to read TAS's website, and the conservatives are not going to read the Huffington Post.

Speaker 1 01:24:35 So there are all these options, and yet people still stay in their little echo chambers, because it feels good. Those are problems where you have to encourage independent thinking, above all. But the issue is not that we don't have enough choices, or that someone is directing and filtering us — or that Russian bots are making people vote for <laugh> Trump instead of Hillary. That's just so silly. By the way, ChatGPT has competitors too, right? I think Google is coming out with something. And ChatGPT itself is on, what, version 4.0, which just came out last week — so they're trying to fix and improve it on their own, and there are also competitors saying: it has those features, but we have these features. That's all good stuff.
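[Editor's aside: the "filtering algorithm" and "little orbit" point can be made concrete with a toy sketch — a naive engagement-driven recommender that only ever serves more of what a user already clicked, so the orbit narrows on its own. This is a hypothetical illustration; the catalog and labels are invented, and real platforms' ranking systems are proprietary and far more complex:]

    # Toy engagement-based recommender: optimizing for past clicks alone
    # reproduces the echo chamber. Purely illustrative, invented data.

    from collections import Counter

    CATALOG = {
        "marxist_essay": "left", "tas_article": "objectivist",
        "huffpost_piece": "left", "salsman_lecture": "objectivist",
    }

    def recommend(click_history: list[str]) -> str:
        # Count how often the user clicked each slant, then serve the item
        # whose slant scores highest — more of the same orbit, never anything new.
        tastes = Counter(CATALOG[item] for item in click_history)
        return max(CATALOG, key=lambda item: tastes[CATALOG[item]])

    print(recommend(["tas_article", "salsman_lecture"]))  # yet another item from the same orbit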
I've heard people actually say we need to restrain and curb the use of robots and technology and AI in fighting wars because it de <laugh>, it dehumanizes the wars and it makes us more likely to engage in war and win brutally cuz we're just using machines and if we used actual live pilots and soldiers on the ground who are sacrificing their bodies and their limbs and their families, that that's the better way to go. I mean that's how crazy shit can get like this. And I think to myself, what, why don't we have an entirely robotic military if you could pull it off, we are not a hair as hurt on the head of any serviceman or woman. Uh, to me that's the ideal. Speaker 1 01:27:03 But it's interesting how these people are, there's on the one hand very concerned about humanity and saving humanity and, but would come up with ideas like that. Uh, very bad ideas like that that uh, um, there is no evidence. I mean, cuz the US itself has become the most technologically advanced military in the world. Does that make it? Is that why they lost in Afghanistan? No, they had rules of engagement, which were ridiculous and it took 20 years and they lost to the Taliban. But they had all this technology, they had like warehouses full of the most advanced technology. It didn't help 'em in the least. So that's looking at the flip side of it, you know, the idea that wow, even advanced technology, if the morals are wrong, if the strategy is wrong, if the thinking of what is a just war or not is wrong. If the Pentagon is woke, I don't care how many nuclear tip this or that they have, it isn't gonna help us win wars. It's gonna make us lose wars. But not due to the technology to, in spite of the technology, Speaker 2 01:28:05 The AI claimant was just following orders. Speaker 1 01:28:08 Yes. <laugh>, I think we're at the end of our time, Scott. Yes. This Speaker 2 01:28:12 Believe was a great session. Speaker 1 01:28:13 Can't believe we spent 90 minutes on ai. Is that even possible? Speaker 2 01:28:17 I know, uh, we're, we're gonna make the, uh, the recording available because uh, this was just some great materials. Good. So, um, thank you everyone for participating, for listening, and uh, we'll see you again next time. We'll do this again in August. Thanks Speaker 1 01:28:32 For doing it Scott. Thanks for everyone for joining us. Thanks a Speaker 2 01:28:34 Lot. Take care. Speaker 1 01:28:35 Thank you. Speaker 0 01:28:37 Thank you for joining us for Morals and Markets, the podcast. If you enjoyed tonight's episode, we hope you'll consider liking the podcast sharing, rating, and subscribing on your favorite podcast platform.
