I just found out one of my best friends from college is quoted in a recent New York Times Magazine article on artificial intelligence. The article is long but can be viewed here, and it's quite interesting. My friend is Lijin Aryananda, and I've never really had someone I know quoted so extensively in anything as important as the New York Times. Word on the alumni grapevine is she was angry with her portrayal and felt she was misquoted, but that's fame for you.
I point this out because I think one of the more interesting paradigm shifts I have ever heard about is articulated in this article. Basically, when I typically think of a robot, I think of a fully formed, thinking machine that does things, has superhuman strength, and will ultimately turn on humanity. It's just basic science that that is what will happen. As Jurassic Park showed (much better in the book than the movie), we can't possibly control all possibilities, and ultimately something will slip through that shouldn't have. Sort of like our president.
In this classic artificial intelligence (AI) scenario, we program everything into robots through code sequences that must be very long and complex, and we give robots certain rules they cannot break. Isaac Asimov had a list of three laws of robotics, I think, including that a robot cannot, through action or inaction, allow a human to be injured, plus some other theoretically good protections for us. This type of logic, built into an infinite number of coded sequences, should keep us safe and robots useful. As we all know from movies, short stories, and general reality, that won't work. In Jurassic Park, it had something to do with chaos theory. In other books, there are other ways the robots "reprogram" themselves. What to do? What to do?
If you look at life, fully functioning minds are generally dangerous, because if they desire, they remove obstacles and push their lives upon others. See most world leaders and other powerful people for signs of this. They aren't smarter; there is just a part of their brain that clicks with removing obstacles, even if they don't know what to do once the obstacles are removed (although falling into paranoiac realms seems to be a common next step, unfortunately). However, children are generally harmless, and usually well-raised children grow up to be normal, well-adjusted adults. The paradigm shift in AI that is so interesting is this: develop child-like robots and see if they can learn. Basically, instead of programming a brain with so many sequential codes and rules for all sorts of situations, develop code for some basic situations and see if the robot can learn. This is how our minds develop. Can we recreate our own minds? And just as importantly, if I had a robot child to raise, would it play well with my humenguin?
Well, it seems to be pretty early in the technology, but so far we cannot recreate our own minds. We will at some point. Just as I believe we will at some point code fully thinking robots for war and whatnot that will eventually either destroy us or live entirely separately from us. Maybe on the moon, because they won't need to breathe, and they can run and jump high and frolic there, which might make an enemy robot race happy. Who knows.
I don't know enough to really comment on the facts around the article, but I wanted to say kudos to the clever paradigm shift and good luck developing cute child robots that learn. I figure developing an adaptable brain is much more interesting than a lot of code, which will be buggy anyway and probably require me to shake my human-looking robot slave like an Etch A Sketch once a day to "refresh" it.
So for now, I say read the article and think about your own brain. Think about how complex and powerful our bodies actually are, and how amazing it is that we work. Then think about recreating that! It's incredible! I say, "Good luck to the leaders of science!" and may the force be with you all.