I think the trick will be to preprogram some sort of safety protocol right into the AI, if it's not too late. Asimov's three laws seem like a funny sci-fi trope, but we would actually do well to heed them:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
As a technologist, I've been telling people for 25 years that what scares me most about our future is autonomous robots.
No matter what the inventors' intentions are, robots are computers, and computers can be hacked.
They're coming. AI will become incredibly advanced, and progress cannot be stopped, or even slowed down, by legislation.
You can legislate what can be done with AI, and possibly get governments to agree on boundaries, but no government will actually slow down development. They can't; if they did, their country would become vulnerable.
Imagine 1 million autonomous military drones deployed against a city. That city had better have advanced drones of its own, or it is going to fall, and most likely 1 million people will die, literally.
Imagine 100k Terminator-type humanoid robots unleashed on a city. That city had better have advanced robots and AI of its own, or it's screwed. Think that's absurd? Today's hardware and computers are capable of most of what a Terminator does; what's holding things back is that writing the software is very, very difficult. 10 years from now, the software will be quite capable, and 20-30 years from now, life will be much different than it is today.
AI and robots will change society.
I'd like to say for the better, but for sure it will be different.
There'll be fewer jobs for people, but also more product available for lower cost, more leisure time, and less need for humans to work.
That leads to 2 major problems to solve.
1) How does a person find a sense of purpose? It probably won't come from a job, since most people won't have one.
2) Obviously, those who develop and build the robots will amass tremendous wealth, and rightfully so, but there will still need to be a way for everyone else to earn a good living, work hard, and get ahead.
Today, I'm not in favor of universal income. At this point, there are plenty of jobs, and those who can work need to work and contribute to society. In the robot/AI revolution there will need to be a universal income, but I don't believe that alone is enough.
There has to be a way to solve problem #1, and give people control of their future.
It also doesn't make sense to allow even greater income disparity between those who make robots and those who don't.
Nor does it make sense to tax corporations or the rich at huge rates, or for governments to manage and control every aspect of society. Freedom must be protected!
Leaders 10, and really 20, years from now will need to redesign society, staying very observant about what is and isn't working and making adjustments until life is good for everyone, creating an environment where people can work hard and get ahead. It'll be very difficult.
Maybe it'll just be very awesome to do what we want all day, but I feel that if robots do everything for us, there's going to be something missing in life, and that needs to be addressed.
That was a very informative read there, Rad. You know way more than I do about it, but I try and keep up the best I can. If it's scary to a dude like you... who else is scared of this? Your timing of 20-30 years also struck me as odd. Odd in the way of the movie Blade Runner 2049, kinda odd. Which, btw, is one of the best movies I have ever seen, and it eclipsed even the original one with Harrison Ford. I truly believe that's what our future looks like, if we haven't been wiped from the face of the planet.
I haven't seen that movie yet. I'll put it on my short list.
If done right, life is going to be better in many ways, but man is it going to be a challenge to make that transition.
Definitely check it out, Rad. I stand by what I said: Blade Runner 2049 is one of the best films I've ever seen.
Falcon... if robots followed all those rules, I think we would be fine.
But... what if our own guvment uses them against our own population (so many ways imaginable it's insane)? Or what happens if they get hacked, by someone local, or China, or Russia? Imagine what a group of fully armed, hacked robots could be capable of 5-10 years from now. Not just the walking-around ones, but armed drones.
We're at the cusp of this right now, with things progressing at an exponential rate. Not everyone who controls these things will be good. I still have a good 20-30 years of life left in me if I take care of myself, and I would think that in my lifetime, I'm going to see some bad shit happen from all this.
We've been feeding Google, Siri, Alexa, or whatever, data for over 10 years via computers and smartphones.
It's not only "OK, Google". Conversations are being recorded when we're not on a call. I've confirmed this many times, where things that my friends and I discuss in person, without searching, texting, or emailing, end up in suggested videos or ads.
Not only is what we do online public. At this point, what we do when we're offline is too.
Falcon, you make a lot of great points. I agree with the article too.
If all robots had those rules built in, we might be safe.
But how do you program a watchdog process to monitor what the AI process intends to do?
At this point, it's hard enough to write the AI in the first place.
The watchdog would have to understand the AI process's data structures and code intimately.
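To make the watchdog idea concrete, here's a tiny Python sketch of what an action filter could look like. Every name in it is hypothetical, and notice how it cheats: it assumes the AI reports its intentions as clean data, and that something can compute the harms_human flag, which is itself the hard AI problem.

```python
# A minimal sketch of the "watchdog as action filter" idea.
# Everything here is hypothetical: real AI processes don't expose
# their intentions as tidy ProposedAction objects, and deciding
# whether an action harms a human IS the hard AI problem, so this
# sketch assumes away exactly the part that's difficult.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str            # e.g. "open_gripper"
    harms_human: bool    # the judgment this sketch can't actually make
    human_ordered: bool  # was this action ordered by a human?

def allowed(a: ProposedAction) -> bool:
    """Asimov-style filter: the First Law trumps everything."""
    if a.harms_human:
        return False  # First Law: never harm a human, no exceptions
    # Second and Third Law checks would go here; they only matter
    # once the First Law is satisfied.
    return True

# Every action the AI proposes would be routed through the filter
# before it ever reaches the motor controllers:
action = ProposedAction("open_gripper", harms_human=False, human_ordered=True)
if allowed(action):
    print(f"executing {action.name}")
else:
    print(f"vetoed {action.name}")
```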
Having an external kill switch is a great idea too, but when AI gets smart enough, it can disable the switch itself, or have another robot do it.
This goes for AI in the cloud as well.
I imagine almost every autonomous robot will be connected to servers in the cloud, whether just to receive updates, to receive tasks to carry out, or for additional processing power, with the server computing the majority of the AI and the robot handling a smaller set of actions on its own.
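As a rough illustration of that thin-client split (the endpoint and task format below are made up), the robot side could be as simple as a loop that asks the server for its next task and falls back to limited onboard behavior when it can't connect:

```python
# Hypothetical sketch of the robot-as-thin-client architecture.
# The URL and the task format are invented for illustration only.

import json
import urllib.request

CLOUD_URL = "https://example.com/robot/next-task"  # hypothetical endpoint

def fetch_task() -> dict:
    """Ask the cloud server for the next task; the heavy AI runs server-side."""
    with urllib.request.urlopen(CLOUD_URL, timeout=5) as resp:
        return json.load(resp)

def local_fallback() -> None:
    """The robot's limited onboard behavior when the cloud is unreachable."""
    print("no connection: holding position")

def control_loop() -> None:
    try:
        task = fetch_task()
        print(f"executing cloud task: {task.get('name')}")
    except OSError:  # covers connection errors and HTTP failures
        local_fallback()

control_loop()
```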
I don't believe AI is this smart yet, but it's a matter of a few years before it is. GPT-3 is pretty impressive, but it isn't controlling a physical robot, or anything physical that I'm aware of. In 5 years it will be. There are companies working on it.
We can't control who writes AI, or their intentions. Much of the software is open source.
Even assuming most everyone who writes AI has good intentions, if the AI logic concludes that something should be done, right or wrong, and it has or can gain the access to make it happen, it will do it.
Let's assume the watchdog process can enforce the rules for a robot.
That's probably the first thing hackers will try to break.
And if that can't be broken for some reason, some or all of the electronics can be replaced, and another computer wired up to the physical robot, whether humanoid, drone, or other.
Robots, and AI servers in the cloud, will be hacked.
We should legislate AI safety with appropriate penalties.
We should also not expect our enemies to follow the rules. No government will.
The only real way for us and our allies to be safe is to have more advanced AI and robots than our adversaries.
To address Jeffro's point: if our own government used them against us (and I hope to God that our military would disobey those orders), it makes the Second Amendment that much more important.
I apologize for the long post. I've been thinking about these things for decades.
What do you think of Elon's idea to limit AI to being an oracle that is not in control of any systems? It seems to me that the real danger is in allowing AI control of or access to other systems.
Actually we're past that point. You can't even be rude to Siri without it getting turned off.
For the people who believe in evolution: how is an electronic computer different from an organic computer, AKA your brain? We're not far away from computers' evolved intelligence being superior to the average person's for a mental relationship.
TM
To me, robots in any form (humanoid, drones, etc...) that are physically capable of harming people are the most dangerous, because they can be hacked.
A company can make and enforce policies to limit AI, but those policies can't be enforced globally.
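The oracle idea, as I understand it, is really an API-boundary argument: the model can answer questions but is handed no way to touch anything else. A toy sketch of that boundary (model_answer is a hypothetical stand-in for the actual AI):

```python
# Toy sketch of the "oracle" boundary. The point is that Oracle
# exposes nothing but ask(): no file, network, or actuator access
# ever appears in the interface.

def model_answer(question: str) -> str:
    # hypothetical stand-in for the real model
    return f"(answer to: {question})"

class Oracle:
    def ask(self, question: str) -> str:
        return model_answer(question)  # text in, text out, nothing else

oracle = Oracle()
print(oracle.ask("Should we deploy the drones?"))

# The weakness: a human reads the answer and acts on it, so the
# boundary moves the control problem rather than solving it.
```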
“The San Francisco police department has proposed that it be allowed to use robots with ‘deadly force’ while responding to incidents, according to a policy draft.”
https://www.theguardian.com/us-news/2022/nov/24/san-francisco-police-propose-using-robots-capable-of-deadly-force?CMP=Share_iOSApp_Other
That’s some RoboCop justice right there. Not good.
This is a scary thread. I also recently saw an interview with some random musician I'm not familiar with, talking about how there will be AI music generators within the next year. I don't doubt it.
https://www.brookings.edu/opinions/isaac-asimovs-laws-of-robotics-are-w…
This is actually kinda wild
https://mobile.twitter.com/ryancbriggs/status/1598125864536788993
That is wild. Now if I can only figure out a way to get it to talk, take Zoom calls, and earn a living from home, I'll be set.
The rate of progress with AI is pretty scary.
https://www.theguardian.com/technology/2022/dec/04/ai-bot-chatgpt-stuns-academics-with-essay-writing-skills-and-usability?CMP=Share_iOSApp_Other
This is getting no coverage from what I have seen. Anybody? Bueller?