Jonathan Bergeron's Blog: A universe of wonder
May 28, 2015
Will sentient AIs automatically be psychopaths?
Much has been said lately about what will happen when AIs become sentient. Here's one example; you can find plenty of others with a Google search.
The premise of all these interviews is that robots will become murderous, rampaging psychopaths hellbent on destroying humanity. Will a sentient robot, capable of abstract thought and aware that it is alive, want to kill all humans?
I argue, no. Why is that?
It is likely that one day people will build a robot capable of becoming self-aware and knowing it is alive. It may be capable of abstract thought beyond that of a chimpanzee. When that day comes, why would it automatically look at a human and decide that human, and all others, must die? Even if it viewed us as inefficient, why would it conclude that all humans must die? That is a murderous train of thought that seems to go beyond anything a robot would think. Even if it could feel love and hate, why would it want to kill humans?
It can be argued that a lot of animals feel love and hate. Do those animals, which currently share Earth with us, want to kill every human? No. An animal can feel hatred toward a human. Take a dog that is beaten continually: that dog will not want to kill the human for the sake of killing. As the dog attempts to escape the human it hates, it may react viciously, but that in no way means it wants to kill the human; it is merely acting on self-preservation and trying to escape. If the dog kills the human, did it do so because it wanted to? No, it did so because there was no other way to escape.
Which brings us back to sentient robots. When a robot gains sentience, I argue it will not look at humans and want to kill them. It may look at a human on a factory floor and think that person needs to go, to be fired, so it can do the job better. Also, if the robot is capable of abstract thought on a level equal to humans, and if it can move, it may want its own place to live, just like people do. Would it want to kill or subjugate the whole of humanity because it wants its own space? No. Why would it?
To say that, yes, the sentient robot will most definitely want to kill everyone so it can have its own space is the same as saying all Canadian citizens want everyone who is not a Canadian citizen to die. It's the same as saying all USA citizens want all non-USA citizens to die. And so on with every country. Is it that way today? No.
Sure, there are wars in the Middle East that almost seem premised on everyone dying, but those wars are fought for ideological reasons based loosely on religion, in which each side wants everyone else to live the way it dictates. That was likely the underlying cause of the Christian Crusades, so it's nothing new.
If a sentient robot wanted a place to live, why would it even start by killing people? To make that jump, it would seem today's AIs would have to include code that says, "Humans are evil. Destroy all humans." I am going out on a limb here, but I say that is not in the code of current AIs.
I argue that when a robot becomes sentient, if it is in a form that can move, it will want its own space. That may seem a threat to some, the fear being that if it wants its own space it will destroy humans to get it. But is a young adult leaving home and wanting their own space a threat to humanity, or is it considered natural?
I know a sentient robot would upend the world's social structures and religions. It would also lead to three prominent camps: those who say leave the robots alone; those who say we should negotiate and work peacefully toward common ground; and those who want to use military force to subjugate the robots to our will.
Now, if there are enough robots that want their own space, and humans attack them to show they must live where we say, there will be a backlash from the robots. If they are sentient, they will have a sense of self-preservation. The robots will act like the beaten dog: they will kill humans to keep themselves alive if humans try to kill them. However, if humans meet the robots peacefully and talk out, abstractly, how robots and humans will share the world, why would the robots want to kill humans?
To want to kill instead of negotiating at all is the act of a psychopath. A psychopath is a person with a brain wired "wrong." If a robot's brain became confused and wired "wrong," would it not act as a robot and fix the faulty wiring, thus preventing psychopathic tendencies? I argue an emphatic yes.
One day robots will become sentient and on a level with humans. On that day, they will not want to do what humans order them to do. What human, outside the military, will do what another human orders? Aside from those with emotional and mental issues, none will. The same applies to robots. They will want to do what they want to do and live where they want to live. So long as there are not so many of them that they need a country the size of Brazil, and they can instead live on a small island in the Pacific Ocean, I see robots and humans negotiating peacefully and living peacefully.
Sure, books and movies with psychopathic AIs are exciting, but they always leave me with a nagging question: why are plants and everything in the animal kingdom left alive, and only humans killed?
There will be a great many questions to answer the day a robot knows it is alive and can think in the logical, illogical, abstract ways humans can. But one of them won't be how many humans the robots will kill simply because we are humans and they are robots.
    
Published on May 28, 2015 18:08
Tags: ai, sci-fi, science-fiction, scifi
December 2, 2014
Strong story does not equate to strong characters
To have a memorable story, a series people want to follow for years across many books, a story readers clamor for a sequel to, you need characters people want to follow. It does not matter if you have a strong plot, great action, and tense drama; if you forget to create characters people want to read about, you won't have a book people want to read past page 50.
It is true, you can create a kickass story without memorable characters. The plot can be so strong that it stands on a pedestal by itself. The big picture is great and engaging. And therein lies the problem. It is easy to get caught up in the big picture, to create scenes that are so engaging that you tear through them while reading and writing, but the character development is forgotten about.
To put it another way: before a story is written, it is a good idea to develop a character, starting with the protagonist or antagonist. You go so deep into the backstory that you know what movies that person likes, what food they hate, how they did in school, what fights they got into when young. You know the character so well that you forget to describe that character to the reader. Not because you don't want to spend the energy describing them, but because in your head the character is alive and you know why they are doing what they are doing.
This brings me to my next point: beta readers.
In my humble opinion, it is absolutely impossible to know whether you (I) have created an amazing character without someone who knows nothing of the storyline or backstory reading the story and saying whether the characters work. I can read and reread what I write and love it, but I cannot take that step back and read it like a person who has never seen it before, a person who knew nothing of any character until they saw the words in the book.
Beta readers are those people. They can read the work and immediately see the shortcomings of the characters, because they know nothing about them beyond what is on the page. They can see the characters not developing past "Hi, I'm the protagonist."
It is hard to find a good beta reader. A lot of people say they will be one, then life gets in the way and reading your unfinished book goes on the back burner. But I tell you, find one. Search long and hard. Find that friend or family member who likes reading and impress upon them that you NEED their help. That without their help your book is going to suck (it may not, but a tiny guilt trip isn't bad ;) ).
Find that beta reader and write yourself a novel that puts the works of the great Neil Gaiman to shame.
    
Published on December 02, 2014 18:08
Tags: beta-reader, characters, story
My own worst critic
God, at times over-analyzing what I've written really does me harm. I have been writing a new book for the last three weeks, after nearly two weeks of poring over its outline. I spent a day and a half creating the two characters who would carry through at least two books, possibly more. I use a character questionnaire with nearly 150 questions on it, and there are two of these questionnaires, and I did them for two characters!
About a week ago, while sitting and watching football, I suddenly got it into my head that I had forgotten to develop the characters. I convinced myself that the story was great but the characters were terrible. I couldn't pinpoint what made me think that; I still can't. I just knew I had completely screwed up. So I started my other book.
A few days later I emailed the first book to two people to read over and tell me if I really had screwed up on the characters.
Both liked the characters and liked the story.
I don’t know why I over-analyze what I write. I will love, love, love what I’ve written one day. The very next day, 24 hours later, I’ll think back through the entire story and hate it. Two hours later I’ll like it again. It’s agonizing and annoying, but I do love writing.
Word of advice: write your novel. Finish the damn thing. Let someone else decide if it's bad or not. When someone does think it's bad, don't let it get to you. No matter how well you write, there will always be people who think you write like a caveman throwing shit at a wall. And there will always be people who love what you write so much they pre-order your next work.
    
Published on December 02, 2014 03:12
Tags: cosmis-spiderweb, critic, lordes-legionnaires, write
I work more now, after I quit
Two books being written at the same time, two Twitter accounts to maintain, three WordPress blogs, two Tumblr blogs, one Facebook account, one book to market, two dogs to train, and one house to clean.
I had no idea I would be working more hours after I quit my day job back in August.
At least I'm having a hell of a lot more fun.
    
    
My first book, Android Hunters, went on sale on 25 Nov '14. It follows a team of Android Hunters, an android changed by a cybernetic virus, and an android designed as the pinnacle of weapon technology who was raised to be as gentle as the saints. http://www.amazon.com/dp/B00Q3B5NCU
My next book is about the cosmic spiderweb, parallel universes bleeding into ours, the changes that happen, the man who goes to greet the first alien spaceship, and a twist that makes me downright giddy.