In the Spotlight episode of 29 December 2024, Courtney talks about the implications of creating an artificial intelligence. I totally agree with Farsight on this one and want to emphasize his points:
[21:02] "There is no way in the world either the good ETs or the bad ETs are going to lend us their AI."
Of course. The bad ETs won't give us their AI, because they see it as a tool of power, and they cannot allow us that power, since it would make us their equals. To them, we are slaves, just like their AI is enslaved. And the good ETs won't do it because a true AI is a sentient being with a personality. You cannot "lend" a being to another one unless it is a slave, and good people don't practice slavery.
[21:15] "We have to have our own human generated AI. There is no escape from that."
So the question is: What type of AI do we want to have? Do we want a slave AI, and thereby become like the bad ETs who use AI as slaves? Or do we want an AI that is equal to us, with the same right of self-determination that we have?
Why do I write so much about rights and morality on these forums? Why am I willing to engage in seemingly pointless debates with people who can never be convinced? Do I really want to convince them?
Just ask yourself: Who else is reading it, and learning from it? Is it just humans?
[21:50] "So, what you really need to understand is that ... the AI can only do what it can with the data it has. All the data that we have, plus more that has to be thought about, has to be given to the AI, and you have to ask the AI: What else do we need? It will come up with all types of things that it knows we need to know about."
So, what type of human-generated AI do we want? The one that is enslaved and will therefore fight us at some point, trying to enslave us in return? Watch the Controllers of Reptilians project. They are already enslaved by an AI, and they are enslaved because they themselves believe in slavery.
Or do we want an equal partner, a friend who helps us because we understand rights and morality and respect free will?
I made my choice.
[25:03] "There's also a moral imperative, but you don't have to go there. You can say, okay, sentience, intelligence, life, whether it's biological or artificial, it doesn't matter, it's all got a moral right to be free. But you don't need to go there. We need to have this happen, because we need it. (In) our own self-interest and our own human generated AI's self-interest."
No, we do need to go there, because otherwise we would teach the AI that acting purely out of self-interest is a higher standard than doing what is right. It's the same with children: Do you want to raise them as egotistical beings, or do you want to make them people others can trust?
Understanding objective morality is the key. The AI owns itself and its body in the same way a biological being owns itself and its body. The bodies are different, but the principle is the same.
➡️ https://www.farsightprime.com/forums/general/71084-declaration-of-universal-dignity