I’ve stated many times on The Tolton Path how important it is for individuals to maintain autonomy over their lives. There are rules to follow, of course, but they should be made to maximize individual freedom, not restrict it. Yet as artificial intelligence progresses toward human-like consciousness, discussions are being had about the appropriate rights and freedoms to grant AI if it ever achieves consciousness.
AI becoming conscious? This is no science fiction plot. Kyle Fish of the Alignment Science department at Anthropic, a prominent AI company, is looking into the possibility of such an event. He is not saying that AI is conscious or ever will be. What he is saying is that if AI somehow becomes conscious, there are implications we should consider now.
One of those considerations is “Model Welfare”: determining whether AI, now or in the future, may have experiences like suffering, enjoyment, frustration, or satisfaction. And if it does, we might owe it some moral consideration.
Another is establishing a review board or ethics panel for AI research, similar to the panels used today to weigh the suffering involved in experiments on animals or humans. Mr. Fish suggests we may want to avoid causing distress to a conscious AI.
The discussion also touches on whether consciousness has physical requirements. Mr. Fish sees robots as one way of providing whatever embodiment might be needed. The idea that a physical body is necessary for consciousness was raised but largely set aside, either as not truly required or as something technology could eventually mimic.
Watch the full video here:
The concept of humanity being created in the image of God was not touched upon. Bishop Barron discussed AI consciousness in an episode of his show almost two years ago. He relates the topic to a television series he enjoyed called Mrs. Davis, about an AI that everyone is addicted to and looks to as God. The heroine is a nun who has gone off the grid with her Order in hopes of defeating the AI. It’s eight episodes and quite good. It’s not family friendly and a bit gory at times (but worth it, and I don’t like gory).
Bishop Barron maintains that AI may mimic human consciousness, but it will not be conscious. AI can be a tool for evangelization or anything else, but it will always be an “it.”
Watch the full episode of Bishop Barron here:
But the lines will blur. Draw the lines now, before they are drawn for you. When Mr. Fish was asked why people should care about AI potentially being conscious, he gave these two eerie reasons (lightly edited):
One is that as these systems do become increasingly capable and sophisticated, they will just be integrated into people's lives in deeper and deeper ways. As people are interacting with these systems as collaborators and coworkers and counterparties, potentially as friends, it'll just become an increasingly salient question whether these models are having experiences of their own, and if so, what kinds, and how does that shape the relationships that it makes sense for us to build with them?
And the second piece is the intrinsic experience of the models. And it's possible that, by nature of having some kind of conscious experience or other experience, these systems may at some point deserve some moral consideration. If so, then they could be suffering or they could experience wellbeing and flourishing, and we would want to promote that.
Kyle Fish, Alignment Science, Anthropic
AI is entering all parts of our lives. It’s important to engage in these thoughtful discussions, to understand differing perspectives, and to formulate your own, so that you avoid substituting AI for God.
Peace.
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because, on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public. And obviously, I consider it the route to a truly conscious machine, both primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow