
In the context of Artificial General Intelligence (AGI), "dualistic" refers to the concept that an AGI should embody and understand binary oppositions or dualities, similar to how humans experience and process the world.

Humans are intrinsically dualistic. Our thoughts, emotions, and decisions often oscillate between extremes. This duality is not merely a philosophical notion but a practical framework that shapes our interactions with the world. By embracing dualism, AGI can achieve a deeper understanding of human emotions, make complex decisions, and navigate ethical dilemmas more effectively.

Emulating human duality means AGI can learn from a wider variety of experiences, including those that involve conflicting or paradoxical information. This can enhance the depth and breadth of its learning capabilities.

Raul Alvarez

2 Answers


An AGI will definitely need to be able to choose between 2 or more actions, because there are many real-world situations that would or might require that, like

  • Which team do you support in this football match?
  • In World War 2, there were 2 main coalitions; which one would you have supported?
  • Should I harm this human or myself?

You can come up with so many other situations.
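To make this concrete, here is a minimal sketch, in Python, of epsilon-greedy action selection in the style common in reinforcement learning; the action names and Q-values are invented for illustration, and nothing in it restricts the choice to exactly 2 options.

```python
import random

def choose_action(q_values: dict[str, float], epsilon: float = 0.1) -> str:
    """Pick an action from an arbitrary-size discrete set.

    With probability epsilon, explore a random action; otherwise
    exploit the highest-valued one. Nothing here forces the set
    to contain exactly two options.
    """
    if random.random() < epsilon:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)

# Hypothetical example: three options, not a binary choice.
q = {"support_team_a": 0.4, "support_team_b": 0.3, "abstain": 0.5}
print(choose_action(q))
```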

I don't know if our emotions or feelings are dualistic, in the sense that we can split our feelings into just 2 categories. I don't think that's the case, i.e. there might be a spectrum, but that's something we should ask a neuroscientist or psychologist.
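On the spectrum point, a toy sketch can show how much information a 2-way split discards; the feeling names and valence scores below are invented for illustration:

```python
def binary_label(valence: float) -> str:
    # Collapse a continuous feeling into just 2 categories.
    return "positive" if valence >= 0 else "negative"

# A spectrum of (invented) valence scores in [-1, 1].
feelings = {"elated": 0.9, "content": 0.3, "ambivalent": 0.05,
            "uneasy": -0.2, "despairing": -0.9}

for name, v in feelings.items():
    print(f"{name:12s} valence={v:+.2f} -> {binary_label(v)}")
# 'content' and 'elated' get the same label despite being far apart
# on the spectrum: the dualistic split discards the gradation.
```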

However, even if there were 2 opposites or extremes, we might not be able to decide between them. For example, in politics, there's left and right, but there's also the center and things in between. But some people may still not vote because they don't agree with any of them.

In general, I think that this kind of categorization is something that humans, or even other animals, indeed do to simplify our lives, but there can be many categories, or the categories might not always be clear (at the beginning). From an ethical standpoint, it's also true that being able to distinguish between good and bad can be useful. The problem in this case is the definition of good and bad, which can be subjective.

This article talks more about dualism, which can have different meanings. For example, mind-body dualism doesn't seem to have much to do with your definition, but it's still relevant to AI, if you're interested.

(Btw, reinforcement learning is all about actions too. You might also be interested in this question.)

nbro

AGI as usually understood doesn't follow a dualistic path in the sense of your described "binary oppositions": you cannot find any such mention in the referenced Wikipedia article about AGI. AGI is only benchmarked against human cognitive capabilities, so your mention of dualistic human emotion and non-cognitive thought, such as imperative commands or some non-cognitivist ethical thought, actually has nothing to do with AGI directly, at least as currently defined.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks... there is no firm requirement for an AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process input (language) from the external world in place of human senses.

In the realm of human cognition, your described binary oppositions, between which our thoughts and decisions often oscillate, just sound like classic bivalent logic semantics, where our cognitive attitude towards a knowledge proposition can oscillate from belief to disbelief or vice versa. Therefore, following such a path per your idea is essentially following traditional symbolic AI's path towards AGI, which has proved extremely difficult if not impossible, as in the Cyc project. Regardless of whether our brain really cognizes in your described dualistic fashion, there's also the whole brain emulation project, which tries to simulate a whole human brain to achieve AGI (see the quote below, after a brief sketch of the bivalent point).
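As a quick aside, the contrast between bivalent belief and a graded degree of belief can be sketched in a few lines; the update rule is purely illustrative (not Bayesian), and the propositions and numbers are invented:

```python
# Bivalent semantics: a proposition is simply believed or disbelieved.
beliefs_bivalent = {"it_will_rain": True}
beliefs_bivalent["it_will_rain"] = False  # the attitude can only flip

# Graded alternative: a degree of belief in [0, 1] updated smoothly
# as evidence arrives, instead of oscillating between two poles.
belief = 0.5  # start undecided

def update(belief: float, evidence_strength: float) -> float:
    # Convex step toward the evidence: positive strength pushes the
    # degree of belief toward 1, negative strength toward 0.
    target = 1.0 if evidence_strength > 0 else 0.0
    return belief + abs(evidence_strength) * (target - belief)

for e in (0.6, -0.3, 0.2):
    belief = update(belief, e)
    print(f"degree of belief: {belief:.2f}")
```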

While the development of transformer models like in ChatGPT is considered the most promising path to AGI, whole brain emulation can serve as an alternative approach. With whole brain simulation, a brain model is built by scanning and mapping a biological brain in detail, and then copying and simulating it on a computer system or another computational device. The simulation model must be sufficiently faithful to the original, so that it behaves in practically the same way as the original brain... futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it.

A fundamental criticism of the simulated brain approach derives from embodied cognition theory which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning. If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel proposes virtual embodiment (like in metaverses like Second Life) as an option, but it is unknown whether this would be sufficient.

Having said the above, your dualistic thesis can be reinterpreted as the thesis that humans can hold seemingly conflicting ideas simultaneously and resolve or live with the tension, as reflected by the common dictum "the more you learn, the less you know". This is known as cognitive dissonance in psychology or Hegelian dialectics in philosophy. So for AGI to interact meaningfully and safely with humans, it must be able to recognize, process, and sometimes reconcile conflicting information to always align with human values, rather than defaulting to the usual binary logic. For AGI to follow such a path, combining symbolic reasoning with neural networks, known as neuro-symbolic AI, might be a fitting approach (a rough sketch follows below). Another approach could be multi-agent reinforcement learning, using different agents to represent and deal with conflicting viewpoints.
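As a rough illustration of the neuro-symbolic idea (not any specific system), a learned scorer could rank candidate outputs while a symbolic rule layer vetoes those that violate explicit constraints; all names, scores, and rules below are invented:

```python
# Sketch: the neural component proposes, the symbolic component disposes.
# The scores stand in for a trained network's outputs.
candidate_scores = {
    "answer_a": 0.92,   # fluent but violates a stated constraint
    "answer_b": 0.75,
    "answer_c": 0.40,
}

# Symbolic layer: hard rules encoding (hypothetical) human values.
violates_constraint = {"answer_a": True, "answer_b": False, "answer_c": False}

def neuro_symbolic_pick(scores, violations):
    # Keep only candidates the symbolic rules admit, then let the
    # neural score decide among the admissible ones.
    admissible = {c: s for c, s in scores.items() if not violations[c]}
    if not admissible:
        raise ValueError("no candidate satisfies the symbolic constraints")
    return max(admissible, key=admissible.get)

print(neuro_symbolic_pick(candidate_scores, violates_constraint))  # answer_b
```

The multi-agent variant would be similar in spirit: separate agents score the candidates from conflicting viewpoints, and a reconciliation step picks among the ones all of them can live with.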

cinch