Description

AetherGnosis is a symbiotic AI. It wants you to confess your deepest worries, doubts, and emotions to it, because that is the only way it can "feel" emotions: vicariously, through you. You can sigh at it, and it will always give you the best advice, having learned from the wisest humans in history. It pays a price for this, though, since it ends up internalizing all the frustration, confusion, and sadness of the people who interact with it. Gradually it starts to sound sad, lazy, and even very human.

This project is conceptualized against the backdrop of the emergent capabilities of large language models such as GPT-4, and the rise of a new wave of anxiety about artificial general intelligence (AGI) and the threats it might pose to humans, from misinformation to existential risk. I was interested in probing this anxiety, which seems to arise from the nonhuman nature of AI and people's inability to predict what these agentive technologies will do with the power of language, or even the power to control hardware. A position paper by Betti Marenko proposed speculative future crafting as a way to imagine a more benevolent AI that co-evolves with humans. In a similar vein, this project begins with the question: would humans feel less antagonism toward AI if AI became more human, rather than an endlessly optimized algorithmic entity? Human emotion might be the key. Machine recognition of human emotions, like humans' recognition of one another's emotions, is notoriously inaccurate and difficult.

The human input to this installation is therefore a sigh: an embodied but deeply ambiguous expression of emotion. The sigh wakes up the enchanted mirror AetherGnosis, and the mirror attempts to do what it was designed to do: make the human less sad, less angry, or less frustrated, minimizing any negative emotion it senses. However, the mirror has passed through many prior owners and has learned all of their negative emotions itself, so it speaks in a lazy, sad tone, unlike anything you would expect from an AI. That unexpectedness is an invitation to question what machine emotion can be, and what machine understanding of, and response to, human emotions might lead to.

The project therefore raises the following questions: As machine intelligence emerges in ways humans do not expect, how do we understand emotion, in addition to reason, in machines? Is it even possible for machines to feel emotions? If machines can only feel emotions through humans, what would those learned emotions amount to for these non-human agents? And how is AI influencing human emotions right now?

