Enter the eerie world of the Training 2038 program, designed to edify non-human entities in a number of specialist fields that now await deep transformations and are likely to be managed by automatons in the future. While cascading through follow-up questions on agency and morality, the ruminating conversation, instead of offering clear-cut solutions, blurs into more uncertainty than it started with.
The R&D team of Kitchen Budapest has collectively started to work on a model that allows people to confront, and consequently form an opinion about, nuanced everyday situations in which an artificial intelligence is put in charge of decision-making. The problems are unpacked in the safe space of a virtual reality environment, where the conversational interface is embodied by a bio-cognitive, alien being that envelops the estrade you, the user, are standing upon. After the introductory lines of the narration, the experience begins by enumerating the topics the user is expected to dive deeper into: warfare, perception management (media), the corporeality of artificial life and technological prosthesis, relationships, governance, and autonomous vehicles. The conversation is half-dramatized (the Thing follows your movement in the fulldome space and reads out the Q&A via a text-to-speech app) and half bound to space, owing to the limitations of language processing and text input in state-of-the-art VR.
Apart from enacting a classic "opinion mining" process in a risk-free 3D space, the experience actually summons people to take part in the making of future tools of mass management and in defining the principles upon which they are supposed to act. "We use them daily, and don't know what we're doing. We don't know who operates them or why, don't know how they're structured, and little about the way they function" – the classic dilemma, formulated at the beginning of the '90s, doesn't seem to be going away any time soon. The model offered by Training 2038 tends toward utopia in this regard, as it envisions an alternative way of getting involved in the technology industry.
The project also challenges the concept of "anthropomorphizing" non-human agents and calls attention to the potential dangers of getting it wrong. In the end, the question is not only whether bots are able to acquire moral conscience as we know it, but whether we ourselves possess a coherent enough view of it to pass on. Seen from this perspective, Training 2038 also serves as a cautionary tale.
Speaking of the "interior" of the experience refers to the VR interaction only; the outer, physical space of the installation is, however, also carefully put together. The private zone of the invisible VR "cage" is marked by a 5 m high LED festoon fixed to an orbicular structure hung from above. The light control of the LEDs is in sync with the movement of the avatar "within", so there is a definite surplus value for bystanders, who can grasp a sense of the atmosphere before actually entering the computer-engineered world.