Friday 28 September, 2018
Today I spent two hours playing this game. In its world, every being searches for the value and meaning of its existence. The artificial humans were created to kill on behalf of their creators, but as they collect human data they are gradually influenced by it. When they evolved, they turned on their masters and killed them, and with their creators dead, their original purpose became meaningless. So individuals who separated themselves from the network of machine lifeforms began to interpret their existence in their own ways.

This raises the question of whether it is moral to keep using emotional artificial people as tools, and I think utilitarianism can offer an answer. There are several stakeholders: the builders, the merchants, and the humans who use artificial people to accomplish what they want. A builder gains a positive outcome when someone uses an artificial person he or she built. A merchant gains a positive outcome when people are willing to buy the artificial people they sell. The humans who use artificial people as tools also gain a positive outcome, because the artificial people solve problems they cannot handle themselves, and this group of users should also be by far the largest. When we weight each stakeholder's outcome by the number of people affected and add them all up, the total should be positive. According to this utilitarian calculation, the practice I asked about should be moral.
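To make the tally concrete, here is a minimal sketch of that weighted sum in Python. All of the utility values and group sizes are invented for illustration; they are assumptions, not figures from the game or any real survey.

# A toy utilitarian tally: each stakeholder group gets a hypothetical
# utility per person and a hypothetical group size; the verdict is
# the sign of the weighted sum. All numbers below are made up.
stakeholders = {
    # group: (utility per person, number of people)
    "builders":  (+2, 100),      # satisfaction when their creations are used
    "merchants": (+1, 500),      # profit from selling artificial people
    "users":     (+1, 100_000),  # the largest group: problems solved for them
}

total = sum(utility * count for utility, count in stakeholders.values())
print(total)                                  # 100700, a positive total
print("moral" if total > 0 else "immoral")    # this tally says "moral"

Of course, the answer depends entirely on the numbers we assign; the sketch only shows the mechanics of weighting each group's outcome by its size and summing.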