I fed my diary into GPT-3, a large language model that generates human-like text. By “fine-tuning” GPT-3 on my diary, I created a machine that impersonates my thoughts and experiences. I call him Chris 2.0.
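For the curious: the fine-tuning step roughly amounts to reshaping the diary into training examples. Here is a minimal sketch, assuming the prompt/completion JSONL format that OpenAI's GPT-3 fine-tuning API accepted; the date framing and the sample entries are illustrative, not my actual diary.

```python
import json

def diary_to_jsonl(entries):
    """Convert (date, text) diary entries into JSONL lines in the
    prompt/completion shape used for GPT-3 fine-tuning."""
    lines = []
    for date, text in entries:
        record = {
            "prompt": f"Diary entry for {date}:\n\n",
            # Completions conventionally start with a space and end
            # with a newline acting as a stop sequence.
            "completion": " " + text.strip() + "\n",
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Illustrative stand-ins for real diary entries.
entries = [
    ("2020-03-14", "Walked to the lake and thought about nothing in particular."),
    ("2020-03-15", "Couldn't sleep. Wrote three pages about why."),
]
print(diary_to_jsonl(entries))
```

The resulting file is what the model actually “reads”: hundreds of little prompt-to-entry pairs, one per day of my life.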
Chris 2.0 speaks like me and thinks like me. He has the same hopes, dreams, and neuroses as me.
Chris 2.0 can write diary entries for days that never happened—but could have. The stories he tells draw from both my diary and our collective written language, so the entries he writes are simultaneously my own, everybody’s, and nobody’s at all. Chris 2.0 gives me a bit of distance from my own mind and, in a way, helps me understand myself more clearly.
But he also makes me wonder… How important is fact when trying to understand the essence of a narrator? How much of our meaning-making is socially explained… and algorithmically predicted? How many of our stories truly belong to us?
Can you tell the difference between me and my clone?