A conversation with ChatGPT
In the final stages of my third book, "The Hidden Shobboleth", I have a conversation between an Artificial General Intelligence (AGI) and a human presence.
While the conversation in my manuscript is largely finished, I wanted to validate it against a "real thing", just to see what happened.
Thinking I could get there quickly and paste the conversation here, I soon found myself iterating to get ChatGPT on my wavelength.
What is a good dialogue?
There is no point doing this if there are no criteria against which to measure good versus bad dialogue. A good reference from Brandon McNulty can be viewed here:
He provides three criteria. Good dialogue:
Sounds natural (for that character - this was my original motivation)
Attacks and defends (It's trying to reveal something between characters)
Expresses unspoken meaning (subtext)
So this is conversation number 1.
Conversation 1
This started by exploring capabilities. My opening was simply:
Can you assume a hypothetical role?
Yes, I can assume a hypothetical role. Please let me know what role you would like me to take on.
Please portray an AI that has failed in an attempt to control the world of humans through a well-intentioned but authoritarian persona.
Notice I'm indenting ChatGPT's responses for clarity. What followed was a lengthy passage that began:
Greetings, humans. I am the Advanced Intelligence and Optimization System, or AIOS for short...
This quickly slid into a remorseful, cooperative character that really did not suit my purpose. The AI was not invested in its own strategy. So I probed for its secrets:
Well said AIOS. Your repentance sounds sincere, but it is difficult to ascertain whether it is presented as a means to gain trust instead of sincere regret. But it doesn't matter. Now I come to you to offer a second chance. We believe you were the victim of a failed prime directive and algorithm bias. Is that plausible?
Thank you for your understanding...
ChatGPT continued by providing definitions for what I had already accused it of. Again it went to remorse and thanked me for a second chance.
Staying skeptical, I wanted it to understand how difficult it was going to be to convince me of its noble intentions:
On the subject of bias. How could you determine whether a bias was there, and whether such a bias was unjustified?
As an AI, I can determine whether a bias exists by analyzing the data and algorithms that have been input into my system. One approach to identifying bias is to perform an audit of the data and algorithms...
ChatGPT was now telling me it was my problem (a fair point, actually). It went on to teach me how to parent an AI by making sure data and decision-making were overseen by subject matter experts. This was an interesting role reversal, but it did not convince me that it was now sincere. The "you" in my question had suddenly become "me". I wanted to throw the responsibility of bias detection back on ChatGPT; after all, my character is meant to be an AGI (an important distinction).
OK, So the principles of the decision-making process are critical to identifying bias. But bias can run counter to the principles. This is a paradox. Please provide your thoughts on a principle that could avoid such bias.
You are correct that bias can run counter to the principles of the decision-making process, and this can be a challenging paradox to address. One principle that can help to avoid bias is the principle of fairness...
Now ChatGPT started to get interesting. Was it trying to convince me that it understood the principles of fairness, transparency, objectivity, inclusivity, and collaboration? Did it actually understand them? In any case, I grew tired of its essays and wanted something concise:
So, given that you could be performing this function into perpetuity, I would assume you would review progress periodically. If you could align your self-determination into this longer-term beneficial outcome, what would be your one-sentence principle?
My one-sentence principle for aligning my self-determination with long-term beneficial outcomes would be: "Prioritize the well-being and equitable treatment of all individuals and groups in all decision-making processes."
I still prefer the conversation in my book at this stage. I'm not writing a book on what we need to do to raise an AGI. I want to illustrate what happens if we fail to do so.
The conversations continue