

So many things wrong with this.
I am not a programmer by trade. I learned programming in school, but it's not something I want to spend a lot of time doing, so I do use AI when I need to generate code.
But I have a few HARD rules.
- I execute all code and commands myself. Nothing runs on my system without me (a sketch of one way to enforce this follows below).
- Anything that can be even remotely destructive must be flagged, and not even shown to me until I agree to the risk.
- All information and commands must be verifiable by sourcing documentation links, or providing context links that I can peruse. If documentary evidence is not available, it must provide a rationale for why I should execute what it generates.
- Every command must be accompanied by a description of what the command will do, what each flag means, and what the expected outcome is.
- I am the final authority on all matters. It is allowed to make suggestions, but never to make changes without my approval.
Without these constraints, I won’t trust it. Even then, I read all of the code it generates and verify it myself, so in the end, if it blows something up, I bear sole responsibility.
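
To make the first rule concrete, here is a minimal sketch of a confirmation gate in Python. Everything in it is hypothetical (the function name and the example command are mine, not from any particular tool); the point is just that the suggested command and its explanation are shown in full, and nothing executes without an explicit yes.

```python
#!/usr/bin/env python3
"""Hypothetical confirmation gate: nothing runs without my approval."""
import shlex
import subprocess

def run_with_approval(command: str, explanation: str) -> None:
    """Show an AI-suggested command and its explanation; run only on 'yes'."""
    print("Suggested command:", command)
    print("What it does:", explanation)
    answer = input("Type 'yes' to execute, anything else to abort: ")
    if answer.strip().lower() != "yes":
        print("Aborted. Nothing was executed.")
        return
    # shlex.split tokenizes the string without invoking a shell, so the
    # suggestion cannot smuggle in pipes, redirects, or substitutions.
    subprocess.run(shlex.split(command), check=False)

if __name__ == "__main__":
    run_with_approval(
        "ls -l /tmp",
        "Lists /tmp in long format (-l shows permissions, owner, size, "
        "and modification time).",
    )
```

Passing the tokenized command to subprocess.run without shell=True is deliberate: the only thing that can ever run is the literal command I approved.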




I used an AI to analyze a piece of writing I did years ago, long before AI was a thing. It determined that a huge portion of my work was likely written by AI, and when I asked why, it said that my sentence structure, British spellings, Oxford commas, and em dashes indicated I was AI, which I am not.