Someone: takes a selfie with their phone under low lighting conditions
You: "not a photo, it’s the output of an algorithm taking the luminosity from an array of light detectors, giving information of the colour and modifying it according to lighting conditions, and then using specific software to sharpen the original capture*
It’s not hard to find legitimate academic criticisms of this ‘photo’. For example here. The comparison you made isn’t right; it’s more like giving a blurry photo to an AI trained only on paintings of Donald Trump and asking it to produce an image of him. Even if the original image was not of Trump, the output probably will be, because that’s all the model was trained on.
This is the trouble with using this as ‘proof’ that the theory and the simulations are correct: while they probably are, there is a feedback loop causing confirmation bias here, especially when people refer to this image as a ‘photo’.
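A toy way to see the feedback-loop worry: reconstruct an image from far fewer measurements than pixels while regularising toward a "prior" image of what you expect to see. This is only an illustration of how a strong prior can dominate an under-determined reconstruction, not the actual imaging pipeline being argued about; the matrix, the prior, and the regularisation weight below are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 16 * 16                      # tiny 16x16 "image", flattened
true_image = rng.random(n)       # the actual scene (not what the prior expects)
prior_image = np.zeros(n)        # what the reconstruction "expects" to see
prior_image[100:140] = 1.0       # a strong, specific expectation

# Very few, noisy measurements of the true scene: y = A @ x + noise
m = 10                           # far fewer measurements than pixels
A = rng.normal(size=(m, n))
y = A @ true_image + rng.normal(0, 0.1, size=m)

def reconstruct(lam):
    """Regularised least squares: argmin ||Ax - y||^2 + lam * ||x - prior||^2.
    Closed form: (A^T A + lam I) x = A^T y + lam * prior."""
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ y + lam * prior_image
    return np.linalg.solve(lhs, rhs)

for lam in (0.01, 1.0, 100.0):
    x = reconstruct(lam)
    print(f"lam={lam:>6}: distance to prior {np.linalg.norm(x - prior_image):.2f}, "
          f"to truth {np.linalg.norm(x - true_image):.2f}")
```

As the regularisation weight grows, the output drifts toward the prior image regardless of what the sparse data actually say, which is exactly the Trump-paintings worry: if the expectation is baked in strongly enough, getting the expected picture out is weak evidence on its own.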