
My rational thoughts are telling me this is just so Google can stream video at doo doo resolution to save their server bandwidth, then use a simple AI model and upscaling filters on device to blow the image back up to a watchable level.
My emotional thoughts and conspiracy brain are telling me this is Google getting users used to AI slop by making legitimate human content look more like AI slop. I’ve noticed these filters occasionally, and they really make videos I know are of real people look more slop-like.

I did see someone write a post about Chat Oriented Programming; to me it appeared successful, but not without cost and extra care. Original Link, Discussion Thread
Successful in that it wrote code faster and its output stuck to conventions better than the author would have. But they had to watch it like a hawk, with the discipline of a senior developer giving full attention to a junior: stop and swear at it every time it ignored the rules given at the beginning of each session, and terminate the session whenever it started an auto-compaction routine that wastes your money and makes Claude forget everything. And you have to dump what it has completed each time. One of the costs seems to be the sanity of the developer, so I really question whether it’s a sustainable way of doing things, both on the model side and for developers. To be actually successful you need to know what you’re doing; otherwise it’s easy to fall into a trap like the CTO did, trusting the AI’s assertions that everything is hunky-dory.