

Maybe closed source organizations are more willing to accept slop code that is bad but barely works, versus open source which won’t?
Because most software is internal to the organisation (and therefore closed by definition) and never gets compared with or used outside that organisation: yes, I think that when such software barely works, it is taken as good enough, and there’s no incentive to put more effort into improving it.
My past year (and more) of programming business-internal applications has been characterised by upper-management imperatives to “use Generative AI, and we expect that to make you work faster”, without any effort spent to figure out whether there is any net improvement in the result.
Certainly no effort is spent to determine whether it’s a net drain on our time and on the quality of the result, which everyone on our teams can see is the case. But we are pressured to continue using it anyway.
(Thank you, this indirectly answers one question: the specific optimisation you’re asking about, it seems, is speed of execution when deployed in production. By stating that as the ideal to optimise for, you necessarily make other properties secondary, and they can end up worse than optimal.)
Some do pursue that ideal, yes. For example, many businesses seek to deploy their internal applications on hosted environments where they pay not for a machine instance but for seconds of execution time. They then pay only while the application is actually running, on a third party’s managed environment, for which they are charged as a service. If they can reduce the run time of their application for any particular task, they pay less in hosting costs under such an agreement, as the rough sketch below illustrates.
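To make that billing arithmetic concrete, here is a minimal sketch. The per-second rate and the number of task runs are purely hypothetical figures, not any provider’s actual pricing; the point is only that cost scales linearly with execution time under this kind of agreement.

```python
# Rough cost sketch for pay-per-execution-time hosting.
# All figures below are hypothetical, not any provider's actual pricing.

def monthly_compute_cost(invocations: int,
                         seconds_per_run: float,
                         price_per_second: float) -> float:
    """Cost when billing is per second of execution time."""
    return invocations * seconds_per_run * price_per_second

RATE = 0.000017   # currency units per second of execution (made up)
RUNS = 1_000_000  # task runs per month (made up)

before = monthly_compute_cost(RUNS, seconds_per_run=2.0, price_per_second=RATE)
after = monthly_compute_cost(RUNS, seconds_per_run=1.2, price_per_second=RATE)

print(f"before optimisation: {before:.2f}")          # 34.00
print(f"after optimisation:  {after:.2f}")           # 20.40
print(f"monthly saving:      {before - after:.2f}")  # 13.60
```

Under a flat machine-instance rental, by contrast, shaving 0.8 seconds off each run would save nothing on the hosting bill, which is exactly why this pricing model creates the incentive to optimise execution speed.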
This is now a question about paying for the time people spend developing and maintaining the application, I think? That is thoroughly different from the time the application spends running a task. Again, I don’t see clearly how “optimise the application for execution speed” relates to this question.