You are speaking of “model collapse”, I take it? That doesn’t happen in the real world with properly generated and curated synthetic data. Model collapse has only been demonstrated in highly artificial circumstances where many generations of models were “bred” exclusively on the outputs of previous generations, without the curation and blend of fresh real data that real-world models are actually trained with.
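The distinction is easy to demonstrate with a toy simulation (this is an illustration of the statistical effect, not a claim about any real LLM training pipeline; the Gaussian model, the 200-generation count, and the 50/50 blend ratio are all arbitrary assumptions I chose for the sketch):

```python
import random
import statistics

random.seed(0)

def fit_and_sample(data, n):
    """Fit a Gaussian 'model' to data, then generate n synthetic samples from it."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

N = 10  # deliberately small so the effect shows up quickly
real_data = [random.gauss(0.0, 1.0) for _ in range(N)]

# Collapse setting: each generation trains ONLY on the previous generation's output.
gen = real_data
for _ in range(200):
    gen = fit_and_sample(gen, N)
collapsed_sigma = statistics.stdev(gen)

# Curated setting: each generation blends synthetic output with fresh real data.
gen = real_data
for _ in range(200):
    synthetic = fit_and_sample(gen, N)
    fresh = [random.gauss(0.0, 1.0) for _ in range(N)]
    gen = synthetic[: N // 2] + fresh[: N // 2]
mixed_sigma = statistics.stdev(gen)

print(f"pure self-training, sigma after 200 generations: {collapsed_sigma:.6f}")
print(f"with fresh data blended in, sigma:               {mixed_sigma:.6f}")
```

In the pure self-training loop the estimated spread shrinks multiplicatively each generation and the distribution degenerates toward a point, which is exactly the “bred exclusively on its own outputs” scenario. Blending in fresh samples each generation anchors the distribution and the collapse never happens.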
There is no sign that we are at “the peak” of AI development yet.
Are we, though? Newer models almost universally perform better than older ones, adjusted for scale. What signs are you seeing?