• daniskarma@lemmy.dbzer0.com
    13 days ago

    That seems a very reasonable argument for the impossibility of achieving AGI with current models…

    The first concept is something I had already been thinking about. Current LLMs are incredibly inefficient, and there seems to be some theoretical efficiency barrier that no model has been able to surpass. That matches the claim that, with current architectures, models would probably need trillions of parameters just to stop hallucinating, let alone to gain the ability to do more than just answer questions. A supposed AGI, even if it only worked with words, would need to handle more “types of conversations” than just being the answerer in a question-answer dialog.

    But I had not thought about the need to repurpose the same area of the brain (biological or artificial) for different tasks on the fly, if I have understood correctly. And it seems pretty clear that current models are unable to do that.

    Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of which would probably be language.
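    The “loop of generative thoughts” idea could be sketched roughly like this: the model’s output is fed back in as its next input, so “thinking” is just repeated generation conditioned on its own prior output. This is only a toy illustration of the concept, not anyone’s actual implementation; `generate` here is a hypothetical stand-in for a real language model.

    ```python
    def generate(prompt: str) -> str:
        """Hypothetical stand-in for an LLM: produces a 'thought' about its input."""
        return f"a thought about ({prompt})"

    def thought_loop(seed: str, steps: int) -> list[str]:
        """Feed each generated 'thought' back in as the next prompt."""
        thoughts = [seed]
        for _ in range(steps):
            thoughts.append(generate(thoughts[-1]))
        return thoughts

    # Each iteration conditions on the previous output, forming a chain of
    # self-generated "thoughts" rather than a single question-answer exchange.
    for thought in thought_loop("I think", 3):
        print(thought)
    ```

    Whether such a loop could ever amount to consciousness is of course the open question; the sketch only shows the feedback structure, not anything resembling understanding.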

    To get a little poetic: I don’t think the phrase is “I think, therefore I am”, but rather “I can think ‘I think, therefore I am’, therefore I am”.