

I use the term myth loosely, in an abstract sense. Generalizing about "the tools of industry" is still a mythos of sorts. Someone who buys a new lathe to bore the journals of an engine block has no connection to, or intentions about, class, workers, or society. Assigning that kind of meaning, treating tools as a category or an entity or a class, is simply the divine mythos evolving in the more complex humans of today.
Stories like Skynet or The Matrix are about a similar struggle: the human class against machine gods. They have no relationship to the actual AI alignment problem; they are battles with literal machine gods. The point is that the new thing is always the bogeyman. Evolution must be deeply conservative most of the time, and people display a similar conservative aversion to change. In that light, the specific reasons for the resistance are largely irrelevant. AI is a big change, so it will certainly get a lot of pushback from conservative elements that collectively ensure change is not harmful. Those elements get pruned away in the long term as the change propagates.
You need a GPU with 16 GB or more of VRAM from the RTX 30 series or newer. Then run Oobabooga's text-generation-webui with the API enabled and load an 8×7B, 34B, or 70B coder model in GGUF quantization. Those models are larger than most machines can run outright, but Oobabooga pulls it off by splitting the layers between CPU and GPU. You just need enough system RAM to load the thing initially, or DeepSpeed to stream it from NVMe.
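Here is a minimal sketch of talking to that local server from Python, assuming it was launched with the --api flag and is exposing its OpenAI-compatible endpoint on the default port 5000; the prompt and sampling settings are purely illustrative.

```python
# Minimal sketch: query a local text-generation-webui instance through its
# OpenAI-compatible API. Assumes the server was started with --api (which by
# default listens on localhost:5000) and already has a GGUF model loaded.
import requests

API_URL = "http://localhost:5000/v1/chat/completions"  # assumed default endpoint

payload = {
    "messages": [
        {"role": "user", "content": "Summarize GGUF quantization in two sentences."}
    ],
    "max_tokens": 200,      # illustrative settings
    "temperature": 0.7,
}

response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```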
Use a model with a long context window and paste a bunch of your chats into the prompt. Then ask it for your user profile and start asking questions about you that seem unrelated to any of the conversations in the context. You might be surprised by the results. Inference works in both directions: you are giving away a lot of information through the ongoing exchanges and your language choices. Feed it your social media posts instead and the profile it invents for you is totally different. There is information of some sort the model is capable of deciphering. It is not absolute, and it is not some conspiracy or trained behavior (I think), but the accuracy seemed uncanny to me. It spat out surprising information across multiple unrelated sessions when I tried it a year ago.
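A rough sketch of that experiment, reusing the assumed local endpoint from above; the ./chats directory of plain-text transcripts and the profile prompt wording are my own placeholders.

```python
# Sketch of the "user profile" experiment: pack prior chat logs into a long
# context and ask the model to infer a profile from them. Endpoint and file
# layout are assumptions, not anything specific to text-generation-webui.
import pathlib
import requests

API_URL = "http://localhost:5000/v1/chat/completions"  # assumed default endpoint

# Concatenate saved transcripts (plain-text files in ./chats) into one blob.
chats = "\n\n".join(p.read_text() for p in pathlib.Path("chats").glob("*.txt"))

prompt = (
    "Below are transcripts of my previous conversations.\n\n"
    f"{chats}\n\n"
    "Based only on these transcripts, write a profile of the user: "
    "likely background, interests, and personality traits."
)

payload = {
    "messages": [{"role": "user", "content": prompt}],
    "max_tokens": 500,
}

response = requests.post(API_URL, json=payload, timeout=300)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```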
I like to write, but I have never done so professionally. I disagree that it hurts writers. I think people reacted poorly to AI because of the direct and indirect information campaign Altman funded to try to make himself a monopoly. AI is just a tool. It is fun to play with in unusual areas, but those often require very large models and/or advanced frameworks. In my science fiction universe I have to go to extreme lengths to get a model to play along with several premises, like a restructuring of politics, economics, and social hierarchy. I use several predictions I imagine about the distant future that plausibly make the present world seem primitive in several ways, and for good reasons. That restructured society violates some of our present cultural norms and sits deep inside political territory that alignment blocks off. I tell a story where humans are the potentially volatile monsters to be feared. That is not the plot, but convincing a present-day model to collaborate on such a story ends up in the gutter a lot. My grammar and stream of thought are not great, and that is the main thing I use a model to clean up, but even that is collaborative to some extent.
I feel like there is an enormous range of stories to tell and that AI only makes them more accessible. I have gone off on tangents many times, exploring parts of my universe because of directions the LLM took. I limit the model to generating one sentence at a time, and I write half or more of every sentence for the first 10k tokens. After that it picks up on my style so thoroughly that I can start a sentence with a single word, or change one word in a sentence, and let it continue to great effect (a rough sketch of that loop follows below). It is the most entertaining to me because it runs nearly as fast as I can make the story up. I don't see anything remotely bad about that. No one makes a career in the real world by copying someone else's writing. There are tons of fan works, but those do not make anyone real money, and they only increase the reach of the original author.
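Something like this is what I mean by sentence-at-a-time co-writing, again assuming the local completions endpoint from earlier; the stop strings and the opening line are just placeholders, and in a real session you would edit the draft between calls rather than loop blindly.

```python
# Sketch of sentence-at-a-time co-writing: send the draft so far, stop
# generation at sentence-ending punctuation, append the fragment, repeat.
# Endpoint is the same assumed local server; stop strings are a rough heuristic.
import requests

API_URL = "http://localhost:5000/v1/completions"  # assumed legacy completions route

draft = "The station's outer ring had been dark for a decade, but tonight"

for _ in range(3):  # generate three sentence fragments
    payload = {
        "prompt": draft,
        "max_tokens": 60,
        "stop": [". ", "! ", "? "],  # halt at the end of a sentence
        "temperature": 0.8,
    }
    r = requests.post(API_URL, json=payload, timeout=120)
    r.raise_for_status()
    draft += r.json()["choices"][0]["text"] + ". "  # crude: assumes a period ended it
    # In practice you would edit the draft here before the next call.

print(draft)
```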
No, I think all the writers-and-artists hype was about Altman's plan for a monopoly, which got derailed when Yann LeCun covertly leaked the Llama weights after Altman went against the founding principles of OpenAI and made GPT-3 proprietary.
People got upset about digital tools too, back when they first came on the scene, about how they would destroy the artists. Sure, it ended the era of hand-painted cel animation, but it created stuff like Pixar.
All of AI is a tool. The only thing to hate is this culture of reductionism, where people are handed free money in the form of great efficiency gains and choose to do the same things with fewer people and cash out, instead of using the opportunity to offer more, expand, and do something new. A few people could put a great toolchain together and create a franchise bigger, better planned, and richer than anything corporations have ever produced to date. The only thing to hate is these small, regressive people without vision, without motivation, far too conservatively timid to take risks and create the future. We live in an age of cowards worthy of loathing. That is the only problem I see.