It’s so weird, I read this in a bunch of job listings nowadays. How the fuck is it a requirement?!?! You should be fluent in C++, but also please outsource your brain and encourage the team to do so as well. People are weird, man.
The future looks to involve a mixture of AI and traditional development. There are things I do with AI that I could never touch the speed of with traditional development. But the vast majority of dev work is just traditional methods with maybe an AI rubber duck and then review before opening the PR to catch the dumb mistakes we all make sometimes. There is a massive difference between a one-off maintenance script or functional skeleton and enterprise code that has been fucked up for 15 years and the AI is never going to understand why you can’t just do the normal best practice thing.
A good developer will be familiar enough with AI to know the difference, but it’ll be a tool they use a couple times a month (highly dependent on the job) in big ways and maybe daily in insignificant ways if they choose.
Companies want a staff prepared for that state, not dragging their heels because they refuse to learn. I’ve been at this for thirty years and I’ve had to adapt to a number of changes I didn’t like. But like a lot of job skills we’ve had to develop over the years, such as devops, it’ll be something that you engage for specific purposes, not the whole job.
Even when the AI bubble does burst, AI won’t go away entirely. OpenAI isn’t the only provider and local AI is continuing to close the gap in terms of capability and hardware. In that environment, it may become even more important to know when the tool is a good fit and when it isn’t.
I am aware of that. I occasionally use AI for coding myself if I see fit.
Just the fact that active use of AI tools is listed under the job requirements, and that I have seen that in more than a few job listings, rubs me the wrong way; it would definitely be my first question in the interview, to clarify what the extent of that is. I just don’t wanna deal with pipelines that break because they partially rely on AI, or a code base nobody knows their way around because nobody actually wrote it themselves.
Frankly that’s why I think it’s important for AI centrists to occupy these roles rather than those who are all in. I’m excited about AI and happy to apply it where it makes sense and also very aware of its limitations. And in the part of my role that is encouraging AI adoption, critical thinking is one of the things I try my hardest to communicate.
My leadership is targeting 40-60% efficiency gains. I’m targeting 5-10% with an upward trajectory as we identify the kinds of tasks it is specifically good at within this environment. I expressed mild skepticism about that target to my direct manager during my interview (and he agreed) but also a willingness to do my best and a proven track record of using AI successfully.
I would suggest someone like yourself is perhaps well-suited to that particular duty — though whether the hiring manager sees it that way is another issue.
It means that the parent company has major investors in the LLM space.
CDPR is a major investor in LLMs?
GOG isn’t under the CDPR umbrella any more.
It’s a publicly traded company, isn’t it? Most likely there is some investor in the CEO’s ear asking him to push this down on all staff… so they come up with bright ideas like putting silly “requirements” like this in their job descriptions as well. And in any case, AI investors are so desperate these days, chances are that they’re doing everything they can to create general LLM FOMO in a similarly desperate push to increase adoption.
That’s what I’m guessing at least. Even to me it sounds a little like a conspiracy theory, but then again these people have a lot of influence.
GOG is now owned by Michał Kiciński, one of the original founders. He can do whatever he wants.
And no, it’s not to use his staff in a secret evil plot to gain third hand investment returns by investing in the current hype cycle and then hiring staff to use that investment…
Yeah, what does GOG know?
The real source of wisdom is social media users who approach a topic with a bad-faith, outrage-farming framing. I mean, just look at the upvotes and you can easily tell how right you are; it’s basically science.
It’s Lemmy. The average user is more technical than the average investor.
Also we all know by “AI tools” they just mean chatbots, and they are a known scam by now.
I have upvotes disabled, so I don’t know how many upvotes it got. I just pointed out that it’s weird that it’s under the requirements, which sounds like they would require you to use training wheels. That’s normally not something you say there. I do not understand what your problem is.
I’m sorry the only way you know how to write code is with an LLM holding your hand, but I believe if you really devote yourself to it you could learn to be a real programmer. Good luck!
Why did you attack the commenter personally? Are you not able to defend the idea without stooping so low?
Clearly you didn’t read the conversation, because they were less insulting and dumb than the person they replied to. Why are you so interested in defending trolls?
The irony here is rich.
Yea it is, mr troll.
And we open the book of troll arguments to chapter 1: Ad hominem
Keep going, it really makes you look like the rational one.
Maybe try a red herring next, or a straw man; those are always popular.
Bruh, your only “rebuttal” was a straw man and an appeal to authority. Make a better argument before you go accusing people of being trolls.
Oh ok.
‘The job listing does not say anything about outsourcing your brain.’
But everyone knows that, because it is obvious on its face.
The subtext, as always, isn’t about commenting on the subject of the article or even making any kind of cogent point that could actually be rebutted. Much like the top comment, it is just running ‘ai bad’ through an LLM so that it fits the post.
Would you honestly say that the comment that I responded to was made in good faith?
They know some things, I’ll give you that. But pattern recognition tells me that, for this example, it’s more likely they’re wrong.
Maybe. We can’t say, there is zero information there that even hints at how or how much they use AI.
It isn’t like they’re saying something specific like ‘Must be able to use Cursor, Mercurial and be able to direct multi-agent workflows’.
That bullet point reads like it is there more to include a hot keyword on job-search sites than as an actual specification that describes the job.
It’s kind of like including the word in your comment so that you grab all of the bot upvotes and can farm outrage in a way that is objectively off-topic and unrelated to the actual post, which is about GOG moving to support Linux, not about AI.
It’d be one thing if there was something specific about the job related to AI, or if anyone involved in these comments had actually said anything of substance other than, literally, ‘ew’.
So, to my pattern recognition, this looks like every other ‘ai bad’ thread shoehorned into posts and full of toxic attacks while being light on actual discussion of the topic in the OP.