What kind of shit question is that???
You should be questioning the game developers about whether they want to implement server-side solutions instead of installing rootkits on users' PCs and dictating what settings those users have to use.
Fuck off Eurogamer. No game should require any sort of kernel level access or setting change on your PC.
How can you implement server-side anti-cheat?
It really is as simple as “don’t trust the client.” Just assume that everyone is trying to cheat and go from there.
Servers should know what valid inputs from clients look like, and aggressively validate and profile those inputs for cheating. Meanwhile, the server should only send data to the client that is needed to render a display. Everything else stays server-side.
The key is to build a profile of invalid activity, like inhumanly fast mouse velocity coupled with accurate kills. There’s an art to this, but for things like FPS games, the general envelope of valid user activity should be straightforward to define. The finer points get caught during QA, and then further refined post-release. Someone might even come up with a library for this if there isn’t one already.
As a bonus, this also catches cases where people circumvent kernel-level anti-cheat entirely, for example with external cheat hardware. The behavior as seen by the server is what ultimately gets flagged.
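To make the "inhumanly fast mouse velocity coupled with accurate kills" idea concrete, here's a minimal sketch of a per-tick envelope check, assuming the server logs view-angle deltas and kill events; the data shape and both thresholds are invented for illustration and would have to be tuned from recorded play data:

    # Rough per-tick envelope check on the server. Thresholds are assumptions,
    # not values from any real game.
    MAX_TURN_DEG_PER_TICK = 40.0   # assumed ceiling for a human mouse flick in one tick
    SNAP_KILL_WINDOW_TICKS = 2     # kill landing within N ticks of such a flick

    def flag_suspicious_ticks(ticks):
        """ticks: list of dicts like {"yaw_delta_deg": 1.5, "killed_target": False}."""
        flags = 0
        last_big_flick = None
        for i, tick in enumerate(ticks):
            if abs(tick["yaw_delta_deg"]) > MAX_TURN_DEG_PER_TICK:
                last_big_flick = i
            if (tick["killed_target"] and last_big_flick is not None
                    and i - last_big_flick <= SNAP_KILL_WINDOW_TICKS):
                flags += 1   # implausible flick immediately followed by a kill
        return flags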
It really isn’t.
You are imagining cheats must be superhuman, rather than merely better than the player making use of them. It’s perfectly possible to create a cheat which doesn’t move the mouse impossibly-fast or impossibly-accurately; it just moves it as quickly and accurately as a top-1% player. The bottom-10% player doesn’t care that he can’t beat the world champion, because he can still pwn some noobs.
You are ignoring cheats which display extra information on the screen, which the server can never detect. Did the player shoot someone too soon as they came around the corner, or did they just react quickly? Or did someone on their team warn them? The server doesn't know.
But that doesn't matter. If they have to play that carefully, they will be placed into an Elo bracket that matches their apparent "skill", so they won't be any more effective than a real player at that level. Being more effective than that would force them to play more suspiciously, and better players sniff out cheaters a lot more reliably than average ones, so they wouldn't last long anyway. This is basically what happens in CS now.
Smurfs can "pwn some noobs" just the same and get called cheaters all the time. Like I said in the other comment chain, we don't need to prevent people from cheating (an endless game of cat and mouse), just make it ineffective.
Doesn’t the same logic apply to any sort of cheating that isn’t literally granting immunity or unlimited ammo?
Some things are harder, but for starters a few ideas:
Either check that the reported positions of players, their movement speed, etc. are consistent with what the game would allow you to do (don't fly, don't go faster than the movement speed, don't go through walls, …), or only accept player input, process it server-side and then send positions etc. back to the client. (You can do some local interpolation, but the server wins when there's a mismatch; a rough sketch of this follows at the end of this comment.) That should get rid of flying, noclip, teleportation, evasion of projectiles, and so on. You can also analyze the inputs for abnormal behavior, like the precision with which you aim for the (center of the) head, aiming through walls, etc.
Do all hitscan and projectile resolution server-side. Never let clients report that they're hitting other players; that is calculated on the server (also sketched below).
Only report other players' positions when they're on screen or almost on screen. If the client doesn't know where the enemies are, wallhacks are impossible, or at least much harder (note that some information may still need to be sent to the client for the sake of spatial audio etc.!).
And so on. Do not, never ever, rely on client-side data or validation. If a cheat program can alter the client, it can alter the data it sends. How do you ensure that the client is actually official and "your code" when it can tell you anything it wants to tell you? You can only make it harder for others to impersonate your client, never impossible, especially on PC, where you can execute just about any code you want.
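For the first point, here's a minimal sketch of server-authoritative movement validation. The tick rate, speed limit and the is_walkable collision hook are placeholders I've made up for illustration, not values from any real game:

    import math

    # Server-authoritative movement check: the client proposes a position, the
    # server clamps it to what the rules allow, and the server's state wins on mismatch.
    MAX_SPEED = 7.5        # m/s the game actually permits (illustrative)
    TICK_SECONDS = 1 / 64  # assumed server tick length

    def validate_move(server_pos, client_pos, is_walkable):
        """server_pos/client_pos are (x, y, z); is_walkable is a stand-in for a
        real collision query that rejects noclip/through-wall movement."""
        dx, dy, dz = (c - s for c, s in zip(client_pos, server_pos))
        distance = math.sqrt(dx * dx + dy * dy + dz * dz)
        if distance > MAX_SPEED * TICK_SECONDS or not is_walkable(server_pos, client_pos):
            return server_pos   # speedhack, teleport or noclip attempt: server wins
        return client_pos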
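And for the second point, a sketch of resolving a shot purely from server state: the client only reports that it fired and in which direction. Spherical hitboxes, the fixed range and the function name are simplifications of my own, not how any particular engine does it:

    import math

    # The server ray-tests the reported fire direction against positions it already holds.
    MAX_RANGE = 50.0
    HITBOX_RADIUS = 0.4

    def resolve_shot(shooter_pos, view_dir, target_positions):
        """Return the index of the closest target whose hit sphere the ray passes
        through, or None. All positions are (x, y, z) tuples; view_dir is unit length."""
        best, best_t = None, MAX_RANGE
        for idx, target in enumerate(target_positions):
            to_target = tuple(t - s for t, s in zip(target, shooter_pos))
            t_along = sum(a * b for a, b in zip(to_target, view_dir))
            if t_along < 0 or t_along > best_t:
                continue   # behind the shooter, or farther than the current best hit
            closest_point = tuple(s + t_along * d for s, d in zip(shooter_pos, view_dir))
            if math.dist(closest_point, target) <= HITBOX_RADIUS:
                best, best_t = idx, t_along
        return best

The details don't matter; the point is that nothing the client claims about hits ever enters the calculation.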
All of those things are already computed on the server. The purpose of anti-cheat is to not let the computer play the game for you: to not precisely click heads, step out of danger within 1 ms of seeing it, or reliably hit timings and combos. Such things can be hard to detect, and it is an ongoing battle between detectors and cheats. And ordinary people are on the losing side, as they face forced kernel rootkits, false cheat detections and grace periods during which cheaters are still allowed to play.
Which is what server-side AC solves. They don't want to do it because of the money and expertise required versus "here, have a rootkit".
VACnet has always been like this, trained on all the games played. It’s had its problems sure, but I have never had to install a rootkit to play their video games. That’s the baseline any other game should be achieving.
Doesn't look like VACnet solves anything. Not even wallhacks. I'm not defending kernel-level anti-cheats, btw. I'm arguing that dismissive comments like "devs are lazy" have no real-world basis. It is possible that comprehensive anti-cheat is an unsolvable problem. Trust-based solutions may be the right way, but in the form of peers trusting each other, as opposed to a third party running an opaque system of trust over the participants.
IMO it also has a lot to do with consoles, and how relying on the platform as a closed and secure system feeds into the thinking going on here. "Turn the PC into something we trust like a console" explains everything.
You're probably right. I sometimes can't wrap my head around people wanting to be controlled like that, wanting such intrusive and dangerous software installed just to play a video game. It's PC; we don't want a console experience. Even Valve, who make these products, are making sure you can just use it like a PC.
So, nothing that can defeat a good aimbot or limited wall-hack then, and a lot that would interfere with lag compensation.
I mean yeah, everything that can be done server-side should be, but there's a lot that can't be.
Machine learning. Oh this player did this impossible move more than once, maybe we should flag that.
Valve have been doing it for more than a decade. Now imagine what others could do; they are so caught up in "AI", but won't try to use it for anything it could actually be useful for.
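At its simplest this is just outlier detection over aggregated match stats. A sketch, assuming per-rank feature populations and a made-up 4-sigma threshold; a real VACnet-style system would use far richer features, a trained model and human review rather than anything this crude:

    import statistics

    # Flag players whose per-match features are extreme outliers relative to
    # players of the same rank. Feature names and threshold are illustrative.
    FLAG_SIGMA = 4.0

    def flag_outliers(population, candidate):
        """population: list of dicts like {"headshot_rate": 0.31, "median_reaction_ms": 240.0}
        for same-rank players; candidate: one such dict. Returns suspicious feature names."""
        flagged = []
        for feature in candidate:
            values = [p[feature] for p in population]
            mean = statistics.fmean(values)
            stdev = statistics.pstdev(values) or 1e-9   # avoid division by zero
            if abs((candidate[feature] - mean) / stdev) > FLAG_SIGMA:
                flagged.append(feature)
        return flagged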
How do you tell the difference between someone with a good aimbot (that simulates real input) and someone who’s just really good?
You can’t (server side).
Very easily, that’s what machine learning is for.
You can’t tell with client side either, so that’s a null argument. Anti-cheat is always bypassed, most good cheats don’t even run on the same device anymore, completely circumventing any kernel anti-cheat anyway.
On the server, they have all the data about where a player could be, what they could see, what they could hear, what human mouse movement looks like, etc. That can all be used to target cheaters in a way they cannot get around. Player reporting would still exist, of course, for any other edge cases.
Client side anti-cheat has more data than server-side, because that is where the player’s actual screen, mouse and keyboard are.
The cheat only uses data available on the client - obviously - so the extra data about game state on the server is irrelevant.
“ML” is also not relevant. It doesn’t make the server any more able to make up for the data it doesn’t have. It only forces cheats to try and make realistic inputs, which they already do. And it ends up meaning that you don’t understand the decisions your anti-cheat model is making, so the inevitable false positives will cause a stink because you can’t justify them.
It doesn't have to extinguish 99% of cheaters; hell, it doesn't even need to extinguish cheating altogether. It just has to make the problem manageable and invisible to players. That's something server-side can achieve. I'll take the odd game with a cheater in it if my entire PC isn't held to ransom by some random company.
If cheaters exist but can only cheat in a way that makes them look like a real player, then it doesn't really affect the game anymore and the problem isn't visible to players. You are never going to get rid of cheaters; even at LAN events they have injected software in the past. It's a deeper problem than we can solve with software.
Client-side AC has proven futile over and over again, even today with all the kernel AC. As I already said: most good cheats don’t even run on the same device anymore, completely circumventing any kernel (client side) anti-cheat anyway.
Why be allergic to trying something new? Something that isn't invasive, isn't a massive security threat, and doesn't take control of your own personal system.
It doesn’t have to extinguish 99% of cheaters, but if it affects 1% of legitimate players that’s a big problem. Good luck tuning your ML to have a less than 1% false positive rate while still doing anything.
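Some back-of-the-envelope arithmetic shows why. These numbers (2% of players cheating, a 95% detection rate) are assumptions purely for illustration:

    # Base-rate arithmetic: even a "good" false positive rate hurts when most players are legit.
    cheater_rate = 0.02        # fraction of players actually cheating (assumed)
    true_positive_rate = 0.95  # fraction of cheaters the model flags (assumed)
    false_positive_rate = 0.01 # fraction of legitimate players wrongly flagged

    flagged_cheaters = cheater_rate * true_positive_rate          # 0.019 of all players
    flagged_innocents = (1 - cheater_rate) * false_positive_rate  # 0.0098 of all players
    precision = flagged_cheaters / (flagged_cheaters + flagged_innocents)

    print(f"{precision:.0%} of flagged accounts are actually cheating")  # ~66%

So even at a 1% false positive rate, roughly one innocent account gets flagged for every two cheaters, which is why you can't auto-ban on a flag alone.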
Already exists with VACnet in the largest competitive FPS, Counter-Strike. And machine learning has grown massively in the last couple years, as you probably know with all the “AI” buzz.
I mean, Valve could explicitly say that they have some trusted hardware and software stack or something and let games know whether the environment’s been modified.
That'd require support from Valve, and it's about the only way you could have both: a locked-down mode for multiplayer games where addressing cheating is a problem (and where I think the "closed console system" model is probably more appropriate, with the "open PC model" at best kludged into kinda-sorta working like a console), and a way for the system to still run in an "open mode".
My own approach is just to not play most multiplayer competitive games on PCs. I've enjoyed them in the past, but for anything seriously reflex-oriented like FPSes, your reflexes go downhill with age anyway. And they come with all kinds of issues, even on a locked-down system that successfully avoids cheating. People griefing. You can't generally pause the game to use the toilet, deal with a screaming kid, or answer the door. The other players, unlike game AIs, aren't necessarily going to be optimized to play a "fun" game for me. With single-player games, you don't need an Internet connection, and being in a remote area isn't a limiting factor.
I think that the future is gonna be shifting towards better game AIs. Hard technical problems to solve there, but it’s a ratchet — we only get better over time.
The burden should be on the developers and a server-side solution. No PC should be invaded with software to stop cheating. It's cat and mouse anyway with client-side detection; by chasing it so hard they are just incentivizing the creation of less and less detectable cheats.
The whole "it's an untampered system" thing doesn't work. It's like Secure Boot now randomly being required by games. No user should have to enable or disable anything like that just to run a game. It's their device; they should have the freedom to do what they want with it and still run an application.
Personally, I think the invasion of bots in games is ruining them. No matter how old I get, or how bad I get at them, I still want to play against real players. I wouldn't mind a mode with just AI for people who want that, but bots should never be mixed in with real players.
There are some fundamental limitations on what you can do with purely server-side solutions. If you’re playing online card games, sure, you can do viable pure server-side stuff to resist most cheating. That’ll get everything short of using, say, a calculator to compute probabilities or count cards or something.
But with, say, FPSes, that's not really practical. To mitigate things like latency, you need to have some information on the client that the player shouldn't be privy to. For example, if another player runs around the edge of a wall and becomes visible, your client needs to know that they're behind the wall and rounding the corner, so it can show the opposing character becoming visible without delay. And that means trusting client-side code. And that entails trusted hardware to do reliably, and that can't be done by the game developer; it's gotta have support from Valve if you want that.
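To put a rough number on how early the client has to be told: the server has to start replicating an enemy a little before they actually clear the corner, and that margin grows with round-trip time. A sketch with values and a function name I've made up for illustration, not taken from any real engine:

    # How much earlier than "actually visible" the server has to start sending an
    # enemy's position so a laggy client doesn't see them pop into existence.
    MAX_ENEMY_SPEED = 7.5   # m/s, illustrative
    INTERP_DELAY = 0.05     # client-side interpolation buffer, seconds (assumed)

    def reveal_margin_metres(rtt_seconds):
        """Extra distance beyond true visibility at which replication should begin."""
        return MAX_ENEMY_SPEED * (rtt_seconds + INTERP_DELAY)

    print(reveal_margin_metres(0.060))   # 60 ms round trip -> ~0.83 m of early reveal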
It's practical; VACnet has existed for over a decade. It might not be perfect, but it's a start, and any company serious about anti-cheat could take that premise further.
The downside is that cheaters get to play at least one game before they are detected. Client-side stuff is better for initial prevention, but even that is becoming trivial to bypass, as most good cheats don't even run on the same computer as the game anymore, circumventing all AC software anyway. If your game costs money to play, that's already one of the biggest hurdles, so prevention isn't worth chasing at the expense of users' privacy and security.
Any downsides of server-side are nothing in comparison to the downsides of client-side anti-cheat.