How can you implement server-side anti-cheat?
It really is as simple as “don’t trust the client.” Just assume that everyone is trying to cheat and go from there.
Servers should know what valid inputs from clients look like, and aggressively validate and profile those inputs for cheating. Meanwhile, the server should only send data to the client that is needed to render a display. Everything else stays server-side.
The key is to build a profile of invalid activity, like inhumanly fast mouse velocity coupled with accurate kills. There’s an art to this, but for things like FPS games, the general envelope of valid user activity should be straightforward to define. The finer points get caught during QA, and then further refined post-release. Someone might even come up with a library for this if there isn’t one already.
As a bonus, this also catches people who cheat via kernel circumvention, like external hardware: the behavior as seen by the server is what ultimately gets flagged.
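A minimal sketch of the kind of server-side input profiling described above, in Python. The threshold numbers and names here are invented for illustration; a real game would tune them against recorded matches rather than trust fixed constants.

```python
# Hypothetical server-side profiler: flag accounts whose landed hits were
# mostly fired mid-flick at a mouse speed above a human ceiling. Both
# constants are assumptions for the sketch, not measured values.

MAX_HUMAN_DEG_PER_SEC = 5000.0  # assumed ceiling on legitimate flick speed
FLAG_THRESHOLD = 0.5            # assumed fraction of "impossible" hits

def suspicion_score(shots):
    """shots: list of (mouse_deg_per_sec_at_fire, hit_landed) pairs.
    Returns the fraction of landed hits that were fired while the mouse
    was moving faster than the assumed human ceiling."""
    hit_speeds = [speed for speed, landed in shots if landed]
    if not hit_speeds:
        return 0.0
    inhuman = sum(1 for speed in hit_speeds if speed > MAX_HUMAN_DEG_PER_SEC)
    return inhuman / len(hit_speeds)

def should_flag(shots):
    return suspicion_score(shots) > FLAG_THRESHOLD
```

In practice this would be one feature among many, fed by the server's own record of inputs rather than anything the client self-reports.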
It really isn’t.
You are imagining cheats must be superhuman, rather than merely better than the player making use of them. It’s perfectly possible to create a cheat which doesn’t move the mouse impossibly-fast or impossibly-accurately; it just moves it as quickly and accurately as a top-1% player. The bottom-10% player doesn’t care that he can’t beat the world champion, because he can still pwn some noobs.
You are ignoring cheats which display extra information on the screen, which the server can never detect. Did the player shoot someone too soon as they came around the corner, or did they just react quickly? Or did someone on their team warn them? The server doesn't know.
But that doesn’t matter. If they have to play that carefully, they’ll be placed at an Elo rating that matches their “skill”, so they won’t be any more effective than a real player at that level. They’ll also be forced to play more cautiously, and better players are far quicker to sniff out cheaters, so they wouldn’t last long anyway. This is basically what happens in CS now.
Smurfs can “pwn some noobs” just the same and get called cheaters all the time. Like I said in the other comment chain, we don’t need to prevent people from cheating (an endless game of cat and mouse), just make it ineffective.
Doesn’t the same logic apply to any sort of cheating that isn’t literally granting immunity or unlimited ammo?
Some things are harder, but for starters a few ideas:
Either check that the reported positions of players, their movement speed, etc. are consistent with what the game would allow (no flying, no going faster than max speed, no clipping through walls, …), or only accept raw player input, process it server-side, and then send positions back to the client. (You can do some local interpolation, but the server wins when there’s a mismatch.) That should get rid of flying, noclip, teleportation, evasion of projectiles, and so on. You can also analyze the inputs for abnormal behavior, like the precision with which players aim for the (center of) the head, aiming through walls, etc.
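The "server wins on a mismatch" rule above can be sketched in a few lines. The speed limit, tick rate, and tolerance below are assumptions for illustration; a real server would also sweep the path against world geometry to catch noclip.

```python
import math

# Minimal server-authoritative movement check: accept a client-reported
# position only if it is reachable from the last accepted position within
# one tick at the game's max speed; otherwise snap back. Constants are
# made up for the sketch.

MAX_SPEED = 7.5     # assumed max units per second
TICK = 1.0 / 64.0   # assumed 64-tick server

def validate_move(prev, reported):
    """Return the position the server accepts: the reported (x, y) if it
    is legally reachable in one tick, else the previous position."""
    dist = math.hypot(reported[0] - prev[0], reported[1] - prev[1])
    if dist <= MAX_SPEED * TICK * 1.05:  # small tolerance for float error
        return reported
    return prev  # teleport or speedhack: the server wins
```

Flying would be caught the same way on the vertical axis, and a wall check is one extra raycast between `prev` and `reported`.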
Do all hitscan and projectile calculations server-side. Never let clients report that they’re hitting other players; that is calculated on the server.
Only report other players’ positions when they’re on screen or almost on screen. If the client doesn’t know where the enemies are, wallhacks become impossible, or at least much harder (note that some information may still need to reach the client for the sake of spatial audio etc.!)
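This "only send what the client can see" idea is usually called interest management, and the core of it is a visibility filter on each snapshot. The cone width, margin, and names below are assumptions; a production server would also raycast against walls before including an enemy.

```python
import math

# Sketch of snapshot filtering: include an enemy in a client's update only
# when they fall inside a slightly widened view cone, so they don't pop in
# at the screen edge. The 100-degree FOV and 15-degree margin are invented.

FOV_DEG = 100.0        # assumed horizontal field of view
FOV_MARGIN_DEG = 15.0  # send slightly early to hide network latency

def visible_enemies(viewer_pos, viewer_yaw_deg, enemies):
    """enemies: list of (x, y) positions. Returns those inside the
    widened cone around the viewer's facing direction."""
    half = math.radians(FOV_DEG / 2 + FOV_MARGIN_DEG)
    yaw = math.radians(viewer_yaw_deg)
    out = []
    for ex, ey in enemies:
        angle = math.atan2(ey - viewer_pos[1], ex - viewer_pos[0])
        # signed angular difference, wrapped to (-pi, pi]
        delta = (angle - yaw + math.pi) % (2 * math.pi) - math.pi
        if abs(delta) <= half:
            out.append((ex, ey))
    return out
```

A client that never receives a position for an enemy behind a wall has nothing for a wallhack to draw.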
And so on. Never, ever rely on client-side data or validation. If a cheat program can alter the client, it can alter the data it sends. How do you ensure the client is actually official and “your code” when it can tell you anything it wants to? You can only make it harder for others to impersonate your client, never impossible, especially on PC, where you can execute just about any code you want.
All of those things are already computed on the server. The purpose of anti-cheat is to stop a computer from playing the game for you: precisely clicking heads, stepping out of danger within 1 ms of seeing it, or reliably hitting timings and combos. Such things can be hard to detect, and it is an ongoing battle between detectors and cheats. And ordinary people are on the losing side, as they face forced kernel rootkits, false cheat detections, and grace periods during which cheaters are still allowed to play.
Which is exactly what server-side AC solves. They don’t want to do it because of the money and expertise required versus “here, have a rootkit.”
VACnet has always been like this, trained on all the games played. It’s had its problems sure, but I have never had to install a rootkit to play their video games. That’s the baseline any other game should be achieving.
Doesn’t look like VACnet solves anything, not even wallhacks. I’m not defending kernel-level anti-cheats, btw. I’m arguing that dismissive comments like “devs are lazy” have no real-world basis. It is possible that comprehensive anti-cheat is an unsolvable problem. Trust-based solutions may be the right way, but in the form of peers trusting each other, as opposed to a third party running an opaque trust system over the participants.
IMO it also has a lot to do with consoles, and how relying on the platform as a closed, secure system feeds into the thinking going on here. “Turn the PC into something we trust like a console” explains everything.
You’re probably right. I can’t wrap my head around people wanting to be controlled like that, wanting such intrusive and dangerous software installed just to play a video game. It’s PC; we don’t want a console experience. Even Valve, who makes these products, makes sure you can just use it like a PC.
So, nothing that can defeat a good aimbot or limited wall-hack then, and a lot that would interfere with lag compensation.
I mean yeah, all that can be done server side should be, but there’s a lot that can’t be.
Machine learning. Oh this player did this impossible move more than once, maybe we should flag that.
Valve have been doing it for more than a decade. Now imagine what others could do. They’re so caught up on “AI”, but won’t try to use it for anything it could actually be useful for.
How do you tell the difference between someone with a good aimbot (that simulates real input) and someone who’s just really good?
You can’t (server side).
Very easily, that’s what machine learning is for.
You can’t tell with client-side either, so that’s a moot point. Anti-cheat is always bypassed; most good cheats don’t even run on the same device anymore, completely circumventing any kernel anti-cheat anyway.
On the server, they have all the data of where a player could be, what they could see, what they could hear, what human mouse movement looks like etc. that can all be used to target cheaters in a way they cannot get around. Player reporting would still exist of course for any other edge cases.
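A toy illustration of the statistical angle this comment describes, using reaction time as the single feature. The population numbers and the three-sigma threshold are invented for the sketch; systems like VACnet learn over many such features rather than one hand-set rule.

```python
import statistics

# Compare a player's reaction times (ms between an enemy becoming visible
# and the shot, as measured by the server) against an assumed population
# distribution. All constants here are illustrative, not real data.

POPULATION_MEAN_MS = 250.0   # assumed average human reaction time
POPULATION_STDEV_MS = 60.0   # assumed population spread

def reaction_zscore(reaction_times_ms):
    """How many population standard deviations *faster* than the mean this
    player's average reaction is (positive = faster than normal)."""
    avg = statistics.mean(reaction_times_ms)
    return (POPULATION_MEAN_MS - avg) / POPULATION_STDEV_MS

def looks_inhuman(reaction_times_ms, threshold=3.0):
    """Flag players who are consistently implausibly fast."""
    return reaction_zscore(reaction_times_ms) > threshold
```

The point of aggregating over many samples is that one lucky pre-fire is normal, while a whole match of 50 ms reactions is not.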
Client side anti-cheat has more data than server-side, because that is where the player’s actual screen, mouse and keyboard are.
The cheat only uses data available on the client - obviously - so the extra data about game state on the server is irrelevant.
“ML” is also not relevant. It doesn’t make the server any more able to make up for the data it doesn’t have. It only forces cheats to try and make realistic inputs, which they already do. And it ends up meaning that you don’t understand the decisions your anti-cheat model is making, so the inevitable false positives will cause a stink because you can’t justify them.
It doesn’t have to extinguish 99% of cheaters; hell, it doesn’t even need to extinguish cheating altogether. It just has to make the problem manageable and invisible to players. That’s something server-side can achieve. I’ll take the odd game with a cheater in it if my entire PC isn’t held ransom by some random company.
If cheaters exist but can only cheat in a way that makes them look like a real player, then it doesn’t really affect the game anymore and the problem isn’t visible to players. You are never going to get rid of cheaters; even at LAN events they have injected software in the past. It’s a deeper problem than we can solve with software.
Client-side AC has proven futile over and over again, even today with all the kernel AC. As I already said: most good cheats don’t even run on the same device anymore, completely circumventing any kernel (client side) anti-cheat anyway.
Why be allergic to trying something new? Something that isn’t invasive, a massive security threat or controlling of your own personal system.
It doesn’t have to extinguish 99% of cheaters, but if it affects 1% of legitimate players that’s a big problem. Good luck tuning your ML to have a less than 1% false positive rate while still doing anything.
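The false-positive concern can be made concrete with Bayes' rule: when cheaters are rare, even a small false-positive rate means a large share of flagged accounts are innocent. All the numbers below are illustrative assumptions, not measurements from any real game.

```python
# Worked example: precision of a cheat flag given a base rate of cheaters,
# a detection (true positive) rate, and a false positive rate.

def flag_precision(cheater_rate, true_positive_rate, false_positive_rate):
    """P(actually cheating | flagged), by Bayes' rule."""
    flagged_cheaters = cheater_rate * true_positive_rate
    flagged_innocents = (1.0 - cheater_rate) * false_positive_rate
    return flagged_cheaters / (flagged_cheaters + flagged_innocents)

# Assume 2% of players cheat, the model catches 80% of them, and it
# wrongly flags 1% of legitimate players: only ~62% of flags are cheaters,
# so more than a third of any automated bans would hit innocent players.
precision = flag_precision(0.02, 0.80, 0.01)
```

Pushing the false-positive rate down an order of magnitude (to 0.1%) lifts precision above 90% in this toy model, which is why that tuning, not raw detection power, is the hard part.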
Already exists with VACnet in the largest competitive FPS, Counter-Strike. And machine learning has grown massively in the last couple years, as you probably know with all the “AI” buzz.
And it’s still used together with client side cheat detection.