For frame-perfect cuts you need to re-encode. You can use lossless H.264 encoding for intermediate cuts before the final one so that you don't unnecessarily degrade quality.
I wonder if there is a solution which would just copy the pieces in between the starting and ending points while only re-encoding the first and last piece as required.
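That approach is sometimes called a "smart cut": re-encode only the pieces that don't start on a keyframe and stream-copy the keyframe-aligned middle. A minimal sketch of just the planning logic, assuming you already have the keyframe timestamps (e.g. from ffprobe); the function name and shape are my own:

```python
import bisect

def smart_cut_plan(start, end, keyframes):
    """Plan a cut from `start` to `end` (seconds): re-encode only the
    stretch before the first keyframe and the stretch after the last one,
    and stream-copy the keyframe-aligned middle untouched."""
    # first keyframe at or after the cut start
    i = bisect.bisect_left(keyframes, start)
    first_kf = keyframes[i] if i < len(keyframes) else end
    # last keyframe at or before the cut end (but not before first_kf)
    j = bisect.bisect_right(keyframes, end) - 1
    last_kf = keyframes[j] if j >= 0 and keyframes[j] >= first_kf else first_kf

    plan = []
    if start < first_kf:
        plan.append(("encode", start, min(first_kf, end)))  # head piece
    if first_kf < last_kf:
        plan.append(("copy", first_kf, last_kf))            # untouched middle
    if last_kf < end:
        plan.append(("encode", last_kf, end))               # tail piece
    return plan
```

Each "encode" piece would then be cut with re-encoding and each "copy" piece with ffmpeg's `-c copy`, after which the pieces are concatenated; this is roughly what tools with experimental smart-cut modes attempt.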
At the time of the threats the odds were likely very skewed, as in over 99% to one side.
For events where a single article could be a fulcrum, it seems like a feasible strategy to wager on the 1% and then try to manipulate the writer into changing the outcome. The chances of success will be low, but likely higher than 1% therefore the expected value[1] may be high.
Most people on Polymarket are gamblers and have no idea what they are doing, but the so-called sharps know how to play the game, purely on expected value. For example, if the market shows an 80% chance on an outcome but the sharp concludes it's actually 90%, they buy. If the market then rises above 90% and their conclusions don't change, they sell their shares, and if it continues to rise they may even buy the opposite position. Evaluating the real odds of an event can of course be very hard; insider information helps greatly here, and it need not even pertain to the event itself, just to something that improves your model.
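The expected-value arithmetic behind that is simple. A toy helper (names and numbers mine) for a binary market share that pays out $1:

```python
def ev_per_share(market_price, true_prob, payout=1.0):
    """Expected profit per share: a Yes share pays `payout` if the event
    happens and 0 otherwise, and costs `market_price` up front."""
    return true_prob * payout - market_price

# buying at 80 cents when you believe the real odds are 90%
edge = ev_per_share(0.80, 0.90)  # roughly +$0.10 expected per share
```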
If a casino sets up a normal roulette wheel but pays out red at 1.5x and black at 2.5x, betting 5% of your bankroll on black over and over is "gambling" but it's not "gambling", if you get what I'm saying.
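To make that concrete, assume a single-zero European wheel, so 18 of 37 pockets are black, and note that a 2.5x payout is 1.5-to-1 net odds. The Kelly formula below is the standard one; the scenario numbers come from the comment above:

```python
def kelly_fraction(p, net_odds):
    """Kelly-optimal fraction of bankroll for a bet that wins with
    probability p and pays net_odds-to-1 on a win."""
    return p - (1 - p) / net_odds

p_black = 18 / 37                     # single-zero wheel assumption
edge = p_black * 2.5 - 1              # expected return per $1 staked, about +21.6%
stake = kelly_fraction(p_black, 1.5)  # about 14% of bankroll
```

So the bet has a large positive edge, and betting 5% per spin is comfortably below the Kelly-optimal stake: it's +EV play with bounded risk, not gambling in the losing sense.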
There is a solution to cheating, but it's not clear how hard it would be to implement.
Cheaters are by definition anomalies: they operate with information regular players do not have, and when they use aimbots they display skills other players don't have.
If you log every single action a player takes server-side and apply machine learning methods it should be possible to identify these anomalies. Anomaly detection is a subfield of machine learning.
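As a toy illustration of the flagging step (a real system would use far richer features and an actual model; the stat name and threshold here are made up), flag players whose per-match stat sits far outside the population:

```python
import statistics

def flag_anomalies(stats, z_threshold=3.0):
    """Flag players whose stat is more than z_threshold standard
    deviations from the population mean. stats: {player_id: value}."""
    values = list(stats.values())
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return {p for p, v in stats.items() if abs(v - mu) > z_threshold * sigma}
```

A hypothetical usage: feed it headshot ratios for a lobby and only the wildly out-of-distribution player comes back flagged.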
It will ultimately prove to be the solution, because only the cleverest cheaters will be able to blend in while still looking like great players, and only the most competently made aimbots will be able to pass for genuine skill. In either case the cheating isn't really a problem, because the victims themselves will never be sure.
There is also another method the server can employ: players can be actively probed with game-world entities designed for them to react to only if they have cheats. Every such event adds probability weight onto the suspected cheater. The game world isn't delivered to the client in full, so if this is done well the cheats will not be able to filter out the probes. For example: as a potential cheater enters entity broadcast range of a fake entity camping in an invisible corner that only appears to them, their reaction to it is evaluated (mouse movements, strategy shifts, etc.). Then, when it disappears, another evaluation can take place (cheats would likely offer mitigations for this part). Over time cheaters will stand out from the noise; most will likely out themselves very quickly.
So are very good players, very bad players, players with weird hardware issues, players who just got one in a million lucky…
When you have enough randomly distributed variables, by the law of large numbers some of them will be anomalous by pure chance. You can't just look at a statistical anomaly and declare it must mean something without investigating further.
In science, looking at a huge number of variables and trying to find one or two statistically significant ones so you can publish a paper is called p-hacking. This is why there are so many dubious and often even contradictory "health condition linked to X" articles.
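The multiple-comparisons effect is easy to simulate: test enough purely random "variables" and some will look significant by chance alone. Everything here is made up for illustration; each variable is just fair coin flips:

```python
import random

def false_positives(n_variables=100, n_samples=100, seed=1):
    """Count how many purely random 'variables' land in a roughly 5%
    two-sided rejection region for Binomial(n_samples, 0.5)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_variables):
        heads = sum(rng.random() < 0.5 for _ in range(n_samples))
        if heads <= 40 or heads >= 60:  # ~5% tail region for n_samples=100
            hits += 1
    return hits
```

With 100 null variables you should expect roughly five "discoveries" with nothing at all behind them.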
But a good way of solving this in community-managed multiplayer games is simple: if a player is so good that they're destroying the fun for every other player, just kick them out.
Unfair if they weren’t cheating? Sure. But they can go play against better players elsewhere. Dominating 63 other players and ruining their day isn’t a right. You don’t need to prove beyond reasonable doubt they’re cheating if you treat this as community moderation.
Why do you feel someone has a right to play anywhere?
If a community manages a server, it’s basically private property. And community managed servers are always superior to official publisher-managed servers. Anticheat - or just crowd management - is done hands on in the server rather than automated, async, centralized.
Buying the game might mean you have a ”right” to play it, but not on my server you don’t.
It's like if Nikola Jokic showed up to your local court every day and consistently beat you, day after day. You'd eventually give up because it's not fun anymore.
People who engage in competitive sports all agree to it. Most people want to play for fun. They have a natural right to do so.
”Your game”? It’s a publisher making a game. If I’m kicking someone off my server I’m not asking EA/Ubisoft etc.
I’m talking about normal, old-fashioned server administration, i.e. people hosting/renting their game infra and doing the administration themselves: making rules, enforcing them by kicking and banning, and charging fees either for VIP status (no queuing, etc.) or even to play at all.
> So are very good players, very bad players, players with weird hardware issues, players who just got one in a million lucky…
They will all cluster in very different latent spaces.
You don't automatically ban anomalies, you classify them. Once you have the data and a set of known cheaters you ask the model who else looks like the known cheaters.
Online games are in a position to collect a lot of data and to also actively probe players for more specific data such as their reactions to stimuli only cheaters should see.
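A minimal sketch of "ask the model who else looks like the known cheaters", using a nearest-centroid rule over made-up two-dimensional features (say headshot ratio and a reaction-time score). A real system would have many more dimensions and a proper classifier; all names here are mine:

```python
def centroid(points):
    """Mean point of a list of equal-length feature vectors."""
    return [sum(dim) / len(points) for dim in zip(*points)]

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def looks_like_cheater(player_vec, cheater_vecs, normal_vecs):
    """Nearest-centroid stand-in for 'who else resembles known cheaters'."""
    return dist2(player_vec, centroid(cheater_vecs)) < dist2(player_vec, centroid(normal_vecs))
```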
I've been advocating for a statistical honeypot model for a while now. It's a more robust anti-cheat measure than even streaming/LAN gaming provides. If someone figures out a way to regularly obtain access to information they shouldn't have, these techniques will eventually find them. The exact mechanism of cheating doesn't matter; this even catches the "undetectable" screen-scraping mouse-robot AI wizard stuff. Any amount of signal, integrated over enough time, can provide damning evidence.
> With that goal in mind, we released a patch as soon as we understood the method these cheats were using. This patch created a honeypot: a section of data inside the game client that would never be read during normal gameplay, but that could be read by these exploits. Each of the accounts banned today read from this "secret" area in the client, giving us extremely high confidence that every ban was well-deserved.
This is said very often, but doesn't seem to be working out in practice.
Valve has spent a lot of time and money on machine-learning models that analyze demo files (all inputs), yet Counter-Strike is still infested with cheaters. We can speculate that it's just a faulty implementation, but clearly the solution isn't just "throw an ML model at the problem".
Honeypots are used pretty often, sure. They're not enough, though useful.
Behavioral analysis is much harder in practice than it sounds, because most closet cheaters don't give off enough signal to stand out, and the clusters move fast: the way people play the game is always changing. It isn't the metric-selection problem it might appear to be to an engineer; you need to watch the community dynamics, and currently only humans can do that.
In CS2, a huge portion of cheaters can be identified by the single stat 'time-to-damage'. Cheaters will often be 100ms faster to react than even the fastest pros. Not all cheaters use their advantage this way, though; some simply always make perfect choices because they have more information than their opponents.
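A crude version of that single-stat check (the 120ms floor and the 30% fraction are illustrative guesses, not CS2 ground truth):

```python
HUMAN_FLOOR_MS = 120  # assumption: even top pros rarely react faster than this

def suspicious_ttd(samples_ms, floor=HUMAN_FLOOR_MS, rate=0.3):
    """Flag a player if a large fraction of their time-to-damage samples
    beat the assumed human reaction floor."""
    fast = sum(1 for t in samples_ms if t < floor)
    return fast / len(samples_ms) > rate
```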
I disagree with the premise that it doesn't matter as long as users can't tell. Say you're running a Counter-Strike tournament with a 10k purse... integrity matters there. And a smart cheater runs 'stealth' in that situation: think a basic radar or a verrrrrry light aimbot, etc.
The problem is that traditional cheats (aimbot, wallhack, etc.) give users such a huge edge that they are multiple standard deviations from the norm on key metrics. I agree with you on that and there are anticheats that look for that exact thing.
I've also seen anticheats where flagged users have a session reviewed, e.g. you review a session with "cheats enabled" and try to determine whether you think the user is cheating. This works decently well in a game like CS, where over a larger sample size you can be reasonably confident whether a user is playing corners correctly, etc.
The issue with probing via game-world entities is that at some point you have to resolve it in the client, e.g. "this is a fake player; store it in memory next to the other player entities but don't render it on screen." This exact thing has happened in multiple games and worked as a temporary solution. At the end of the day it's a cat-and-mouse game: cheat developers detect this and use the same resolution logic as the game client does. Memory addresses change, users are blocked for a few hours or days, but the cheat developer patches and boom, off to the races.
These days game hacks are a huge business. Cheats are often offered as a subscription and can range anywhere from $10 to hundreds of dollars a month. Some of the larger hack manufacturers are full-blown companies with tens of thousands of customers.
I think you're realistically left with two options: require in-person LAN matches on tamper-resistant hardware provided by the tournament, or run on a system so locked down that cheats for it don't exist.
Both have their own problems... In-person play eliminates most of that risk but is still possible to exploit. Running on a heavily locked-down system (say, the most recent PlayStation) probably works, until someone burns a 0-day they've been hoarding specifically for that advantage. An unlikely scenario, but with the money involved in some esports, anything is possible.
> End of the day, it ends up being a cat and mouse game. Cheat developers detect this and use the same resolution logic as the game client does.
That wasn't done well. Only the server should be able to tell what the honeypot is. The point is to spawn an entity for one or more clients that is 100% real to them but doesn't matter, because without cheats it has no impact on them whatsoever. When the world evolves such that an impact becomes likely, you de-spawn it.
This only works if the server makes an effort to send incomplete entity information (I believe this is common); that way the cheats cannot filter out the honeypots. The cheats would need to become very sophisticated to anticipate the logic the server uses in its honeypots, and while the honeypot method can theoretically approach parity with real behavior, the cheats' countermeasures cannot: their discrimination methods will produce false positives that degrade cheater performance and may even leak signal of their own.
For example you can use a player entity that the client hasn't seen yet (or one that exited entity broadcast/logic range for some time) as a fake player that's camping an invisible corner, then as the player approaches it you de-spawn it. A regular player will never even know it was there.
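The server-side rule can be as simple as a distance check: keep the fake camper alive only while it cannot possibly matter to the probed player, and de-spawn it before contact. A toy sketch (function name and radius are made up):

```python
import math

def honeypot_alive(player_pos, honeypot_pos, despawn_radius=25.0):
    """True while the fake entity should stay spawned: the probed player
    is still too far away for it to affect real gameplay."""
    dx = player_pos[0] - honeypot_pos[0]
    dy = player_pos[1] - honeypot_pos[1]
    return math.hypot(dx, dy) > despawn_radius
```

While it's alive, the server records the player's aim and pathing relative to the invisible corner; reactions correlated with the honeypot's position are the signal.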
Another vector to push is netcode optimization for anti-cheat purposes: send as little information as possible to the client, and decouple the audio system from the entity information. That would let the honeypot methods provide alternative interpretations for the audio, such as firefights between ghosts that only cheaters will react to. This will of course be very complex to implement.
The greatest complexity in the honeypot methods will no doubt be how to ensure no impact on regular players.
None of those work when dealing with external services, I wouldn’t even trust them as a solution for dealing with access to a database. It seems like the pushback against MCPs is based on their application to problems like filesystem access, but I’d say there are plenty of cases in which they are useful or can be a tool used to solve a problem.
At the moment LLMs tend to work well when you constrain them, and you can craft the constraints with the help of the same LLM in a different session. Then you can verify that the output code obeys the constraints in yet another session, and make it adjust the code until it does. If one of the constraints was to yield highly functional code, you can then start refining function by function as well. There is a pattern here.
If you are a good engineer you can dictate data structures to it too. It then performs even better.
I believe the writing is on the wall at this point: it does a very adequate job if I invest enough time in writing and refining the specs and give it the data structures (and/or database schemas) I want it to use. And there is no comparison between the number of hours I spend wrangling it and the number of hours it would take me to write the code myself.
This is the worst it's going to be and it's already quite good, it wasn't that good a mere three months ago.
The main pitfall is trying to get an LLM to read your mind; in doing so you are putting too much load on whatever passes for its intelligence. That isn't how you get good results, nor a good measure of its capabilities.
Windows in 1998: this is the worst it's ever going to be.
Uber in 2010: this is the worst it's ever going to be.
There's some triumphalism here. What happens when training data becomes scarcer because open source as a paradigm was killed? What happens when investor cash flows elsewhere and training and inference need to become profitable on their own?
The difference between gVisor and a microVM isn't very large.
gVisor can even use KVM.
What gVisor doesn't have is the big Linux kernel, it attempts to roll a subset of it on its own in Go. And while doing so it allows for more convenient (from the host side) resource management.
Imagine taking the Linux kernel and starting to modify it to have a guest VM mode (memory management merged with the host, sockets passed through, file systems coupled closer etc). As you progress along that axis you will eventually end up as a gVisor clone.
Ultimately, what all these approaches attempt to do is narrow the interface between the jailed process and the host kernel, because the default interface is vast. Makes you wonder if we will ever have a kernel with a narrow interface by default, a RISC-like syscall movement for kernels.
Back in the day if you could find a deal on defective RAM (that wasn't going to degrade further?), Linux could be configured to avoid the defects. Unfortunately this isn't allowed with secure/UEFI boot.
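For reference, the stock mechanism is the `memmap=` kernel parameter, which reserves a physical address range so the allocator never hands it out. The address and size below are invented for illustration, and the `$` escaping varies by bootloader and distro:

```
# /etc/default/grub: keep the kernel away from a 64K defective region
GRUB_CMDLINE_LINUX="memmap=64K\\$0x36000000"
```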
I helped someone turn that on because we still didn’t have full furniture sets in our houses and new memory was going to have to wait a couple paychecks.
Several times I set about trying to turn it on, only to find that a whole chip was fried, meaning the 7th bit of every read was stuck, so nothing much you can do there.
On Windows there is a compensating control in the form of a Start menu written in React that needs 7 msedgewebview processes just so you can search for an app.
It's not just that it's compressed: the OS is also intelligently deciding which memory stays in physical RAM and which gets paged out. Effectively a lot of the memory concerns have been offloaded to the OS, and to the VM where one exists.
Yes, I didn't mean to imply it was only one of the OSes. Further up the comments people were talking about how memory efficiency is now more important, but my point was that with compression and virtual memory it still doesn't matter all that much, even if memory is double the price.
If running low on memory seems to matter less now than it did a couple of decades ago, I'd rather say that's because fast SSDs make swapping a lot faster. Even though virtual memory and swapping were available even on PCs since Windows 3.x or so, running out of memory could still make multitasking slow as molasses due to thrashing and the lack of memory for disk cache. The performance hit from swapping can be a lot less noticeable now.
Of course compression being now computationally cheap also helps.