Imagine you trained a large language model that is too dangerous for humanity, but you regexp over git commits to solve your subscription subsidy issues.
It is a great idea. Most radicalization on the hard-left and hard-right is done by state actors (like China/Pakistan/Russia) because we're not doing anything about it. Perhaps if Joe Smalls in Kentucky and Samantha Milton in Minnesota were not radicalized to hate each other by state actors on social media, there would be better discourse and fewer Trumps / Bidens in office.
Most radicalization is actually done by the three-letter agencies. They find someone mentally deficient, convince them to participate in some concocted plot, then arrest them to pad their stats. The FBI agent got Congress to give him a new office after he got the homeless guy to try to kidnap Whitmer.
An API token with permission to delete an entire production database, sitting in a file? Cool story; that database was destined to vanish. The system rules never said it shouldn't run destructive POST requests anyway.
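For what it's worth, a minimal sketch of the kind of guard that comment implies was missing: refuse destructive HTTP methods unless the token is explicitly scoped for them. The scope name and helper here are hypothetical, not from any actual system.

    # Illustrative sketch only; "write:production" and guard_request are made-up names.
    DESTRUCTIVE_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

    def guard_request(method: str, token_scopes: set[str]) -> None:
        """Refuse destructive HTTP methods unless the token carries an explicit write scope."""
        if method.upper() in DESTRUCTIVE_METHODS and "write:production" not in token_scopes:
            raise PermissionError(f"{method} requires the write:production scope")

    # Usage: a read-only token cannot issue a DELETE.
    guard_request("GET", {"read:production"})      # fine
    guard_request("DELETE", {"read:production"})   # raises PermissionError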
I like how they are trying to find a scapegoat: Cursor's failure, Railway's failures, etc. Guys, it's YOUR failure; is it so hard to admit?
Why? Military power owns things by enforcing that ownership. That is, in fact, true ownership.
You have to pay taxes to own land so that the power on your side can prevent another power from taking it.
If you don't pay taxes to the power that is on your side, why would it let you own stuff and protect you for free? Out of goodwill?
That's how the world works: ownership without power behind it is non-existent, just as power without money behind it is non-existent. When there are enough powers balancing each other, stable systems emerge, and we can all enjoy a few decades of peace and prosperity.
Thoughts are derivative of sensory processing. We have subjective experience and subjective feeling; our symbols are grounded in physical reality. LLM "thoughts" are a simulacrum: manipulating symbols according to rules does not imply understanding. One must be quite derealised to think we are predictive machinery, or that the human brain is just a fuzzier version of one; it is much more than that.
This is Waymo saying Waymo cars are safer than humans. Obviously the "it's safer than humans" claim is a selection-biased, statistically underpowered, apples-to-oranges comparison with a limited sample size.
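To make the "statistically underpowered" point concrete, here is a rough sketch with invented numbers (not Waymo's actual data) showing how wide an exact Poisson confidence interval is when the crash count is small.

    # Rough sketch with hypothetical numbers; only illustrates why small event
    # counts make per-mile safety comparisons statistically weak.
    from scipy.stats import chi2

    def poisson_rate_ci(events: int, exposure_miles: float, conf: float = 0.95):
        """Exact (Garwood) Poisson confidence interval for an event rate."""
        alpha = 1 - conf
        lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
        upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
        return lower / exposure_miles, upper / exposure_miles

    # e.g. 3 injury crashes over 5 million driverless miles (made-up numbers):
    lo, hi = poisson_rate_ci(3, 5_000_000)
    print(f"95% CI: {lo * 1e6:.2f} to {hi * 1e6:.2f} crashes per million miles")
    # Roughly 0.12 to 1.75 per million miles: the upper bound is more than ten
    # times the lower, so comparing it against a human baseline settles little.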
Thought is a derivative of sensory processing. An LLM does not have a physical body with which to interact with the world, nor does it develop and learn anything by experiencing the world; it has no subjective experience or subjective feeling, it has no qualia, its symbols are not grounded in physical reality, and its "thoughts" are a mere simulacrum. Anyone personifying an LLM is just derealised by convincing outputs, not realising that manipulating symbols according to rules does not imply understanding.
Having to click somewhere is not a shortcut. Seriously, who the hell switches desktops by scrolling or clicking the mouse? There is a real keyboard shortcut for that.