While you are technically correct, since any triangle is a simplex, this is not relevant to this visualization.
For this visualization: take three positive quantities, normalize so they sum to 1, and now you have a point on the triangle (the 2-simplex) that the plane x + y + z = 1 cuts out of the positive octant. Project that triangle into 2D and this is your visualization.
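A minimal sketch of that projection in Python (the function name and the choice of corner layout are mine, not from the comment above): normalize the triple so it sums to 1, then map the simplex onto an equilateral triangle in the plane.

```python
import math

def ternary_project(a, b, c):
    """Map a triple of positive quantities to a point inside an
    equilateral triangle (the 2-simplex flattened into 2D)."""
    s = a + b + c
    a, b, c = a / s, b / s, c / s      # normalize so a + b + c == 1
    x = 0.5 * (2 * b + c)              # horizontal position in the triangle
    y = (math.sqrt(3) / 2) * c         # vertical position in the triangle
    return x, y

# The three "pure" cases land on the triangle's corners:
print(ternary_project(1, 0, 0))  # (0.0, 0.0)
print(ternary_project(0, 1, 0))  # (1.0, 0.0)
print(ternary_project(0, 0, 1))  # (0.5, 0.866...)
```

An equal mixture (1, 1, 1) lands at the triangle's centroid, which is exactly the behavior you want from this kind of plot.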
> Forcing people into a way to think via software is fundamentally authoritarian and sad.
Completely agree.
I understand the problem, and while I see this as a good-faith attempt to solve it, something doesn't quite sit right about the framing for me. Really, what's happening is just that certain rules of behavior and language are being enforced. And that's fine! That's what communities are. You're allowed to do different kinds of things in different places.
I'd frame it that way rather than the current, more paternalistic framing. There isn't a universal way to be respectful, or to argue. People have different thresholds for aggression, sarcasm, and so on.
Just like signs at the library say "No talking" or "No eating", you might think of this as a way to put up certain signs for your particular community. Configurable knobs to create the kind of place you want. But it's not about "teaching" people anything. It's about saying, "Here, we do things this way. If you like that, come and play. If you don't, this place is not for you."
Isn't that a bit too certain for something that's not settled at all? How else would you explain the Polgar sisters? I'm sure there are other examples, but this is the most famous one.
Few claims in the social sciences are more fully settled. I don't think you could find a researcher at a major university claiming that randomly selected children can be reliably turned into world-class mathematicians with enough training.
> How else would you explain the Polgar sisters?
Genetics is the obvious explanation. The father was clearly very intelligent.
Also to clarify: I agree that training and effort can have large effects, and that focusing on them is a good strategy. Over-believing in them is probably a good bias, even. But the idea that everyone is more or less the same except for effort is ridiculous.
As a TA, I've seen adults try to pass introductory college calculus many times (and fail; you were allowed several attempts) despite enormous effort. It's not a small multiplier.
And these were people already selected from the small subset of the population choosing an engineering major. Humans are much, much more different from one another than you seem to think.
Good for you. Those people likely didn’t have a true enthusiasm to learn the content, they just stressed and tried to cram themselves by. Kinda proves my point really.
There are many, many people (math majors, competitive programmers, chess players, etc) who devote incredible effort to becoming better, and simply cannot reach elite levels. And while in most cases elite players are also putting in a lot of effort, there are many cases where it is still relatively less than their peers who are trying harder but still lagging them.
Would you ever be tempted to make such a claim (that everyone is close to the same in ability and effort is the main determiner of success) about athletes? It's so obviously untrue that it's laughable. Why would you think that mental ability is magically distributed evenly?
> Would you ever be tempted to make such a claim (that everyone is close to the same in ability and effort is the main determiner of success) about athletes?
Well yes, absolutely. People don’t do quadruple axels on the ice because they were somehow born with the ability, they can do it because they practice figure skating every day for years. Innate ability (or in this case, let’s be honest, mostly genetics determining body shape) certainly makes the difference between becoming an Olympic gold medalist and just being very good at the sport, but you need to get very far in the field before it truly holds you back.
I don’t have a lot of experience with high-level professional sports, but I’m a classically trained violinist, and I’ve seen first-hand how a lot of the abilities that many people chalk up to “talent” (sense of rhythm, perfect pitch, composing music) are just skills that can be learned. Some students might need to practice more than others, sure, and some might reach a higher ceiling, but I firmly believe anyone can reach a high level with applied effort.
“I don’t have the talent to paint so I won’t learn to do it” is a self-fulfilling prophecy.
> People don’t do quadruple axels on the ice because they were somehow born with the ability, they can do it because they practice figure skating every day for years.
You've changed my claim. My claim isn't that world-class athletes, or even good athletes, don't have to work hard because of their talent to achieve elite levels. It's merely that talent is a huge determiner in success. It's also a huge determiner in how effective training is. An hour of training might improve a talented person 5 or 10x more than an hour of training would improve someone else.
This is all blindingly obvious if you've seen a sample of kids growing up. I remember the sister of one of my daughter's friends, at age 3, easily out-performing her brother and my daughter, who were a couple of years older. This little 3-year-old could fearlessly climb up jungle gyms with ease, and kick around a ball, and swim fast. She hadn't practiced more. She could just do it.
3-year-olds are a terrible choice because that's confounded by developmental timelines varying between children. I.e., you may as well find a child who can walk at 11 months old, compare them to one who can't, and declare that one must be so much more talented than the other.
> The fact that you think that the rules of you being a father are somehow different than the rules of you driving to an appointment indicate that you have a completely incoherent world view based on two incompatible models of epistemology
Two ways to look at this, both of which are coherent:
1. Current AI is better at some stuff than others. Saying "I'm okay driving in a Waymo, but not taking spiritual advice from an AI" makes sense if you think it has not advanced to a near-human level in the spiritual-advice domain.
2. Even if you don't think that's true, it's reasonable to just want a human for certain activities, because communion with other humans in the same existential boat you're in can be the whole point of an activity. I'd argue it is a significant reason for a majority of social activities.
It definitely answers why. You are asking for an appeal to some moral justification. But there isn't one, and it doesn't matter. That's the whole point of "might makes right".
CPF offers a moral justification by presenting itself as a "savings and pension plan" that helps citizens set aside their own money. The very first thing you are greeted with on their website is that it's savings, and the overview represents it as "setting aside" your own funds.
The government offers the moral justification of a savings plan, but when you dig down into it, it's all ether: really just a scheme of bond-rate arbitrage for the government.
The point isn't that might makes right is false, it's that the moral justification is a facade.
The government doesn’t set the retirement age. You can retire whenever you want. There are no laws against a 50 year old retiring and living off his own savings, nor against a 70 year old continuing to work.
There is a minimum age to collect old age benefits from the government. The justification for that should be obvious.
The choice between working and starving to death is not a choice. If your savings have been taken by the government, then you don't have a choice.
The justification is to force people to work until they are too old to do so. Then steal whatever they have left with medical bills and price hikes on necessities.
> The justification is to force people to work until they are too old to do so.
Actually, the justification is to prevent old people from having to work. Retirement didn't really exist until the creation of pension systems in the late 19th century, and the modern social security system was a poverty alleviation measure introduced in the 1930s. Hell, social security was initially resented by older workers because of the cover it gave employers for firing them for being too old.
Social Security was sold to the populace, for voting purposes, as "insurance." Lawmakers straight up admitted they purposely wrote the law in a confusing way[], evading democratic scrutiny and the scrutiny of the Constitution. Then, when it came before the courts, they briefly switched to not calling it insurance at all.
Social Security's constitutionality was ruled on just months after the "switch in time that saved nine," associated with the threat to pack the courts and evade the checks and balances built into our "democracy." The Court ruled it was covered under "general welfare" in a way that was totally historically inaccurate.
Furthermore, FDR and Congress deliberately packaged it in an omnibus-style bill to evade democratic scrutiny of its individual portions: other aid to needy individuals would be torpedoed if SS didn't pass, so lawmakers couldn't vote on SS on its merits and were instead caught in a catch-22 where voting it down meant being accused of refusing to help the needy in other ways.
Basically the whole thing was designed to not only evade democracy but also the constitution.
[] Recollections of the New Deal, by Thomas H. Eliot, pp. 102-115 (Northeastern University Press, Boston, 1991).
But the CPF isn't represented as benefits from the government. It's represented and claimed to be your own savings that you have set aside. At gamed bond rates where the government skims off the top.
To make an overly dramatic analogy, if you were kidnapped and asked why the kidnapper was able to hold you against your will, the answer is because they've chained you up and they have the gun, and so on. That's literally the answer to why. The fact that what they're doing is morally wrong is completely irrelevant.
> The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.
It's also a legitimate concern. We happen to be in a place where humans are needed for that "last critical 10%," or the first critical 10% of problem formulation, and so humans are still crucial to the overall system, at least for most complex tasks.
But there's no logical reason that needs to be the case. Once it's not, humans will be replaced.
The reason there is a marketing opportunity is because, to your point, there is a legitimate concern. Marketing builds and amplifies the concern to create awareness.
When the existing systems become trivial to manage with the new tooling, humans build more complex systems or add more layers on top of the existing ones.
The logical reason is that humans are exceptionally good at operating at the edge of what the technology of the time can do. We will find entire classes of tech problems which AI can't solve on its own. You have people today with job descriptions that even 15 years ago would have been unimaginable, much less predictable.
To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.
> To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.
AGI means fully general, meaning everything the human brain can do and more. I agree that currently it still feels far (at least it may be far), but there is no reason to think there's some magic human ingredient that will keep us perpetually in the loop. I would say that is delusional.
We used to think there was human-specific magic in chess, in poker, in Go, in code, and in writing. All those have fallen, the latter two albeit only in part but even that part was once thought to be the exclusive domain of humans.
When I refer to AI, I mean the "AI" that has materialized thus far - LLMs and their derivatives. AGI in the sense that you mean is science fiction, no less than it was 50 years ago. It might happen, it might not, LLMs are in all likelihood not a pathway to get there.
Frigate NVR tied to a Home Assistant instance has my phone getting proactive notifications about people, birds, and buses (in their select areas...). It's not the easiest thing to set up, but if you're using Ethernet cameras it seems to work very, very well. The few POS Wyze cameras I have on the system tend to cause some problems, but I know for a fact it's 100% a combination of a) wifi (no matter how "quality") and b) Wyze.
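For anyone curious what that looks like, here is a minimal sketch of a Frigate camera config along those lines. The camera name, MQTT host, RTSP URL, and zone coordinates are all placeholders; check the Frigate docs for the exact schema your version expects.

```yaml
mqtt:
  host: 192.168.1.10            # placeholder: your MQTT broker

cameras:
  front_yard:                   # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.20:554/stream  # placeholder RTSP URL
          roles:
            - detect
    objects:
      track:                    # labels the default detection model knows
        - person
        - bird
        - bus
    zones:
      driveway:                 # restrict notifications to a select area
        coordinates: 0,461,3,0,1919,0,1919,843  # placeholder polygon
```

Home Assistant then subscribes to Frigate's MQTT events and fires a notification automation when a tracked object appears in the zone.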
If anything, the reverse, in that it devalues engineering. For most, LLMs are a path to an end-product without the bother or effort of understanding. No different than paid engineers were, but even better because you don't have to talk to engineers or pay them.
The sparks of genuine curiosity here are a rounding error.
I'd probably use it now.