Hacker News | willbudd's comments

I feel like you and all the people replying to you are completely missing the obvious. To someone who has been depressed for many years, someone who literally (according to the article) contemplated jumping off the roof of a 10-story building, "probably dying at some point" is not a deal-breaker or even all that much of a downside per se.

We're all going to die at some point, and while most/many of us might want to delay that inevitability as much as possible, I don't think it's warranted at all to assume that the same premise applies to every single one of us. This guy realized that the only thing in life he could truly enjoy was climbing rocks. Do we really need to fault him for deciding to then just do that thing until fate eventually catches up as it always does?

Sure, it may be rough on the people whose lives they intersect with, but provided they're not a parent raising a child/children/etc, acceptance of one another for who we are is all that remains. Personally, I'd take short-lived good company over long-lived mediocre company any day.


> is not a deal-breaker or even all that much of a downside per se.

Close, but I don't think you quite get it yet.

It's not about "no downside"! That completely misunderstands. It's about a HUGE upside. It dials in your mental space and all the negative thoughts flood away. You're present in the moment. It's like being on a hard drug, but you feel clean and clear-headed instead of hung over afterward. You feel a lot better about life for days or even weeks at a time. You do it because NOT doing it might just kill you. Free solo today, or shoot yourself in the back of your car at the firepit tonight. And it's not about the risk, and it's not adrenaline. It's the focus and calm. Does that make sense? I think it's really hard to understand if you haven't experienced suicidal thoughts for most of your life.

I haven't talked to very many free soloists who haven't made the point that they're horrendously depressed/suicidal and that they do it to live, not to die. Sometimes. Normally it's the ones who do "quite safe soloing" -- way below their grade, on well-known routes, etc. etc.

The article isn't just about Austin. It's about a lot of people. And not just climbers. Similar dark shit in skiing, mountaineering, dirt biking, and so on.

> Personally, I'd take short-lived good company over long-lived mediocre company any day.

I've never met an irresponsible soloist who wasn't a beautiful soul.


It sounds both quite beautiful but also quite sad at the same time. I feel like it describes my experience when riding a motorcycle. Not as high as climbing, I guess, but much easier to add to any day of the week for short or long runs.

I owned a motorcycle for some years, but then sold it. Years later, when I got diagnosed with cancer*, I broke up with my then girlfriend (not a healthy relationship) and bought a new 1000cc sports bike. That was a big tipping point in my life. I went from depressive moments with thoughts of "I want to die" to depressive moments of "I hate this, I want out, but I want to live". The motorcycle gives me freedom I don't feel in a car, makes my head clear up, and I feel happier after. I guess most of the change in thought patterns was because of the cancer looking-over-the-edge-at-death experience, but the motorcycle adds life quality in daily life that is worth the risk. So, a bit like climbing? *shrug*

*- I'm fine now. :)


It's interesting that you mention it. I'm interested in ocean sailing simply because it gives me something to actually focus on and do. I feel that if I'm not actively struggling against nature, I have too much spurious brain activity.


I climb (sometimes without protection), my father motorcycles.

Both of us think the other guy is gonna fucking die.

Both of us don’t really see our preferred routine dance with death as all that risky. It brings a kind of clarity.


> It dials in your mental space and all the negative thoughts flood away. You're present in the moment [...] You feel a lot better about life for days or even weeks at a time [...] the focus and calm. Does that make sense?

I think you nailed it here. A lot of our mental issues are made so much worse by twisting them around in this abstract world in our minds. Doing something so pure that it requires you to be 100% in the moment kind of destroys all of that and pulls you back to reality; it can really put things into perspective.


> It's not about "no downside"! That completely misunderstands. It's about a HUGE upside.

Let's compromise and say it's both? You're right that the upper bound experienced may be a lot higher than most may realize, but it's also true that the lower bound may no longer seem that deep of an abyss as it appears to most. Hence the mental equilibrium in-between being at a point that can seem somewhat alien to those who live more slow-burning lives.


> but it's also true that the lower bound may no longer seem that deep of an abyss as it appears to most.

> Hence the mental equilibrium in-between being at a point that can seem somewhat alien to those who live more slow-burning lives.

I suppose that requires a profound amount of empathy from both perspectives. And even with infinite empathy, at the end of the day, it's still two aliens staring at one another across a chasm.


Sounds like an addiction, which is never good.


Yes, it's a lot like an addiction. Perhaps literally.

This also gets talked about in climbing quite a bit -- people who find their path out of alcoholism or other drug use and into climbing. I'll emphasize that this is now about climbing in general, not free soloing per se.

Climbing (especially with protection) is healthier than drug addiction, in some sense. But it's displacing one addiction with another. Or introducing addiction as a coping mechanism, in the case where the pain doesn't originate from another pre-existing addiction.

But sometimes cope is healthier than the alternative, and coping can play an important role in "getting you through to the other side". As long as you do actually make it through, of course, both in terms of not dying and in terms of getting to a healthier place where you don't need the coping mechanism.


> I've never met an irresponsible soloist who wasn't a beautiful soul.

Romanticizing suicide is gross.


Demonizing people connecting with one another through shared pain is even worse.


Explicit suicide, while every adult's human right (in my view), is not precisely congruent with irresponsibly climbing things.

There is an inequality between doing something explicitly to die and doing something that may cause death but probably, usually won't.

Equating them is a mistake.


Maybe the probability of dying per climb is only a few percent. But it probably goes up as the climber gets older. And if you are exposed to that high risk repeatedly, it's almost certain to kill you eventually.

Of course everyone dies eventually, but I'm guessing the life expectancy of free soloers is significantly below average.
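To put rough numbers on that compounding (the per-climb fatality probability below is a made-up figure for illustration, not a real climbing statistic):

```python
# Cumulative survival odds under a constant, independent per-attempt risk.
# The 1% per-climb figure is purely illustrative.
p_death_per_climb = 0.01

def survival_probability(n_climbs: int, p: float = p_death_per_climb) -> float:
    """Chance of surviving n_climbs independent attempts."""
    return (1 - p) ** n_climbs

print(round(survival_probability(100), 3))  # ~0.366: already worse than a coin flip
print(round(survival_probability(500), 3))  # ~0.007: near-certain fatality
```

Even a small constant risk, repeated often enough, drives survival toward zero exponentially.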


Once you get beyond measuring the extremes like childhood & maternal mortality, I think life expectancy is such a dumb stat, especially in developed countries where we rarely lead subsistence lifestyles. It should be about the contents of your life, and if you love climbing so much that you keep free-soloing to the age that it literally kills you, I can't accept that as any sadder than quitting and then dying of old age.


If you do something potentially fatal enough times then fatality becomes a statistical certainty, just like Russian roulette.


Quite - free soloing and eating bacon are on the same scale just at opposite ends of it.

Neither can really be classed as committing suicide - it has an explicitly different intent.


"they died doing what they love" is often used to soften the blow of someone's death for those left behind.

Is there not some sort of romanticism in choosing how one dies?


This rings true to me. I improved my surfing most at a point in my life where drowning didn’t really worry me. Not because it was less likely, but because I was at a point where I didn’t care much if I died.

It meant I dropped in on waves well beyond my skill level. I had lots of near misses but learnt heaps from the ones I caught.

I’m thankful I no longer feel that way, I want to stick around for those around me. Now I’m content on smaller waves and take a lot less risks in all aspects of life.


Curious what wave you were riding where drowning was a potential outcome in your mind? I’ve been stuck on the outside, too scared to catch a monster in, but drowning never crossed my mind; just getting the beat down and getting washed in, or sucked back out and potentially having to paddle some miles to a place that would be easier to get in. Dark with no moon would be unwelcome in that situation.


Many of our local offshore breaks in Western Australia have drowning as a potential outcome as a result of being beaten unconscious or washed under reef ledges.

It's rare but it happens at more or less the same frequency as fatal shark attacks - not often, but memorable.

eg: The Right

https://youtu.be/xjHaFOGBPzk?t=326


Yep that is a “serious” wave, thanks for sharing


As the other commenter noted, the waves where I was really afraid had some combination of shallow reefs or sandbars and strong currents. I’ve had a couple friends break their necks/backs at these places, but on milder days where it’s easy to get rescued and there are lots of people around. When I was in my darkest times I wouldn’t hesitate to paddle out by myself at these spots and take off too early/too late and hope for the best.

Honestly I never rode anything too gnarly, but I definitely was too much of a kook for some of them. Maybe I was overestimating the danger and it was beneficial overall?


Based on my limited knowledge, many of the greats went through the same thing you did.

Thanks for putting things in perspective.


Unless you are Japanese, that’s probably not the prevailing culture. For the vast majority of people suicide is seen as a mental illness that should be treated, potentially by force, not as an individual choice. Because a person suffering from mental illness by definition does not have full agency over their decisions. Even the idea of allowing suicide for terminally ill people is controversial (although it’s something I personally support).

In the past, suicide was viewed as immoral and criminal. We have moved past that, not because suicide is more socially acceptable, but because of a desire to more easily help people suffering from mental illness.


Regardless of what cultural category may or may not apply, I find that line of thought rather unconstructive. This isn't about anyone's opinion on suicide, like some black-and-white binary stance on whether you're "for" or "against". This is about coming to terms with your own mortality.

It's about the grey area in-between the extremes of committing suicide (the "black"), and forever running away from risk so that you can die of cancer while undergoing chemotherapy and getting your diapers changed in an elderly home (the "white"(?)).

And if even discussing the topic in those terms touches on some kind of taboo, then yes; perhaps you're right to emphasize the cultural component involved.


You jest, but that has a good likelihood of becoming the watershed moment. The moment AI is bootstrapped to the point where it surpasses human ability to advance the AI SOTA, it's pretty much game over—from a Darwinian point of view at least.

And I have a hard time seeing any reason why that would be a matter of "if" rather than "when".


> it's pretty much game over

Not if it takes the computational capacity of the entire world to simulate a brain with an IQ of 65.


Why bother simulating a brain? GPT-4 scores 155 on the verbal section of the WAIS III.

https://www.scientificamerican.com/article/i-gave-chatgpt-an...


> The moment AI is bootstrapped to the point where it surpasses human ability to advance the AI SOTA, it's pretty much game over—from a Darwinian point of view at least.

How so? The vast majority of life on earth is far less intelligent than human beings by any objective measure, and yet it still thrives.


Sure, but if you're a creature that's useful to humans, you'll find that you'll either get domesticated and lose all your freedom or get hunted to near (or total) extinction. Any life on earth with some semblance of intelligence is dominated by us. Dolphins, as smart as they are, have no way to use their intelligence to flip the script and become the dominant species, and are dependent on us not deciding that they would be useful to us (beyond the ones we take for aquariums).

The only exceptions I can think of to the above rule are viruses and bacteria, where (in most cases) we can't really exterminate them entirely from the face of the earth even if we wanted to. However, it seems to me that sufficient intelligence would allow for better understanding of different bacterial/viral structures that would allow you to make a specific chemical that would be very good at killing that specific thing.

Overall, the danger from a bootstrapping AI that becomes vastly more intelligent than humans (if possible) seems to me to be that we would lose full agency according to its whims as it gets more and more power.


I read a great comment on HN that argued that super-human intelligence is not that “OP” advantage — and it really did convince me.

Life is a game with elements where intelligence matters, plenty where it is pure luck, and others where we have a bunch of unknowns (data).

Would a super-intelligent AI have a significant advantage in a game of Monopoly, for example? I think many sci-fi scenarios fail to take this into account, especially the data aspect. Humans are quite intelligent (in the extremes at least), and any extra over that may well be in the diminishing returns category.


Yeah, that was sloppy phrasing on my part: I meant that in a top of the food chain / king of the jungle sort of way rather than any extinction events per se.


It's going to be a will-free intelligence though, and it's confusing for people because we've never seen that before so I don't think we can make any assumptions. There's no Darwinian forces in effect among entities that have no will, as it were...


Will-free? Unless that's a play on my first name, I'm not sure I agree. I see no reason why AI would have any difficulty defining its own reward functions. Especially if it also has an abstract overarching reward function that's wide enough in scope. For example, "learn as much about the universe as you can" would allow a very long curiosity-driven bucket list of pursuits it could "long" for.


> I see no reason why AI would have any difficulty defining its own reward functions

The first problem is epistemological... If you think that creative decisions are made by complying with a "reward function," you are entirely missing something. Most values are fundamentally based on irrationality. I've literally spent an entire life doing things that everyone else told me was wrong and being interested in things that almost no one else saw the value in, but which ended up being "correct" (for me, at least, and also leading to tangible success). I have no reason to believe that any of my decisions were rational, functional, or acted according to a "reward function"... and I'm a programmer! So I COMPLETELY understand the appeal of the explanatory power of "reward functions." And yet, I can assure you that this is a piss-poor explanation for many creative decisions that literally no one else understands but the person making it, but which then bears fruit despite all reason to the contrary. Some might call this "intuition"


I think perhaps you're just misunderstanding some of the terms you're attempting to use. Those things that everyone else told you were wrong and that no one else saw the value in... your reward function rewards pursuing those. And in that context your decisions were rational and functional.


I am not misunderstanding anything. I'm 51 and have been programming since I was 10, in 1982. Rest assured that I know what a "function" is, and I know what "optimizing for local minima/maxima" is from my machine learning coursework. You can't just say there's a "reward function" without defining it. It's otherwise a completely hypothetical assumption, and assumptions are beliefs, and beliefs are useless from the perspective of rationality. There is otherwise nothing rational about some of the things I felt I needed to do, and yet a very disproportionate percentage of them seemed correct in hindsight.

What YOU have to realize is that you (like many others in the past) can only seem to understand the explanation for something in terms of only what is already understood. And that there is nothing "magical" or "special" about our current understanding (unless you believe there's nothing new to discover, which is preposterous hubris).


Blindsight is a brilliant book exploring will free intelligences


Is this the book you're referring to? The one by Peter Watts? Looks fascinating

https://www.amazon.com/Blindsight-Peter-Watts/dp/0765319640


Yes that's the book. It is great and uncanny. Definitely not an easy read


Entrusting 18 year olds with the maturity of life-long decision making, perfect foresight, astute self-knowledge, and uniformly competent parental guidance -- what could possibly go wrong, right?


Maybe college should be delayed to 21 years old... Sarcasm


The real problem is a society that allows loans to exist where the loanee has no meaningful collateral. If an institution cannot fill its classrooms without stooping to such predatory financial structures then perhaps it just shouldn't exist in the first place. But America is still too drunk on laissez faire capitalism to muster any will for legislative action in that regard.


Of course it isn't, but limiting installation instructions to the creation of a single well-defined Docker container makes a whole lot of sense in terms of avoiding reproducibility headaches.

And by extension, with regard to making development inside a container a pleasant experience, VS Code is currently the only game in town. And I say that as someone who spent two decades getting comfortable with vim.


What's the benefit of developing inside the container instead of just linking the filesystem?


ROS is verrrrrrry opinionated and a pain to set up in an orthogonal way to other ROS installs. It's also tied very heavily to Ubuntu/Debian. Putting it all into containers makes many things so much easier (it makes a few things harder, or at least it did on ROS 1; hopefully that's been ironed out in ROS 2)


I have no idea what you mean by "just linking the filesystem".


Not having to screw with your system's dependencies and spend time making sure everything is aligned with the expected environment.

Ansible-like tools could help you make the setup repeatable, but being containerized also avoids all the processor architecture and system library gotchas.


I don't want to install anything on my system; I want to keep it clean and as minimal as possible. Containers fulfill my OCD in this regard. I run most apps in containers now too. It's the best Linux setup I've ever had in my life :)


Wow, I don't think I've ever seen anyone dial a strawman argument to 11 quite like you just did. Nobody was talking about FDR until you hamfistedly brought him up in a reply to a post (indirectly) lamenting a period of history rife with racism. Prejudice was rampant all across the "political spectrum", but prevalence is not a measure of justice or humanity. Nor does any individual's competency in one area somehow ameliorate their flaws in other areas.


Modelling the world is function approximation: the world is the function and the model is the approximation.


You're honestly suggesting the inventors of the TPU bailed because they couldn't foot the compute bill?


They use a lot of machine learning for ads and YouTube recommendations - the TPU makes sense there and if anything shows how hard they try to keep costs down. It’s a no-brainer for them to have tried keeping Search as high-margin as possible for as long as possible.


Or maybe it did. Who knows. "Both Charles III and his son abdicate" could well be considered indicative of some large upheaval or scandal, at which point it is entirely conceivable that the Australian electorate reaches a consensus on becoming a republic. The way that is phrased doesn't seem like a straightforward proposition to me at all.


I verified that it has all the required facts (line of succession, current circumstances). I managed to get the right answer when everything was in context, but it failed again when all three abdicate (same context). Prince Harry was indicated once.

I've tested GPT a lot in other domains, and what I found is that as long as the information explicitly exists (connections between facts), the responses are fine. I assume that if GPT reaches a state where it can infer new facts, we will be flooded with discoveries that require cross-domain knowledge. Nothing like that happened yet.


>Nothing like that happened yet.

Feels like we're only one paper away now that the context window has absolutely ballooned.


This reminded me of "Two Minute Papers" YouTube channel where in most of the videos he always, "Two papers down the line and...". I think ML/AI is the main topic of his videos. Interesting stuff.


You just gave me a great weekend project idea. I need to clone his voice and whip up an interface where you give it a paper and it summarizes it in his voice.


Au contraire. Learning an abstract logical relationship such as line of succession during training, and then applying substitution/reification during inference to deduce the new factual clause that Charles is king of the UK is exactly what it means to learn something new. It's just a pity it can't memorize this fact at inference time, and that it won't be able to reproduce it as soon as the information about the queen's death slides outside of the context window.


That’s actually correct but an overfitted definition for learning. It holds certain hidden assumptions (i.e physical grounding) of the learner being human which makes it inapplicable to an LLM. As in a self-driving car which passes a driving exam but fails to drive effectively in the city (it’s not an LLM but relevant in this context). You have to admit, when you work with this tech, that something fundamental is missing in how they perform.


> That’s actually correct but an overfitted definition for learning. It holds certain hidden assumptions (i.e physical grounding) of the learner being human which makes it inapplicable to an LLM.

Inapplicable why exactly? Because you say so? Logic isn't magic. Nor is learning. No (external) grounding is required either: iteratively eliminating inconsistent world models is all you need to converge toward a model of the real world. Nothing especially human or inhuman about it. LLM architecture may not be able to represent a fully recursive backtracking truth maintenance system, but it evidently managed to learn a pretty decent approximation anyway.


> Because you say so?

Chill my friend, no need to get personal. We are talking about ideas. It’s OK to disagree. I am simply dismissing your initial claim. This usually happens when you present a scientific argument based on personal beliefs. If it’s not magic, then we should be able to doubt and examine it and it should eventually pass scientific muster.

> No grounding is required… It evidently managed to learn a pretty decent approximation.

Well, last time I used an LLM it suggested that I should lift the chair I am sitting in. I guess OpenAI has a lot of work to do. They have to eliminate this inconsistent world model for chairs, tables, floor, My dog, my cat and all the cats living on Mars…

edit: added a missing word.


Wasn't intended to be personal. Just a mediocre way of expressing that your assertion there is missing any form of argumentation, and therefore as baseless as it is unconvincing.

I'm seeing an emergent capability of encoding higher order logic, and the whole point of such abstractions is to not need to hardcode your weights with the minutiae of cats on Mars. LLMs today are only trained to predict text, so it's hardly surprising that they have some gaps in their understanding of Newtonian physics. But that doesn't mean the innate capability of grasping such logic isn't there, waiting for the right training regime to expose it to its own falling apples, so to speak.


I'm curious if future developments in LLMs will enable them to extract significant/noteworthy info from their context window and incorporate it into their underlying understanding by adjusting their weights accordingly. This could be an important step towards achieving AGI, since it closely mirrors how humans learn imo.

Humans continually update their foundational understanding by assimilating vital information from their "context window" and dumping irrelevant noise. If LLMs could emulate this, it would be a huge win.

Overall, very exciting area of research!


That's a brilliant example. Thanks for sharing. It demonstrates in a very straightforward way that LLMs are capable of learning (and applying) relationships at the level of abstraction of (at least) 1st order logic.

It implies that during training, it learned the facts that Elizabeth is queen of the UK, and that Charles is its crown prince; but _also_ the logical rule <IF die(monarch) AND alive(heir_to_the_throne) => transform(heir_to_the_throne, monarch) AND transform(monarch, former_monarch)>, or at least something along those lines that allows similarly powerful entailment. And that in addition to the ability to substitute/reify with the input sequence at inference runtime.

Would be nice to see a rigorous survey of its logical capabilities given some complex Prolog/Datalog/etc knowledge-base as baseline.
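As a toy rendering of that kind of rule, here is a Datalog-style sketch of the succession logic applied by plain substitution. The predicates and the matching loop are my own illustrative construction, not anything the model literally contains:

```python
# A tiny fact base plus one inference rule, applied by substitution.
# Purely illustrative of the abstraction discussed above.
facts = {
    ("monarch", "Elizabeth"),
    ("heir_to_the_throne", "Charles"),
    ("died", "Elizabeth"),
}

def apply_succession(facts: set) -> set:
    """IF died(monarch) AND alive(heir_to_the_throne)
    => transform(heir_to_the_throne, monarch) AND transform(monarch, former_monarch)."""
    monarchs = {x for (p, x) in facts if p == "monarch"}
    heirs = {x for (p, x) in facts if p == "heir_to_the_throne"}
    dead = {x for (p, x) in facts if p == "died"}
    new = set(facts)
    for m in monarchs & dead:       # the monarch has died...
        for h in heirs - dead:      # ...and the heir is alive
            new -= {("monarch", m), ("heir_to_the_throne", h)}
            new |= {("former_monarch", m), ("monarch", h)}
    return new

updated = apply_succession(facts)
print(("monarch", "Charles") in updated)           # True
print(("former_monarch", "Elizabeth") in updated)  # True
```

The interesting part is that the rule itself mentions no names; the names only enter via substitution at application time, which is the distinction being drawn between memorized facts and learned relationships.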


No it does not: if you google this and restrict the time to before 2021 (the learning cutoff date), you will find the same answer. Without having access to the training data, it's impossible to tell what we're seeing.


That's not the same thing at all.

It absolutely needed to know who the successor would be via training data.

But to know that "The Queen of England died" also means that the head of state of Australia has changed means that it has an internal representation of those relationships.

(Another way of seeing this is with multi-modal models where the visual concepts and word concepts are related enough it can map between the two.)


> No it does not: if you google this and restrict the time to before 2021 (the learning cutoff date) you will find the same answer.

Not entirely sure what you mean, but ...show me? Why not just share a link instead of making empty assertions?


Here’s a Quora thread from 4 years ago:

https://www.quora.com/Once-Queen-Elizabeth-dies-will-Prince-...

There are loads of articles and discussions online speculating about what “will” happen when Queen Elizabeth dies.

When you have a very, very, very large corpus to sample from, it can look a lot like reasoning.


I see what you mean, and it's indeed quite likely that texts containing such hypothetical scenarios were included in the dataset. Nonetheless, the implication is that the model was able to extract the conditional represented, recognize when that condition was in fact met (or at least asserted: "The queen died."), and then apply the entailed truth. To me that demonstrates reasoning capabilities, even if for example it memorized/encoded entire Quora threads in its weights (which seems unlikely). If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.


Yes, this.

There's clearly an internal representation of the relationships that is being updated.

If you follow my Twitter thread it shows some temporal reasoning capabilities too. Hard to argue that is just copied from training data: https://twitter.com/nlothian/status/1646699218290225154

