When time-lapse footage speeds their actions up to match the pace at which we move, plants' behavior starts to look like there's intent and will. Plants move towards the light, tendrils "reach" for supports, etc.
Clearly this is humans projecting our mental model onto plants, but... are you sure we're not also projecting it onto ourselves?
Not very long ago, we believed that "life" was due to a non-material life force inhabiting biological entities, raising what would otherwise be a biological machine to the status of a living being.
The Occam's-razor logic of preferring the simplest possible explanation leads me to the hypothesis that consciousness will similarly turn out to be an emergent property of the mechanical universe [1]. It may be hard to delineate, just as life is (debates over whether a virus is alive, etc.), but the border cases will be the exceptions.
Current research on whether plants are sentient supports this, IMO. (See e.g. "The Light Eaters" and Michael Pollan's new book on consciousness, "A World Appears".)
Meditation adds to this sense. We do not control our thoughts; in fact the "we" (i.e. the self) can be seen to be an illusion. Buddhist meditation instead points to general awareness, closer to bare sentience, as the core of our consciousness. When you see it that way, it seems much more likely that something equivalent could be implemented in software. (EDIT to add: both because it makes consciousness seem like a simpler, less mysterious thing, and because once you see the self as an illusion, that thing that dominates your consciousness so much of the time, it seems much less of a stretch for consciousness itself to be a brain-produced illusion.)
[1] To be clear, the fact that life turned out to not be a mystical force is not direct proof, it is an argument by analogy, I recognize that.
It is irrelevant whether consciousness is an "illusion." The hard problem of consciousness is why there's any conscious experience at all. The existence of the illusion, if that's what you choose to label it, is still just as inexplicable.
Of course science may one day be able to solve the hard problem. But at this point in time, it's basically inconceivable that any methodology from any field could produce meaningful results.
One thing scientists are trying is to see what interventions in the brain seem to make consciousness go away. Continued work in that vein may well set bounds on how consciousness can and cannot be caused and give us some idea.
Interesting! Seems like this could very easily be generalized. Tool sharing/swapping, ditto for books, rides to things other than the airport, etc. E.g. on the latter: some medical procedures like colonoscopies (basically anything that involves general anaesthesia) require you to have someone pick you up at the end, you're not allowed to take a Lyft home. That seems highly viable for trading favors, though hard to build trust on unless it's part of a larger sharing network.
This was also partly built out of seeing people in smaller (like small-town) Facebook groups asking for rides to the airport, rides to the doctor, etc.
I'm just not sure how the user bases will discover this, but that's one of the things to figure out!
Um, why would anyone be "holding the bag" and who needs protecting by society? He's not taking out a loan, he's getting capital investment in a startup. People are gambling that he will do well and make money for them. If they gamble wrong, that's on them. Society won't be doing anything either way because investors in startups that fail don't get anything.
I think they are reading it correctly. Year 1, they touched one drive and left 9 untouched. Year 2, they read one additional drive and left 8 untouched. Etc.
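A minimal sketch of that reading schedule, assuming 10 drives total and one previously untouched drive read per year (numbers taken from the comment above):

```python
def untouched_after(years, total_drives=10):
    """Drives never read after `years` years, reading one new drive per year."""
    return max(total_drives - years, 0)

# Year 1: one drive read, 9 untouched. Year 2: one more read, 8 untouched.
assert untouched_after(1) == 9
assert untouched_after(2) == 8
# After 10 years every drive has been read at least once.
assert untouched_after(10) == 0
```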
Agreed, would really like to understand what this (setting the LLM up to assume a role to improve performance) is doing under the cover and why it works.
Why aren't the labs training models to pick a mantra appropriate to the task and do this themselves? "Huh, a database question. I am going to pretend I'm a database expert with lots of experience. OK, here we go!"
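The self-selected-persona idea could be sketched as a trivial wrapper; everything here (the keyword table, the function name) is hypothetical, and a real system would presumably have the model itself classify the task rather than match keywords:

```python
# Hypothetical sketch: pick a persona "mantra" keyed to the task and
# prepend it to the prompt before answering.
PERSONAS = {
    "database": "I am a database expert with decades of schema-design experience.",
    "security": "I am a security engineer who reviews code for vulnerabilities.",
}

def with_mantra(task: str) -> str:
    """Prepend a task-appropriate persona mantra to the prompt, if any matches."""
    for keyword, persona in PERSONAS.items():
        if keyword in task.lower():
            return f"{persona}\n\nTask: {task}"
    return task  # no match: pass the task through unchanged

prompt = with_mantra("Why is this database query slow?")
```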
My read was roughly that agents require constraining scaffolding (CLAUDE.md) and careful phrasing (prompt engineering) which together is vaguely like working in a DSL?
Many apps are missing keyboard shortcuts that you may be used to if you’ve used the equivalent app on the desktop, so you’ll need to keep the iPad screen accessible to tap UI elements. There’s also the issue that shortcuts that do exist may be hard to discover, because there’s no menu bar to look in.
> Many apps are missing many keyboard shortcuts that you may be used to
This is true. To see the ones that are available, hold down the command ⌘ key to get a scrollable list of all of the shortcuts for the app you’re currently using, and use Fn-m or globe key-m to see a list of the system shortcuts.