I think #2 risks being incoherent unless you define things very carefully.
"Illusion" ordinarily means there's someone with a subjective experience which creates incorrect beliefs about the world. E.g. I drive on a highway in summer, I see reflections on the road, I momentarily believe there is standing water, but it's an illusion. What does it mean for the basis of subjective experience to be illusory? Who experiences the illusion?
> Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
But we don't think the circuit has an experience of being on or off. And we _do_ think there's a difference between nerve impulses we're unaware of (e.g. your enteric nervous system most of the time) and ones we are aware of (saying "ow"). Declaring it to be "not any more real" than the LED case doesn't explain the difference between nervous-system behavior that does and doesn't rise to the level of conscious awareness.
Agreed! The difficulty with consciousness is that there is no observable effect to distinguish between, say, actual pain and a simulation of pain (acting like you are in pain).
And I don't think I have a good handle (much less a coherent definition) on what it means for consciousness to be an illusion. What I think it means is that the process that is getting signals about the environment, and making decisions about what to do, is getting a signal that it is in pain. The signal causes the process to alter its behavior, and one of its behaviors is that when it introspects, it notices that it is in pain. The introspection ("how am I feeling?") is just a data-processing loop, but that process, which is responsible for tracking how it's feeling, is in the pain state.
There's a lot of hand waving here, which is why this is the Hard Problem of Consciousness and why this paper has not solved it.
> Waymos pull over into bike lanes all the time for pickups and drop-offs and that’s neither legal nor safe.
While drop-offs are often relatively quick (though perhaps riskier; see the dooring accident described in the article), I'm also really annoyed by Waymos sitting in and blocking the lane while waiting for pick-ups, which can take multiple minutes.
This is also the original way variational methods work: pick the parameterization of a model of known architecture that best matches some distribution which generated the data but is not otherwise compactly expressible.
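As a toy sketch of that matching step (my own illustration, not from the parent comment): fit the parameters of a fixed-form model, here a Gaussian standing in for a "known architecture", so it best matches samples from a distribution we can't write down compactly, by minimizing the average negative log-density, which is the KL-to-the-data objective up to a constant. Real variational methods optimize a bound (e.g. an ELBO) over a richer family, but the "pick the best-matching parameterization" idea is the same.

```python
# Hedged toy example: choose parameters (mu, log_sigma) of a fixed-form
# model so it best matches data from an otherwise inconvenient source.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=5000)  # stand-in "data-generating" distribution

def avg_neg_log_density(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # Gaussian negative log-density, dropping the additive constant
    return np.mean(0.5 * ((data - mu) / sigma) ** 2 + log_sigma)

fit = minimize(avg_neg_log_density, x0=[0.0, 0.0])
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"best-matching Gaussian: mean={mu_hat:.2f}, std={sigma_hat:.2f}")
```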
The 0.1% thing ... is that even the right label? I'm guessing one in a thousand people globally isn't using these mechanisms. The article spends several paragraphs on the world's richest person and his company's tax strategy. Is the millionaire next door quietly doing these things, or is this about billionaires, in which case it's more like one in a million?
OK, so it seems pretty bad that they changed the index rules both to allow SpaceX in early and to do the wonky weighting stuff.
But if one already holds index-based things that are likely to be captive on the wrong side of this, and one wanted to benefit, or at least balance it out, then (to confirm my limited understanding) the goal would be:
- buy shortly after the IPO, ideally within 15 days
- and sell less than 6 months later, before lockups end and insiders are set to cash out?
I think the "Leave them Behind" section at the end sort of ignores the whole "they will ruthlessly copy your material, and put aggressive extra load on your server while repeatedly stealing your work" dimension.
You can try to avoid consuming AI-generated material, but of course part-way through a lot of things you may wonder whether they're partly AI-generated, and we don't yet have a credible "human-authored" stamp. But you can't really keep them from using your work to make cheap copies of you, or at least from shrinking your audience by surfacing information or insights from your work in the chat sessions of people who might otherwise have read it.
> Microsoft bought it for OpenAI only, to train Copilot on the vast amount of code.
I think this gets the timeline wrong. Microsoft acquired GH in 2018 and started the partnership with OpenAI in summer 2019.
I'm sure there was some strategy to extract value from it that wouldn't serve its users, but I think OpenAI was not initially meant to be the beneficiary.
Maybe MS just got extremely lucky, like winning-the-lottery-lucky.
But your timeline is off: their partnership started in 2016 [1]. In 2019 MS started to invest publicly in OpenAI, but by then they already had some history.
To me, this is at least suspicious. Granted, I have no hard proof.
While I agree that we keep reinventing stuff, in CS doesn't the ease of creating isomorphisms between different ways of doing things mean that canonicalization will always be a matter of some community choosing their favorite form, perhaps based on aesthetic or cultural reasons, rather than anything "universal and eternal"?
We can still speak of equivalence classes under said isomorphisms and choose a representative from each, up to the aesthetic preferences of the implementor. But we are nowhere near finding those equivalence classes or isomorphisms between actual representations, because the things being compared are probably not equal, thanks to all the burrs and rough corners of incidental (non-essential) complexity.
I worked for a startup that used Clojure and found it frustrating because, following the idiomatic style, pathways passed maps around, added keys to them, etc. For any definition that received such a map, you had to read the whole pathway to understand what was expected to be in it (and therefore how you could call it, modify it, etc.).
I think the thing is that yes, `[a] -> [a]` tells you relatively little about the particular relationship between lists that the function achieves, but in other languages such a signature tells you _everything_ about:
- what you need to invoke it
- what the implementation can assume about its argument
i.e. how to use or change the function is much clearer (rough sketch below)
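For illustration (my own hedged sketch, using Python type hints rather than Haskell, with hypothetical names): a generic list-to-list signature already pins down what callers must supply and what the body may assume, even though it says little about which particular rearrangement the function performs.

```python
# Hedged illustration with hypothetical names: the signature constrains
# both the caller and the implementation, even though it doesn't say
# *which* rearrangement of the list happens.
from typing import TypeVar

T = TypeVar("T")

def rotate(xs: list[T]) -> list[T]:
    # The body can only reorder, drop, or duplicate the elements it was
    # given; it cannot inspect them or invent new values of type T.
    return xs[1:] + xs[:1] if xs else []

ints: list[int] = rotate([1, 2, 3])     # caller knows exactly what to pass
words: list[str] = rotate(["a", "b"])   # ...and exactly what comes back
```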
I think the pipeline paradigm you speak of is powerful, and some of the clarity issues you describe can be addressed through clear and consistent use of keyword destructuring in function signatures. Using function naming conventions ('add-service-handle', etc.) and grouping functions with additive dependencies in threading forms can also help with these frustrations.
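The suggestion above is about Clojure's keyword destructuring, but as a rough analogue in Python (my own sketch, hypothetical names): naming the expected keys in the function signature, rather than accepting an opaque map, makes the contract visible at the definition site.

```python
# Rough cross-language analogue of keyword destructuring in a signature
# (hypothetical names): the expected fields are documented where the
# function is defined, not buried in the call chain that built the map.
def add_service_handle(*, host: str, port: int, timeout_ms: int = 5000) -> dict:
    return {"host": host, "port": port, "timeout_ms": timeout_ms,
            "handle": f"{host}:{port}"}

config = {"host": "localhost", "port": 8080}
service = add_service_handle(**config)  # a plain map can still be splatted in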
"Illusion" ordinarily means there's someone with a subjective experience which creates incorrect beliefs about the world. E.g. I drive on a highway in summer, I see reflections on the road, I momentarily believe there is standing water, but it's an illusion. What does it mean for the basis of subjective experience to be illusory? Who experiences the illusion?
> Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
But we don't think the circuit has an experience of being on or off. And we _do_ think there's a difference between nerve impulses we're unaware of (e.g. your enteric nervous system most of the time) and ones we are aware of (saying "ow"). Declaring it to be "not any more real" than the led case doesn't explain the difference between nervous system behavior which does or doesn't rise to the level of conscious awareness.
reply