It is all so wonderfully massive it makes one wonder what lurks in the darkness beyond the bounds of light.


Contemplating infinity is sobering


You are confusing equality and equivalence.

It’s easy to do. Einstein did the same thing with E=mc².

Equality is a matter of identity. Equivalence is a matter of behavior.

The speed of light is not an absolute constant as Einstein believed. C just represented the speed that light can travel as a relationship between energy and mass.

My favorite form of the equation is C equals the square root of energy divided by mass.

Behaviorally all that means is that as the energy to mass ratio goes up, the speed of light goes up. And as the mass to energy ratio goes up, the speed of light goes down. Hence time dilation, black holes, etc.


> The speed of light is not an absolute constant as Einstein believed. C just represented the speed that light can travel as a relationship between energy and mass. My favorite form of the equation is C equals the square root of energy divided by mass. Behaviorally all that means is that as the energy to mass ratio goes up, the speed of light goes up. And as the mass to energy ratio goes up, the speed of light goes down. Hence time dilation, black holes, etc.

That isn't what's happening in relativity. That might be a neat trick for you to remember whether an effect is dilation or contraction (though I personally find it more confusing), but the speed of light does not change in different frames of reference -- this is a fundamental property of relativity (and an assertion by Einstein, who argued that since the speed of light falls out of Maxwell's equations, if those equations hold in all frames of reference then light must propagate at the same speed in all of them).

And while the mass-energy equivalence equation you mentioned is used in the way you described (rearranging it to have the speed of light as one term), it's actually used explicitly because the value must be constant -- the implication being that mass-energy is an invariant in relativity (though technically this derivation is backwards -- E=mc² is the conclusion you reach after assuming mass-energy conservation).

Relativity is already unintuitive enough; I personally find that adding incorrect-but-seemingly-intuitive explanations probably hurts your understanding more than it helps.


One of the (two) core assumptions that Einstein made with special relativity is absolutely that the speed of light in vacuum ("c") is constant in every inertial reference frame/coordinate system. This has been borne out experimentally, and the consequences of special relativity are seen every day at any particle accelerator (e.g. the time dilation of particle lifetimes).

The full equation is:

E = mc^2 / sqrt(1 - v^2 / c^2)

This is coordinate/frame-dependent since it depends on speed. Here, m is the rest mass of the particle, which is the same in every reference frame. You could define an effective mass as m' = m / sqrt(1 - v^2 / c^2), but that obscures the point imo.

To address your point directly, when you say "as the energy to mass ratio goes up, the speed of light goes up", that is just not true both mathematically and physically.
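
If numbers help, here's a quick TypeScript sketch (illustrative values only; the electron mass is an arbitrary choice) showing that E grows without bound as v approaches c while c itself never changes:

    // E = m*c^2 / sqrt(1 - v^2/c^2): energy is frame-dependent, c is not.
    const c = 299_792_458; // speed of light in m/s, the same in every frame
    const m = 9.109e-31;   // electron rest mass in kg (illustrative)

    function totalEnergy(v: number): number {
      const gamma = 1 / Math.sqrt(1 - (v * v) / (c * c));
      return gamma * m * c * c; // joules
    }

    for (const f of [0, 0.5, 0.9, 0.999]) {
      console.log(`v = ${f}c -> E = ${totalEnergy(f * c).toExponential(3)} J`);
    }
    // E diverges as v -> c; nothing about c itself ever moved.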


You seem to know a lot about this stuff. I have a question for you.

If fusion creates the potential for fission (radioactive waste) and radioactive waste can be used to build atomic bombs, how have we not figured out how to make mini perpetual-energy reactors?


Both fission and fusion release net energy by having products with greater binding energy. The binding energy per nucleon peaks at iron (and some surrounding elements). Once you get there, no more energy can be released by fusion (or fission).

See this graph: https://opentextbc.ca/universityphysicsv3openstax/wp-content...

So it’s not a perpetual motion machine. Iron is the bottom.

(Heavier stuff than iron can be created by fusion, but that absorbs energy instead of releasing it. Supernovae create these heavier-than-iron elements like uranium and gold endothermically... they’re also created by the decaying guts of neutron stars—which are essentially ginormous atomic nuclei held together by gravity instead of nuclear forces—when they collide and some of their guts are released into space.)
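
If you want to play with the curve yourself, here's a rough TypeScript sketch using ballpark binding energies per nucleon (approximate textbook values, not precise data); energy released is just the gain in total binding energy:

    // Approximate binding energy per nucleon, in MeV.
    const bePerNucleon: Record<string, number> = {
      "H-2": 1.11, "He-4": 7.07, "Fe-56": 8.79,
      "Kr-92": 8.51, "Ba-141": 8.33, "U-235": 7.59,
    };

    // Total binding energy = per-nucleon value * mass number.
    const totalBE = (nuclei: string[]) =>
      nuclei.reduce((sum, n) => sum + bePerNucleon[n] * Number(n.split("-")[1]), 0);

    // Energy released = binding energy of products minus reactants.
    const released = (reactants: string[], products: string[]) =>
      totalBE(products) - totalBE(reactants);

    console.log(released(["H-2", "H-2"], ["He-4"]));       // fusion: ~23.8 MeV out
    console.log(released(["U-235"], ["Kr-92", "Ba-141"])); // fission: ~174 MeV out
    // (the two free neutrons from fission carry no binding energy)

Past the iron/nickel peak the sign flips, and fusing further costs energy instead of releasing it.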


The s-process creates elements with atomic numbers higher than iron, and it does not rely on supernovae or neutron star dissolution.

https://en.wikipedia.org/wiki/S-process


S-process is still endothermic, though, right?

I'm not sure if endothermic is the best word. IANAP. It seems to usually be used when discussing fusion-based neutron generation. But AFAICT neutron generation, especially as it relates to the s-process, is still largely a thermal process--the greater the temperature, the more neutrons are generated and the faster the s-process evolves. (If you go back to the beginning of the universe, all nucleosynthesis represents an endothermic process, right? Though maybe such semantic games aren't particularly helpful when distinguishing nucleosynthesis processes.)


Nickel-62 is the isotope with the highest binding energy per nucleon, if memory serves. It’s not efficiently generated in stellar fusion, however, which is why iron-56 is so often cited as the most stable.


The term you are looking for is "nuclear binding energy curve". Basically, lighter isotopes release energy by fusing, and heavier isotopes release energy by splitting, but somewhere in the middle (around iron and nickel) the isotopes are the most stable. So you release energy by moving towards iron, whether from the light end or the heavy end of the periodic table.


Thermodynamics has some laws (First or Second, I can't remember) that point out that perpetual-energy or perpetual-motion machines are impossible.


I really don't know, but aren't there some caveats about those assuming that space is flat or something about the rate of expansion?


Nope. The three laws are unequivocal. The universe can only increase or maintain entropy through physical processes; it can never return to a lower-entropy state. The laws say nothing about the geometry of the universe, and it wouldn’t matter anyway.


When you dig into it more you realize the second law of thermodynamics is more of a statistical statement and doesn't have the same status as, say, the laws of quantum mechanics or relativity.

It's possible to create hypothetical situations where all of the most fundamental laws are being followed but the second law of thermodynamics is violated (for example, if there were many more 'ordered' states than 'disordered' ones). And there is some vanishingly small chance that it will be violated in our universe for a macroscopically observable length of time.

In practice you won't go wrong by treating it as absolute.
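
To put a number on "vanishingly small", take the classic toy model of N gas molecules in a box and ask how often random motion puts all of them in one half at once; a sketch in TypeScript:

    // P(all N molecules on one chosen side) = (1/2)^N, so either side = 2^(1-N).
    // Work in log10 to avoid floating-point underflow.
    const log10ProbAllOneSide = (n: number) => (1 - n) * Math.log10(2);

    for (const n of [10, 100, 6.022e23]) {
      console.log(`N = ${n}: log10(P) = ${log10ProbAllOneSide(n)}`);
    }
    // For a mole of gas, P ~ 10^(-1.8e23): allowed by the microscopic laws,
    // never observed in practice. That's all the second law is really saying.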


Actually, there is debate about the conservation of energy over cosmological length scales.

e.g. if new voxels of spacetime are created during expansion, and they contain zero-point energy... that may account for photons losing energy as they redshift over large distances.


Yes, at large scales space expansion can sort of make it so energy is not conserved: http://www.preposterousuniverse.com/blog/2010/02/22/energy-i... Depending on how you look at it, anyhow. One way or another you end up with some sort of unintuitive concept being introduced.
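
To make the photon part concrete, in the standard expanding-universe picture a photon's energy falls inversely with the scale factor a(t):

    E_\gamma(t) \propto \frac{1}{a(t)}

Double the size of the universe while the photon is in flight and it arrives with half the energy, with no frame-independent account of where that energy went.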


Fusion cannot create the potential for fission, and radioactive waste cannot be used to build atomic bombs.

I'm afraid you're quite off base here.


Look into transcendental meditation, starting with this short overview by David Lynch: https://www.youtube.com/watch?v=z2UHLMVr4vg


I've been there. Please reach out to me on twitter (@rymohr) and don't throw in the towel just yet.


I hate when I see people throwing in the towel like this.

As a two-developer company with four separate products doing nearly $500k in ARR collectively, Kumu [1] is a living example that it doesn’t have to be this way.

We rely heavily on Bash, Docker and CloudFormation.

We only use Ubuntu LTS and we lag a release behind so there are plenty of tutorials available when it comes time to upgrade.

After experimenting with Backbone, CoffeeScript, Flow, Vue and multiple Redux libraries, we’ve settled on rewriting everything in TypeScript and developing our own thin Redux abstraction (sketched below).
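
For anyone wondering what a "thin Redux abstraction" can look like, here's a hypothetical TypeScript sketch of the core idea (not our actual code): a typed store is just a reducer, the current state, and a subscriber list.

    // Hypothetical minimal Redux-style store.
    type Reducer<S, A> = (state: S, action: A) => S;

    function createStore<S, A>(reducer: Reducer<S, A>, initial: S) {
      let state = initial;
      const listeners: Array<() => void> = [];
      return {
        getState: () => state,
        dispatch(action: A) {
          state = reducer(state, action); // derive the next state
          listeners.forEach((l) => l());  // notify subscribers
        },
        subscribe(l: () => void) {
          listeners.push(l);
          return () => { listeners.splice(listeners.indexOf(l), 1); }; // unsubscribe
        },
      };
    }

    // Usage: a one-number counter store.
    const store = createStore((n: number, a: { type: "inc" | "dec" }) =>
      a.type === "inc" ? n + 1 : n - 1, 0);
    store.subscribe(() => console.log(store.getState()));
    store.dispatch({ type: "inc" }); // logs 1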

Embrace new tech that makes developers’ lives easier while hopefully making things more secure too.

I get it if that’s not possible in large enterprise companies, but please don’t throw all software under the bus. Software is and will always be fun and there are still fun companies to work for if you’re willing to take a little risk.

[1]: https://kumu.io


I'm a bad writer. I'm not trying to throw in the towel, say software can't be fun, or that useful products can't be built within the status quo.

I can look at the product I develop and rattle off an impressive list of capabilities, talk about how well it is designed, say why it's the best product for the job on the market and talk about successes in the field. But I can also look at it and see a laundry list of design flaws, architectural limitations and unrealized enhancements that may never get time on the schedule.

The second side of the fence exists, and security people inherently spend a lot of time there. Spend enough time there and you see how the system that is the sum of all software is a mess. I'm not even saying it's a bad thing, just that it's the inescapable reality.

Your architecture is like swimming in a school of fish. By moving with the group you benefit from the successes of the group. Ubuntu, Docker, TypeScript, and delivering your product as a webapp bring a lot of benefits in feature set, maintenance, and training at a reduced cost. For the same reasons I also prefer to use as much popular off-the-shelf tooling as possible and stick to familiar designs wherever possible.

You're probably doing better than most. But even with all that benefit, the components of your system are fraught with defects and limitations that in a perfect world would already be solved problems. Both in the stack you use and your own software. And you make it work despite that. Great. That's not my point.


Your writing is just fine.

To me, most of the critical comments seem to miss the point that your frustration centers around the foolishness of trying "go as fast as possible" while at the same time "your shoelaces are tied together."


Not a bad writer at all. And I think all the problems you describe do exist. I'm just saying keep your head up and look to the bright side.

Be happy that you have a job that compensates you well, aligns with your values, is flexible to your personal needs, allows you to grow professionally, and enables you to reach for the goals you've set while you're here on earth.

And if that doesn't describe your job, please quit and come work with us or any other company that respects you as a human being first and a sysadmin second. Life's too short to do otherwise.


Docker has its own security nightmares and mis-designs -- for instance, are you using user namespaces? With LXC and LXD user namespaces are the default (and unlike Docker's design, they can use different ID mappings which blocks inter-container attacks). There are plenty of other missteps I can think of.

(I am a maintainer of runc and have contributed to Docker for a long time, as well as collaborated with the LXC folks.)


I love LXC/LXD. It's really a shame that there is little to no interest by the LXD team in supporting the OCI container format.


I assume you're referring to the OCI image format (not the runtime spec). This is because the OCI image format doesn't quite meet what they want for LXD -- in particular the whole layering design that OCI uses (which was inherited from Docker) is simply wrong for them. In fact there is a strong argument that the layering design doesn't even match what OCI really wants (it effectively embeds an optimisation for "docker build" into the storage format).

I am actually working on improving the current state of OCI images[1] by using a snapshot-based tree structure -- which will also solve many problems we have in OCI that are independent from LXD. But it is possible that the LXD folks would be more interested if the OCI format more closely matched what they need.

Though it should be noted that LXC has had an OCI template for several years now[2] (and it actually uses a tool I wrote -- umoci -- to extract the OCI image).

[1]: https://www.youtube.com/watch?v=bbTxdzbjv7I [2]: https://github.com/lxc/lxc/blob/lxc-3.2.1/templates/lxc-oci....


Yeah, I am aware of the OCI template. I was mostly thinking of discussions like this[1] where Stéphane says there are no plans to support anything like that in LXD.

I find the distinction between "system containers" and "application containers" to be a bit arbitrary from a technical perspective. What does it matter what I'm running as PID 1? I find both system containers and application containers to be useful.

It seems like LXD would see larger adoption if it were easy to run docker container images directly (built into the LXD tooling).

[1]: https://discuss.linuxcontainers.org/t/using-oci-templates-in...


Others seem to be commenting entirely from their own perspective. Try viewing this from the TSA’s or an attacker’s perspective instead and thinking about boundary layers.

In regard to the comments about disposing of potentially hazardous water bottles, a TSA agent can rightfully assume that an attacker could dump the contents of said water bottle at any point before the security checkpoint, including into the public water supply. The fact that they don’t attempt to prevent this from happening doesn’t make them full of shit; it actually saves taxpayer dollars.


> Try viewing this from the TSA’s or an attacker’s perspective instead

Those are two very different ones, though. Of the TSA I hear nothing but "theater", so I don't know if they're a for-profit, an organisation to make people feel safe, or just implementing the letter of the law; regardless, the attacker's perspective would be entirely different.

And if you read a few of the comments, most of them do view it from an attacker's point of view. Nobody is going "my precious water has to be thrown away", but rather arguments about what kind of attacks it stops and doesn't stop.


Yes, they are very separate perspectives.

I’d argue they’re still viewing it from their own perspective, though one where they are attempting to circumvent security measures as an attacker.

I’m not talking about that kind of perspective shift. I’m talking about putting on the hat of a cold-blooded killer intent on making as big a statement as possible.


Have you seen Kumu? https://kumu.io

It doesn't support standard UML diagrams but between sketch mode [1] [2] and icons [3] you may be pleasantly surprised. I personally use it to map out Kumu's own internal application structure and flows.

Full disclosure: I am the lead developer and cofounder of Kumu.

[1]: https://www.youtube.com/watch?v=wX3kbCyOamQ (Gene Bellinger's intro to Kumu's sketch mode)

[2]: https://www.youtube.com/watch?v=AFOz67co0yA (Benjamin Mosior sketching wardley maps in Kumu)

[3]: https://docs.kumu.io/guides/icons.html (Kumu docs on Font Awesome support)


If enough people are interested I'm happy to do a webinar specifically for the HN crowd and do a deep dive on the technicals. It's built on top of CouchDB, and it's a pretty neat stack overall.


Any tricks for mobile?


Use Firefox mobile and install any of the paywall bypass plugins that work on the desktop version.


There’s some work being done around renderless UI components, but I’ve yet to see a complete framework.


Renderless? What are these good for?


They're like higher-order components on steroids that handle all the heavy lifting of the logic for the component (including what _should_ be rendered) but leave the actual rendering up to you (DOM structure, style, etc.).

Here's a Vue example: https://banshee-ui.github.io/docs/guide/#why-banshee
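
And here's a framework-agnostic TypeScript sketch of the pattern (hypothetical code, not Banshee's actual API): the renderless piece owns the state and behavior, while the caller owns every bit of markup.

    // The "component" manages toggle state; rendering is delegated entirely.
    type ToggleApi = { on: boolean; toggle: () => void };

    function createToggle(render: (api: ToggleApi) => string) {
      let on = false;
      const toggle = () => { on = !on; };
      return () => render({ on, toggle }); // fresh snapshot on every render
    }

    let doToggle: (() => void) | undefined;
    const view = createToggle(({ on, toggle }) => {
      doToggle = toggle; // a real UI would wire this to a click handler
      return `<button>${on ? "On" : "Off"}</button>`;
    });
    console.log(view()); // <button>Off</button>
    doToggle!();
    console.log(view()); // <button>On</button>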

