How do you actually assess that the cause of death is coal vs cigarette smoke vs car exhaust? Is only cancer of the respiratory system taken into account? Is a percentage of stroke victims also counted? How is that percentage chosen?
These are not deaths as in murder; they are a statistical measure called excess deaths, and it is usually very model dependent. I looked a bit into the models for nuclear accidents. There the problem is that there is just no good data in the relevant range of radiation. We have data for very low doses from background radiation and for high doses from accidents, but we don't have good data for large-scale releases. Either you extrapolate the model suggested by the higher acute doses, in which case you end up with 40,000 deaths due to Chernobyl, or you fit a second-order model, in which case you end up with something on the order of 10. (Or you do what Greenpeace did and look at correlations between irradiation and cancer rates in former Soviet oblasts, where you then have some influence of Soviet industry, etc...) And that is if you want to look at catastrophic risk at all. (After all, an RBMK had several design flaws that modern reactors don't have.)
The number basically tells you that according to a specific model, which is hopefully a good approximation of our understanding of the risks, a certain percentage of the death rate should be due to that factor. However, I think these numbers don't carry much more information than any other bit of the sentence they appear in, and to really use them you have to look at the specific model and form an opinion on whether the model reflects the relevant arguments well.
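To make the model dependence concrete, here is a toy sketch (all numbers invented, not real dosimetry data) of how a linear no-threshold fit and a second-order fit extrapolate from the same sparse observations to wildly different excess-death counts at low population-wide doses:

```python
import numpy as np

# Hypothetical dose-response data: excess cancer risk observed at
# low (background) and high (acute) doses; the mid-range is unobserved.
dose = np.array([0.01, 0.05, 1.0, 2.0, 4.0])          # Sv (illustrative)
risk = np.array([0.0005, 0.0025, 0.05, 0.10, 0.20])   # excess risk (illustrative)

# Model 1: linear no-threshold (LNT) -- risk proportional to dose.
# Least-squares fit through the origin: slope = sum(x*y) / sum(x^2).
lnt_slope = np.sum(dose * risk) / np.sum(dose ** 2)

# Model 2: second-order -- risk ~ a * dose^2, negligible at low doses.
quad_coef = np.sum(dose ** 2 * risk) / np.sum(dose ** 4)

# Extrapolate both models to a population-wide low dose of 0.02 Sv
# across 10 million people: the predicted excess deaths differ by
# orders of magnitude, even though both fit the observed data.
low_dose = 0.02
population = 10_000_000
print("LNT estimate:      ", lnt_slope * low_dose * population)
print("Quadratic estimate:", quad_coef * low_dose ** 2 * population)
```

Both models agree reasonably well where data exists; the entire disagreement lives in the unobserved low-dose region, which is exactly where the population-wide exposure happens.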
I don't know how this study does it, but one can make estimates by seeing how death-rates change in places with different levels of coal particulates, particularly over time as coal plants were opened and shuttered.
Alternatively there are studies that correlate various pollutants with respiratory issues, and then compare that with the pollution emitted by coal plants.
Ideally one would make many such estimates, yielding a range that one could have high confidence is close to the truth, similar to how we use multiple methods of dating fossils, and then know something is wrong if there is a large disparity.
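A minimal sketch of the first approach (all regions and numbers made up for illustration): regress death rates against particulate levels across places, and read the slope as excess deaths per unit of pollution.

```python
import numpy as np

# Hypothetical panel: annual PM2.5 (ug/m^3) and all-cause death rate
# (per 100k) for a handful of regions; every number here is invented.
pm25 = np.array([8.0, 12.0, 15.0, 22.0, 30.0, 35.0])
death_rate = np.array([820.0, 845.0, 860.0, 905.0, 950.0, 980.0])

# Simple linear fit: extra deaths per 100k per ug/m^3 of PM2.5.
slope, intercept = np.polyfit(pm25, death_rate, 1)

# Attribute excess deaths for a region whose coal plants raise
# local PM2.5 by 10 ug/m^3.
excess_per_100k = slope * 10
print(f"{slope:.1f} extra deaths per 100k per ug/m^3")
print(f"~{excess_per_100k:.0f} excess deaths per 100k from a 10-unit increase")
```

Real studies have to control for confounders (income, smoking rates, age structure), which is where the different models start to diverge.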
In what field of programming have you implemented that? I'm curious: is it kernel development, some DB design? Not trolling, just curious where a use case like this would come in handy, and why ask about it if it's not directly related to what you are doing.
Not to mention how hard it is to standardize. It doesn't do any good if the data is not guaranteed to have been measured in somewhat similar conditions. Thinking about my cholesterol: I've seen so much fluctuation in my own blood results between labs. They blame it on the "bad chemical reagents". And that is a somewhat standardized numerical value; what about a subjective symptom:
"I have a sand-in-the-eye sensation". For ML this must be translated to a numerical value. Does it feel like a 1 or a 1.5 out of 10, and how can we make sure we have the same understanding?
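One common mitigation (sketched below with invented ratings) is to normalize each rater against their own history, so "how unusual is this for this person" replaces the raw number:

```python
import statistics

# Hypothetical ratings: two patients describe the same sensation on a
# 0-10 scale, but use the scale very differently.
ratings = {
    "patient_a": [1.0, 1.5, 2.0, 1.5],   # a conservative rater
    "patient_b": [6.0, 7.0, 8.0, 7.0],   # an expansive rater
}

def normalize(history, new_value):
    # Z-score a new rating against that rater's own past ratings.
    mu = statistics.mean(history)
    sd = statistics.stdev(history) or 1.0  # guard against zero spread
    return (new_value - mu) / sd

# A raw 2.0 from patient_a and a raw 8.0 from patient_b both map to
# a similar "above my usual" signal despite very different raw values.
print(normalize(ratings["patient_a"], 2.0))
print(normalize(ratings["patient_b"], 8.0))
```

This doesn't solve the underlying subjectivity, but it at least makes the numbers comparable across people with different internal scales.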
Rather, what the hell happened to the guy who had tape on his laptop camera and microphone? Suddenly he's got an always-on listening device. How does one go from one to the other?
Securing a local network is orders of magnitude easier than a laptop which accesses the internet constantly.
Stick the cameras on a VLAN only accessible to the (local) servers doing the face recognition, and stick those servers on a VLAN that has no direct connection to the internet.
Compare this to a laptop which is connected to the internet with multiple attack vectors (browser, email client, etc...)
Does he have some custom-built network router that he trusts? Putting tape over your camera means you know enough not to trust the laptop hardware, nor the OS or browsers, like you said.
But he says: "We use .. Sonos system with Spotify for music, a Samsung TV, a Nest cam for Max". So all of his appliances do get outside.
I did a double-take on Nest cam. So he's streaming video from his house over the internet with closed-source hardware and software.
I didn't read the part about Nest, to be fair, but with some basic network design you can easily segregate networks and massively reduce your attack surface, even if you're using internet-connected devices (separate VLANs, HTTP proxies with ACLs, no inter-device communication where not needed).
The difference between a switch and a laptop is that your switch isn't running a browser with 0-days found regularly, executing malicious JS payloads, or opening phishing emails.
To exploit a switch you generally need access to the management interface, something anyone who has any experience with networking does not put on the same network (virtual or physical) as laptops, iPads, televisions, or internet connected cameras.
I agree with what you are saying; however, tape over the camera means you don't trust the OS or the hardware manufacturer to let a console command like modprobe actually do its thing and disable the web camera.
Sounds like he knows where the data being recorded is always going and what's being done with it. Pretty different from the chance that a camera/mic on a laptop might be randomly sending data to some third party.
I lost it when the guy said don't use an IDE. What's the point of having a static type system when I can't use an IDE to see where a specific method is called?
He says he works with a bunch of different languages and projects, but then prefers Vim... It's great that Vim is better for older, practiced vi programmers (or the opinionated middle-aged), but the A/B testing is definitive for new developers: IDEs are more efficient for most languages (especially multi-language projects). Reducing the argument to "it shouldn't be this way" is luddite.
Not to mention the millions of features that I can't even begin to enumerate that I use on a day-to-day that I wouldn't have the time in my life to master without an integrated uniform method of interaction. Two of the best examples are Eclipse's build system and hot code patching in debug mode.
The build system lets me forget entirely about "how am I going to write my tup/make/cmake/anything file" as it just works when I click go.
The hotpatching debugger support is astounding. Nothing comes close to such a seamless integration as being able to change my code and click "go" and have my new method ready for action to see if it works on the next run.
You also get free autocompletion for everything. Oh god, the quality of life autocompletion gives you. I miss it every time I need to touch another language.
As someone who bounced between Android development and webdev, and now does webdev full time, I can say my two biggest complaints were always the build system and the tooling surrounding the ecosystem. Luckily things have changed a lot in the web development world in the last two years. TypeScript and Visual Studio Code deliver the autocomplete that I've wanted in web development for a long time and bring static typing to JavaScript, as well as handling the transpiling of new JavaScript features to older syntax, all directly in the editor.
Webpack's hot module reloading is also incredibly useful if you take the time to set it up. You don't even need to hit 'go'. When you save a file, it incrementally compiles that module and swaps it out on the client. This process is normally faster than me tabbing over to the other window, and if you have your project set up correctly, it preserves the app state if it can.
> The build system lets me forget entirely about "how am I going to write my tup/make/cmake/anything file" as it just works when I click go.
This is a bug, not a feature. Or rather, it is extremely easy for this to result in subtle "works on my machine" scenarios.
1. If you just "click & go", you are forsaking the ability to have automated builds.
2. If you implement the automated build independently from the IDE, you will end up in a setting where the developer makes changes to the IDE settings that are not reflected in the automated build. On a good day, this will result in broken builds. On a bad day, you will have bugs that are impossible to reproduce because the build QA uses is not the same as the build Development uses, even if they both come from the same VCS. On an ugly day, you ship to your customers a binary version of the product that neither QA nor Development have ever tested before.
3. Not to mention that if you have no version control of the IDE settings, every developer has a slightly different build of the project, one that mostly works, until it does not. See the problems in #2 and extrapolate.
> 1. If you just "click & go", you are forsaking the ability to have automated builds.
This is incorrect. Eclipse will generate Ant files that are usable from TeamCity.
> 2. If you implement the automated build independently from the IDE, you will end up in a setting where the developer makes changes to the IDE settings that are not reflected in the automated build. On a good day, this will result in broken builds. On a bad day, you will have bugs that are impossible to reproduce because the build QA uses is not the same as the build Development uses, even if they both come from the same VCS. On an ugly day, you ship to your customers a binary version of the product that neither QA nor Development have ever tested before.
Again, just keep your Ant file in your source tree and export it after every change. This requires far less maintenance than Make or CMake.
> 3. Not to mention that if you have no version control of the IDE settings, every developer has a slightly different build of the project, one that mostly works, until it does not. See the problems in #2 and extrapolate.
The build settings are stored in the project files. When you download the project and import it into Eclipse, your build environment is kept in line.
In larger projects I use a CompanyLibraries project; inside it I have all the libraries, sources, and documents neatly organized along with every binary involved with each library (so if it needs to import a .so or .dll, it's there), plus the IDE bootstrap process:
1. Go into your settings and import *.userlibraries
2. Go to your settings and import cleanup.xml
3. Go to your settings and import codetemplates.xml
4. Go to your settings and import format.xml
You don't need to import the generated Ant build system. You can right-click it and run the build & tests from there.
After you do this you have complete autocompletion for every library we use. If needed you can ctrl-click to view the source of a library, and it will work on your system whether you're on Windows or Linux (because of the way I set up my .userlibraries). These are, from what I've seen, some of the "little known" Eclipse goodies for collaboration.
Could it be better? Yes. I'd like to eventually write an Eclipse plugin that will automatically import these on a project-by-project basis, then just make a git submodule in each of my projects that includes the global configs, to make sure I still only have one copy.
His point, as far as I can tell, is that Java is somewhat verbose and you really should use an IDE. There are languages where going without an IDE is feasible. The need for an IDE is a sign of the cognitive and wrist tax Java imposes.
Funny thing is that I use Scaleway too, and did not have any problems registering a credit card with them. Since online.net and Scaleway seem to be the same or related somehow, maybe they should copy-paste the CC validation code from them, or allow login with the same user.