I am very much in the same position right now. My dev team has introduced mandatory code reviews for every change, and I can see their output plummeting. Most reviews also seem to focus on syntax and formatting; no one actually runs the code or checks whether the logic makes sense.
I think it's easy to add processes with the good intention of "making the code more robust and clean", but I have never heard anyone discuss what that process costs the team in efficiency.
That's quite interesting; I wonder how they were fraudulent, as Bitcoin transactions don't have chargebacks etc. The article also mentions that the price of the game would fluctuate, which seems to indicate they actually priced the games in Bitcoin. Normally the price would be in EUR, USD etc. and conversion to Bitcoin would happen at checkout. I guess these were the early days though =)
"Remote workers were not only more productive they were more satisfied with remote work than they had been with working in a traditional office"
I think a citation is needed here; there are multiple studies with very varied outcomes. This one, for instance, finds that productivity drops 18% when working from home: https://www.nber.org/papers/w31515
At best he is cherry-picking his studies and ignoring the contrarian ones; at worst he is blindly pushing his own interests and views.
I can't disclose details, but I know that for my office raw [short-term] productivity was about 20% up for WFH (the situation differs in almost every aspect from that in the study you linked).
That's not statistically significant enough to make a general statement, but it confirms to me that, at a minimum, there is high variability.
Data entry is low-skill, high-churn; in person I'd expect managers to apply a lot of pressure, because they can drive people away within six months (it might even be preferable for employment-law reasons) and still have plenty of people needing jobs to pick up the work.
I'm not going to research a paper here, only to note my direct experience makes the broad applicability of your single study highly questionable.
It's one study, I'm sure there are now more longitudinal indicative studies for most regions that anyone here can find with ease.
You (I assume) were probably right to suggest the OP might be cherry picking, but you didn't do much better IMO (nor did I, I suppose).
Amplitude is a great product. We tested a lot of similar products and Amplitude was by far the best. The only negative I have found is the pricing: it's free up to quite a generous level, to be honest, but then it suddenly becomes really expensive.
I agree, it quickly becomes exorbitant. I used to manage a product that recorded data points in the field (millions of data points a day), and in my opinion it cost us too much money compared to what we actually learned from Amplitude.
I wish they offered an event-volume pricing option rather than just MTU (monthly tracked users); same with Mix+Segment. It's very inflexible if the site you run has lots of infrequent users.
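To illustrate the complaint (with entirely invented prices and volumes, not Amplitude's or anyone's actual rates), here's why MTU pricing penalizes a site with many infrequent users:

```python
# Hypothetical comparison of MTU (monthly tracked user) pricing vs
# event-volume pricing. All prices and volumes below are invented
# for illustration; they are not any vendor's actual rates.

def mtu_cost(monthly_tracked_users: int, price_per_user: float) -> float:
    """Cost scales with distinct users, however little each one does."""
    return monthly_tracked_users * price_per_user

def event_cost(total_events: int, price_per_1k_events: float) -> float:
    """Cost scales with the event volume you actually send."""
    return total_events / 1000 * price_per_1k_events

# 500k visitors who each fire only ~4 events a month:
users = 500_000
events = users * 4

print(round(mtu_cost(users, 0.02)))     # MTU pricing: ~10000 per month
print(round(event_cost(events, 0.25)))  # event pricing: ~500 per month
```

Same traffic, a 20x difference, purely because most of those users barely generate any events.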
C# is just as good from a technical standpoint, I think, but there is a significant financial cost attached from licenses. Large server installations with dev, stage, prod and load-test environments tend to get expensive with .NET.
If you need servers (which is only for web projects) then that costs money regardless of language. C#/.NET doesn’t have any license costs. It’s “apt install dotnet6” on Ubuntu. That’s it.
Isn’t Microsoft pretty bad at committing to.. anything? Sure, they do make Windows XP executables run even today, but they have revamped the C# libs/platform so many times I have lost count.
Sure. Backwards compatibility on Windows is unmatched on other platforms.
>but they have revamped C# libs/platform so many times I have lost count.
What exactly do you mean? There was a big migration toward .NET Core, but full .NET Framework will be supported forever. They even listened to customers regarding WCF and helped a lot with CoreWCF.
Maybe it’s just the name (.NET Core to .NET), but if I’m not mistaken ASP.NET was pretty much rewritten from scratch? Then there is the GUI churn, where we get at least one new GUI framework per Windows version (and if you are good enough you can open all kinds of them on the same system!). I mean things like that.
Oh, and on top of that, every major dependency is a copy of Java’s version, and only the Microsoft version will be even remotely maintained. But please do correct me if my flame-war-level knowledge of the topic is not accurate.
The energy is not used to handle the transactions, the energy is used to secure the network.
> Instead of being used for something useful
Bitcoin uses less energy than YouTube. Whether you think having a decentralized, global monetary system that gives everyone the opportunity to own sound money the government cannot take away is more or less useful than YouTube is, of course, something everyone is entitled to have an opinion on.
It might be that you are correct, it is too early to tell, but you are not the final judge of what is useful in the world, so this quote is just your personal opinion and nothing more.
I believe this is totally wrong - I think Bitcoin uses significantly more than YouTube. Where did you get your numbers?
The 2020 Google Environmental Report [1] lists the total energy consumption of all Google/Alphabet data centers as ~12.2 TWh/year in 2019 and growing at about 2 TWh/year. (See page 32.) So in 2022 the approximate usage for all Google/Alphabet properties, not just YouTube, would be about 18.2 TWh.
Meanwhile, Statista estimates Bitcoin as consuming about 177 TWh/year [2].
So this means Bitcoin consumes about 10x as much as not just YouTube but all Google/Alphabet data centers combined!
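A quick sanity check of that arithmetic, using only the figures already cited above:

```python
# Figures from the comment above: Google/Alphabet data centers vs Bitcoin.
google_2019_twh = 12.2      # 2020 Google Environmental Report, 2019 figure
growth_twh_per_year = 2.0   # approximate year-over-year growth
google_2022_twh = round(google_2019_twh + 3 * growth_twh_per_year, 1)

bitcoin_twh = 177.0         # Statista estimate

print(google_2022_twh)                          # 18.2
print(round(bitcoin_twh / google_2022_twh, 1))  # 9.7, i.e. roughly 10x
```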
There are some urban legends about YouTube using much more energy than it does; some of those urban legends are refuted here: [3]
Whether Bitcoin uses less energy today than YouTube is immaterial. (As is whether Bitcoin counts as "sound money", which I'll leave to the side.) What's important is the energy use per transaction, the total of which -- if scaled up to match what's handled by traditional banking -- would be absolutely staggering.
My lifestyle of keeping my high-end PC running all the time, cranking the AC down to 68, turning all the lights on, and so on, would of course use less energy than YouTube. But if everyone behaved as I do it'd be a bad thing (don't tell Kant).
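To put rough numbers on the per-transaction argument (the annual energy figure is the Statista estimate cited earlier in the thread; both transaction counts are my own order-of-magnitude assumptions):

```python
# Order-of-magnitude sketch; both transaction counts are assumptions,
# not figures from the thread.
bitcoin_twh = 177.0            # annual energy estimate cited above
btc_tx_per_year = 100_000_000  # assume ~300k on-chain tx/day

kwh_per_tx = bitcoin_twh * 1e9 / btc_tx_per_year
print(round(kwh_per_tx))       # ~1770 kWh per on-chain transaction

# If that per-transaction cost scaled linearly to a traditional-banking
# volume (assume ~700 billion payments/year worldwide):
bank_tx_per_year = 700e9
print(round(kwh_per_tx * bank_tx_per_year / 1e9))  # implied TWh/year
```

Under those (linear-scaling) assumptions the implied total is over a million TWh/year, far beyond world electricity production; whether linear scaling is a fair assumption is exactly what is disputed.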
"If scaled up" incorrectly presumes that the costs are somehow linearly bound to the number of transactions. They are not, so that whole line of reasoning is flawed.
> But then we've lost standardization which is the whole point of the error codes to begin with.
Returning 200 and then using the response body to denote a missing resource is no different. So you have to choose: either 404 can mean both an invalid path and a missing employee, or 200 can mean both valid data and a missing employee. Personally I would prefer the former, as 200/OK then always indicates success.
This is getting into semantics, but IMHO a 200 OK with an empty body is the most correct response in this scenario. Everything worked, so you get 200 OK, and the most accurate representation of a resource that you know doesn't exist is an empty body.
404 Not Found is not exactly correct. I'd argue that the server found a representation of the object, with that representation being "it does not exist".
404 is consistent with the way an HTTP server works when what you build is a static website rather than a dynamic app.
If you request a file `/public/pdfs/100.pdf` and it does not exist, what does the server respond? 404.
What does the server respond if you try `/public/dpdfs/1.pdf`? Still 404, as that path does not exist on local storage either.
What is the difference for a client whether 100.pdf is an actual file or a data stream served by a web framework? There should be no difference.
Making a dynamic app behave the same way as a static one helps a lot when integrating with other services (e.g. caching, observability, ...).
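A minimal sketch of this static-file-like behavior, using toy in-memory dicts in place of real routing and storage:

```python
# Sketch: a dynamic handler that mirrors a static file server.
# EMPLOYEES / DEVICES are toy stand-ins for real storage.

EMPLOYEES = {1: "Alice"}
DEVICES = {(1, 1): {"id": 1, "owner": 1}}  # keyed by (employee_id, device_id)

def get_device(employee_id: int, device_id: int):
    """Return (status, body); any path that maps to nothing is a plain 404."""
    if employee_id not in EMPLOYEES:
        return 404, None       # like requesting /public/dpdfs/1.pdf
    device = DEVICES.get((employee_id, device_id))
    if device is None:
        return 404, None       # like requesting /public/pdfs/100.pdf
    return 200, device

print(get_device(1, 1))        # (200, {'id': 1, 'owner': 1})
print(get_device(100, 1))      # (404, None)
```

Caches and monitoring treat these responses exactly as they would a static site's 404s, which is the integration benefit being argued for here.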
The difference is what you want to tell the client:
>The 200 (OK) status code indicates that the request has succeeded. The payload sent in a 200 response depends on the request method. For the methods defined by this specification, the intended meaning of the payload can be summarized as:
>GET a representation of the target resource;
>The 404 (Not Found) status code indicates that the origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
If you want to give the client the representation of the target resource (i.e. that it doesn't exist), then send 200 and a body indicating it doesn't exist.
If you want to tell the client you couldn't find a representation for the target resource, then send 404.
>`GET /api/v1/employees/100/devices/1` where the employee with ID 100 does not exist but there is a device with the ID 1 owned by some other employee?
So you're asking for the device with ID 1 owned by employee 100. The answer is that the device exists but is not owned by employee 100, because there's no employee 100. So return 200 plus however you want to represent "the device exists but is not owned by employee 100 because there's no employee 100".
>`GET /api/v1/employees/100/devices/1000` where both the employee with ID 100 and the device with ID 1000 do not exist?
Same as above, but substituting "because employee 100 and device 1000 don't exist" as appropriate.
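A sketch of that 200-always convention (toy data; the response field names are invented for illustration):

```python
# Sketch of returning 200 with a body that describes why the nested
# resource is missing. Field names here are invented for illustration.

EMPLOYEES = {1: "Alice"}
DEVICES = {1: {"id": 1, "owner": 1}}

def get_employee_device(employee_id: int, device_id: int):
    """Always 200; the body is the 'representation' of what was found."""
    device = DEVICES.get(device_id)
    if employee_id not in EMPLOYEES:
        return 200, {"found": False,
                     "reason": f"no employee {employee_id}",
                     "device_exists": device is not None}
    if device is None or device["owner"] != employee_id:
        return 200, {"found": False,
                     "reason": f"device {device_id} not owned by "
                               f"employee {employee_id}"}
    return 200, {"found": True, "device": device}

# Employee 100 doesn't exist, but device 1 does:
print(get_employee_device(100, 1))
```

Note that this style can carry the extra nuance ("the device exists, just not under this employee") that a bare 404 cannot; the cost is that clients must parse the body to learn the outcome.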
That’s not a “representation of the resource”. That’s a fact about the state of the universe, to wit, that it contains no such resource. Which is what is communicated by a 404.
> Again, 404 does not mean that the universe contains no such resource.
It means that the server did not find a current representation of the resource, and that the server accepts responsibility for providing an authoritative answer as to whether it exists (if the latter is not the case, the most correct response is 421). The other possibility is the combination of being unwilling to provide a default representation, having no representation consistent with the client's Accept header, and preferring not to distinguish that case from non-existence with a 406; like the situations 421 applies to, that's an edge case. Aside from those, the reason for a resource not being found is overwhelmingly that it does not, in fact, exist.
It is true that there are some other things that a 404 might mean, but “does not exist” is not only within the space of things covered by “Not Found”, it is by far the most common reason for it.