And from a manufacturing/engineering point of view (and I am neither), I guess it's really hard to solve this problem.
You have some unpredictable 1/100 or 1/1000 defect that occurs long after production and sale.
Just how do you go about isolating the cause and testing a solution? Make 5 changes, put each through a production batch of 1000 units, and then do accelerated testing? If 5 units fail from one batch and 2 from the rest, is there even enough statistical power to confirm that you've found a solution? And you've just burned through 5000 units.
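To put a number on the statistical-power worry, here's a minimal Monte Carlo sketch. The 0.5% and 0.2% defect rates are made-up figures for illustration, not from any real production run:

    # Rough power check: if a fix really cut the defect rate from 0.5% to
    # 0.2% (hypothetical numbers), how often would comparing two 1000-unit
    # batches with Fisher's exact test actually detect it?
    import numpy as np
    from scipy.stats import fisher_exact

    rng = np.random.default_rng(0)
    n = 1000                       # units per batch
    p_old, p_new = 0.005, 0.002    # assumed defect rates, before/after fix
    trials, hits = 2000, 0

    for _ in range(trials):
        old = rng.binomial(n, p_old)   # failures in the unchanged batch
        new = rng.binomial(n, p_new)   # failures in the "fixed" batch
        _, p = fisher_exact([[old, n - old], [new, n - new]])
        hits += p < 0.05

    # With expected counts of ~5 vs ~2 failures, the estimated power lands
    # around 10%: most real improvements would go undetected.
    print(f"estimated power: {hits / trials:.0%}")

Which backs up the point: at these failure counts, a single batch comparison mostly can't tell a real fix from noise.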
Sounds like fun trying to solve this kind of problem.
There are PCB design/layout rules that deal with BGAs. I'm not saying they're a 100% guarantee, but (much like EMC/EMI design rules) there are a lot of solid pointers that remove 90% of the issues. The remaining 10% are (again, much like EMC/EMI) subject to the layout engineer's level of experience.
Currently on mobile, so I can't link a PDF right now, but if you Google "BGA PCB layout guidelines" you'll get a ton of documents.
Lastly: PCBs go through several optimization cycles, some of which occur after release for high-volume stuff. There are always revision numbers on the silkscreen; sometimes they catch an issue like this after a few thousand devices are in the wild and do an update.
In production you would profile the boards: take a board and run it through the oven with some thermocouples attached, then set the temperatures of the pre-heat, heat, and cool-down zones. This heats the solder to its melting point without putting too much stress on the components.
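For flavor, here's a toy sketch of the kind of sanity check a thermal profile gets. All the limits below are illustrative placeholders, not from any particular solder paste datasheet:

    # Toy reflow-profile check, assuming one thermocouple log sampled once
    # per second. Limits are illustrative placeholders only.
    def check_profile(temps_c, ramp_max=3.0, liquidus=217.0,
                      tal_range=(45, 90), peak_range=(235, 250)):
        """temps_c: board temperature in C, one sample per second."""
        ramps = [b - a for a, b in zip(temps_c, temps_c[1:])]
        time_above_liquidus = sum(t > liquidus for t in temps_c)  # seconds
        peak = max(temps_c)
        problems = []
        if max(abs(r) for r in ramps) > ramp_max:
            problems.append("ramp rate too steep: thermal stress on parts")
        if not tal_range[0] <= time_above_liquidus <= tal_range[1]:
            problems.append(f"{time_above_liquidus}s above liquidus is out of range")
        if not peak_range[0] <= peak <= peak_range[1]:
            problems.append(f"peak {peak:.0f}C out of range")
        return problems or ["profile looks OK"]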
This is from memory from a long time ago using a teeny tiny little pick and place machine that did a few thousand components per hour.
BGAs were always horrible to do.
"Design for production" is really very important and it's hard to find much information about it. Some simple little things can make the difference between an operator having to plonk a component on the board by hand every time just before it goes into the oven or having the machine do it. (Again, from memory).
Anecdotally confirming. When shopping around for the past 10 years, I've usually bought used PCs and used parts, both desktops and laptops, but after deciding to try out a Mac in early 2012, buying used just didn't seem to be worth the discount.
It doesn't seem to be so bad right now, but part of this may be due to my insistence on getting at least 4GB of RAM for a MacBook Air, which was hard to find on the used market.
Adding to the anecdotes[1]: of the last 5 computers I have owned, 0/2 of the Macs are still running, compared to 2/3 of the non-Macs. Because of this, I refuse to ever purchase a used Mac; the discount is not worth the risk, especially considering that it's nearly impossible to repair them on your own.
[1] Random idea for a website/service: computer reliability statistics by model/year. Could be really useful for people in the market for used machines. Not sure how/where you would get the data, though. Forums are overrun with anecdotes, but actual data points are few and far between.
Macs are quite variable in terms of good models and bad models. This seems to apply to both laptops and desktops, unfortunately. I also haven't heard of a website or service that has the reliability statistics you want.
You didn't mention AppleCare. It's transferable and lasts 3 years, so if you buy a 1-year-old machine from a hipster who wants to upgrade, you'll be covered for the remaining time.
Yes, many people don't know about this law. The shops know this and take advantage of it.
Of course there is also the manufacturer's warranty, which lasts for as long as the manufacturer decides.
Why have that law, and then make it legal for the shops to not acknowledge it up front? Why does the law have to be some weird kind of game where only people in the know benefit from it?
I recall, some years ago (7?), some other Mac having a characteristic issue that could almost always be fixed with a little time in the oven.
Having a kitchen with an oven and living in a student area where Apple products were popular, I sensed a business opportunity.
I put up an ad on some local classifieds with a lowball price for these units (though not much different from what the broken ones would sell for on eBay, minus the hassle). I quickly learned that, after investing in a premium product, people would rather hold onto their brick than turn it into at least some cash. I never even got a chance to try out the procedure; people would counter-offer with ridiculous prices for what is, for them, a brick.
The sunk cost fallacy at work.
edit: I think I even offered pickup and some data recovery/security as part of the offer; no takers.
Yep, this is the kind of thing I want too. A few scenarios (assuming that one person is living in a particular place):
I want the thermostat (with knowledge of outside temperature and winds) to know that my HVAC can increase the temperature by, say, 1C every 30 minutes.
Based on what it knows about my location and biometrics, it can drop to the minimum needed to prevent the pipes from freezing (or other problems) when I'm gone, and know when my classes/work end, plus my commute time, so it turns the system on at just the right moment to be pleasant for my arrival. Maybe it could even detect that I'm in, or will be in, a traffic jam and hold off accordingly.
Integrate it with my alarm clock to cool the bedroom by a few C overnight, but warm the place up so I'm not chilled when I get out from under the blankets.
Detect that I've gotten out of bed in the middle of the night, and start blasting heat into the bathroom.
The dozens of hours of programming involved in replicating what Nest does to address his desires would not be "trivial and cheap". Your suggested substitute does not learn the heating/cooling efficiency curves of your HVAC system, does not automatically change the temperature when you go to sleep and before you wake on an adaptive schedule just by (re-)training it a few times, and does not turn on the heat when it detects motion. Even manual scheduling doesn't replicate what Nest does for you there; since Nest learns the efficiency of your system, it can calculate the number of minutes it must run to produce the desired temperature change, and turn on the system at the exact right time before you wake up to reach that temperature on time.
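A minimal sketch of that lead-time idea, assuming the learned efficiency reduces to a single constant heating rate (real systems fit a curve that also depends on outdoor conditions; the numbers below are made up):

    # How early should the heat kick on so the house hits the target
    # temperature right at the alarm? Assumes a learned, constant heating
    # rate in C per minute -- a deliberate simplification.
    def lead_time_minutes(current_c, target_c, learned_rate_c_per_min):
        deficit = target_c - current_c
        if deficit <= 0:
            return 0.0                 # already at or above target
        return deficit / learned_rate_c_per_min

    # e.g. 4C to make up at a learned 0.07 C/min -> start ~57 min early
    print(f"{lead_time_minutes(17.0, 21.0, 0.07):.0f} minutes")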
And Nest can do all of that out of the box, and it doesn't cost $250 new on eBay, where you're quoting the rest of your prices from; they're $185-200 there BIN, and regularly available on Amazon from the Amazon Warehouse Deals seller, which makes your Z-Wave stuff the "ouch" purchase. Whatever your opinion on having the API hosted externally, the product itself has some smart stuff built in. It looks pretty snazzy, too.
For the record, I have a VeraLite and some z-wave stuff too. I'm not a huge fan. I don't care that Google hosts the API for my thermostat. I do care that if I'm standing outside my front door and want to open the deadbolt over z-wave, it takes 15-30 seconds, with a 50% success rate of the command getting through at all.
I was partially being facetious. This article is about a Rube Goldberg-esque API and this site is about hackers. I'm just pointing out that it can be done cheaply with better flexibility and precision by hackers like me. I'm one of those people who find the Nest algorithm frustrating and want to tinker with it (with more properly placed sensors etc). It's not really rocket science for people who understand the basics of control theory.
BTW, what does a deadbolt have to do with your Nest/HVAC system? Do you want to depend on Google to open/close your locks as well? It's _not_ normal for a Z-Wave deadbolt to take 15-30 seconds to open remotely (it's immediate if you use the lock keypad), with a 50% success rate: something (probably the security C/R exchanges) is timing out; the default timeout is 20 seconds. For me it was 100% (knocking on wood :), with a few seconds of latency, for the times I tried executing a night-arrival scene from my phone in the car, which turned on the path/door lights and unlocked the door, because I'd have groceries/takeout in both hands.
I'd tend to agree that Z-Wave devices plus a Vera* controller are probably too much for non-tinkerers to handle in their current state, because average people can't even handle a wifi printer :). There are people who use the Nest as a basic thermostat, i.e., for the looks only.
The main motivation for me to squawk on this topic is that I don't like the _trend_ of external services (nothing against Google per se :) owning my data and now control of my home.
We (hackers/tinkerers) can do better and demand a choice.
> something (probably the security C/R exchanges) is timing out
Yes, something's definitely timing out. If I watch the Vera UI after sending the command, it times out and retries several times before giving up. Other times it works, but usually only after at least one retry. The hub isn't too far from the door, and there are no walls in between. The unreliability of z-wave stuff is my biggest problem with it.
The Nest responds pretty much instantly to commands despite them having to go out to the internet then back. The unofficial APIs provide access to everything the device and its web/mobile apps can do. Same with my Belkin WeMo switches and sensors -- you're probably not a fan of WeMo either, but I'm a fan of them responding instantly and speaking UPnP.
The divide between these products isn't really about "tinkerers vs non-tinkerers". If it were, Nest/WeMo/etc would win over the Monoprice z-wave stuff any day. They're much easier to hack with (no extra hardware required, HTTP APIs, open source libraries readily available) and typically do a lot more than the generic single-function devices.
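For example, reading a WeMo switch's state is one local SOAP-over-HTTP request, the same interface community libraries like ouimeaux use. (Assumptions flagged: the IP below is hypothetical, and the control port/path vary by firmware, so verify against your device's setup.xml.)

    # Query a WeMo switch's on/off state over its local UPnP/SOAP control
    # interface. 192.168.1.50 is a hypothetical address; the port (often
    # 49153) and path should be checked against your device's setup.xml.
    import requests

    BODY = """<?xml version="1.0" encoding="utf-8"?>
    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
                s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <s:Body><u:GetBinaryState xmlns:u="urn:Belkin:service:basicevent:1"/></s:Body>
    </s:Envelope>"""

    resp = requests.post(
        "http://192.168.1.50:49153/upnp/control/basicevent1",
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPACTION": '"urn:Belkin:service:basicevent:1#GetBinaryState"',
        },
        data=BODY,
        timeout=5,
    )
    print(resp.text)   # response contains <BinaryState>0</BinaryState> or 1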
The divide you've set up is around privacy and external dependency. The tradeoffs there aren't the same, and not everyone's going to agree with you on that either, even hackers with full knowledge. I don't necessarily think self-hosted and self-supported APIs are a better future. That's a future that would limit a lot of the cool stuff you can do with these devices to us, instead of making it available to everyone. Most of the coolest stuff in our homes today already relies on entrusting a certain amount of control and information to 3rd parties, from your electric company, to your ISP, to the makers of all the set-top boxes connected to your TV, to the phone in your pocket. All of these things involve giving up tons of data and control of your home.
I have no problem with "smart thing" makers hosting APIs so long as they remain trustworthy with that data, and reliable, which is not a huge ask. I'm a lot more confident Google can run Nest's API reliably and securely than Vera staying in business and keeping its P.O.S. MiOS functioning.
Well, locks are special beasts due to the security command class, which requires extra challenge/response round trips. My VeraLite-G (the cheap $135 one) is more than 50 ft and a couple of drywalls away from the door, and the overall latency is consistently around 1.4 seconds as measured, while sending commands to the thermostats takes less than 0.2 seconds. It feels instant enough for me. Commands to the lock are routed through my thermostats and an appliance module. I'm actually pleasantly surprised at how well the Z-wave mesh network works. I need two wifi APs to cover my house; if it weren't for the second AP, the wifi signal at the door would be too weak to function reliably.
OTOH, I too am somewhat disappointed in the quality, security, and development of MiOS, but we as hackers can at least do something about it.
I see this as an ongoing model/strategy for grocery stores and other retailers: taking the Amazon approach, where they're just distributors and you pay for the shelf space plus a fee per transaction.
The manufacturer/distributor sets the price, and the grocer couldn't care less whether any of it sells at all, as long as they find a way to keep getting foot traffic.
I'll bet they never attribute to YouTube the value that comes from being able to better target ads off YouTube based on your YouTube views. YouTube might lose money on paper, but generate more total value for Google than its direct costs.
Google also gets a whole pile of audio/video to run speech-to-text and other analysis on (before any additional compression) that no other firm gets a chance at.