That all depends on what one is actually looking to find in an applicant. If the goal of the question is to actually determine if the interviewee has the ability to research a problem, think it through, and implement the solution, then the "homework" approach is definitely superior.
However, if the thrust of the question is to see the applicant think on his/her feet and possibly apply some knowledge from a college class on algorithms (I never covered a problem in this depth in my undergrad comp sci education, but it's definitely standard to discuss computing order statistics), presenting it in the interview is best. The idea of the solution as presented here could easily be sketched out in the interview, and a skilled interviewer could lead the applicant through fleshing out some of the details if desired.
That said, I think the latter is more often the goal of a final round interview.
Often happens at all-you-can-eat sushi places; personally I know of three or four in Montreal/Toronto where this is done.
They're the kind of places where you order small dishes off the menu, and you can just keep ordering them until you're done. No limit on the number you can order simultaneously, but if you call for the bill when there are uneaten plates they'll charge you some nominal per-plate fee for each one (this isn't for leaving one or two pieces behind - it's for when you order a plate of ten pieces of sashimi and eat two).
I've seen signage to that effect at buffets before. At least one of the Chinese buffets in Houston (of which there are about a billion) has such a sign, though I don't remember which one. I don't know that it's enforced...just a not-so-subtle way to say, "Don't waste food!"
This is how buffets are where I grew up. When I moved to the US, I was surprised that buffets here allowed me to leave food on my plate while I went to get another plate of food.
If they were trying to sell the whole package, including the domain name, which is required for the service to continue supporting all the existing links, and no one would buy it even "for a token amount", then why would the domain name alone go for more than that?
Because the domain doesn't cost anything to maintain. The service has hosting and bandwidth costs, which must be relatively substantial for a service of their scale.
Fails to mention (to me) the most interesting aspect of this kind of work: the complexity and scale of these machines are such that, although we may not develop more efficient algorithms for intractable problems in the traditional sense, those problems may become tractable in practice when computed with DNA.
Yes, a fact I didn't realize until the section on Goldman Sachs referred to the more favorable levels of risk carried by Lehman Brothers and Bear Stearns.
"Since texting is usually a binary activity (the texter sends a text for every text they receive) we can guess that Echo writes about 7,000 text messages per month"
A huge oversimplification and probably inaccurate. Especially since most phones made in the past few years have the capability to send a text to multiple recipients, I highly doubt that Echo is typing as many texts as she receives -- both because she's more likely to get mass texts, and because she may herself be sending mass texts (which are charged as multiple texts, but are only typed once). I haven't been a teenager recently, but I also suspect that there is a lot more one sided texting than you'd expect (especially directed at pretty, popular girls).
I think you're overlooking the real thrust of the situation. Wiles' proof was relatively lengthy and involved, but it can be examined and studied by humans in an extremely reasonable amount of time. It only took three days for Wiles to present his original proof. There aren't many people in the world who can understand it, and there are fewer who are knowledgeable enough to confirm its validity, but they exist.
Conversely, the proof described in the NYTimes article is of such length that no single mathematician could confirm its validity -- rather than deducing the non-existence of the object in question by a logical argument, it examined a huge number of possible cases. In that respect, it is far more similar to the proof of the four color map theorem. The issue is not so much whether or not we trust the computer's result, but rather what it means for mathematics to proceed with results that we do not, in a traditional sense, understand.
This question came up in some of my more abstract classes in college. A few professors in my department were working on slightly different problems in the same general domain of automated mathematical problem solving and proof construction. The general consensus as I remember it was that the simplest way to get around the problem of no human being able to survey these proofs was to do something like this:
1. Define a machine-readable and machine-writable logic that can express your theorem and the steps you think it'll take to get there.
2. Write an automated proof-checker that can verify that proofs in this language are correct.
3. Prove the correctness of the checker.
4. Write a program that starts with your axioms and writes out a proof that ends in your theorem.
5. Verify it with the checker.
Now the only proof that needs to be human-surveyable for us to be certain that everything is correct is the one in step 3. The proof created by step 4 can fill up a skyscraper full of hard disks, and as long as the proof checker verifies it, we know that it must be correct. Given a simple enough proof language (FOPL, for example) and a suitable programming language (the choice at the time was Lisp, I believe), step 3's proof is easily short enough for a human to verify.
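To make steps 2 and 5 concrete, here's a minimal sketch of a checker for a toy propositional system -- my own illustration, not the system my professors used. Formulas are nested tuples like ('->', 'p', 'q'), and a proof is a list of steps, each either an axiom or an application of modus ponens to earlier steps:

    def check_proof(axioms, steps, goal):
        """Return True iff every step is justified and the last step is the goal."""
        derived = []
        for step in steps:
            if step[0] == "axiom":
                formula = step[1]
                if formula not in axioms:
                    return False
            elif step[0] == "mp":            # modus ponens on earlier steps i, j
                i, j = step[1], step[2]
                if i >= len(derived) or j >= len(derived):
                    return False
                implication, antecedent = derived[i], derived[j]
                if implication[0] != "->" or implication[1] != antecedent:
                    return False
                formula = implication[2]     # conclude the consequent
            else:
                return False
            derived.append(formula)
        return bool(derived) and derived[-1] == goal

    # From the axioms p and p -> q, a two-step proof of q:
    axioms = {"p", ("->", "p", "q")}
    proof = [("axiom", ("->", "p", "q")), ("axiom", "p"), ("mp", 0, 1)]
    print(check_proof(axioms, proof, "q"))   # True

The point is that the checker is short enough to reason about by hand, while the proofs it checks can be arbitrarily enormous.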
The only hole left is the possibility of a subtle machine malfunction that causes the checker to falsely categorize a proof as correct. On modern hardware this possibility is remote enough that once a proof has been verified a number of times by independent researchers on their own hardware we can safely ignore it.
Reminds me of discussions which purported to prove, not that a particular number was prime, but that there was a 1 in BigNum probability that it was not prime.
The mathematicians in the audience seemed to consider that unacceptable to the point of being useless. Nonetheless, someone pointed out (to much laughter) that the chances of any given proof being incorrect were significantly higher than that 1 in BigNum. Of course, we've all been there--thought we proved something we didn't.
I regard the chances of machine malfunction similarly, and have a similar standard for proof. If you can examine the code and the processes sufficiently, there is no reason not to trust the machine. At least, no more reason than there is not to trust your and others' minds. I suspect this view is common, given that most folks consider the Four Color Theorem proven.
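For what it's worth, that "1 in BigNum" primality argument is exactly what a Miller-Rabin test gives you -- each round that passes cuts the chance that a composite slipped through by a factor of at least 4. A sketch (my own illustration, not from the discussion I'm remembering):

    import random

    def is_probable_prime(n, rounds=40):
        """If this returns True, the probability that n is actually composite
        is at most 4**(-rounds): strong evidence, but not a proof."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):      # handle small primes directly
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:                   # write n - 1 as d * 2**r with d odd
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                # witness found: n is composite
        return True

    print(is_probable_prime(2**61 - 1))     # True (a Mersenne prime)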
Yes, exactly: what if the thing they proved is not what they actually coded in the end? If the code is sufficiently big, there's a good enough probability of bugs!
At least to me, this is a non-issue. If the program can be proven correct (in the sense that its logic can be confirmed to produce a correct answer), and the hardware can be shown to execute the program properly, then the program's answer is proven.
Even the already tiny chance of a hardware error can be eliminated for all practical purposes by combining error checking within the program itself with repetition on independent hardware.
It's a clever idea, but as someone who has worked for (and listed as references) supervisors who were extremely flaky, extremely busy, or both, I would never want to judge someone based solely on whether or not their references promptly return a call.
I love the idea. I've always been skeptical of references - if anyone asked me about any staff I've had, I'd be pretty positive. Why burn bridges and start vendettas? There are definitely a few people I wouldn't call back for, but I'd call back for any of the people that were fantastic to work with. That means either really good at their job, or average+ with an amazing attitude (which can be the right ticket for some roles).
I might halfway pause and start a bad reference with, "Well..." and let the manager guess if he's perceptive enough, but I'd never speak ill of a former staff member. Too much chance of it biting the company back - it'd be pretty damn irresponsible of me to do that. But not calling back? Yeah, I'd do that. For great people? Call back the same day I get the message, no matter how busy, at least for a 90-second chat. Maybe I'm not everybody, but I think most halfway competent managers go out of their way to take care of the great people they've worked with.
Why would you bother or care about the opportunities of some guy you might never meet again, especially if you're trying to meet a deadline or had a bad day or whatever?
You need to have fierce loyalty to the people that help you do great things in life. When someone works with you, and especially when they work under your guidance, they're making you and the company successful. In return, you do what's right by them: keep them informed and equipped, protect them, stand with them, and so on.
I've got fierce loyalty pretty high in my ethics, but there are a lot of pragmatic reasons too. On any one event, you could blow it off, but it's pretty obvious on the whole who goes to bat for their people and who doesn't. It's not something you make a calculation on - "Hmm, no one will know or care if I don't help this particular time." It's a way of life - take care of people that take care of you. Feels good inside, the world sees it, recognizes it, and treats you well. A good thing, top to bottom.
"That might have been the end of it, had the files not, as digital files will, leaked onto the Internet."
This makes it sound like the tubes were leaky that day and because the files were "digital," they just spread out over the Internet like an oil slick. Um, no. Files do not spread simply by virtue of being on a computer.
If S is the sum of f(i)10^(-(i+1)), then you could break up that sum into two other sums (using the fibonacci relationship), express them in terms of S, and solve.
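Spelling that out (my own working, assuming f(1) = f(2) = 1 and f(i) = f(i-1) + f(i-2)):

\[
\begin{aligned}
S &= \sum_{i \ge 1} f(i)\,10^{-(i+1)} \\
  &= f(1)\,10^{-2} + f(2)\,10^{-3} + \sum_{i \ge 3}\bigl(f(i-1) + f(i-2)\bigr)\,10^{-(i+1)} \\
  &= 0.011 + \tfrac{1}{10}\bigl(S - 0.01\bigr) + \tfrac{1}{100}\,S \\
  &= 0.01 + 0.11\,S,
\end{aligned}
\]

so 0.89 S = 0.01, i.e. S = 1/89.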
I'm not a math expert, but I like trying to follow these proofs as a puzzle. Here's my handwavy, intuitive take:
1) Set up a matrix to advance Fibonacci numbers
Define a matrix A so that
A times (Fib1, Fib2) = (Fib2, Fib3)
That is, A "advances" the Fibonacci relationship when you put two numbers in (Fib1, Fib2 and Fib3 are also multiplied by the appropriate power of 10 to make them at the right decimal place). You can see this when we multiply it out:
Since Fib1 + Fib2 = Fib3 [again, at the right power of 10; writing all the 10^(i+1) bookkeeping obscures the reasoning IMO).
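Concretely, one matrix that does this, with the decimal shift built in (my reconstruction -- the original post may have written it differently):

\[
A = \begin{pmatrix} 0 & 1 \\ \tfrac{1}{100} & \tfrac{1}{10} \end{pmatrix},
\qquad
A \begin{pmatrix} F_k\,10^{-(k+1)} \\ F_{k+1}\,10^{-(k+2)} \end{pmatrix}
= \begin{pmatrix} F_{k+1}\,10^{-(k+2)} \\ (F_k + F_{k+1})\,10^{-(k+3)} \end{pmatrix}
= \begin{pmatrix} F_{k+1}\,10^{-(k+2)} \\ F_{k+2}\,10^{-(k+3)} \end{pmatrix}.
\]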
2) Define the sequence we want
We really want the sum Fib1 + Fib2 + Fib3 + ..., where each Fibonacci number is shifted to the appropriate decimal place as above.
We can start with (.01, .001) (the first numbers 1 and 1 in the right spots: .01 and .001, with .0002 coming next). This means we take (.01, .001) and use it as-is (multiply by the identity matrix I), then take (.01, .001) and advance it once (multiply by A), then take (.01, .001) and advance it two steps (multiply by A^2), and so on:
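Summing that geometric series of matrices (again, my own filling-in of the details; it converges because each application of A pushes everything down by further powers of 10):

\[
\sum_{k \ge 0} A^k \begin{pmatrix} 0.01 \\ 0.001 \end{pmatrix}
= (I - A)^{-1} \begin{pmatrix} 0.01 \\ 0.001 \end{pmatrix}
= \frac{1}{0.89}\begin{pmatrix} 0.9 & 1 \\ 0.01 & 1 \end{pmatrix}\begin{pmatrix} 0.01 \\ 0.001 \end{pmatrix}
= \begin{pmatrix} 1/89 \\ 11/8900 \end{pmatrix}.
\]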
We only want the first element (1/89), which is the sum starting Fib1 + Fib2 + Fib3...
Phew. There's probably some typos in there, but that's how I intuitively understood their argument. It helped to actually work through the matrix math to see what was happening.
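If you'd rather just check the claim numerically, a few lines of Python (my own sanity check, not part of their argument) will do it:

    from fractions import Fraction

    # Sum f(i) * 10^-(i+1) over the first few hundred Fibonacci numbers;
    # it should agree with 1/89 to machine precision.
    a, b = 1, 1
    total = Fraction(0)
    for i in range(1, 300):
        total += Fraction(a, 10 ** (i + 1))
        a, b = b, a + b

    print(float(total), float(Fraction(1, 89)))  # both print 0.011235955056179775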