Hacker News | lettersdigits's comments

> "Most of my friends at Google work four hours a day. They are senior engineers and don't work hard. They know the Google system, know when to kick into gear. They are engineers, so they optimized the performance cycles of their own jobs," one engineer described.

Is this really prevalent at Google?

edit:quotes


My experience is that it isn't, because of the performance review process. If someone could "optimize" their evaluation-- including peer evaluation, evaluation from other teams, stack ranking, etc-- then I guess they could work less. But I really don't see how they could do this.


prediction: this post will have 300+ comments and 300+ votes.


You forgot to guesstimate the number of flags.


"Instead, my ‘moderate success’ story is closer to one of hard work, and slow, steady progress"


Survivorship bias is by far my favorite bias


Not all would agree


My favorite quote from the article:

> Imagine two plants growing side by side. Each day they will compete for sunlight and soil. If one plant can grow just a little bit faster than the other, then it can stretch taller, catch more sunlight, and soak up more rain. The next day, this additional energy allows the plant to grow even more. This pattern continues until the stronger plant crowds the other out and takes the lion’s share of sunlight, soil, and nutrients.


> What is the hardest technical problem you have run into?

I never seem to find a quick good answer for this.

Maybe I just almost never work on REAL hard things.

So my question to you, HNers, is :

What is the hardest technical problem YOU have run into?

I am really interested to know what you would consider 'hardest'. It's probably not going to be something like 'I changed the CSS property value from "display: block" to "display: inline-block"'.


I have no idea how to answer this question. There have been some problems where someone was stuck for weeks, and I came in and coded up a solution in a day. That seems, in some sense, good evidence of being hard, but those problems never seem hard to me. The reverse happens to me too, where someone else solves a problem that was hard for me, and it's easy to them. Is that a hard problem? It was hard to me, but the problem was easy if I saw it the right way, which I didn't.

Last year, I came up with a solution to a problem that we'd been solving sub-optimally for years. My solution is arguably optimal (given a certain set of assumptions) and requires multiple orders of magnitude less code than the previous solution. The solution is the core part of a paper that was recently accepted to a top conference in its field.

That sounds like it might be good evidence the problem was a hard problem, but in fact the solution just involved writing down a formula that anyone who was exposed to probability in high school could have written down, if it had occurred to them that the problem could be phrased as a probability problem (that is, the solution involved multiplying a few probabilities and then putting that in a loop). When I described the idea to a co-worker (that you could calculate the exact probability of something, given some mildly unrealistic but not completely bogus assumptions), he immediately worked out the exact same solution.

It wasn't an objectively hard problem because it's a problem many people could solve if you posed the problem to them. The hardest part was probably having the time to look at it instead of looking at some other part of the system, which isn't a hard "technical problem".
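To make the shape of that kind of solution concrete (a purely hypothetical stand-in, since the actual problem and paper aren't given here), "multiplying a few probabilities and putting that in a loop" often looks like computing the chance that at least one of several independent events occurs:

```javascript
// Hypothetical illustration only, not the paper's actual formula:
// P(at least one event) = 1 - product of (1 - p_i) over independent events.
function pAtLeastOne(probs) {
  let pNone = 1;
  for (const p of probs) {
    pNone *= 1 - p; // multiply probabilities inside a loop
  }
  return 1 - pNone;
}
```

The whole "algorithm" is a high-school probability identity in a for loop, which is exactly the point being made: the hard part is noticing the framing, not the math.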

Another kind of problem I've solved is doing months of boring work in order to produce something that's (IMO) pretty useful. No individual problem is hard. It's arguably hard to do tedious work day in and day out for months at a time, but I don't think people would call that "technically hard".

I'm with you. Maybe I never work on "REAL hard things"? How would I tell?


It's all relative. Sometimes solutions require insights. Other times solutions involve a ton of grinding. Many people have a tendency to overemphasize the first and greatly underemphasize the second, even though the grinding may actually be harder than devising clever solutions.

We build tools that read and write Excel files (open source library: https://github.com/sheetjs/js-xlsx). There are plenty of very difficult problems involving ill-specified aspects of the various file formats and errors in specifications, but it is largely a matter of grinding and finding files in the wild that capture the behavior you want to understand. Those are "difficult" in the sense that people still get these things wrong (related: recently a bug in Oracle SmartView corrupted US Census XLS exports, which boiled down to an issue in calculating string lengths with special characters), but they don't feel difficult since most of the work didn't involve any really clever insights.
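My guess at the general class of that string-length bug (the SmartView specifics aren't public in the comment above): a string's length in characters and its length in encoded bytes diverge as soon as non-ASCII characters appear, so a writer that counts one where the format expects the other produces corrupt records:

```javascript
// 'é' is one character but occupies two bytes in UTF-8, so character
// counts and byte counts disagree for any string containing it.
const s = 'café';
const charLen = s.length;                      // 4 UTF-16 code units
const byteLen = Buffer.byteLength(s, 'utf8');  // 5 bytes

// A length field written as charLen but read back as a byte count (or
// vice versa) truncates or over-reads the record that follows it.
```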

IMHO the hardest problem is now fairly straightforward: how do you enable people to test against confidential files? The solution involves running the entire process in the web browser using the FileReader API: https://developer.mozilla.org/en-US/docs/Web/API/FileReader . That is an obvious technical solution in 2017, but few thought it was even possible when we started.
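A minimal sketch of the client-side idea: the file's bytes never leave the user's machine, because the browser hands them straight to your code. The `sniffFormat` helper here is a made-up stand-in for the real parser (it only checks the container's magic bytes); the FileReader part is browser-only:

```javascript
// Hypothetical stand-in for real parsing logic: identify the container
// format from the first bytes of the file.
function sniffFormat(bytes) {
  // XLSX files are ZIP archives: they start with "PK" (0x50 0x4B).
  if (bytes[0] === 0x50 && bytes[1] === 0x4b) return 'xlsx';
  // Legacy XLS files use the CFB container: 0xD0 0xCF 0x11 0xE0.
  if (bytes[0] === 0xd0 && bytes[1] === 0xcf) return 'xls';
  return 'unknown';
}

// In the browser (not runnable outside one), the bytes come from FileReader:
//   const reader = new FileReader();
//   reader.onload = (e) => sniffFormat(new Uint8Array(e.target.result));
//   reader.readAsArrayBuffer(fileInput.files[0]);
// No upload happens, which is what makes confidential files testable.
```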


That's actually my default response when people intimate the work I do must be complex / I must be clever - it's really just hard graft. Sometimes you just have to be willing / stubborn enough to chip away at a problem that initially seems insurmountable. Sure, the more knowledge you accumulate, the faster you can figure out where to look, but often enough you just need to roll up your sleeves and bisect your search space.

I can imagine you get some pretty warty excel files. I mostly get PDF for my sins. I'm sure, like me, you've spent hours taking bits out of files until they work as expected and then figuring out what the difference is :-)


I totally agree with these gems ...

> It's all relative. Sometimes solutions require insights. Other times solutions involve a ton of grinding.

> Many people have a tendency to overemphasize the first and greatly under emphasize the second, even though the grinding may actually be harder than devising clever solutions.

> No individual problem is hard. It's arguably hard to do tedious work day in and day out for months at a time, but I don't think people would call that "technically hard".


The incidents you describe are examples of what Alan Kay means when he says that 'outlook' is more important than 'IQ'.


You left us hanging without linking to your paper. I would be really interested in it.


You are overanalyzing it. The question isn't about defining what a hard problem is; it's about sharing how you overcame problems. The interviewer is looking for stories that demonstrate how you overcame those problems - times when you lacked the necessary skill to achieve something.

Not being able to answer this question is just as telling as an answer itself.


> I never seem to find a quick good answer for this.

So do what politicians do -- answer the question you wish they had asked instead of what they actually asked.

In my case, at most places I've worked I end up being one of the go-to people for gnarly bugs that have stumped the regular crew. So part of my interview prep is to condense a war story into something short and coherent that illustrates why people should have faith in my intuition, a bit of a tough sell. Then during the interview I latch onto any semi-related question and tell my rehearsed story.


Also, politicians recycle answers from other people.

-- like the story of when you saved 187 million dollars by fixing a totally trivial bug https://thehftguy.com/2017/04/04/the-187-million-dollars-gma...


Bingo.

"Well I'm not sure I can pick just one as the "hardest", but one very interesting problem that ended with an elegant solution was ..."

And you fill in the ... with a tale of you slaying a dragon^W prod issue with just your wits and a default .vimrc.


I don't know why this has to be framed as a manipulative 'politicians' move, this is just being honest and helpful like you ought to be.


In >15 years of professional development, I've probably worked on only one project in which there were any significant technical challenges. I should probably consider myself fortunate that I've had even that one technically challenging project. It was a lot of fun, but it has been rather demoralizing looking for equally challenging work since then and largely failing to find it.

I think much of the reason for that is that most software projects that deliver business value involve plugging together a bunch of components to deliver functionality that is not particularly complex. It doesn't involve pushing the limits of your datasources or inventing new algorithms. If performance problems come up, it's almost always cheaper to throw money at AWS or more hardware than to spend a couple developer-months addressing the bottleneck in the application. In some ways, I guess that's efficient from the perspective of the market, but it's disappointing for engineers who like to build applications that require solving hard problems.


"Complexity is like a bug lamp for smart people. We're just attracted to it."


The way I think about that question (and in fact the version of it I have been asked many times) is: "what's a technical problem that you solved that you are most proud of?". To this question, 100% of developers have an answer. Everyone has done something that needed a bit of thinking or planning and then felt proud of the execution. It can be a hello world in a new language, or it could be building Facebook, or anything in between and beyond. If you think you don't have any of those, think again. The answer may change many times over your time as a developer, but there is always at least one, no matter your level, experience, etc.

Personally, I often answer with the time I decided to re-engineer and rewrite a "snapping engine" that snapped boxes together when they were close to each other in a 2D design application. It was unexpectedly difficult to write with some of the features we wanted, but after a couple of iterations I finished, and since then new features and plugins have worked together nicely and been easy to add.
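For the curious, a toy version of the core idea (entirely hypothetical; a real snapping engine handles both axes, guides, rotation, and much more): when a dragged box's edge comes within a threshold of another box's edge, clamp it to that edge.

```javascript
// Hypothetical 1-D (x-axis only) snapping sketch. `others` is a list of
// stationary boxes {left, width}; returns the snapped left coordinate of
// the moving box, or its original position if nothing is close enough.
function snapX(movingLeft, movingWidth, others, threshold = 8) {
  const movingRight = movingLeft + movingWidth;
  for (const b of others) {
    for (const edge of [b.left, b.left + b.width]) {
      if (Math.abs(movingLeft - edge) <= threshold) {
        return edge; // snap our left edge to theirs
      }
      if (Math.abs(movingRight - edge) <= threshold) {
        return edge - movingWidth; // snap our right edge to theirs
      }
    }
  }
  return movingLeft; // no snap
}
```

The "unexpectedly difficult" part tends to be exactly what this sketch ignores: resolving conflicts when several candidate edges are in range at once, and keeping it fast with many boxes.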


> In this question, 100% of developers have an answer.

Sorry, I don't.

The story I would tell if asked this was solved by the guy I was pairing with. He knew about URL encoding images, which immediately solved something we could have worked for weeks on. I was very surprised and impressed. Of course, now that is part of my toolbox, and I wouldn't think much of solving something else this way.
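(I'm assuming "URL encoding images" here means embedding an image as a base64 data URI, which would explain why it instantly replaced weeks of work; if that's right, the whole trick is one line. `Buffer` is shown for Node; a browser would build the base64 with `btoa`.)

```javascript
// Hypothetical sketch of embedding image bytes directly in a page
// as a data URI instead of serving a separate image file.
function toDataUri(bytes, mime) {
  return `data:${mime};base64,` + Buffer.from(bytes).toString('base64');
}

// e.g. assign the result to an <img>'s src attribute:
//   img.src = toDataUri(pngBytes, 'image/png');
```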

Sometimes I solve problems easily that others find very hard. I'm glad I could help, but I don't go around feeling proud of how awesome I did that day. I just happened to know something others didn't yet.

This might sound like humblebragging, and perhaps it is. Just trying to explain why I have a hard time with this question.


How long have you been working for?

It just sounds kinda sad to say that you've never been proud of work you've done. I think if I was never proud of the work I'd done I wouldn't still be doing this job: a feeling of reward is important!


Think of it this way: the question isn't for you, it's for the interviewer. He/she wants to know if you're smart enough to work for him. So answering that co-worker Paul had this problem that he'd been battling for days and then he finally came to you for help, and explaining how you helped him is a valid answer. It gives the interviewer the insight he's looking for and make you look good.


I actually don't like this question. It's hard to decide.

If it's something too simple, you're going to be looked down on. If it's a clever hack around someone's bug, it's hard to really be proud of something that shouldn't have had to exist in the first place. If I say something from a long time ago, I may not remember enough details to answer follow-up questions. If my job is boring (hence interviewing for a new one), I may not have had any good "shining moments" recently.

As time goes on, stuff that used to seem or look cool can become embarrassing. I've seriously considered deleting some of the early stuff I have on Github even though it has relatively-a-lot-of-stars for something small and stupid.

Asking to be regaled by stories of tech heroism is also prone to sabotage, because it's easy to rehearse an impressive story. It doesn't necessarily indicate a candidate's ability to do things that are useful for the job; it just means they rehearsed a good story and prepared for some follow-ups specifically related to that.

In an interview, you're a lot better off asking questions that will require the respondent to formulate an answer right then and there v. something that they've rehearsed. You're also better off leaving the expectations of tech heroism behind.

"Rock star" job listings have more or less died out, but this is really just a lesser form of it. Typically you don't need or want a rock star. You want someone whose output is professional and consistent.


I think you may be overthinking how seriously your interlocutors interpret the behavioral questions.

The vast majority of CS interview questions are really just one or both of two categories:

1. Say something entertaining or that makes me like you.

2. Say something that proves you're competent so if I like you it's not a hard sell to hire you.

When you read this hard into a question that, in this framework, can be reworded as "talk about stuff you programmed that you thought was mentally interesting when you made it", remember that they are only thinking about your skills at the most basic surface level; they really just want to let you gush for a minute.

Even in the most embarrassing code you've written there are dumb bugs and little moments of triumph, and they're begging you to share some of the juicy details, of which I'm fairly sure every programmer has a few they can recall.

If you have no example of work you can gush over, then yeah, it's a problem. But to me this means the only truly wrong answer is NO answer, or faking a modicum of passion by gushing about something you don't actually care about and THEN sounding wooden while doing so; if you didn't come off as wooden, even that would be sufficient.


Right, I actually agree. I said in another comment that these questions are lazy interviewing, because it's just the interviewer saying "Well? Amuse me."

But it's still difficult to say I'm "proud" of something that I don't really think warrants pride. Just have to steel up and go in there ready to talk about something dumb I guess.

The things I'm actually proud of are things that don't look impressive to the outside. To me, the gold standard contribution is surgical, precise, and simple. It may only be 20 lines of code but it operates within the framework of the existing stuff, doesn't break the tests, etc. That sounds like "routine work" to me.

These people want to hear about atom bombs because they leave a cool looking mushroom cloud, but the professional shouldn't have to go nuclear -- and they shouldn't be proud of it when they do.

I guess the core issue is that if someone is asking this question, it signals that we're not really on the same wavelength. At least, it seems to signal that, because I assume they're saying "OK, please wow me now." Good work is quiet and consistent, usually not astonishing.

If the candidate is the author of some bona fide, actually-used open-source software (not GitHub vanity projects), that could qualify as something that looks impressive and is also probably objectively worth being proud of, but few people would meet this description.

Of course, in reality, the signal is really "I have no idea how to interview someone, please make this easy for me." If you interpret it that way and ignore the actual question posed, I guess it becomes easy; just say something that sounds like a vague answer, and then speak for 2 minutes+ about why you're probably the best choice.


This feels pretty pessimistic to me. A "professional" (and I only half-mean the scare quotes, I think I am one) is able to often do some pretty transformative stuff that's worth being proud of while not setting off The Bomb. I can truthfully and accurately say that I reduced one employer's deployment time of their services from six hours to six minutes with no loss of safety or increased risk--I hacked through an accretion of technical debt (after building out testing to ensure that I didn't change any functionality) that had just plain grown because nobody else had had time to pull it out and replace it with a more scalable, long-term solution! I'm pretty proud of that. And two orders of magnitude on a deploy will wow folks who've ever been personally faced with the bigger one.

(One of the other ones I'm more amused than proud of, though, is saving a client ten times the money they paid me because I happened to know about the existence of AWS D2 instances...)

Your concluding point is well-taken, though, because most people don't know how to interview and they're basically asking you to sell yourself for them. But I don't think the question is as problematic under the hood as you're framing it.


Yeah, those wins are great. It's really about the level of detail that you assume they want. "I improved the deployment time" is an effect of a technical change, not a technical change in itself. There are a lot of people who could improve the deployment time just by switching to a faster build backend or doing some other small change that has big dividends. Is that a "technical accomplishment"? Sure, and it has big wins, but if the answer is just "I installed Jenkins" then it kind of takes the oomph out.

And big wins like that are usually captured pretty early on. If you can get orders-of-magnitude improvements left and right, it means that something about the company's management is off.

It's also not a good interview question because it's a poor basis for comparison. Perhaps another candidate knew how to improve the build/deployment pipeline but was blocked by political interference. He wouldn't be able to say "I sped up the pipeline 6x", but he could talk about his plans to do so given a question more oriented to the task, e.g., "How would you build your dream deployment pipeline?"

That's what I mean when I say they're looking for something spectacular. Some people can say they saved their company or made a change with massive ripple effects, which is not necessarily aligned with the technical difficulty of that change and may cause some candidates to elide mention of it entirely, and some people can't make such big assertions, not because they're not skilled enough, but because the opportunity and/or priority wasn't there.

If you want to know about business gains and side effects of technical work, ask "How did your work help your employer?" If you want to know about technical work itself, ask relevant lines of questioning.

Anyway, I think we're basically splitting hairs here. It's just about what level you're choosing to process the question on, and it seems everyone agrees that it's best to take a very superficial interpretation and allow them to inquire further as necessary. I just don't think it's a good interview question.


Some interviewers think like that, with this "wow me now" attitude. These are the bad interviewers, and honestly I don't much care about the answer given to them. The one losing is the company, by not hiring me because I didn't "wow them" by their standards, and would I even want to work there in the end?

But there are many interviewers that don't expect to hear the atom bombs, or the circus acts, or whatever else. They actually want to hear about something you did, finished, and then a feeling of accomplishment came over you. They are also developers. They want to know a bit more about you. And these are the people I would like to work with.

Bottom line, there are worse questions asked during interviews :P.


But that reveals why it's such a bad question: candidates just have to prepare in advance, think of a problem scenario that sounds interesting and flatters the interviewer, and recite it when the question comes. Even a faker can do that, while a legit programmer who needs a few minutes to get thinking (and hasn't come up with a cookie-cutter answer in advance) will stumble.


But who cares that it's a bad question. The point is that it was asked and you should either answer it, evade it skilfully, or find a tactful way to decline to answer.


The poster asserted that "100% of developers have an answer" to this question. We're discussing why some developers may not have a good or immediate answer, and why the question is not as good as he asserts.


Well, I don't know about how good this question is, but it is one that gets asked a lot. And I do still think that 100% of developers have an answer, because I am sure every developer has felt proud about something he did, no matter the size, otherwise why would they still be a developer?


Sure, I wasn't speaking to that issue, only to the parent's attempt to justify it as a good question.


The localization project I use as an example here was definitely one of my top 3 hardest projects of all time (and that was years ago). It was not a particularly difficult technical challenge; it was difficult because it touched every single aspect of the codebase. The project took me and a coworker 3 months to build out the infrastructure for, then another 3 months of actually rewriting everything to use it and explaining to every other developer why we made the decisions we made, to teach them the different ways they would have to write code from now on. Social challenges of the workplace are hard; we're not always looking for technical difficulty.


Off topic but: this is some good nostalgia here. I was that other coworker. Some of the best pair programming I've ever done. It was fun working closely with Nick.


Was it hard, or was it just a matter of putting in time and focus?


It was definitely hard. There was certainly no clear-cut solution to any of the problems I included on the card in the picture. We evaluated 8 or 10 different off-the-shelf solutions, found things we liked and didn't like about all of them, and eventually decided it was best to build our own. At each step there was a lot of debate, because we knew this would probably be used beyond just the Careers project and in the Q&A project as well (separate at the time), so it wasn't just my team's concerns to worry about, but everyone's. We won some debates, we lost some debates; it was very hard.


I've done localization conversion projects a few times so I can relate. There's never one way to translate everything (e.g. page content, URLs, database content, images, forms); there are usually several translation methods to evaluate; you have to trawl your whole codebase to tag text for translation; translating routes/URLs tends to break all code that doesn't expect those names to change; new developers have to be taught to develop new content with translations in mind; you have to schedule time for content to be translated so everything gets done on time; and you need a new workflow for how translatable text is delivered, translated, reviewed and deployed.
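For a flavor of the "tag text for translation" step, a minimal hypothetical sketch (real systems use gettext/ICU-style catalogs with plurals and interpolation): every user-visible string goes through a lookup with a fallback locale.

```javascript
// Hypothetical catalogs; in practice these come from translator-delivered files.
const catalogs = {
  en: { greeting: 'Hello' },
  fr: { greeting: 'Bonjour' },
};

// Look up a key in the requested locale, falling back to English,
// then to the key itself so untranslated strings stay visible.
function t(key, locale) {
  return (catalogs[locale] && catalogs[locale][key]) ?? catalogs.en[key] ?? key;
}
```

The painful part the comment describes is not this function; it's finding and wrapping thousands of call sites with it.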


Hard technical problems are pretty much exclusive to academia and R&D departments (and open-source projects). The rest of us are the 21st century's plumbers and electricians - if we run into a hard technical problem, it means something definitely went wrong in the planning process beforehand.


> > What is the hardest technical problem you have run into?

>I never seem to find a quick good answer for this.

Real easy: Overcoming technical debt/bad decisions of the previous group of programmers.

At my current company/position, our group basically replaced an outside company - two programmers. You name something you shouldn't do and they did it: code in the code-behind, logic in triggers, plain-text passwords, direct database access (Bobby Tables all the way down), etc.

When they were in charge, the company had ~4 customers... we are now rocking ~30 unique customers. Their fragmented codebase is unmanageable.

Keeping a train moving while replacing the engine and changing the wheels would be easier.

This doesn't include company culture, inter-company politics, other decisions, etc.


This is not impressive. This is normal. When they say "What's the hardest thing you've done?" I would hope that if you are going to run with this, you explain why conventional maintenance/upgrades were so extraordinarily difficult in this case.

Every developer dreams of going greenfield. Ultimately, that's because it's harder and much more tedious to read code than to write it. If you start from scratch, you understand the whole stack/platform, everything is customized to your liking, and so on. That's great for you, but the company is usually stuck spinning its wheels for months while you push this rewrite down their throats.

It's also very easy to underestimate the depth of domain knowledge and accounted-for corner cases encoded in an old codebase. It looks easy at first, but it usually ends up taking at least months to reach feature parity with the old software, which usually also means that people will use both systems simultaneously, requiring data synchronization, etc.

The whole thing becomes messy, and by the time you're done, the "new system" usually isn't really all that improved over the old system. Systems get convoluted in the process of development, business needs demand quick shoehorning of something instead of thorough refactoring, etc.

Once in a while, a full rewrite is indeed justified, but it's much rarer than most people think.

Going in saying "Yes, my company needed a full rewrite" is an instant orange flag in my book, and thorough questioning would be needed to determine if this is an ongoing attitude problem where there's a reluctance/reticence to read other peoples' code. That portends laziness, a disrespect for colleagues, and a disrespect for the business's needs, which are rarely aligned with tying its developer labor up in a greenfield reimplementation.


"This is not impressive." That's because I've given you a mile-high description.

We have an outside consultant who does one thing: Fix businesses.

When he says this is the worst situation he's ever seen? I take it with a little more weight than I'd take someone else saying it.

While I understand and don't disagree with what you say - a full rewrite is normally not the answer - you haven't seen this codebase. Or the company structure.

We aren't exactly doing a "full rewrite"... it would honestly be easier in many respects - we are keeping the company functional while replacing large chunks.

Aka Keeping the Train Moving while changing the Engine and the Wheels.

This isn't JUST a code base issue. Or JUST a culture. Or JUST management. It's a combination of all those - and many more that can't be covered in 3 paragraphs.

I could talk for 8 hours - and scratch the surface - of where we are and where we need to be.


>When he says this is the worst situation he's ever seen? I take it with a little more weight than I'd take someone else saying it.

I take it as a consultant emphasizing biases that favor his presence.

Anyway, the point of my comment was not to nitpick your specific situation, which I have no information about and obviously cannot speak about intelligently. Perhaps it is as extreme as you indicate. If so, my only suggestion would be to focus on the difficult problems rather than colorful characterizations of them. In an interview, the employer will know about its own problems, and may imagine your running the interview circuit and saying all the same kinds of things about them.

My point is that, as general advice for what to say when someone asks about technical accomplishments or pride, talking about the nightmarish situation you're coming from is, first, trite, and second, a signal that you may not possess the cooperative qualities or the perspective to properly evaluate situations as they arise.


I agree on all points - the consultant has his own objectives just like the owners and my manager.

Is it the worst situation EVER? Undoubtedly not... but it's definitely twisted like a pretzel, with problem layered on top of problem. But we also aren't "rewriting from scratch" - that would be too difficult. We are replacing pieces one at a time and breaking/fixing as we go.

I'm compensated enough for the stress and like the people and environment enough to offset the "overall situation".

But yeah... my main point was to say that moving a company from "old broken" to "new shiny fixed" while keeping everything working, adding new features, etc. is, at heart, the largest technical challenge I've faced.

Devil is in the details - and "spinning" it correctly without bad mouthing the company (Which I do like, otherwise I'd not still be there) and while keeping to that main point (upgrading a company) is... interesting.

Finagling the finer words isn't my top skill :)


> Overcoming technical debt/bad decisions of the previous group of programmers.

This is often a gold mine, just make sure your interview doesn't become a discussion about how bad other programmers are.


Absolutely. I try not to bad mouth the individual - because the two guys seemed like good people.

Secondly, programming - to a degree - is "art". My version of a masterwork is different than yours.

But the framework decisions? Lack of documentation? Lack of source control? No dev environment? etc. Decisions and foundational information that is demonstrably wrong and needs fixing? And what we have done/are doing/will do? THOSE are the things to focus on.


how about overcoming technical debt you created yourself in the past? :)


I like to think about this question in three parts: scope, depth, and originality.

The scope of the project is its size and ambiguity level. Ideally, as you get more experience, your scope grows. When you're coming out of school, your answer to this question might be a tricky bug fix, but after a few years it might be something like "we needed to build a system to flag and filter fraudulent users based on their site activity."

Depth is about how much detail you can go into about the project. If you choose a project with a big scope, can you drill down and talk about the implementation details of each component? If you chose a bug fix, can you describe exactly what triggered the bug, as opposed to just knowing what fixed it?

For originality, what about the problem made it non-trivial to solve with out-of-the-box tools? For the fraud case above, maybe the data was stored in a format that was hard to analyze. Or maybe, for people at the bigger companies, there were scaling issues that required unique solutions. For bug fixing, maybe it was a bug that was really hard to reproduce, and you had to do a lot of memory dumps and code analysis to pinpoint it.

When I finish something I like to think about it along those three axes for a little bit in case I need to recall details later.


>> What is the hardest technical problem you have run into?

The hardest technical problems I've run into, have been mostly human; i.e. other people.

But, in the purest sense, I have to say that I have observed, on reflection, that the reason I am a technologically competent, adept, person, making a living by way of dark and serious mystery, is that I long ago decided that nothing would be hard. Just .. un-learned.

You see, it is a key factor of success that you, literally and otherwise, embrace the idea that you can't know everything.

So, know what you need. The hard things become easier the moment you do it, even the first time.

I know this sounds like compound nonsense, but I honestly had to give pause on this question. I'm a systems engineer with decades of experience in a multi-variate set of industrial categories, and relatively successful in my lot. This question made me really think - I couldn't think of the hardest things.

The hardest things, I haven't done yet. {But, on another thread, I'm serious about people being the hardest things about technology..}


I lie.

When I was a young, wet behind the ears, Java developer I answered telling them about making a modification to a Linux kernel driver for hardware support. It was a telephone interview but the silence was deafening. Still the only interview I ever had where I wasn't offered the job.

Some things haven't changed in that it is when I step outside my comfort zone I find the technical problems harder. But now I'd just talk about a more comfortable problem that went through multiple rounds of better fit solutions on a system actually in Java so they can relate and see I can actually talk about the target language. Then I'd probably make the point that as a more senior developer it's usually the non-technical problems that require my most focus.

Still makes me cringe thinking about it.


> Still makes me cringe thinking about it.

Probably because you are in a much better place now.

I have found that the propensity to lie is directly proportional to one's [for lack of a better word] desperation. The less desperate I am, the more ideology I tend to exhibit.


I'm currently a wet-behind-the-ears (hopeful) Java developer -- what's cringey about a modification to a kernel driver being hard? That sounds pretty intimidating to me.


As someone on the hiring side of the table, I rarely care about how technically complicated it was and more about WHY it was hard, how you figured it out, and how you avoided it in the future. I've made some really dumb typos in my code that caused a debugging nightmare for me in finding them.

As a recent example, in my game engine I copy/pasted some code for framebuffer and texture creation and missed renaming one variable. A stupid mistake that took me 2 days to find. But to solve it, I needed to look at all of the various textures on-screen. Some of them are non-linear, some are single-component (just red) which doesn't display well, so I ended up writing a method that allowed me to render all of the various stages of my renderer out to the screen (color, shadow, light, depth, normals, etc.) as a debug view. Only then did I realize that the shadow buffer texture was sized to width * width instead of width * height. Again, a stupid mistake, but now we've got something to dig into and talk about, and it's much more about the solution than the problem.


This https://kopy.io/eI8bT (that's 140 lines, whole thing was over 1000).

Was going to buy the calculations in as an API because it was an opaque government standard, API turned out to be incomplete after we bought it, rang them up to ask why "oh we are getting out of that side of the business".

I had two weeks to build out an API (over Christmas) that implemented a government calculation that was implemented in one 200 page PDF[1] and then modified in another two, total calculation had 44 individual steps referring to several dozen data tables some with hundreds of values.

I did it with a day to spare.

It was probably the single greatest pure technical programming I've done in my career.

[1] https://www.bre.co.uk/filelibrary/SAP/2012/SAP-2012_9-92.pdf


Strictly technical? Determining the existence of metastable states for the T-cell receptor protein in solution (PhD Dissertation topic). Sort of difficult science, though it seems quite easy in retrospect now; the project wasn't so much hard, as it just cut across a lot of disciplines. Poor developer interview answer though, as it didn't involve a lot of software development (lots of TCL scripting for data extraction and ML with Python instead).

The answer I used to use was a problem I had working as an R&D intern: determine when the speed limits posted on a street have changed from measurements of driver behavior. Interesting and fairly tricky ML problem (weather is a big confounder). Ended up writing a lot of C to get high enough performance to make the solution reasonable which was educational (I didn't know a lot of C at the time), but almost certainly not the right approach to the performance problem. Still more science than development, so it depends on who's asking.

Probably the hardest business-type technical problem I've encountered is database restructuring. We moved (a subset of our data) from a NoSQL database to SQL as part of larger architectural changes, and mapping, migrating, and maintaining compatibility has been non-trivial.

The hardest problem I've encountered has been helping to rescue a project with a severely dysfunctional development history. Much more project management and people than technical (it was just a CRUD app) but I came into a project that had been in development for a year or so and stalled out. The development was outsourced and I fell into a position as a liaison between the internal folks at the university that wanted the product and the dev team that had been hired to build it. Sort of a classic issue where the dev team and the stakeholders would talk right past one another. It drove me crazy at the time, but an excellent experience in retrospect. And it has a happy ending; the project went on to be successful after that, at least when I last heard.


I spent a couple very stressed weeks (nights, weekends) debugging a crash that would only happen every 30 minutes or so. It looked like stack corruption, so I was trying all avenues to debug it. Nothing seemed to make sense. We finally figured out that it was a signal integrity problem on the DDR memory bus. Software was fine.

Did you see that post a few days ago about "Is ECC RAM worth it?"

The answer, after my hellish debugging, is an unequivocal YES! My horrible problem would have either manifested itself as a correctable ECC error or I would have gotten an uncorrectable ECC exception. I would have been able to go straight to hardware engineering with that instead of spending many miserable nights debugging an RTOS and ISRs.


In no particular order:

* GPU drivers are a buffet of terrible things. My best moment was either hand-compiling shaders to GPU-specific assembly in order to implement video playback filters, or deducing how the GPU vendor's drivers managed to fake a particular GL extension and implementing that same fake trick in the MesaGL version of the driver.

* Self-applicable partial evaluators are cool. I've tried several times to build one, and each time I fall short.

* I've hand-written parsers for big languages. I've also written parser generators. I'm not sure which is harder.

* Fighting with motherfucking BitBake. You have no fucking idea.


Sounds like you do some embedded graphics work.

On multiple occasions, I've kicked off BitBake to run overnight. I come in to find it failing from running out of disk space. And I'm usually perplexed - does this really need over 200 GB of space!?!


I once slept in a lab in order to monitor a BitBake project for a few weeks. I would wake up, check BB's progress, tweak it and start it going again, walk across the street and get a snack, then come back to the lab and go back to sleep.


Honestly it's a vague question. I don't really know what I would consider "hardest"...but one comes to mind as being really difficult:

Debugging memory leaks in a Python 2.7 asynchronous (gevent) daemon.

Aside from memory leaks supposedly being improbable in Python's reference-counting managed runtime, the GC interface and stdlib tools for this kind of debugging are anemic in Python 2 (improvements have been made in 3, although I can't comment on them since I haven't used them yet). Not to mention that C extensions (gevent is just one) add complexity to debugging.


Weird that people consider this a question. I think it's objectively possible to say if a task is harder than another:

1) One problem is harder than the other if it requires more knowledge. E.g. to code AI you need to have programming skills, AI related skills, statistics skills and graph theory skills, plus whatever your domain knowledge is (e.g. how to build the code in your company's environment).

2) One problem is harder than the other if it requires more skills.

3) [...] harder if it requires a higher composition level of skills. E.g. configuring a firewall via iptables is harder than configuring a firewall via your router's web gui, since the first requires bash, Linux, tcp/ip related skills as a foundation to even understand what iptables does. The gui may only require a limited set of networking skills and 2 pages of router handbook.

4)[...] harder if it is more complex. Coding your own kernel is harder than coding your own calculator.

5)[...] harder if it requires more departments. "Go to market" of your product therefore is a harder task than "proof of concept".

6)[...] harder if it relies on more legacy code. Legacy code always contains domain knowledge that is unaware to most people, even to the developers. Changing that code or its environment yields a lot of surprises.


My go-to answer is my time when I worked in AIX kernel development at IBM. We'd get bugs for kernel crashes that appeared related to memory corruption. They frequently ended up being caused by stale DMA addresses in device drivers for (mostly) Infiniband adapters writing into memory that now belonged to some userland process or kernel data structure.

How I'd debug these (it took me a while to be effective in this regard):

  - Main tool was the AIX kernel debugger (like cutting bone with a butter knife :)
  - Identify corrupted memory, look for clues like recognizable data structures or pointers in the raw dump that could be cross-checked against symbol maps, etc.
  - Confirm the alignment of the corrupted memory. Page alignment was a tell-tale sign of errant DMA writes in our system... cache alignment is more mysterious and can be related to CPU design bugs (IBM designs their own POWER processors, and we'd test on alpha hardware frequently).
  - Scour the voluminous kernel trace for the physical frame # of the corrupted memory. A typical offending sequence was: 
    1. Frame assigned to adapter for DMA
    2. Physical memory layout change (we supported live hot-swappable memory arbitrated by the POWER hypervisor)
    3. Frame allocated for use by page fault handler
    4. Crash happens
Sometimes the root cause was that the device drivers were not properly serialized with the dynamic memory resource subsystem (the hot-swappable memory) and the sequence above happened very quickly (<1 ms). Sometimes the bug took a while to manifest, and the nice story told above for our page was interspersed with thousands of unrelated activities in the same region of memory.

We had to be like a prosecutor and build a strong case to implicate a bug somewhere else. Until then, our team was always on the hook to figure these out.

This class of problem was hard because the tools we have at our disposal to collect evidence were quite inadequate, and the amount of data to sift through was enormous. Also, any tool we think might help to sift through all this data needed to already be in the system and in the kernel debugger as a diagnostic command (a crashed system in the debugger cannot be modified in practice). There's hundreds of those debugger commands for all kinds of randomly recurring problems we had trouble figuring out. Over time, you'd build your own for your own set of problems in your kernel specialty :-)


This one took a while to figure out, especially since, as a tech-support person, I did not have actual access to the customer's system. http://blog.outerthoughts.com/2004/10/perfect-multicast-stor...

This one took many many tries of various incantations and variations to discover (documentation was "less than useful") http://blog.outerthoughts.com/2011/01/bulk-processing-lotus-...

This one makes for a nice story when I talk about computer-specific language issues: http://blog.outerthoughts.com/2010/08/arabic-numerals-non-wy...


There was this variable name I misspelled once ;)


Heh. While still a student, I was working in Fortran (all upper case). I was trying to type COS (the cosine function), and I overshot the Oh character, and typed a zero: C0S. Not very visually different! It took me two days to figure out why Fortran suddenly didn't know what cosine was...


rm -rf $BUILROOT/*


I ask: 'Tell me about a problem that was particularly challenging'

I'd love for someone to tell me a story about something they couldn't solve (or at least not the way they wanted to).

If they can't come up with something, which is rare, I ask them to tell me about something that was fun for them.


>> I'd love for someone to tell me a story about something they couldn't solve

I was in twelfth grade. I was given some EEPROMs which I had to write data to, except that I did not have the standard equipment to write to them. I used a printer port to drive an amplifier circuit I built, which in turn sent the voltages to the EEPROM. I sent waveforms exactly the way the data-sheet suggested. Yet, I wasn't able to read back what I was writing.

I had no oscilloscope or waveform analyzer to debug. All I could do was to re-read the data-sheet and then my program for correctness.

I could never figure out why it wasn't working.

Later, my Dad found someone who did have the company-supplied EEPROM-writing equipment and took the EEPROM to them. He learned that there was data on only the first few locations of it.

This is one of the very few projects where I have failed. Being in twelfth grade then, doing stuff that would stump college grads, I have not taken offense at myself. :-)


Then the guy starts describing the problem he solved in his last 6 months.

And you realize you've done about the same, fully finished and shipped, in about 3 weeks.

The rest of the interview is wondering whether you should cry or he should.


One of the most complex on the front-end was a repaint/reflow issue in Safari that was complicated by the way we were using Angular.

The easiest solution was to use transforms to force rendering through the GPU render pipeline by adding a Z-depth to the elements.

Which caused font-weight rendering issues in Firefox. We never resolved the issue, even after a root cause analysis showing the bug was in WebKit and not Blink or Gecko.

On the backend, it was finding a way to store a persistent collaborative changelog with proper access control and hierarchy on top of an RDBMS. We resorted to redesigning a distributed file-system based on HFS+ and btrfs for COW and COR obligations. This was the problem that demanded the most data-structure and infrastructural knowledge of any I've had to address.


Honestly, the hardest technical problem I ever ran into was teaching myself C pointers and keeping at it until I fully grokked them. Now, this was in the early 90s, and I was a loner with only a second edition copy of the K&R book. There was no Stack Overflow, and the only technical people I knew were on the other end of a BBS connected to FidoNet, which only batch-updated once per night. In hindsight -- and with today's resources and ever-shrinking distance between human beings in a community -- this problem is trivial. I've seen some pretty wild things in my decades as a programmer, but I have never since encountered a technical problem that completely fucked me up like learning C pointers did back then.


A good way to think about how to answer is to look at it from the perspective of the interviewer. What information do they want to get from you by asking this question? I'd say this question is aimed at finding out how you react when you're challenged. Can you describe what made the problem hard? Maybe it had conflicting constraints or goals. Maybe you were debugging some particularly tricky problem. What did you do when you ran into difficulty? Did you throw your hands up and give up? Did you talk with teammates about potential solutions? Did you have a systematic approach or were you just trying random ideas to see if one worked?


Like some of the other comments in reply to yours, I usually focus on a technical issue that stumped me for a long while - and then a change in perspective allowed me to solve it, or understand the solution offered by someone else. It's good to deconstruct what you got stuck on once you know the fix, because then you recognize what led you astray in the first place. (((imo)))

I think this question relates to personal growth and overcoming show-stopping obstacles with retrospective analysis? Something something smart-person-speak.


It always bothered me a lot too, but I was "lucky" to encounter a rather niche Chromium bug last year, so now I have it covered.

Generally, when you actively work on weird bugs and try to really understand what's going on, instead of doing quick hacky workaround, sooner or later you'll face some interesting bug. But it's sometimes exhausting to investigate stuff like that, plus most of the reasonable managers will try to prevent you from going down the rabbit hole if the bug takes too long to fix.


When I started out, a lot of things were hard.

Now, I can usually think of three decent ways to do anything. Nothing really feels "hard", it's just a different amount of work.

Another angle is that the way to solve "hard" problems is finding a way to think about it that makes it easy. Once I've done that, I no longer think of the problem as "hard".

I think the real issue here, that I don't fully understand, is what interviewers are really asking with that question? What do they want to hear?


I've had the same problem. I used to have what I thought were OK answers to those questions, but now it's hard to choose. It's especially hard if they scope it down, e.g., what's something you're proud of that you've done in the past 3 months. What would be worth being proud of after 3 months? It'd have to be an exceptional project to warrant that. Otherwise, little bugfixes are routine, and even if they're clever, they're hard to talk about both because the details get discarded and because it's hard to provide the necessary context.

Interviewers are being lazy with that question, essentially. They're saying "Wow me so that I can know you're the most impressive."

This is a problem if you don't think of interviews as a competition over who's the most sparkly (also, who's the best storyteller and/or who had the best script).

My experience is that people are shockingly bad at interviewing. They throw all the work onto the candidate and expect to get good hires that way, which is rarely successful.


Instead of taking the question too literally, just think about a recent project that had some technical challenges you would like to tell the interviewer about.


Okay, so I built a shit version of Google Maps single-handedly, from raw map data, before they had an API in 2005. It "worked" in IE5.5. Does that count?


One way I play this is to be like, "well I've blocked out the hardest problems, probably due to trauma, but here's a problem that I worked on that might apply to what you guys are doing here."

I find this easier because usually hearing the interviewer talking about things will trigger my memory as to when I was working on similar problems. It's probably better for them to know a relevant example anyway.


well, one hairy problem I had was migrating a legacy enterprise behemoth from a 4GL to Java (this was early 2000)

now 4GLs let you do anything easily, so nobody really put thought into anything. result: everything was soft code and the database grew to around 4 thousand tables. the database itself wasn't even that big, running at around 10gb.

The sheer number of tables made it impossible to use an ORM layer, because back in the day Hibernate and the others had no option but to map everything at startup time from XML files or annotations and keep all the metadata about tables and relationships loaded in memory. Just the metadata was using about 5gb of memory.

However as part of the migration we managed to build all the UI straight from the 4gl definition, so we really really needed a way to create queries out of the UI metadata using object introspection.

We ended up writing our own object query language and the translation layer to build SQL queries out of it. It sounds bad, but in the end it wasn't impossible even for a small 3-man team - we didn't need to support the full spectrum of possible interactions, only what the UI needed to load the data (and yes, this was a thick client)


I think how hard something is cannot be easily understood by a non-very-senior engineer without context. I had a very good experience when interviewing at Facebook except for the part where they ask you this question. Either they were asked to respond rudely, or they really didn't think anything outside of adding stories to things was interesting/difficult.


Yes, it's kind of difficult to define the 'hardest' problem. Some issues I considered 'hard' were so more because I was missing relevant knowledge in the field than because of the nature of the issue. And of course, I don't want to say "I spent a few hours learning something to get things done, and the fix was just changing a property value".


I got the same vibe. Nothing I work on is really hard. It takes some time and focus, but most of the stuff in software development takes time and focus, unless it was already done (and if it was done, why redo it?)

Your example of changing block to inline-block can very well take time and effort depending on the issue at hand. So yeah - this is a very vague question in my opinion.


“If you continue this simple practice every day, you will obtain some wonderful power. Before you attain it, it is something wonderful, but after you attain it, it is nothing special.”

That's why we can't look back at something as "hard". Or maybe it's not. It's a good time to read that book again.


Another problem with the question: if the hardest thing was something you solved then you're probably not stretching yourself. If I answered this I'd really be answering "tell me about an embarrassing failure."


I felt the same way. But it can be anything. "I sped up the page load on the site", "I redesigned the front-end to work on mobile", etc.


If we are talking about front-end, gradually migrating the Backbonejs/Marionettejs codebase to React/Redux/Webpack.


I'm not a developer, so my answers are different, but I've got a handful:

- Consulting for a customer where they were deploying to new hardware with a new processor architecture, I received a report that an application was running slower on the new servers than it was on the old ones. I started out looking at things with strace and ltrace, had to move deeper and pull out perf and systemtap, but found that it looked like memory access was slower than on the old hardware. I did research on the processor, and found that it was due to the 'Intel Scalable Memory Buffers'. Since memory first had to be loaded into the buffer before the CPU could access it, things not in the buffer already had higher latency, but things already in the buffer were much more quickly accessed than they would have been previously. I worked with the developers to make up for this performance decrease in other ways. Their application was well suited for using hugepages, but they were not, and TLB pressure was causing performance bottlenecks in other areas. Switching to hugepages prevented TLB pressure, and the application ended up being even more performant on the new platform due to the increased amount of available memory allowing for a large amount of hugepage allocations.

- I was consulting for a customer that was running instances on a xen platform. They were having performance issues vs. their old bare metal deployment, and had already done some analysis. They gave me a perf report that was showing a massive amount of time being spent with a specific xen hypercall. I had to dig into the xen source code to figure out exactly what that hypercall was doing, as general public documentation about it was somewhat vague. I was able to determine that it bundled up a bunch of different operations, so it wasn't conclusive from that, but it did narrow down the possibilities. It was enough to point me in the right direction, however, and I was able to determine with a little bit of trial and error with some tweaking that it was ultimately related to decisions NUMA was making. It turned out that the customer had thought they were doing NUMA node pinning, and ultimately weren't. Interestingly enough, even with pinning, we still saw some of this, and completely disabling NUMA (all the way - not just balancing) actually ended up being needed to fully reclaim the lost performance. I also learned an important lesson in trusting customers - even the ones that know what they're doing aren't always right, and while I should trust them in general, verifying their answers is important. I discounted investigating NUMA as early on they told me they had their applications pinned to nodes, and I would have otherwise investigated that more quickly and probably solved the issue in less time.


Tryna reverse-engineer the bit-banging protocol for a network card using the specs and a Linux driver.

Eventually I just gave up.


Dealing with Usenet header data. The big alt.binaries.* groups can have upwards of 10 billion headers.


Talk about a problem of the same type as those likely being faced by the interviewing company.


> What is the hardest technical problem YOU have run into?

Layer 8, ie. human beings. The software side of stuff, I can eventually solve by hammering at the keyboard until it works. But the people using it, and the ever-changing requirements they have - especially since this influences my software design - is definitely the hardest part.


Debugging an undefined-behavior-related heisenbug. Why you askin' such easy questions?


Whenever I'm in an interview and this question comes up, I have similar issues to yours. Even though I can think of certain problems that were a pain in the ass for me, a lot of the time someone came along and solved them much faster and in a more clever way than I did.

However it just occurred to me that maybe the hardest problem I've had was actually making up an architecture from scratch as the problem was unfolding itself, and then having to maintain it and even bring others aboard. Meaning I had to document as much as I could (even though I had very little time for this) and I also had to sometimes give more priority to a not-so-important bug (vs a very pressing issue for me), not because it was critical to any feature but because it was making it very painful and hard for a teammate to implement one which in turn would later delay some other feature.

And the major reason why there was no actual planning to avoid this as much as possible, was because features were being decided on the go by the top brass on a case by case basis, completely opposite of the original direction I was told we were going to go (which was the information I used to lay down the foundations of the project). I.e. I was told at first that this was going to be just a wrapper script and it ended up being a whole orchestrator including multi-node operations needing result consolidation, a state machine to track down the... uhmm...state of the system, and things like that.

So my point is that probably there are several axes of "hardness" in a problem that can be mixed together, and that makes it difficult to compare one problem to another (i.e. over which combination of axes are you comparing them?). I guess part of the response to such a question in an interview would then be to explain the context, so that it can be more easily understood why that problem was perceived as hard and along which axes. Was it because the problem was an optimization one and the previous code was impossible to work with? Was it because the business constraints (as I believe was my case) were surreal? Was it because the teammates made it really hard to move forward (e.g. bureaucracy, defensive/aggressive coworkers, etc)?

And I know we are talking about "technical problems", but I find it increasingly hard (as my career advances) to make a distinction between what is and what is not a technical problem. If the business constraints dictate that a certain sub-optimal solution must be developed, and that in turn causes technical issues, was that a technical problem? If a teammate is disruptive and introduces sub-par code that later causes bugs that need to be immediately addressed, was that a technical problem?

In my mind they probably all are to some degree just by virtue of in the end influencing whatever technical decisions are being made. So maybe that could be part of the answer? asking about what specific sense are you referring to when you (the interviewer) ask me about the hardest technical problem.


It's all hard until you swim in those waters for a while.

I've got two answers that I would probably consider.

#1: debugging what ended up being a hardware problem. I was working on a device with a microcontroller and it had a sleep mode where the micro would program an RTC, shut itself off and the RTC would trigger the board's wakeup circuit when its alarm fired. I'd already told the board designer of two or three hardware bugs that somehow (surprise!) turned out to be my software bugs. So this time I was a little more cautious. There was a more senior software engineer working with me, and he told me to check the schematic. I looked at the processor manual and the board schematic, and followed the traces to make sure I was doing it right. And I just couldn't find out what was wrong. So the senior sw eng said, "well, ok, if you're sure, then just probe the RTC pin with a scope." Wow. An o-scope. WTF is this gloriousness? So I got to learn a bunch about how to go from the board schematic to the board layout, how to probe, what all the stuff on the scope was about. Sure enough, the RTC alarm went off on schedule but the trace showed some funny stuff that indicated that there was a design error in the board somewhere (I didn't understand the details, but IIRC a cut-and-jump of the prototype made the bug go away).

Motto: It's never a hardware design bug. Until it is.

#2: This bug I learned a good amount from. I would see frequent misbehavior in my code where it looked like multiple subsequent sessions were being corrupted somehow, perhaps from a previous session. I was certain that I was releasing resources from the previous session and destroying all of it. I watched my code hit my `boost::shared_ptr<foo_t>::reset()` and so clearly it was now gone. Right? Well, shared_ptr<> not all it's cracked up to be. So I went back to read about conventional advice about shared_ptr<> and people would frequently suggest boost::weak_ptr<> where appropriate. I mistakenly thought about these as a dichotomy for some reason. But that was no good because I couldn't share my weak_ptr<> so it's not really useful. Except -- wait -- the vast majority of the time I'm propagating my shared_ptr to places where they don't need to share it beyond themselves. So my design would actually be better if I shared the shared_ptr as a weak_ptr anywhere other than Right Here. In doing this redesign, I realized that the weak_ptr promotes itself temporarily by effectively asking "hey is this still allocated somewhere?" Turns out that other thread using this resource would occasionally take slightly longer and wouldn't decrement its shared_ptr until after the new session had started, which would mean that the old resource was never destroyed. After the redesign in this case where the background thread loses the race it would just fail the weak_ptr<> promotion and harmlessly skip its activity.

Motto: shared_ptr<> and weak_ptr<> help preserve an ownership metaphor. Which code Owns this memory/resource and which code is just "borrowing" it?
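A minimal sketch of that ownership split (the names here are illustrative, not from the original code): the session owns the resource through a shared_ptr, and the background worker holds only a weak_ptr that it must promote with lock() before each use.

```cpp
#include <cassert>
#include <memory>

struct Resource { int id; };

// The session is the sole owner of the resource.
struct Session {
    std::shared_ptr<Resource> res;
    explicit Session(int id) : res(std::make_shared<Resource>(Resource{id})) {}
};

// The worker only borrows: it promotes the weak_ptr before each use.
// If the owning session is gone, lock() returns null and the work is
// skipped harmlessly instead of keeping the old resource alive.
bool try_use(const std::weak_ptr<Resource>& borrowed) {
    if (std::shared_ptr<Resource> r = borrowed.lock()) {
        // ... do work with r; ownership is extended only for this scope ...
        return true;
    }
    return false;  // owner already destroyed the resource; skip
}
```

Once the session resets its shared_ptr, any later lock() fails, so a slow worker can no longer pin the old session's resource past its lifetime -- it just loses the race and skips.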


Dealing with legacy code.


>> What is the hardest technical problem YOU have run into?

I have solved about ten "hard" problems in my career, most of which has been in R&D. Each one of these had multiple prior failed attempts, and in some cases took me months of thinking before I could find a solution.

1. Qualcomm wanted me to devise a computer vision solution that was more than two orders of magnitude more power-efficient than what they had then. There was a clear justification for why such a drastic improvement was needed. Nobody had a solution in spite of trying for a long time; most laughed it off as impossible. I started by looking for a proof of why it could not be done, if it indeed could not be done. After some three months of pulling my hair, I started getting glimpses of how to do it. Some three months later, I could convince myself and a few others that it was doable. Some three months later, the local team was fully convinced. Some three months later, the upper management was convinced. You can read the rest here: https://www.technologyreview.com/s/603964/qualcomm-wants-you...

2. I wanted to solve a specific machine learning and artificial intelligence challenge. I would code for a day or so, and then again run into days of thinking about how to proceed. E.g., I coded a parser for context-free grammars, including conversion to Chomsky normal form, in 1.5 days including basic testing -- but then, what next? I woke up with new ideas for about ten days in a row. I conceived of Neural Turing Machines back in 2013, about a year before Google came up with their paper on the subject. (Unsurprisingly, I did not have that name in mind for it back in 2013.) I also never got an actual opportunity to work on it, as a result of which I am still not sure whether I could have actually done it.

3. I needed to make a very sensitive capacitance measurement circuit, trying to get to attofarad-scale floating capacitance even with pF-scale parasitic capacitance to ground. The noise and power requirements were very challenging. After about three months of seeking input from the team lead without hearing a solution, I ended up coming up with one myself. I later discovered that the technique was already known in RF circles, though only a few were aware of it. Capacitance measurement circuits with such sensitivity did not show up on the market for several years. (My effort was targeted at use inside a bigger system.)

4. I was working on measuring bistable MEMS devices. The static response of these was well understood, but so far the dynamic response had only been measured by the team; there was no theoretical explanation for it. We invited several professors working in the field to give seminars and asked them about this, but never heard a good answer. A physicist colleague found an IEEE paper giving the non-linear differential equations behind it, which worked, but provided no insight into the device behavior and took time to solve numerically. I wanted a good-enough analytical solution. I kept trying whenever I had the time and opportunity, while the physicist colleague kept telling me to give up. Six months later, I woke up with a solution in mind and rushed to the office at 7 am to discuss it with whoever was at work at that time. The optics guy I found did not fully understand it, but did not find it crazy either. A few hours later, the physicist friend confirmed my insight by running some more numerical solutions. I could then soon find tight enough upper and lower bounds, and the whole thing fit the measurements so well that most people thought it was just a "curve fit". (It was pure theory vs. measurements plotted together.)

5. I proposed making pixel-level optical measurements on mirasol displays using a high-resolution camera to watch the pixels after subjecting them to complex drive waveforms. Two interns were separately given the task (surprisingly, without telling me), and both failed to develop algorithms for pixel-level measurements. Later a junior employee worked on it; he was still unable to get pixel-level measurements, though he got it working at lower resolutions, with about 40 minutes of offline processing in Matlab. Later, a high-profile problem came up where pixel-level measurements were a must, and I was directly responsible for solving it. Solved it in one day -- and it processed images in real time, not 40 minutes. The system stayed in deployment for years to come.

6. We had bistable MEMS devices, and there was a desire to make tri-stable MEMS devices. Several people at the company attempted it, including a respected Principal Engineer, but no one could figure out how to even start. I could not figure it out at the outset either, but started bottom-up from the physics, using Wolfram Mathematica to create visualizations of the thing. And bingo: in a few days, I had not only figured out how to make these tri-stable MEMS devices, but also multiple schemes for driving them. My VP's reaction was "Alok, you should patent that diagram itself", given the clarity it had brought to the table.

7. We were creating grayscale/color images using half-toning. The famous Floyd-Steinberg algorithm works very well for still images but has lots of artifacts on video. A PhD student working in the field was brought in as an intern; nevertheless, the results were not great. The team also tried binary-search algorithms to find the best outputs iteratively, but that was not implementable in real time as needed. I was interested in the problem but was not getting the time to give it the fresh thought it needed -- until one day I did. A few days later, the problem was solved. I had developed some insights into it and just had the solution coded, to the surprise of people who had spent months working on it.

I could go on writing about more cases.
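As an aside on the half-toning case (#7): below is the textbook Floyd-Steinberg error diffusion the comment refers to, not the author's real-time solution, which isn't described. A minimal grayscale sketch:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Classic Floyd-Steinberg dithering: threshold each pixel to 0 or 255 and
// diffuse the quantization error to the not-yet-visited neighbors using
// the usual 7/16, 3/16, 5/16, 1/16 weights, in raster-scan order.
std::vector<uint8_t> floyd_steinberg(std::vector<float> img, int w, int h) {
    std::vector<uint8_t> out(img.size());
    auto spread = [&](int x, int y, float err, float weight) {
        if (x >= 0 && x < w && y < h) img[y * w + x] += err * weight;
    };
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float old = img[y * w + x];
            uint8_t quantized = old < 128.0f ? 0 : 255;
            out[y * w + x] = quantized;
            float err = old - quantized;
            spread(x + 1, y,     err, 7.0f / 16);  // right
            spread(x - 1, y + 1, err, 3.0f / 16);  // below-left
            spread(x,     y + 1, err, 5.0f / 16);  // below
            spread(x + 1, y + 1, err, 1.0f / 16);  // below-right
        }
    }
    return out;
}
```

The video artifacts mentioned above come directly from this structure: the diffused error trail depends on scan order and on every earlier pixel, so small frame-to-frame changes in the input produce crawling, flickering dot patterns in the output.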


"""Hmm, so, great. That's interesting.

So, um, how would you say your skills deploying to NodeJS are? Would you rate them as strong? Tell you what, let's go ahead and break for lunch now; Sam is going to show you around the campus a bit, and then we'll continue with a follow-up and some coding challenges."""


Writing a Java compiler (not the full language, but a large subset including inheritance and polymorphism), writing a C++ game engine.

So much work involved. Very complex problems, needs a lot of theory but also practical knowledge. Needs good debugging skills. And endless amounts of time.


> I was once asked to basically create a clone of Yelp in a week

That's just unfair, inconsiderate, and unsustainable, and is also a red flag (i.e., I would not go to that company).

Giving up a week of your life for an interview?

And for free?

And most importantly: what if ALL companies did this? You would have to spend months working from home, for free, just to get to the next level of the hiring process.

I don't know what the solution IS, but I know for sure that it IS NOT a one-week work-from-home-for-free task.

(edit: formatting and some grammar)

Edit 2: people will actually pay good money for a Yelp (or craigslist, etc.) clone on places like Upwork. And they asked you to do it for free.


You just gave me an idea.. you could actually run an entire company on nothing but free work by candidates doing "homework assignments"...


:)

It is probably something like:

1. go to upwork

2. find a 2-3 hour task (e.g., scrape entire site [sitename].com, design a logo, etc.)

3. give the same task to 5 different candidates

4. choose the best one

5. make profit and gain upwork cred

6. send a "thanks but we chose a different candidate" email to all the candidates

7. repeat

(edit:formatting)

edit2: you can even let candidate X do a code review of candidate Y's work.

step 8: automate the process and go public


Managing all the candidates and submissions is actually full-time work.


> Hours spent talking: 224

> Hours doing homeworks: 33

I would prefer this to be the opposite: more homework, less talking. That's what the job is going to be like, anyway.

(edit:Formatting)


> you just have to expect the competitors to be too incompetent/short-sighted

Same probably goes for 99.9% of all investors as well; otherwise those 99.9% would have been rich from buying Google after their IPO.

Being short-sighted is probably a human thing, not a tech-CEO thing.


Buying Google at the IPO wouldn't have made most investors rich anyway. They IPO'd with far too high of a market cap already (as contrasted to Microsoft, Oracle, Dell, Cisco, etc. from the prior generation).

A 17x return over 12 years won't make most (non-rich) investors rich unless they bet every dollar they have on said investment (a nearly impossible and entirely impractical scenario).

For an average investor, betting $10,000 - $25,000 on Google's IPO would be a serious investment. $170,000 - $425,000 is a great return off of that; it's just not anywhere near rich.

Had they IPO'd at something more like what tech companies used to, an investor could have seen a 250x to 1000x return up to this point. That would make you rich off of a $10k bet. Amazon for example, has produced something like a 560x return so far from the IPO.
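To put a number on it (my arithmetic, not the parent's): the annualized (CAGR) return implied by a total multiple over a holding period is multiple^(1/years) - 1, so a 17x return over 12 years is only about 26-27% per year.

```cpp
#include <cassert>
#include <cmath>

// Annualized (CAGR) return implied by a total return multiple held for
// a given number of years: multiple^(1/years) - 1.
double annualized(double multiple, double years) {
    return std::pow(multiple, 1.0 / years) - 1.0;
}
```

For example, annualized(17.0, 12.0) is about 0.266, i.e. ~26.6%/yr -- excellent, but nothing like the 250x to 1000x outcomes the comment contrasts it with.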


> I feel like this is the best short ever

'Feeling like' is not a good way to convince me to short something. Nor to long it. Nor anything ;)

> ..., but I feel like the multiplied probabilities of both make it as clear a bet as you ever get

So you 'feel' like it would be a 'clear bet'? Sounds like a big maybe to me.

How about NOT shorting/longing at all? I feel like that is the clearest bet for me ;)

(edit: spacing)

