One thing that puzzles me about the Chromecast compared to other Google/Android devices - why has Google locked it down so much and not released a bootloader unlocker? Is it for publisher/DRM-type reasons (given that the Chromecast is primarily for playing content on your TV)?
Yeah, almost certainly this. Same reason Sony locked down the Dash (the Sony-branded Chumby device) -- it was essentially required by Netflix (in turn because it was required by Netflix's content partners).
Still, it would have been nice of Google to put in a developer mode for the Chromecast that basically wipes all the proprietary bits and turns it into a generic Linux box. Though, granted, at the price they're selling the hardware for, that may not be the best business move... and in any case, there are other options for that in the form of the many Linux/Android-on-a-stick devices (like all the Rockchip-based ones), but having one you can easily buy at retail in the US would be a nice option.
I find it amusing that Netflix is one of the main reasons we have both locked-down devices like the Chromecast and DRM in w3c standards. It all boils down to the content providers sticking their fingers into tech, but Netflix's popularity and leadership in the space is probably the direct driver of both of these.
These are my gripes with Unity, in question format :) Would love to hear from anyone that has concrete answers for these:
- How does UDK build apps for iOS natively on Windows without requiring a Mac? Are they doing some kind of insider thing that Unity can't replicate?
- I see Unity as massively extensible and that's one thing I like about it. Comparisons are often made between vanilla Unity and UDK; what about Unity + PlayMaker + UFrame + Level Builder, etc. I don't see this ease of editor extensibility with UE4 (I'm sure it's there; but the Unity Asset Store just lets me cherry-pick one, click buy and then just have it to use immediately after download - I like that).
- My biggest gripe with the editor is the font size. Will the new UI that's coming in 4.6 and/or 5.0 allow me to increase the font size used by the Unity editor to actually make it comfortably usable rather than fatiguing?
FWIW I've preordered Unity 5 and I use UE4 at the moment as well. Nothing big completed in either engine (just some side work here and there) but no fanboy-ism for any particular one (though a bit of a fondness for Unity, as I encountered it back in the old Mac days when Unity was still called Over The Edge Entertainment; GooBall was pretty cool, by the way).
I don't know how UDK does iOS, but I'd like to mention that this was also a big draw of Adobe AIR for me - you can develop/compile/deploy to iOS from Windows (I don't know how it does it either, but apparently it's possible).
We'd tend to do that for stories, yes; take an overall story and assign it a number of story points. We'd then try and make time estimates for each of the story's subtasks, which is where I would have trouble. Time was a bit like a currency - "We have X number of hours available in this sprint, the sprint is big enough when we've 'spent' all those hours on estimates".
My experience actually matches what you're saying, in that there was always much better discussion around the number of story points and I never felt like my story point estimates were guesses. If there were any disagreements over story points, we'd at least be able to discuss it concretely ("here's why I think this task is more complex / more simple").
For what it's worth, before I left, our process was moving more towards a velocity focus, where story points were getting more importance than time estimates. Scrum was generally a new thing for our company and our group was one of the first to use it; I don't think they were using estimates as a whip, at least not intentionally. However, the business still needed/wanted concrete dates that the group had to commit to.
At our company, the story point method is very accurate, so most tasks are completed by the end of the sprint, except maybe 1 or 2 (out of 30), and those are normally caused by something outside of our control.
At previous companies I used time-based 'guess' methods, which were all over the place most of the time. I don't think I've ever been at a company that could actually get it right.
Honestly, this way of estimating really suits my team. I don't want to go back to a company that picks time-based estimates out of thin air and then expects you to meet them. I was really bad at that, but so is everyone else.
We don't expect developers to commit to completing stories mid-sprint or anything. But we have generally come to expect all items to be completed by the end of the sprint, simply because our capacity planning is very good at the moment. Everyone's happy with the estimates right now. We make slight adjustments based on previous velocities. When we don't complete stories by the end of the sprint, we change the capacity for the next sprint rather than blame developers.
It's a lot more collaborative, rather than a hostile blame game.
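The velocity-based adjustment described above can be sketched roughly like this (a minimal illustration in Python; the function name, window size, and numbers are my own assumptions, not any real tool's API):

```python
# Hypothetical sketch: next sprint's capacity is derived from the
# story points actually completed in recent sprints, not from
# time estimates picked out of thin air.

def next_sprint_capacity(completed_points, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

# Points actually completed in the last four sprints (made-up data).
history = [34, 30, 38, 32]
capacity = next_sprint_capacity(history)  # averages the last three sprints
print(capacity)
```

If a sprint falls short, the shortfall feeds back into the history and the next sprint's capacity drops automatically - no blame needed.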
As many have said here, focus on what you're actually building. That means at some point you have to put aside the fear that the framework you've chosen "isn't good enough" and just build something with it. With any framework, there are going to be some things that fit well and more often than not some things that don't - I think it's rare that any one framework perfectly fits any one project, which is probably why so many frameworks are in existence in the first place.
So my ideal philosophy is to pick a technical stack and try something with it. Until you're building something with a stack, you're just messing around.
Now -- the harder-to-solve (and more infuriating) problem I see is a recruitment culture that seems to favor buzzword bingo. Why don't job ads just ask for "a developer who is comfortable working with or picking up the following frameworks quickly" instead of "MUST have at least 3-5 years experience in such-and-such"?
Don't even get me started on 'language quizzes'. Oh man, I hate those. Learning language and framework minutiae by heart is pointless. In production code, it's rare to use ALL the features of a given language. You have to be inquisitive, sure, but turning down candidates because they know .NET 3.0 but not the minutiae of .NET 4.5 is just lazy - unless the codebase actually uses the new features (and even then, you have Google).
(On that note, one of the best job ads I ever saw was for a games company whose only requirement was "show us a complete game you've built". Now THAT is sane and says to me that these guys/gals know how development works in the real world. Unfortunately I didn't have a complete game to send them, oh well.)
WRT learning new things, I'm sort of going through that now. I'm "de-specializing" myself a bit by learning HTML, CSS and JS. I always pegged myself as 'not a web dev guy', but unfortunately the country I live in doesn't have the healthiest games industry. I had to drop the 'not that kind of developer' attitude and go into it with as blank a slate as possible. It's been interesting so far; the hardest mental roadblock I've had to overcome is the idea that "web pages are just documents", left over from my late-90s/early-2000s exposure to basic HTML/CSS.
I'd never made the connection between "low priority project" and "strictness of arbitrary deadlines" as the project having to continually race against its own insignificance. That's a fascinating insight and raises a number of questions.
For example, 'crunch' is something that's endemic in the games industry and a contributing factor is that big publishers will impose arbitrary deadlines on their studios to 'ship a game'. So, what does this say about the significance of these projects? Also, are the deadlines really useless, or do they have some real basis? (EA is not going to go broke because a Need For Speed game is a few months late, for example).
On the other subject of programmers being bad at estimation: I've done a hell of a lot of introspection about this and about why I find estimates so unpleasant. A few reasons that come to mind:
- When you're asked for an estimate, what you're often really being asked for is a deadline.
- In a sprint planning session, I'd often get asked to produce an estimate for "how long will it take to design this?" followed by an estimate of "so how long will it take to implement?". Which makes no sense - I can't answer the second question without answering the first. This often came up with bug fixes or 'small' jobs - things that could reasonably be assumed to fall within a single sprint for at least some of the time.
- I feel like there's an unspoken rule that your estimates relative to other team members signal your competency to the team and to your manager. So if you're constantly providing estimates that are longer than those made by other team members, it's a bit like you're saying "I'm less capable than them". You feel encouraged to provide short estimates and then hide the difference. I see this come up most starkly when sprint planning wants consensus estimates, despite having team members of varying skill levels in the particular task being estimated.
- I find that 'watching the clock' breaks my flow. If I'm having to track my own time, it's like a bit of cognitive load that I don't want. I don't want to have to think about it, because losing your sense of time is so crucial to really being in-flow. Often tools will ask you to enter how many hours you worked on something as part of tracking. So you end up even guesstimating at that ("well it felt like I spent half a day working on it, so I'll put in 4 hours").
- As a result of the previous point, I don't really collect historical data that I can reflect on. It'd be great to be able to go back and see how long it actually took me to do stuff. Not to mention that when you leave a company, you generally lose access to any historical performance information like that which you may have built up over the years.
There is a lot about Scrum that I like, but as a whole I agree it still suffers from the arbitrary deadlines problem. I wonder if there's a business acceptable way that you could do sprint-like activity, eschewing the idea of time estimation and replacing it with effort estimation only. Rather than aggressively fixing the time through schedule, aggressively fix the scope of work and then just work towards completion "until it's done".
An important overall point that the article seems to be highlighting is that, in general (independent of these myths), developers are becoming less and less familiar with what the underlying system is doing. Or perhaps the more correct term is that developers are becoming more 'distant' -- there are more layers of abstraction in the whole system than there have been in the past. At the top, programming languages themselves want to get more abstract to make the process of writing software more efficient and in some ways more automated. At the bottom, hardware needs to use clever design to hide performance limitations that we come up against.
The problem is that sometimes, these abstractions leak. Knowing that what you're seeing is one of those leaks is key to not going down the rabbit hole of, for example, chasing a bug that turns out to be caused by a hardware failure (painful experience!). So you have to keep an eye on the layers below the ones you're directly working with; you don't have to live there, but you shouldn't totally ignore their existence.
I think this is a subtle but excellent point and answers the gut question I first had when reading your top post, which was to think: why would you (not YOU specifically, just the general 'you') marry someone who:
"....you don't respect, someone you think makes bad decisions, someone who isn't kind back, someone who is emotionally selfish or self-centered, etc. "
The answer being that you don't know these things going into it or they don't bother you that much; it's only when they accumulate over time that they start really bothering you. An analogy would be taking a new job at a company with lots of higher-up political BS - it doesn't bother you so much to begin with, but the more you learn the more it can do so.
EDIT: Realized I read your list of qualities as if you were talking about the same person with ALL those qualities in tandem; my "why would you marry?" obviously makes less sense if it's only one or two of those qualities in isolation.
Thanks everyone for the suggestions. I'm starting out by working through some stuff on TreeHouse and then will go from there. Looks like it's going to be a great adventure!
Why is WYSIWYG not used? Is it just that the available tools aren't good?
I would have thought that writing raw HTML would be a bit like writing raw WinForms code these days, or writing XAML by hand instead of using the forms designer.
Note that's the narrow application of visual editors I'm thinking of here - laying out buttons, binding events to buttons, binding data to elements, etc; not the "create a pretty looking page and have it spit out horrible HTML" type stuff.
Most WYSIWYG tools I've used (long ago, mind you) quickly proved not granular enough to be worthwhile. Very often I'll be working with HTML that is any of these:
- initially hidden
- meant to be rendered as part of a larger whole
- needs to be a specific element, have a specific structure, have a specific ID / class
- contains server-side code / includes that render additional HTML
- is actually written in a domain template language such as Mustache or Smarty that the tool may not understand
- is greatly changed by CSS (which may itself need to be compiled, such as in the case of LESS or SASS)
WYSIWYG tools usually can't grok the context of cases like that, and if you try to use them anyway, they end up taking more time than simply writing the HTML yourself.
You can use WYSIWYG editors, but make sure they comply with the latest HTML and CSS specifications, so that when you want to modify the code manually, you'll find yourself in a well-structured framework and not in a jungle of incoherent code.
But these days, thanks to the evolution of the web itself, a lot of modern online tools are coming out. For example, check out the power of https://webflow.com/ - I believe that because these new tools are "indier" and lighter, they have the chance to keep up faster with the new technologies and best practices of web design!
Of course, the best approach - before using these tools - is to learn what best practices (in terms of code) those tools are implementing (if they are) in their rendered code. For example, you should learn how much Twitter Bootstrap [http://getbootstrap.com/] has simplified web layout design over the past couple of years; then you may recognize those tools using it in the code they generate, and you'll feel you can take over yourself and change things further as needed. Another example is jQuery.
Also, in my opinion, a fundamental learning/building facility these days is the "developer tools" that come with browsers (Firefox and Chrome). For example, check https://developer.mozilla.org/en-US/docs/Tools/Page_Inspecto... You don't know how much you can learn from using them!
But if you want to learn even more and start a journey down a learning path, nothing has worked better for me than this little interconnected learning path: http://www.bentobox.io/
Hope I brought you up to speed with the times a bit - good luck! :)
Can you give some more examples of the kinds of closed-mindedness you're trying to get at? I'm trying to understand your line of thought. Are you saying that Myhrvold is being an asshole because he's elevating what he is doing to a much higher level than it justifies?