I've been working remotely for four years, and I've had periods spanning weeks where I moved fewer than 2,000 steps on 90% of the days. When I started working remotely, I went from walking 8k steps per day to ~1k consistently for six months or so.
My schedule when I started was: 1) Get out of bed 2) Go to the computer 3) Sit for ~8h straight, pausing only for the toilet and making lunch 4) After 8h, make dinner, eat, and feel like a vegetable in both mind and body.
I've since rearranged my day to walk at least 5k-8k steps per day. I've found that the step count doesn't matter much; I need at least 30 min of high-intensity exercise every day to not go numb in mind and body. Luckily, I have two daughters, so I'm forced to get more exercise than I did when I worked outside my home.
Nothing. Strangely enough, I can't think of anything new techy that I learned this year.
I've worked full-time remote in mobile and machine learning for the last 3 years, but this year is special. This year has required me to set my own motivations and goals aside to really be there for my 18-month-old daughter and for my wife. At first I tried to resist and focus on both my family and my career, but I found it hard to be my best self for them.
I took a step back from the stress and greed. I learned that I need to keep my career out of my head when I'm caring for my daughter. I'm more motivated than ever to do some cool projects in 2019. And I've learned that I need to maintain a mental balance between work/passion/me and my family.
Towards new beginnings, and growing as a person.. :)
I think you learned the most valuable thing about tech that you could possibly learn: that it is not the most important thing in life. Congratulations!
Thanks! And yes, that's indeed true. There's a fine line between passion and obsession. I found that my motivation and passion are easily morphed into their negative equivalents when I'm under pressure, or when I just need to focus on something else entirely.
Real-world use cases might be: classifying the user's mood from mouse movements, or classifying microphone audio. I.e., processing real-time data that might be too large to upload.
I personally haven't seen any NNs being used in browser apps, but there are plenty of existing mobile apps that use NNs to classify audio/video/etc. directly on the device.
I think they wanted us to do something with NLP and ML, to make it syntactically based. I happen to have studied linguistics and know how complex this would be — and of course it would have to be custom-built for every language. The current version, which just uses line position, works fine and is language-independent.
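For what it's worth, the appeal of the line-position approach is how little machinery it needs. A rough sketch of the idea (my own illustration, with an arbitrary made-up palette, not the product's actual code): assign each line a color purely by its index, cycling through a fixed palette, with no parsing or language knowledge involved.

```java
import java.util.List;

public class LineColors {
    // Hypothetical palette; the real product's colors are unknown to me.
    private static final String[] PALETTE = {"#d33", "#284", "#36b", "#a5a"};

    // Color is determined purely by line position: language-independent,
    // and requires no parsing or NLP.
    public static String colorForLine(int lineIndex) {
        return PALETTE[lineIndex % PALETTE.length];
    }

    public static void main(String[] args) {
        List<String> lines = List.of("First line.", "Second line.", "Third line.");
        for (int i = 0; i < lines.size(); i++) {
            System.out.println(colorForLine(i) + "  " + lines.get(i));
        }
    }
}
```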
Another explanation: they didn't actually know how they wanted us to use ML — they just knew they wanted us to use it.
I do ML/NLP, and recently I have been doing color-related things.
I think you are underestimating how useful NLP could be in your application.
I tried out one of your test cases, and I did find it useful. However, I think that blending colors based on entity recognition along with your line-based system could focus attention on the important parts.
Have a look at [1] (sorry for the long URL) and imagine the colors blending with yours, so entities were slightly brighter than the rest of the text.
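The blending itself would be cheap to prototype. As a rough sketch (my own illustration; the 25% brighten factor and the color values are arbitrary assumptions), an entity span could blend the line's base color toward white so entities come out slightly brighter:

```java
public class ColorBlend {
    // Linearly interpolate between two 24-bit RGB colors.
    // t = 0 returns a, t = 1 returns b.
    public static int blend(int a, int b, double t) {
        int ar = (a >> 16) & 0xFF, ag = (a >> 8) & 0xFF, ab = a & 0xFF;
        int br = (b >> 16) & 0xFF, bg = (b >> 8) & 0xFF, bb = b & 0xFF;
        int r = (int) Math.round(ar + (br - ar) * t);
        int g = (int) Math.round(ag + (bg - ag) * t);
        int bl = (int) Math.round(ab + (bb - ab) * t);
        return (r << 16) | (g << 8) | bl;
    }

    // Brighten a line color for an entity span by blending 25% toward white.
    public static int entityHighlight(int lineColor) {
        return blend(lineColor, 0xFFFFFF, 0.25);
    }

    public static void main(String[] args) {
        System.out.printf("#%06X%n", entityHighlight(0x336699));
    }
}
```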
I don't disagree that NLP could be powerful, and I have thought about ways to make it syntactically aware.
But honestly the biggest barrier we face in adoption among licensees (platforms that would integrate our tech) is that they are simply uncomfortable with text that has colors in it. It's not about how much it helps any metrics (the things that NLP could improve) — it's just that most folks don't want to be early adopters of crazy-looking tech.
Though I should note we do have some great licensees, especially in the education, impact, and accessibility markets.
Regardless, I'd love to chat with you further — please shoot me an email (contact@[domain]) if you're up for chatting about how we could deploy NLP as we develop.
So.. the VCs weren't wrong, and it is really UI/UX issues stopping you? And those same UI/UX issues are actually things your primary business has to overcome anyway?
It sounds to me like you should talk to those investors again ;)
That sounds like a surefire way to annoy and distract the reader. Imagine if every entity was in bold, in a piece of text. It wouldn't draw your attention to important information, it would just disorient you completely. "Why is 'John' in bold? I already know this sentence is about him!".
Yes, I think I agree. But OTOH I thought different colours for different lines of text would be annoying and distracting, but that’s what this app does, and apparently it works.
I believe it is the same functionality as in Sublime, IntelliJ, Eclipse and others. It's a total lifesaver when editing bulk stuff, like a lot of constants and what not! One of my must-haves in an IDE.
Dagger2 is great once you have it set up; generating the dependency graph during compilation makes it lightning fast during execution. I also like the error messages: as long as you can read Java stack traces, all the info for solving errors is in there. I've never seen an unsolvable situation so far.
Kapt does bump the clean build time to about 2 min; incremental builds take between 10-60s, although actually launching the app on an emulator/device adds another 20s.
The cons of Dagger2 that I've experienced since its launch are: the documentation and support are useless, and you're on your own if you don't use a 3rd-party sample project as a template. No one understands scopes and subscopes, subcomponents, etc. The new Android Dagger API is arcane and weird; no one wants to use it.
The Dagger2 team should (if they aren't already) create a Kotlin extension for it, I believe there are some syntactic optimizations to be offered.
Interesting, I will check this and see what I could add to my blog. But yeah, if there were an official example of how to use dagger-android, it would be very helpful. It took me a day to figure out the whole thing and the 2 different types of Dagger...
> No one understands scopes and subscopes, subcomponents etc
I think there are two reasons for this. First is how scopes interact with Android component lifecycles, which makes everything harder (like RxJava, etc.), since they are a complexity multiplier.
Second, there are seemingly bizarre design decisions. For example, if you use dependent components (aka component dependencies), the syntax is pretty straightforward:
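Something like this (a minimal sketch from memory, class names made up; a dependent component can only use bindings its parent explicitly exposes):

```java
@Component(modules = NetworkModule.class)
interface AppComponent {
    // Exposed explicitly so dependent components can consume it.
    OkHttpClient okHttpClient();
}

// The dependent component names the component it depends on directly:
@Component(dependencies = AppComponent.class, modules = UserModule.class)
interface UserComponent {
    void inject(UserActivity activity);
}
```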
So the natural order of things is, components use modules, and components can depend on other components. Pretty simple.
Subcomponents turn this relatively simple schema around. The documentation for subcomponents says:
> "To add a subcomponent to a parent component, add the subcomponent class to the subcomponents attribute of a @Module that the parent component installs."
I find this truly strange: now we have modules depending on (sub)components, which is the inverse of the relationship above.
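Concretely, the wiring the docs describe looks roughly like this (a sketch from memory, hypothetical class names):

```java
@Subcomponent(modules = UserModule.class)
interface UserSubcomponent {
    void inject(UserActivity activity);

    @Subcomponent.Builder
    interface Builder {
        UserSubcomponent build();
    }
}

// The inversion: the *module* declares the subcomponent...
@Module(subcomponents = UserSubcomponent.class)
class AppModule { }

// ...and the parent component installs that module.
@Component(modules = AppModule.class)
interface AppComponent {
    UserSubcomponent.Builder userSubcomponentBuilder();
}
```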
You got it totally right. I just knew that there are 2 ways to implement DI with Dagger: dagger 2 or dagger-android... And the official docs are indeed useless... And before you learn some Dagger basics, the online tutorials just aren't understandable... because they either mix up the 2 plugins or they all have their own different setups.