Hacker News | rcchen's comments

Homeward | Designer, Data Engineer, Software Engineer | Austin, TX and San Mateo, CA (hybrid) | Full time

Homeward is a technology-enabled healthcare provider delivering quality, affordable and comprehensive care to rural America. Our innovative healthcare delivery model is purpose-built for rural America and directly addresses the issues that have historically limited access and quality of care for 60 million Americans.

We recently raised a $50 million Series B from General Catalyst, ARCH Venture Partners, and Human Capital to tackle the lack of healthcare access in rural America. We're a small team of operators passionate about using technology for good. As an early member of our engineering team, you'll work cross-functionally with product, design, operations, and clinical teams and lead the design and development of our technical infrastructure. Your contributions will influence the Homeward product at a foundational level.

To learn more about the roles and apply, see the links below. If you have any questions, feel free to reach out to rchen@homewardhealth.com and just mention you're coming from HN, thanks!

* Product Designer, Growth - https://grnh.se/3f3e8b335us
* Data Engineer - https://grnh.se/9b59e48e5us
* Senior Software Engineer, Care Experience - https://grnh.se/be0c71c55us
* Senior Software Engineer, Growth - https://grnh.se/8b7696e85us

All other open roles can be found at https://grnh.se/3fb8d2595us


There's a bit of a nuance here - his grandniece was killed in the Ethiopian 737 accident [1] so it's personal at some level for him.

[1]: https://www.npr.org/2019/04/04/709999296/ralph-nader-calls-f...


Mmm... I didn't know that. I understand _why_ now, but "never letting the 737 MAX fly again" is a purely emotional decision that helps no one. They didn't tear down the 45-story building after the Hyatt Regency walkway collapse, for example.

HN is showing a trend of blaming Boeing alone, but there were in fact several failures. Any one of the following could have prevented the accidents: better pilot training (there's a runaway stabilizer trim procedure that's actually a memory item for 737 pilots), not allowing single-sensor inputs to flight control systems (an FAA spec failure), or not allowing the MCAS system to command an extreme amount of trim (an engineering failure; it should have been limited to a sane value).


Oof, unfortunate that Detroit missed the cut; I wonder why that happened. They seemed reasonably competitive from an outsider's perspective.


I assume Detroit has a lot of the same problems that St. Louis has. They are both great little cities with an up-and-coming tech industry, but hiring skilled talent and convincing them to move to the area is a nightmare.


Oakland county to the north of Detroit and Washtenaw county to the west of Detroit are two of the wealthiest, most highly educated counties in the country.

And the cities surrounding Detroit have far fewer problems than the city itself.


I think the parent's point still probably stands. Wherever they end up, assuming there really is a 50K-person HQ2 in the end, they have to convince a lot of people to relocate. And a lot of people would say either: 1) Detroit? No thanks, or 2) I can live with Detroit, but when I end up leaving Amazon after two years like so many people do, I'd have to find a job in a different location. I imagine the talent question at least factored in, schools in the general area notwithstanding.


My point is that the problem is largely one of perception.

The hundreds of thousands of people in the area with technical educations aren't working at McDonald's.


"My point is that the problem is largely one of perception."

That very well might be true, but that's Detroit's problem to fix, not Amazon's. Amazon doesn't want to have to deal with that; they want a place that they can get people to come to day 1.


Of course. Note that my first comment is literally pushback against the poor perception...


So, where are they working?


Detroit is shifting fast in this regard. It still has problems, but it's not the city it was 10 years ago, which is where I think a lot of the perception is still stuck.

The Detroit bid put together by Dan Gilbert and team highlighted the fact that almost 50 million people live within a 5-hour drive of the city, along with a huge collection of major research universities. With another big push or two like Amazon HQ2 can provide, I think the city and metro area become a highly attractive economic destination, much as Chicago dominates the region today.


I think the RFP asked for a stable and growing economy. Detroit is still stumbling in this respect.


No employee wants to move there.


Given Detroit's public transportation, I'm not surprised.


That is sad, indeed. Amazon has the scale where they could have essentially created the social infrastructure they want.


Amazon can't create social infrastructure. They can create buildings and jobs, but the social infrastructure of a city consists of things like government policies and quality, an existing tech worker base, etc. They have little to no ability to move the needle on those things on any reasonable timescale.


> If you are going with AMD, EPYC has quite impressive performance.

Does this imply that Dropbox has started testing out EPYC metal?


Now that the Phoenix DC is online, will there be geographical redundancy between Phoenix and NCal for Backblaze customers?


There's currently redundancy inside of Backblaze datacenters with our Vaults architecture (https://www.backblaze.com/blog/vault-cloud-storage-architect...). Georedundancy is on the roadmap for Backblaze B2, but not currently for the Computer Backup service.
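The linked Vaults post describes striping each file across many storage pods with erasure coding (17 data + 3 parity shards in the published design). A toy sketch of the arithmetic behind that redundancy, assuming those published shard counts (this is illustrative, not Backblaze's code):

```typescript
// Erasure-coding arithmetic for a 17-data + 3-parity vault layout.
const dataShards = 17;
const parityShards = 3;
const totalShards = dataShards + parityShards;

// Any `dataShards` of the `totalShards` pieces suffice to reconstruct
// the file, so up to `parityShards` pods can fail simultaneously.
const tolerableFailures = totalShards - dataShards;

// Storage overhead versus the original data (compare: 2x for mirroring).
const overhead = totalShards / dataShards;

console.log(tolerableFailures, overhead.toFixed(2)); // 3 "1.18"
```

The appeal of this scheme over simple replication is that it survives three simultaneous pod failures while storing only ~18% extra data.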


It'll throw an error if a parameter is undefined, and https://github.com/Microsoft/TypeScript/issues/15333, which lands in the next version, auto-suggests spelling corrections as well.
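A minimal sketch of the kind of diagnostic being described (the `RenderOptions` interface and `render` function are made up for illustration):

```typescript
// Hypothetical example; not code from the linked issue.
interface RenderOptions {
  width: number;
  height: number;
}

function render(opts: RenderOptions): string {
  return `${opts.width}x${opts.height}`;
}

// render({ width: 100, heigth: 50 });
// ^ Rejected today because `heigth` is not a known property; with
//   microsoft/TypeScript#15333 the compiler additionally suggests
//   "Did you mean 'height'?".

console.log(render({ width: 100, height: 50 })); // "100x50"
```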


I wonder if their render service is still backed by racked Mac Pros (http://photos.imgix.com/racking-mac-pros). If so, considering the lack of updates around that machine for the last several years, I wonder if they are planning to remain with that solution.


Wow. I just read that.

> We operate our own hardware, run our own datacenters, and manage our own network infrastructure.

This seems insane to me. Although I don't work with image processing beyond "saving for web" in Photoshop, so I could be wrong. Why would they not use AWS or any number of other cloud providers where capacity planning is handled for you?


The bigger question is, why not use elastic infrastructure?

Sure, image optimization is CPU intensive, but their use case is super bursty. When the source image changes, you do multiple optimizations (convert to WebP, lossless JPEG, lossy JPEG quality change, etc), cache the results and you are done. The ratio of "optimizing image" to "serving optimized copy" must be insane.

Dedicated physical hardware feels like a waste.


[I work at imgix and have helped lead the team on the production issues we face and gathering the details for this blog post]

> why not use elastic infrastructure?

This would have been beneficial (to a certain extent) to solve the over-capacity issue that we faced, but it wouldn't actually be of that much help in an under-capacity scenario. None of our machines are ever idle -- it's simply a matter of how long the work queue gets.

> The ratio of "optimizing image" to "serving optimized copy" must be insane

Yeah, but serving an already rendered image doesn't traverse our image rendering stack, as you might imagine. That content is cached at the CDN edge (or, failing that, within our rendered output cache). We don't re-do work when we don't have to; that's an inherent part of how we keep operational costs in check and provide our customers with a service of this complexity at this price point.
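The render-once, serve-many pattern described above can be sketched as follows (an assumed illustration, not imgix's actual code; the names and URL format are hypothetical):

```typescript
// Derivatives are produced on first request and cached; subsequent
// requests for the same derivative never touch the render pipeline.
type Variant = "webp" | "jpeg-lossless" | "jpeg-q60";

const cache = new Map<string, string>();
let renders = 0;

function renderImage(source: string, variant: Variant): string {
  renders++; // the expensive CPU/GPU work would happen here
  return `${source}?rendered=${variant}`;
}

function serve(source: string, variant: Variant): string {
  const key = `${source}:${variant}`;
  let out = cache.get(key);
  if (out === undefined) {
    out = renderImage(source, variant);
    cache.set(key, out);
  }
  return out;
}

// 1,000 requests for the same derivative trigger exactly one render.
for (let i = 0; i < 1000; i++) serve("cat.png", "webp");
console.log(renders); // 1
```

In production the `Map` would be a CDN edge cache plus an origin-side rendered-output cache, but the cost structure is the same: the render-to-serve ratio stays tiny.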


Running the delivery/CDN part on AWS would probably be way too expensive, and if you are running parts of your own infrastructure anyway (and thus are already paying for sysadmins, a data center, ...), you can put at least the base load of your processing there as well. But they could of course have the option to fall back/scale out to a cloud provider if they didn't "require" Mac OS X...


I would guess because of costs. At one point I was going to build a video processing app, and after doing some calculations the cloud costs were insane, while the savings with actual hardware were tremendous. I imagine image processing is a lot less demanding than video, but maybe it's the same deal. Then again... they did rack Mac Pros, so who the heck knows what they're thinking.


[I work at imgix and have helped lead the team on the production issues we face and gathering the details for this blog post]

The main reasons are twofold:

A) cost. I haven't done the math yet in 2017, but my recollection of the difference circa 2015-2016 between GPU instances on AWS and our solution is a COGS that's about 3x higher on AWS. GPU instance prices have come down a bit since then, but they're still very expensive and somewhat supply constrained as well.

B) Technical frameworks. We'll reach an inflection point this year or next where we've added or changed so much of the rendering pipeline that we've basically rewritten it from scratch, but until then we're still reaping the benefit of building on top of CoreImage. It really benefited us early on in getting to an MVP faster, and it's continued to pay (slightly reduced) dividends over the course of the company's life. At some point the dividends will stop, and then we'll move on.
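The rough magnitude of the ~3x COGS gap in (A) can be sketched with placeholder numbers (all prices below are hypothetical, not imgix's actual figures):

```typescript
// Back-of-the-envelope cost comparison with made-up hourly rates.
const awsGpuHourly = 0.90; // assumed on-demand GPU instance, $/hr
const ownedHourly = 0.30;  // assumed amortized hardware + colo, $/hr

const ratio = awsGpuHourly / ownedHourly;
console.log(ratio.toFixed(1)); // "3.0"
```

The point isn't the specific rates, which vary by instance type and amortization schedule, but that an always-busy fleet amortizes owned hardware well enough to open a multiple-x gap versus on-demand GPU pricing.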

I understand the skepticism around Mac Pros. There are challenges there -- it isn't my favorite solution of all time. It is a practical solution for us, though. I have no room for computer religion or for doing things because they're cool or shiny or new. Anyone who's ever spoken to me for a few minutes in person about technology can attest to that.

The Macs don't have IPMI, for example. That sucks, but we do have power outlet control and a network installer as a way to back into "out-of-band management" for them. They're largely stateless, so it's a tolerable solution.

They do run a Unix-like OS (thank god), and they do represent a good price-to-performance ratio for image rendering hardware. We could do a little better with Linux servers and GPUs, but the upside is only around 10-20%, and there's still engineering work to do there. It's on our roadmap to explore this more fully and maybe start taking more concrete steps in that direction. For now, OS X still gives us more than it takes in terms of cost.


If I remember correctly, a ton of their original image processing code was written for OS X and was nontrivial to port to another platform (it might have used CoreImage?), so racking Mac Pros was the only solution they could find.


There must be something else going on, because the authorization hold has been $1.25 for Lyft for as long as I've been using it.


They may have some sort of risk factor, but it was definitely $25. Judging from the results on Google, the $25 charge is not uncommon.


*Mac OS X 10.6 - 10.8

Can we please fix the title? Seems pretty misleading


Why? React and React Native are both pretty core to Facebook's main applications these days (all of Instagram is written in React, there's a smattering of components all over the main Facebook page, and multiple Facebook-written applications are built with React Native, including parts of the main Facebook app). In contrast, Parse was more of a dev-infrastructure product that didn't end up impacting developer relations as much as Facebook wanted it to (I'm guessing).


I love React and React Native... this kind of announcement is scary, though, because Facebook is a big part of the ecosystem.

