It sounds like you're suggesting that we need machines that mass produce things like automated pipetting machines and the robots that glue those sorts of machines together.
Replacing a skilled technician is remarkably challenging. When you automate this work, you often just end up wasting a ton of resources rather than accelerating discovery. Simply integrating devices from several vendors (or even one vendor) can take months.
I've built microscopes intended to be installed inside workcells similar to what companies like Transcriptic built (https://www.transcriptic.com/). So my scope could be automated by the workcell automation components (robot arms, motors, conveyors, etc).
When I demoed my scope (which is similar to a 3D printer, using low-cost steppers and other hobbyist-grade components), the CEO gave me some very educational feedback. They couldn't build a system around my style of components, because a single component failure would bring the whole system down and require an expensive service call (along with expensive downtime for the user). Instead, their mechanical engineer would select extremely high-quality components with a very low probability of failure, to minimize service calls and other expensive outages.
Unfortunately, the cost curve for reliability is not pretty: reducing mechanical failures to nearly zero costs close to infinity dollars.
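To make the reliability math concrete: if the workcell needs every component working, system uptime is the product of component reliabilities, so cheap parts compound badly. A minimal sketch with made-up failure rates (the 2% and 0.1% figures are illustrative assumptions, not real vendor data):

```python
def system_reliability(p_fail: float, n_components: int) -> float:
    """Probability the whole workcell is up, assuming independent
    component failures and that any one failure takes the system down."""
    return (1.0 - p_fail) ** n_components

# 30 hobbyist-grade parts, each with a hypothetical 2% chance of
# failing within a service interval:
cheap = system_reliability(0.02, 30)    # ~0.55 -- down almost half the time
# The same 30 parts at an industrial-grade 0.1% failure probability:
pricey = system_reliability(0.001, 30)  # ~0.97

print(f"cheap build uptime:   {cheap:.3f}")
print(f"premium build uptime: {pricey:.3f}")
```

This is why per-component quality matters so much more in an integrated workcell than in a standalone hobbyist device: the exponent does the damage.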
One of the reasons Google's book scanning was so scalable was their choice to build fairly simple, cheap, easy-to-maintain machines, build a lot of them, and train the scanning operators to work around those machines' quirks. Just like with their compute clusters, they tolerated a much higher failure rate and engineered around it, where other groups would just buy one expensive device with a service contract.
This sounds like it could be centralised, a bit like the clouds in the IT world. A low failure rate of 1-3% is comparable to servers in a rack, but if you have thousands of them, then this is just a statistic and not a servicing issue. Several hyperscalers simply leave failed nodes where they are, it’s not worth the bother to service them!
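At fleet scale, that failure rate really does become a statistic rather than an emergency. A rough sketch of the math, under a simple binomial model with illustrative numbers (2,000 devices at a 2% failure rate is an assumption, not a real deployment):

```python
import math

def expected_failed(n_devices: int, p_fail: float) -> float:
    """Expected number of devices down at any given time."""
    return n_devices * p_fail

def prob_at_most(k: int, n: int, p: float) -> float:
    """P(at most k failures) under a binomial model with independent failures."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

n, p = 2000, 0.02
mean = expected_failed(n, p)        # ~40 devices down on average
sigma = math.sqrt(n * p * (1 - p))  # ~6.3; provision spares for mean + a few sigma
print(f"expected down: {mean:.0f}, stddev: {sigma:.1f}")
```

With thousands of units, the operator just provisions excess capacity for the expected failures plus a safety margin, exactly as hyperscalers do with rack servers.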
Maybe the next startup idea is biochemistry as a service, centralised to a large lab facility with hundreds of each device, maintained by a dedicated team of on-site professionals.
None of the companies that proposed this concept have managed to demonstrate strong marketplace viability. A lot of discovery science remains extremely manual, artisanal, and vehemently opposed to automation.
> They couldn't build a system that used my style of components because a failure due to a component would bring the whole system down and require an expensive service call
Could they not make the scope easily replaceable by the user and just supply a couple of spares?
Just thinking of how cars are complex machines but a huge variety of parts could be replaced by someone willing to spend a couple of hours learning how.
That’s similar to how Google won in distributed systems. They used cheap commodity PCs in shipping containers when everyone else was buying huge, expensive Sun servers.
Yes, and that's the reason I went to work at Google: to get access to their distributed systems and use ML to scale up biology. I was never able to join Google Research and do the work I wanted (but DeepMind went ahead and solved protein structure prediction, so the job got done anyway).
They really didn't solve it. AlphaFold works great for proteins that have a homologous protein with a crystal structure. It is absolutely useless for proteins with no published structure to use as a template - e.g. many of the undrugged cancer targets in existence.
@dekhn it is true (I also work in the field. I'm a software engineer who got a wet-lab PhD in biochemistry and work at a biotech doing oncology drug discovery)
There is a big range in both automation capabilities and prices.
We have a couple of automation systems that are semi-custom: the robot can operate highly specific, non-standard instruments that 99.9% of labs aren't running. The systems have to handle very accurate pipetting of small volumes (microliters), moving plates between stations, heating, shaking, tracking barcodes, dispensing and racking fresh pipette tips, etc. Different protocols, experiments, and workflows can require vastly different setups.
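For a sense of what such a workflow looks like in software, here's a hypothetical sketch (not any vendor's real API; station names, actions, and parameters are all invented for illustration) of a protocol as explicit steps a scheduler could dispatch:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    station: str            # e.g. "pipettor", "shaker", "heater"
    action: str             # e.g. "aspirate", "dispense", "shake"
    params: dict = field(default_factory=dict)

# A toy assay workflow; volumes in microliters, times in seconds.
workflow = [
    Step("barcode_reader", "scan", {"plate": "P1"}),
    Step("pipettor", "aspirate", {"volume_ul": 5.0, "source": "reagent_A"}),
    Step("pipettor", "dispense", {"volume_ul": 5.0, "dest_plate": "P1"}),
    Step("shaker", "shake", {"rpm": 600, "seconds": 30}),
    Step("heater", "incubate", {"celsius": 37, "seconds": 1800}),
]

def run(workflow):
    for step in workflow:
        # A real scheduler would also move plates between stations,
        # track tip inventory, and log barcodes for traceability.
        print(f"{step.station}: {step.action} {step.params}")

run(workflow)
```

The point is that each protocol is a different sequence of station-level operations, which is why two experiments can demand vastly different physical setups even on the same robot.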