Discovery calls are the initial stage of a selling process. I am biased, but "selling" to a customer gives a much more honest picture of what they truly care about than a generic customer interview. Despite most folks' aversion to sales, if you have a compelling story and can narrow in on a general area of pain (which you already have), most people are very willing to hop on a call and tell you exactly what you need to know. Be sincere and open to their expertise, but be willing to challenge them and ask prodding questions. This tests their conviction and reveals what will actually get them to advocate internally for a purchase - the true KPI of most products. You might feel goofy since you don't have anything to truly sell yet. Rest assured, your buyers will be very willing to cooperate if you deliver, and you will stand out from the crowded majority who don't.
This is absolutely true in less regulated/IP-paranoid industries. I work with (a specific vertical of) small business owners who will talk your ear off about their problems, exactly as you said (though then there's the at least tractable problem of filtering through the noise!). But life sciences and pharma... your potential users' livelihoods and even freedom may be on the line if they say too much. You'll need to be seen as trusted, not just someone with a compelling story, no matter how compelling. Which means it's very hard to break in.
@williamcotton - saw this the other day in the prompt engineering post - I enjoyed it. I sent you a message on your site, but something looks broken. If you use LinkedIn, I've sent a connection request.
The "model" is not the hard part! If a pretrained model is generalized enough and valuable enough then it can exist as simple API or a runtime, it stops being a "model." If you have the engineering chops to deploy models in production for your application then deploying a pretrained model is trivial. If its valuable enough to the business then squeezing a few more points by fine tuning the model is worth it.
Kudos to her. Conservation tech is a challenging and exciting test bed for "hackers" because it's a hyper-dimensional chessboard of constraints and strategies. It's tempting to reduce the solution to "just do this." On one end, poaching is lucrative in areas with massive economic disparity. On the other, climate change and the pressure we put on natural resources are unbalancing ecosystems and driving human-wildlife conflict.
A project I've worked on is TrailGuard AI, run by resolve.ngo; the principal is Eric Dinerstein, the former chief scientist at WWF. I can say they are running a startup in every sense of the word, but the result is beautiful: a satellite/GSM-connected, battery-powered, AI-enabled cryptic trail cam that works as a Swiss Army knife, covering everything from animal censuses to poacher detection, illegal logging truck detection, and preventing human-wildlife conflict (detect tiger + sound alarm).
The measure of VC is growth, and if you are a component of larger systems or programs, you will inevitably be throttled. If you are a platform and can own the end to end, then you should take VC capital and invest heavily in well-connected BDs who are known by the PMs and have a nuanced ability to navigate the orgs.
Palantir (which is end to end) has opened up VC appetite for defense, and you can see a distinct signaling pattern in the VC-backed defense startups vs. the traditional small defense business upstarts.
I work at a "startup" that develops a component, so we took a strategic investment from a defense contractor. That has enabled us to develop core technology and grow commercially while being patient with the arduous cycles of defense programs.
As a thought experiment this is cool, but from a practical perspective it's too focused on a specific architecture, and if anything, adding perturbations might (slightly) help the training process (sketched below).
On the thought-experiment side, I think the moral implications cut both ways. Mass image recognition is not always bad - think about content moderation or detecting the transfer of images of abuse. As a society we want AI to flag these things.
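To illustrate the augmentation point above, here is a toy sketch (my own example; the epsilon and the loop names are hypothetical) of how small input perturbations act as an ordinary regularizer rather than an attack:

```python
import torch

def perturb(x: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Add bounded uniform noise, keeping pixel values in [0, 1]."""
    noise = (torch.rand_like(x) * 2 - 1) * eps
    return (x + noise).clamp(0.0, 1.0)

# Inside a hypothetical training loop, the "adversarial" pixels just become
# data augmentation:
# for images, labels in loader:
#     images = perturb(images)
#     loss = loss_fn(model(images), labels)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```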
Kudos to whoever thought of this at SFDC. This is a visionary acquisition, and it makes sense on so many levels, from the fabric of Slack's unique sales model to what it means for enterprise communication.
In the section on core technology, I'm curious about this statement:
"As we continue to collect data over time, the system will become more robust."
This is one of those statements that sounds true, and I know Karpathy coined the term "Operation Vacation," but it's not clear that more data always equals a more robust system.
> but it's not clear that more data always equals a more robust system.
Of course. However, I imagine the self-driving feature space is so large, and so hard to cover with collected data, that they probably do have lots of room for improvement simply from more data.
> it's not clear that more data always equals a more robust system
It is for the "learning organization." If Musk is able to sustain a generative culture (in Westrum's sense) within Tesla, more data will equal a more robust system. If Tesla shifts to a "rule/process" or "power" based culture, your concern will come true.
Fully agree with this insight. Superior UI/UX will make a crappy model look like the magical future we imagine. When you flip on a Snapchat face filter, nobody thinks about the quality or accuracy of the object detection, facial landmark localization, and active shape model doing the lifting. I guess this is what they call the "AI effect."
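As a toy illustration of the pipeline hidden behind such a filter (not Snapchat's actual stack; the file names are placeholders), detection and landmark localization happen out of sight, and the user only ever sees the overlay:

```python
import cv2
import mediapipe as mp

image = cv2.imread("selfie.jpg")

# Detect the face and localize its landmarks.
with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# The "filter" is just geometry drawn on top of these landmarks; the user
# never sees the detection quality, only the overlay.
if results.multi_face_landmarks:
    h, w, _ = image.shape
    for lm in results.multi_face_landmarks[0].landmark:
        cv2.circle(image, (int(lm.x * w), int(lm.y * h)), 1, (0, 255, 0), -1)
cv2.imwrite("landmarks.jpg", image)
```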