A good alternative is to just run your Terraform yourself in GitHub Actions or GitLab CI/CD, and host your state in S3. We are using open-source Terramate to orchestrate Terraform, and it makes the transition and operations super easy.
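For reference, the S3-backed state setup is just a backend block; this is a minimal sketch where the bucket, key, and lock-table names are placeholders you'd replace with your own:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"     # placeholder bucket name
    key            = "prod/terraform.tfstate" # path to the state object
    region         = "us-east-1"
    dynamodb_table = "tf-state-locks"         # optional: state locking
    encrypt        = true
  }
}
```

With that in place, the CI job only needs AWS credentials (e.g. via OIDC) before running `terraform init` / `plan` / `apply`.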
That looks pretty cool! OpenTelemetry Collector configuration files are pretty confusing. I do like the collector, though. It makes it easy to send a subset of your telemetry to trusted partners.
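As a sketch of that "subset to a partner" pattern: a dedicated pipeline with a filter processor that drops anything not flagged for sharing, then exports only the remainder. The endpoint and the `share.with.partner` attribute here are made up for illustration:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

processors:
  # drop every span NOT flagged for sharing (hypothetical attribute)
  filter/partner_only:
    traces:
      span:
        - 'attributes["share.with.partner"] != true'

exporters:
  otlphttp/partner:
    endpoint: https://partner.example.com:4318  # made-up partner endpoint

service:
  pipelines:
    traces/partner:
      receivers: [otlp]
      processors: [filter/partner_only]
      exporters: [otlphttp/partner]
```

Your full-fidelity pipeline to your own backend runs alongside this one; the partner pipeline just sees a filtered copy.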
1. Yes. We're essentially a baremetal solution. Either we run it on your behalf (this is kraft.cloud), or we deliver an ISO or AMI and this can be installed on-prem/cloud-prem. This latter solution is our enterprise offering.
2. eBPF used in this way, I feel, has become a symptom of the problem that we're trying to solve at Unikraft: the bloated Linux (and cloud-native) stack. In the end, you're adding more tools or doing more tricks to try to do computationally less, since performance and running faster come down to performing fewer operations in the critical path. Unikraft approaches the problem space differently: bottom-up (app first, then dependencies, then select OS libraries/primitives based on required syscalls) as opposed to top-down (starting from Linux and its distros, removing functionality, taking short-cuts, etc.).
When I was at Google, I helped develop models using this modeling technique that ended up in products such as Google Maps. These models are highly interpretable and, like xgboost, can be applied to classification and regression tasks and more. I would say that there are a few cases in particular where these models really shine:
(1) You want more than just a prediction. You want to understand how your data impacts the prediction. For example, you might have a model for predicting sales of an item on your e-commerce platform, but you care deeply about how price impacts the prediction. These models have feature calibrators, which can really help you understand how the model understands the feature. In the case of price, you might discover that the calibrator follows a concave shape, indicating that prices that are too low or too high perform similarly and that there is likely an optimal price somewhere in the middle. Perhaps you'll even discover something close to that optimal price from the calibration analysis.
(2) You want to embed domain knowledge into the model. You want to make sure the model follows certain expected behaviors. This is where shape constraints shine. For example, when predicting creditworthiness, you'll want to make sure that more timely payments will only ever increase your creditworthiness (since otherwise the model wouldn't be fair). Another example here would be predicting the likelihood of a sale for an item on Amazon. It would make sense for the model to trust a feature for star rating more if the number of reviews is higher (since that's how a human would likely judge the rating themselves). A trust shape constraint here would make embedding this expected behavior into the model not just possible but easy.
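To make the calibrator and shape-constraint ideas concrete, here is a minimal numpy sketch (not the actual library API): a piecewise-linear calibrator over fixed keypoints, plus a projection that enforces a monotonicity constraint by clipping each segment's delta at zero. The keypoints and output values are hypothetical, standing in for what training would learn for a "timely payments" feature:

```python
import numpy as np

def pwl_calibrate(x, keypoints, outputs):
    """Piecewise-linear calibrator: map raw feature values to learned
    output values, interpolating linearly between fixed keypoints."""
    return np.interp(x, keypoints, outputs)

def make_monotone(outputs):
    """Enforce a non-decreasing shape constraint on calibrator outputs
    by clipping each segment's delta at zero and re-accumulating."""
    deltas = np.maximum(np.diff(outputs), 0.0)
    return outputs[0] + np.concatenate(([0.0], np.cumsum(deltas)))

# Hypothetical learned calibrator: timely payments -> creditworthiness score.
keypoints   = np.array([0.0, 5.0, 10.0, 20.0])
raw_outputs = np.array([0.1, 0.5, 0.4, 0.9])  # dip at 10 violates monotonicity
outputs     = make_monotone(raw_outputs)      # -> [0.1, 0.5, 0.5, 1.0]
```

After the projection, `pwl_calibrate` is guaranteed non-decreasing in the feature, which is exactly the behavior the creditworthiness example calls for; the real library enforces this during training rather than as a post-hoc projection.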
Ultimately, these models are what's considered "universal approximators", just like DNNs: with sufficient data they can approximate any function, so you can use them for any prediction task you're looking to solve. Think of them as a more interpretable and controllable alternative to xgboost that you can try during modeling.
- team are scientists from the Wendelstein 7-X stellarator plasma physics experiment
- want to innovate on the two most impactful learnings from the experiment towards commercial fusion power, namely
  - using high-temperature superconductors in these weirdly shaped magnetic coils
  - 3D printing the water-cooling vessels for the divertor
Researchers at the Max Planck Institute for Plasma Physics (IPP) have found a way to significantly reduce the distance between plasma and divertor by modelling the X-point radiator.