Hacker News | shaqbert's comments

A good alternative is to just run Terraform yourself in GitHub Actions or GitLab CI/CD and host your state in S3. We use the open-source Terramate to orchestrate Terraform, and it makes the transition and day-to-day operations super easy.
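For reference, pointing Terraform at S3-hosted state is a few lines of backend config (bucket, key, and table names below are placeholders; the DynamoDB table is optional but gives you state locking):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state"            # placeholder bucket name
    key            = "prod/terraform.tfstate" # path to the state file in the bucket
    region         = "eu-central-1"
    dynamodb_table = "my-tf-locks"            # optional: state locking
    encrypt        = true
  }
}
```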


For now, but soon running these things in GitHub Actions might pose an extra cost. https://news.ycombinator.com/item?id=46291156


True, but a rounding error compared to the HashiCorp bill...


5s for Docker containers vs 20ms now ... holy moly, this is fast


From what we've seen, micro VMs could probably do something very fast too (150ms?) but we thought 20ms was pretty crazy.


Otel is indeed quite complex. And the docs are not meant for quick wins...

Otelbin [0] has helped me quite a bit in configuring and making sense of it, and getting stuff done.

[0]: https://www.otelbin.io/


That looks pretty cool! OpenTelemetry Collector configuration files are pretty confusing. Do like the collector, though. Makes it easy to send a subset of your telemetry to trusted partners.
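For anyone curious what "a subset to a partner" looks like in practice, here is a rough sketch of a Collector config with two trace pipelines: everything goes to an internal backend, and a filter processor drops non-matching spans before the partner exporter (endpoints, pipeline names, and the `service.name` value are placeholders):

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  # Drop every span that is NOT from the service the partner should see.
  filter/partner-subset:
    error_mode: ignore
    traces:
      span:
        - resource.attributes["service.name"] != "checkout"

exporters:
  otlp/internal:
    endpoint: collector.internal.example:4317
  otlp/partner:
    endpoint: telemetry.partner.example:4317

service:
  pipelines:
    traces/internal:
      receivers: [otlp]
      exporters: [otlp/internal]
    traces/partner:
      receivers: [otlp]
      processors: [filter/partner-subset]
      exporters: [otlp/partner]
```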


The lessons from the Ukraine war:

- "zerg" cheap mass produced unit approach is superior to "protoss" expensive but few units

- if you have expensive kit, better hide it in a drone/mid range rocket proof shelter

- pilots are a bottleneck resource in a peer war

- "jack of all trades" systems are inferior to "do one job well" systems


This is related to an earlier post [0] and shows the underlying open-source repo.

[0]: https://news.ycombinator.com/item?id=39902949


Hi, Alex here from Unikraft, that's right!

You can find more information about Unikraft here:

- https://unikraft.io -- The Unikraft Company

- https://unikraft.org -- The open-source project

- https://kraft.cloud -- The Millisecond Cloud Platform based on Unikraft


This looks amazing.

1- Is there an option to run your own "kraftcloud" in your own cloud account?

2- How does this compare to companies leveraging eBPF to squeeze more performance from the Kernel?


Hi,

1. Yes. We're essentially a baremetal solution. Either we run it on your behalf (this is kraft.cloud), or we deliver an ISO or AMI and this can be installed on-prem/cloud-prem. This latter solution is our enterprise offering.

2. eBPF used in this way, I feel, has become a symptom of the problem we're trying to solve at Unikraft: the bloated Linux (and cloud-native) stack. In the end, you're adding more tools, or doing more tricks, to try to do computationally less, since performance and running faster come down to performing fewer operations in the critical path. Unikraft approaches the problem space differently: bottom-up (app first, then dependencies, then select OS libraries/primitives based on required syscalls) as opposed to top-down (starting from Linux and its distros, removing functionality, taking short-cuts, etc.).


This is surprisingly elegant and simple... wish I had found Mirascope like three months earlier.

On the repo roadmap section I am seeing RAG support coming up, this would be extra nice for my use case. Any ETA?


I wish I had started working on Mirascope 3 months earlier!

I'm hoping we have RAG support sometime in early-mid April :)


Hi William, super nice tech. Alas, I am suffering from a lack of imagination; can you give some example use cases where this tech would really shine?


Hi shaqbert! I'd love to give some examples.

When I was at Google, I helped develop models using this modeling technique that ended up in products such as Google Maps. These models are highly interpretable and, like xgboost, can be applied to classification and regression tasks and more. I would say that there are a few cases in particular where these models really shine:

(1) You want more than just a prediction. You want to understand how your data impacts the prediction. For example, you might have a model for predicting sales of an item on your e-commerce platform, but you care deeply about how price impacts the prediction. These models have feature calibrators, which can really help you understand how the model understands the feature. In the case of price, you might discover that the calibrator follows a concave shape, indicating that prices that are too low or too high perform similarly and that there is likely an optimal price somewhere in the middle. Perhaps you'll even get close to that optimal price from the calibration analysis alone.

(2) You want to embed domain knowledge into the model. You want to make sure that the model follows certain expected behaviors. This is where shape constraints shine. For example, when predicting creditworthiness, you'll want to make sure that more timely payments only ever increase your creditworthiness (since otherwise the model wouldn't be fair). Another example would be predicting the likelihood of a sale for an item on Amazon. It would make sense for the model to trust the star-rating feature more when the number of reviews is higher (since that's how a human would likely judge the rating themselves). A trust shape constraint makes embedding this expected behavior into the model not just possible but easy.

Ultimately these models are what's considered "universal approximators" just like DNNs, able to approximate any function given sufficient data, so you can use them for any prediction task you're looking to solve (think of them as an alternative to xgboost you can try during modeling) while staying more interpretable and controllable.
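The calibrator-plus-monotonicity idea can be sketched in a few lines. The toy below is purely illustrative (function and parameter names are mine, not the library's API): a piecewise-linear calibrator whose keypoint outputs are running sums of squared raw parameters, so the fitted curve is monotonically increasing by construction, which is the essence of a monotonic shape constraint:

```python
def make_monotone_calibrator(keypoints, raw_increments):
    """Build a 1-D piecewise-linear calibrator that can only increase."""
    # Squaring each raw parameter guarantees a non-negative step, so the
    # running total (the output at each keypoint) can only go up with x.
    outputs, total = [], 0.0
    for r in raw_increments:
        total += r * r
        outputs.append(total)

    def calibrate(x):
        # Linear interpolation between keypoints, clamped at the ends.
        if x <= keypoints[0]:
            return outputs[0]
        if x >= keypoints[-1]:
            return outputs[-1]
        for i in range(1, len(keypoints)):
            if x <= keypoints[i]:
                t = (x - keypoints[i - 1]) / (keypoints[i] - keypoints[i - 1])
                return outputs[i - 1] + t * (outputs[i] - outputs[i - 1])

    return calibrate

# Whatever raw values a trainer picks, the resulting curve is monotone.
cal = make_monotone_calibrator([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.5, 2.0])
assert cal(1.0) <= cal(2.0) <= cal(3.0)
```

In a real lattice library the raw increments would be trainable parameters; gradient descent is free to move them anywhere, and the constraint still holds.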


  - team are scientists from the Wendelstein 7-X stellarator plasma physics experiment
  - they want to build on the two most impactful learnings from the experiment towards commercial fusion power, namely:
  - using high-temperature superconductors in the weirdly shaped magnetic coils
  - 3D printing the water-cooling vessels for the divertor


Researchers at the Max Planck Institute for Plasma Physics (IPP) have found a way to significantly reduce the distance between plasma and divertor by modelling the X-point radiator.

