Hacker News | medi_naseri's comments

Hello HN!

I am a builder and researcher building tools to make compute and datacenters grid-interactive. Wanted to introduce myself to the community and get your thoughts on this topic!


Was just reading about many people canceling OpenAI for Anthropic. A few days ago everyone was against Anthropic.


You are very optimistic thinking it will take 80 years.


PHEVs are really good for people with trust issues, like me.

I don't trust EVs completely, and I still think a gas station is a more reliable source of energy.


This is so freaking awesome. I am working on a project trying to run 10 models on two GPUs; loading/offloading is the only solution I have in mind.

Will try getting this deployed.

Are the advertised cold-start timings for a condition where no other model is loaded on the GPUs?
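For time-sharing more models than fit on the GPUs, a per-GPU LRU eviction loop is one common approach. Here is a minimal pure-Python sketch; the `GpuModelCache` class and the `load_fn`/`unload_fn` callbacks are hypothetical stand-ins for whatever the real framework uses to move weights on and off the GPU:

```python
from collections import OrderedDict

class GpuModelCache:
    """Keep at most `capacity` models resident; evict least recently used."""

    def __init__(self, capacity, load_fn, unload_fn):
        self.capacity = capacity
        self.load_fn = load_fn      # hypothetical: loads weights onto the GPU
        self.unload_fn = unload_fn  # hypothetical: offloads weights to host/disk
        self.resident = OrderedDict()  # model name -> loaded handle

    def get(self, name):
        if name in self.resident:
            self.resident.move_to_end(name)  # mark as most recently used
            return self.resident[name]
        if len(self.resident) >= self.capacity:
            # Evict the least recently used model to make room.
            evicted, handle = self.resident.popitem(last=False)
            self.unload_fn(evicted, handle)
        handle = self.load_fn(name)
        self.resident[name] = handle
        return handle

# Usage sketch: two resident slots, requests across several models.
events = []
cache = GpuModelCache(
    capacity=2,
    load_fn=lambda n: events.append(("load", n)) or n,
    unload_fn=lambda n, h: events.append(("unload", n)),
)
for name in ["m0", "m1", "m2", "m1", "m0"]:
    cache.get(name)
```

Under this policy, a request for an already-resident model is a cheap cache hit, so cold-start cost is only paid on eviction and reload.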


I don't quite get whether that is a bad thing for Nvidia and AMD. How would they be able to optimize their GPUs for a model?


If the model initially runs (maybe) better on Huawei than on NVIDIA hardware, hosting providers will have some motivation to buy more Huawei hardware instead, and software developers will learn to work with Huawei HW.

I personally hope we see a Huawei or similar competitor to the Strix Halo and NVIDIA Spark lineup for "prosumer" LLM work.


Via drivers I’d assume


This is very cool. Will give it a shot.


The hardware market has become very unpredictable. I have had vendors reject my replacement orders because new orders were 20% more expensive 15 days after I initially ordered the DRAM.


I would probably ask the models to explain their "WHY". Probably the smartest models should ask, "Where is your car?"

