Most tech companies need VC money to bootstrap product development and the business. The question is how much is "too much"? That's a very subjective question, especially considering the huge differences across domains and industries. It's easy for most people (investors/founders/LPs) to simply follow market trends; few keep cool heads in volatile times. The market will play out anyway.
It's interesting that at the bottom of this article there is a recommended article about PCIe 7.0 published in June 2025, which says "While PCIe 8.0 is still years away, PCIe 7.0 is a lot closer." You never know how these committees work.
Nevertheless, the performance increases of I/O devices these days are insane. I wonder whether and when these performance promises will materialize. We are only on PCIe 5.0 this year, and it's not that common yet. I wonder how fast adoption will be, since that is what pushes manufacturers to iterate. The thing is, even at the current PCIe 5.0 level, a lot of software already needs to be rewritten to take full advantage of new devices. But rewriting software takes time. If software iteration is slow, it's questionable whether consumers will keep paying for new generations of devices.
Agreed. "Open sourcing" means you do it for free but your work benefits others, and you may have an opportunity to pass the torch to others. For hobbies you keep it to yourself. I played an instrument in my spare time for many years and enjoyed it a lot. I eventually gave up because my life changed and many other things popped up. On reflection, I still think it was an interesting experience for all those years, but I don't feel anything for it now.
Interesting idea. I wonder what the use cases are, off the top of your head? I ask because, in my understanding, people who care about concurrency and parallelism are often the same people who care about performance.
Like I said, the use case is heavy numerical workloads with, e.g., dataframes, in a context where the data is too big for something like Python to handle. Using Nim for this is quite difficult too, due to value unboxing overhead. It is easier to optimize for things like cache locality and to avoid unnecessary allocations using this tool.
I wouldn't reply, except that you mentioned this unboxing twice and I think people might get the wrong idea. I almost never use `ref T`/`ref object` in Nim. I find that if you annotate functions at the C level, gcc can even autovectorize a lot, like all four central moments: https://github.com/c-blake/adix/blob/3bee09a24313f8d92c185c9... - the relevance being that redoing your own BLAS level 1/2 stuff is really not so bad if the backend is autovectorizing it for you, while a full SVD/matrix multiply/factorization can be a lot of work. Anyway, as per the link, the Nim itself is just straight, idiomatic Nim.
Parallelism/threads is a whole other can of worms, of course. It is unfortunate that the stdlib is weak here, as well as for numerics and other things, and that people are as dependency-allergic as in C culture.
Anyway, "easier to optimize" is often subjective, and I don't mean to discourage you.
The goal is to not need to write C just to get performant code, especially for things like concurrency and numerics. While Nim can be fast without `ref object`, and you can guide the compiler to autovectorize in some cases, it often requires deep knowledge of both the compiler and backend behaviour. That’s not a good developer experience for many users.
Multithreading in Nim is, to say the least, bad, and has been for a while. The standard library option for multithreading is deprecated, and most alternatives, like weave, are either unmaintained or severely limited (taskpools, malebolgia, etc.). There's no straightforward, idiomatic way to write data-parallel or task-parallel code in Nim today.
The idea of the project is to make shared-memory parallelism as simple as writing a `parallel:` block, without macros or unsafe hacks, and with synchronization handled automatically by the compiler.
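A rough sketch of what that could look like (hypothetical syntax; the `parallel:` block and the compiler-managed reduction are the proposal, not an existing Nim API):

```nim
# Hypothetical: the compiler partitions the loop across threads and
# synchronizes the shared accumulator automatically.
var total = 0
parallel:
  for x in data:
    total += score(x)   # no locks, no macros: compiler-inserted sync
```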
Of course, performance can be dragged out of Nim with effort, but there's a need for a language where fast, concurrent, GC-optional code is the default, not something one has to wrestle into existence.
FWIW, I almost always have to look at the generated assembly (never mind C) to even verify things are vectorized. I hope you realize the promise of a PLang where that doesn't need checking or some kind of "autovec-miss" compiler flags or so on and wish you all the best.
My personal feeling is that medical practices have not evolved much with computing. Electrical engineering, mechanical engineering, biomedical engineering, etc. have all contributed a lot to how doctors treat diseases. But whether medical records are digitized or not is not significant: it helps, but it does not increase the cure rate. Old-fashioned doctors have good reasons to reject it. Yet they do not say no to new medicines, new devices, new procedures.
> My personal feeling is that medical practices have not evolved much with computing.
You should see the state of "computing". Resizing a window in Windows has become a lost battle. Working with files on Android is torture.
I really hope that "medical practices" will not "evolve", like "computing" has.
I fully reject your statement that digitization is insignificant. There are several reasons, but the main one in my mind is prevention vs. curing.
In an ideal world where every medical record is digitized, it would be possible to discover long-term causal effects that we don't know about today, because running long-term studies is hard and costly, and in a world where publishing is everything, they don't lend themselves to it. So we have explored and confirmed only the most obvious long-term cause-effect connections.
It would therefore enable prevention of some diseases for which, today, we can only have a reactive MO.
Numerous companies have already tried and failed with this approach to medical research. Naively you might think that you could just suck in huge quantities of de-identified patient charts to find all sorts of useful correlations between diagnoses, treatments, and outcomes. But this doesn't actually work because the data quality is so bad: garbage in, garbage out. Doing useful medical research usually requires setting up strict protocols for data entry and patient follow-up.
Fair, but I said "in an ideal world". Also, I'm not sure selling ads is more profitable than a product like a supplement developed because mining the digitized medical records showed that (an example I pulled out of thin air) "constantly low potassium increases Alzheimer's incidence by 50%", or similar.
Ehh, I think there's a pretty consistent pattern of doctors rejecting pretty basic technologies or procedures that lead to positive outcomes for patients whenever those address the fact that doctors are human beings who can make mistakes. Medicine is a field full of massive egos.
I always find it odd to see the two keywords "blockchain" and "DB" put together (though I know quite a few such projects). The essence of blockchain, in my view, is decentralization, because people don't want or can't have a single authority, whereas a DB is centralized. The priority of blockchain is not performance (well, once you choose decentralization, of course you give up performance), while a DB always prioritizes performance. Putting the two together is one of the most conflicting things I've seen in the tech world.
I mostly agree, except that perf is determined by a myriad of factors. Even if a piece of data fits into a cache line, it may have no effect, or a negative one, on the surrounding code. For complex software, even if it's faster after the change, it's hard to tell what the contributing factor really is. I once read a systems paper (forgot the name) from a recent systems research conference on how changing the code layout, e.g., moving a block of code at random, may have unexpected impacts on perf.
Personally, I only pay attention to cache lines and layout when the structure is a concurrent data structure and needs to sync across multiple cores.
I second this. It's too risky and simply not worth it. Nevertheless, I find this "optional ACID" thing interesting. Many years ago, when I was a graduate student, NoSQL was a big thing. It was widely claimed that transactions were expensive and you had to drop them in exchange for scalability. I always wondered: if transactions were the culprit, why not just turn them off? I later found that a relational system is such a monolith that everything (caching, concurrency control, logging, locking) is wired together in an extremely complex way, and there is simply no "turning off".
Redis simply serializes every operation, I thought. A transaction = running a Lua script as a single operation. I think that is ACID, if you count RAM as "durable" and doing one thing at a time as "concurrent".
Yeah, I think they were talking about distributed transactions. Redis only supports transactions within a single instance, not across a cluster; you can't run a Lua script across machines.