
I can generally re-find my place in books, but years ago I acquired a stack of orange punch cards from a university library that they were giving away as scrap paper. These make great bookmarks and also interesting historical conversation pieces if someone notices/recognizes them.

I think the previous use for the punch cards was to have one for each book and scan them on checkout/checkin (maybe this predated barcodes?)



This compiles to native binaries, as opposed to deno which is also in rust but is more an interpreter for sandboxed environments?

Oxc is not a JavaScript runtime environment; it's a collection of build tools for JavaScript. The tools output JavaScript code, not native binaries. You separately need a runtime environment like Deno (or a browser, depending on what kind of code it is) to actually run that code.

Deno is a native implementation of a standard library, it doesn't have language implementation of its own, it just bundles the one from Safari (javascriptcore).

This is a set of linting tools and a typestripper, a program that removes the type annotations from typescript to turn it into pure javascript (and turn JSX into document.whateverMakeElement calls). It still doesn't have anything to actually run the program.


Deno uses V8, which is from Chrome. Bun uses JavaScriptCore.

Ah, yeah. Easy mistake

I'm going to call it: a Rust implementation of a JavaScript runtime (and TypeScript compiler) will eventually overtake the official TypeScript compiler now being rewritten in Go.

? Most JavaScript runtimes are already C++ and are already very fast. What would rewriting in Rust get us?

Nothing, but it will happen anyway. Maybe improved memory safety and security, at least as a plausible excuse to get funding for it. Perhaps also improved developer enthusiasm, since developers seem to enjoy the newness of Rust over working with an existing C++ codebase. There are probably many actual advantages to "rewrite it in Rust." I'm neither for nor against it; I'm just observing that the cultural trend seems to be moving that way.

In popularity or actually take over control of the language?

Eventually I imagine a JS/TS runtime written in Rust will be mainstream and what everyone uses.

If you want native binaries from typescript, check out my project: https://tsonic.org/

Currently it uses .NET and NativeAOT, but I'm adding support for the Rust backend/ecosystem over the next couple of months. TypeScript for GPU kernels, soon. :)


No, it is a suite of tools to handle Typescript (and Javascript as its subset). So far it's a parser, a tool to strip Typescript declarations and produce JS (like SWC), a linter, and a set of code transformation tools / interfaces, as far as I can tell.

Going a bit meta - this blog seems strange, as its only other story is criticizing a member of the Go community. The OP has posted this story twice (the first submission was flagged) and has no other comments on HN.

There may also be a downvote brigade in this comment section.


I think this must be a bit. On the one hand you have this story about Bernstein, someone who has made a pastime out of weaponizing process in consensus organizations to drag progress to a halt when he's failed to coerce his preferred outcome; on the other hand you have a story villainizing Filippo Valsorda for not doing that, and avoiding standards organizations altogether.

I first encountered djb's work back in the 90's with qmail and djbdns, where he took a very different and compartmentalized approach to the more common monolithic tooling for running email and DNS. I'd even opine that the structure of these programs is a direct ancestor of modern microservice architectures, except using unix stdio and other unix isolation mechanisms.

He's definitely opinionated, and I can understand people being annoyed with someone who is vociferous in their disagreement and questioning the motives of others, but given the occasional bad faith and subversion we see by large organizations in the cryptography space, it's nice to have someone hypervigilant in that area.

I generally think that if djb thinks something is OK in terms of cryptography, it's passed a very high analytical bar.


The main problem with technology coverage is you have one of 3 types of writers in the space:

1. Prosumer/enthusiasts who are somewhat technical, but mostly excitement

2. People who have professional level skills and also enjoy writing about it

3. Companies who write things because they sell things

A lot of sites are in category 1 - mostly excitement/enthusiasm, and feels.

Anandtech, TechReport, and to some extent Arstechnica (especially John Siracusa's OS X reviews) are the rare category 2.

Category 3 are things like the Puget Systems blog where they benchmark hardware, but also sell it, and it functions more as buyer information.

The problem with category 2 is that its writers can fairly easily get jobs in industry that pay way more than writing for a website. I'd imagine that when Anand joined Apple, this was likely the case, and if so that makes total sense.


When Andrei Frumusanu left Anandtech for Qualcomm, I'm sure he was paid much more for engineering chips than he was for writing about them, but his insight into the various core designs released for desktops and mobile was head and shoulders above anything I've seen since.

It's a shame that I can't even find a publication that runs and publishes the SPEC benchmarks on new core designs now that he is gone, despite SPEC having been the gold standard of performance comparison between dissimilar cores for decades.


There are still places that benchmark, but mostly for commercial apps, like the Puget Systems blog mentioned in the earlier post. Phoronix can also be useful for benching open source stuff.

I wouldn't put much trust in well-known benchmark suites: in many cases, especially with proprietary compilers, a huge amount of effort was put into Goodhart's-law optimization for the exact needs of the benchmark.


> The problem with category 2 is that its writers can fairly easily get jobs in industry that pay way more than writing for a website

This is true, but those jobs are much worse than writing jobs. So it comes down to how much you value money and what it buys. Most people earning "way more" are spending "way more" to try to pay back the soul debt the job takes away. When you dig deep, it's not "way more" utility.


If The Princess Bride is to be believed, MCP stands for the "Mutton Context Protocol".


When the tokens are nice and lean.


But that's not what he said! He distinctly said "AI", so you were probably playing capitalism, and he cheated!


When something goes wrong, having it say "Who let these lab monkeys free?" would be excellent.


It's well known that Bill Gates's favorite band is Weezer, so this feels unsurprising.


He’s ride or die


This is a great article, but people often trip over the title and draw unusual conclusions.

The point of the article is about locality of validation logic in a system. Parsing in this context can be thought of as consolidating the logic that makes all structure and validity determination about incoming data into one place in the program.

This lets you then rely on the fact that you have valid data in a known structure in all other parts of the program, which don't have to be crufted up with validation logic when used.

Related, it's worth looking at tools that further improve structure/validity locality like protovalidate for protobuf, or Schematron for XML, which allow you to outsource the entire validity checking to library code for existing serialization formats.


When I came to this idea on my own, I called it "translation at the edge." But for me it was about more than just centralizing data validation; it was also about giving you access to all the tools your programming language has for manipulating data.

My main example was working with a co-worker whose application used a number of timestamps. They were passing them around as strings, parsing them and doing math with them at each point of use. But by parsing the inputs into the language's timestamp representation, their internal interfaces were much cleaner and their purpose was much more obvious, since the math could be expressed at the call site rather than buried in the function logic (and, necessarily, in complex function names).
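A minimal TypeScript sketch of that "translation at the edge" idea (the function names here are illustrative, not from the original application): timestamp strings are parsed into Date objects once, at the boundary, and everything downstream works with Dates.

```typescript
// Parse a raw timestamp string once, at the edge. Date.parse returns NaN
// for unparseable input, which we translate into an explicit null.
function parseStamp(raw: string): Date | null {
  const ms = Date.parse(raw);
  return Number.isNaN(ms) ? null : new Date(ms);
}

// Internal code works with Dates and never touches strings again;
// the math lives at the call site, not inside string-juggling helpers.
function secondsBetween(a: Date, b: Date): number {
  return (b.getTime() - a.getTime()) / 1000;
}

const t0 = parseStamp("2024-01-01T00:00:00Z");
const t1 = parseStamp("2024-01-01T00:01:30Z");
if (t0 && t1) {
  console.log(secondsBetween(t0, t1)); // 90
}
```

Once the strings are rejected or converted at the boundary, no interior function ever needs to handle a malformed timestamp.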


I disagree. I think the key insight is to carry the proof with you in the structure of the type you 'parse' into.


In order to carry the proof, you have to parse it early (otherwise you're not carrying it the whole time before parsing), so you're both right.


Could you clarify what you mean by "carry the proof"?


Let's say you have the example from the article of wanting a non-empty list, but you don't use the NonEmpty type and instead are just using an ordinary list. As functions get called that require the NonEmpty property, they either have to trust that the data was validated earlier or perform the validation themselves. The data and its type carry no proof that it is, in fact, non-empty.

If you instead parse the data (which includes a validation step) and produce a Maybe NonEmpty, if the result is a Just NonEmpty (vs Nothing) you can pass around the NonEmpty result to all the calls and no more validation ever needs to occur in the code from that point on, and you obviously reject it rather than continue if the result is Nothing. Once you have a NonEmpty result, you have a proof (the type itself) that is carried with it in the rest of the program.


From the article:

    validateNonEmpty :: [a] -> IO ()
    validateNonEmpty (_:_) = pure ()
    validateNonEmpty [] = throwIO $ userError "list cannot be empty"
    
    parseNonEmpty :: [a] -> IO (NonEmpty a)
    parseNonEmpty (x:xs) = pure (x:|xs)
    parseNonEmpty [] = throwIO $ userError "list cannot be empty"
Both consolidate all the invariants about your data; in this example there is only one invariant but I think you can get the point. The key difference between the "validate" and "parse" versions is that the structure of `NonEmpty` carries the proof that the list is not empty. Unlike the ordinary linked list, by definition you cannot have a nil value in a `NonEmpty` and you can know this statically anywhere further down the call stack.
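A rough TypeScript rendering of the same proof-carrying idea, for readers who don't speak Haskell (the names are mine, not from the article): the tuple type `[T, ...T[]]` can only be constructed with at least one element, so non-emptiness is encoded in the type itself.

```typescript
// A non-empty list: one mandatory head element plus any tail.
type NonEmpty<T> = [T, ...T[]];

// The "parse" step: validation plus a structure that records the result.
function parseNonEmpty<T>(xs: T[]): NonEmpty<T> | null {
  return xs.length > 0 ? [xs[0], ...xs.slice(1)] : null;
}

// Downstream code states the invariant in its type and never re-checks:
// an empty NonEmpty cannot exist, so xs[0] is always present.
function safeHead<T>(xs: NonEmpty<T>): T {
  return xs[0];
}

const parsed = parseNonEmpty([3, 1, 2]);
if (parsed !== null) {
  console.log(safeHead(parsed)); // 3
} else {
  console.log("list cannot be empty");
}
```

As with the Haskell `NonEmpty`, the proof travels with the value: any function that takes a `NonEmpty<T>` statically knows the list has a head.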


Typed functional programming has the perspective that types are like propositions and their values are proofs of that proposition. For example, the product type A * B encodes logical conjunction, and having a pair with its first element of type A and its second element of type B "proves" the type signature A * B. Similarly, the NonEmpty type encodes the property that at least one element exists. This way, the program is "correct by construction."

This types-are-propositions perspective is called the Curry-Howard correspondence, and it relates to constructive mathematics (wherein all proofs must provide an algorithm for finding a "witness" object satisfying the desired property).
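A tiny TypeScript illustration of that correspondence (the names are mine): a pair type plays the role of logical AND, because the only way to construct one is to supply evidence for both sides.

```typescript
// Under Curry-Howard, [A, B] encodes "A AND B": introduction requires a
// proof (value) of each conjunct, and the projections are conjunction's
// elimination rules.
function andIntro<A, B>(a: A, b: B): [A, B] {
  return [a, b];
}

function andElimL<A, B>(pair: [A, B]): A {
  return pair[0];
}

function andElimR<A, B>(pair: [A, B]): B {
  return pair[1];
}

console.log(andElimL(andIntro(42, "witness"))); // 42
```

The same reading extends to the thread's example: a `NonEmpty` value is a constructive proof of the proposition "this list has at least one element."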


I think that's an excellent way to build a defensive parsing system but... I still want to build that and then put a validator in front of it to run a lot of the common checks, so we can produce easy-to-understand (and voluminous) errors for the user/service/whatever. There is very little as miserable as loading a 20k CSV file into a system and receiving "Invalid value for name on line 3", knowing that there are likely a plethora of other issues that you'll need to discover one by one.

