My previous startup Vidmaker created a browser-based collaborative video editor (think Google Docs for iMovie) in 2014 that did live preview in the browser and final render server-side. To use the same TypeScript-based "engine" for both, we re-implemented most of the browser media APIs in Node.js: 2D and 3D canvas, Web Audio, image APIs, something akin to HTML video, etc. It worked amazingly well.
We got acqui-hired in 2015 and the product was shut down, but in hindsight I wish we'd open sourced at least those libraries.
Automattic acquired LearnBoost, which employed TJ Holowaychuk, the original creator of node-canvas (and also Express and tons else). Automattic hasn't been involved in any way since at least 2017, when I joined node-canvas.
In my case, when parsing lines of code, Twine stores the file path and concatenates the line number and column to produce a location within a file. It does this a lot, so using Twine there really helps. The rope data structure can be used in a number of ways.
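For reference, a rope is a binary tree of string fragments where concatenation just creates a new parent node instead of copying characters. Here is a minimal generic sketch of the idea (my own illustration, not Twine's actual API):

```typescript
// Minimal rope sketch: leaves hold string fragments, internal nodes only
// record lengths, so concat is O(1) and nothing is copied until you
// actually need the flattened string.
class Rope {
  private constructor(
    private readonly left: Rope | null,
    private readonly right: Rope | null,
    private readonly leaf: string,
    readonly length: number,
  ) {}

  static of(s: string): Rope {
    return new Rope(null, null, s, s.length);
  }

  // O(1): just a new parent node, no string copying.
  concat(other: Rope): Rope {
    return new Rope(this, other, "", this.length + other.length);
  }

  // Index by walking the tree instead of materializing the whole string.
  charAt(i: number): string {
    if (this.left && this.right) {
      return i < this.left.length
        ? this.left.charAt(i)
        : this.right.charAt(i - this.left.length);
    }
    return this.leaf[i];
  }

  toString(): string {
    return this.left && this.right
      ? this.left.toString() + this.right.toString()
      : this.leaf;
  }
}

// Building a "path:line:col" location without repeatedly copying the path:
const path = Rope.of("src/parser.ts");
const loc = path.concat(Rope.of(":")).concat(Rope.of("12:34"));
console.log(loc.toString()); // "src/parser.ts:12:34"
```

The win for the file-location case is that the (often long) path fragment is shared between every location built from it rather than copied into each one.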
I’ve spent the past few months translating a C library heavy in pointer arithmetic to TypeScript. Concessions have to be made here and there, but I ended up writing utility classes to capture most of the functionality. Structs can be represented as types, since types can also be expressed as unions, similar to C structs and unions. These typed objects can have fields updated in place and can share state with other variables, similar to passing by reference, which JS approximates (pass by sharing), or you can deep-clone to copy.

For affecting the underlying bytes of a type, I’ve come up with something I call byte type reflection: a union type that does self-inference on the object's properties in order to flatten itself into a byte array, so that the usual object indexing and length properties apply to the byte array as it has been expressed (the underlying object remains as well). C gets this layout for free, so there is some overhead here that cannot be removed. Pointer arithmetic can be mimicked with an iterator class that keeps track of the underlying data object, though sadly that counts as another copy. Array splicing can substitute for creating a view of a pointer array, which is not optimal, but there are some Kotlin-esque utilities that create array views which can be used. Surprisingly, the floating point values, which I expected to be way off since I can only express them as number, are close enough. I use Deno FFI, so there's plenty of room to go back to unmanaged code for optimizations, and WASM can be tapped into easily.

For me those values are what is important, and it does the job adequately. The code is also way more resilient to runtime errors, as opposed to the C library, which has a tendency to just blow up. TL;DR: don’t let it stop you until you try, because you might just be surprised at how it turns out. If the function calls of a library are only 2-3 levels deep, how much “performance” are you really gaining by keeping it that way?
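A much-simplified sketch of the struct-mirroring idea above, using DataView to flatten an object into the byte layout a C ABI would expect. The names (Point, pointToBytes, StructPtr) and the exact shape are my illustration, not the commenter's actual utilities:

```typescript
// C side (hypothetical): struct Point { int32_t x; float y; };
type Point = { x: number; y: number };

const POINT_BYTES = 8; // 4-byte i32 + 4-byte f32, assuming no padding

// Flatten the object into the byte layout the C side expects.
function pointToBytes(p: Point): Uint8Array {
  const bytes = new Uint8Array(POINT_BYTES);
  const view = new DataView(bytes.buffer);
  view.setInt32(0, p.x, true); // little-endian, like most common C ABIs
  view.setFloat32(4, p.y, true);
  return bytes;
}

// Read the struct back out. y round-trips through f32, so precision is
// "close enough" rather than exact, as noted above.
function pointFromBytes(b: Uint8Array): Point {
  const view = new DataView(b.buffer, b.byteOffset, POINT_BYTES);
  return { x: view.getInt32(0, true), y: view.getFloat32(4, true) };
}

// A tiny "pointer" iterator over an array of structs, standing in for p++:
class StructPtr {
  constructor(private bytes: Uint8Array, private offset = 0) {}
  deref(): Point {
    return pointFromBytes(
      this.bytes.subarray(this.offset, this.offset + POINT_BYTES),
    );
  }
  inc(): void {
    this.offset += POINT_BYTES; // p++
  }
}

const arr = new Uint8Array([
  ...pointToBytes({ x: 1, y: 1.5 }),
  ...pointToBytes({ x: 2, y: 2.5 }),
]);
const ptr = new StructPtr(arr);
ptr.inc();
console.log(ptr.deref()); // second struct: { x: 2, y: 2.5 }
```

As the comment says, the iterator is another copy on each deref, which is exactly the kind of overhead C never pays; the payoff is that an out-of-range offset throws or returns undefined instead of corrupting memory.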
Marshalling code is the usual answer, and Deno FFI does an amazing job of it.
Formal verification uses Prolog a lot. SystemOnTPTP at the University of Miami relies on this for many of the formally verified tests hosted there. It is just a more intense discipline than general programming, which is why I’m perpetually drawn to it, trying to find more real-world applications. It is not exactly Prolog, but close enough to mention the similarities.
Well sure, but hardware encode and decode isn’t completely widespread yet. I’ve been patiently waiting for what has felt like an eternity. From the developer perspective, everyone needs to have access to it, or I’m just sitting on my hands waiting for the capabilities to trickle down to the majority of users. Hopefully more users will purchase hardware if it features AV1 encode/decode. They need a logo that says “AV1 inside” or something. For example, only the iPhone 15 Pro offers hardware decode so far in the iPhone lineup.
You inevitably have to do multiple encodes. An H.264 encode as the "plays anywhere" option and an AV1 encode as the "less bandwidth or better image quality at the same bandwidth" option.
YouTube does H.264, VP9, and AV1 video encodes with AAC and Opus audio encodes.
By your definition then nothing would be "plays anywhere". H.264 is about the closest you can get. And depending on the profile, it is about to become patent-free, at which point a license will no longer be required.
Then the issue kind of becomes software support. For example, something like Handbrake will gladly use AV1, but Kdenlive will refuse to acknowledge that it exists (at least for hardware encode, last I checked; that said, they still mark NVENC as experimental).
Also, OBS handles AV1 nicely, but you can’t stream it to a site like Twitch because they only support H.264 outside of specific experiments.
That said, Arc GPUs are nice. I have an A580, and while the drivers aren’t as mature as Nvidia’s or AMD’s, I got it for a pretty good price and they’re gradually patching things up.
That only works if you actually own a desktop, though. It's too bad they don't have some ultra-tiny, SD-card-sized accelerator add-on system or something for laptops.
The part of ROOT I use is Cling, the C++ interpreter, along with Xeus in a Jupyter notebook. One night I decided to test the fastest n-body program from the Benchmarks Game, comparing Xeus and Python 3. With Xeus I get 15.58 seconds; running the fastest Python code with the Python 3 kernel, both on Binder using the same instance, I get 5 minutes. Output is exactly the same for both runs. Even with an overhead tax of ~300% for running dynamic C++ on this program, Cling is very quick. SIMD and vectorization were not used, just purely the code from the Benchmarks Game. I use Cling primarily as a quick stand-in JIT for languages that compile to C++.
Well, then there is Emacs, which is like an entire RIPscrip BBS in your terminal if you configure it that way. Even if the Lisp you hand-edited to make a plugin start up actually works, you may still want to see what it’s doing, apparently. Even though that happens sometimes, Emacs is very good at what it does.
ATL (often used alongside MFC) has CComPtrBase, which overloads & to manage pointer lifetimes for COM objects, as in while (pEnum->Next(1, &pFilter, &cFetched) == S_OK). Especially fun when debugging DirectShow filter graphs someone built entirely in the UI. There is more of an explanation here: https://devblogs.microsoft.com/oldnewthing/20221010-00/?p=10...
I built one for the government, keeping track of licensed professionals and receiving their payments for things reported in the field (mobile-first web app). It collected over a quarter million dollars in 30-40 dollar payments, with different tiers, refunds, overrides, and even penalties for non-payment, all while being PCI compliant.

One thing that helped a lot: create an auth system that lets the admin impersonate the customer to walk them through it. It’s a tough paradigm to start with but pays off immensely. Another was generating Excel files and reports on demand from any view of the data in the app.

One of the developers on the project implemented a simple state machine for payment histories and stored it across a number of tables with FK constraints. Do not do this! It means that in the future your app will need to deserialize every customer’s history, and after a few years the app will grind to a halt. This was the one issue with the app, looking back. A state machine looks like a good fit for billing, but if my future self could go back in time and warn everyone, it would be about this one common problem; otherwise I wouldn’t even make this comment in the first place. If you do consider using a state machine, create thousands of customers with long, detailed histories up front; if you can load them all up quickly, that is a good sign. Your billing system will only perform as well as you can transform and index data from customer history in bulk.
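The load-test advice is easy to sketch: synthesize thousands of customers with long payment histories, then time a bulk pass that replays every history into a balance. All names and numbers here are illustrative, not from the original system:

```typescript
// Hedged sketch: generate synthetic customers and time a full
// replay-every-event balance pass over all of them.
type PaymentEvent = { kind: "charge" | "refund" | "penalty"; amount: number };
type Customer = { id: number; history: PaymentEvent[] };

function makeCustomers(n: number, eventsEach: number): Customer[] {
  const kinds = ["charge", "refund", "penalty"] as const;
  return Array.from({ length: n }, (_, id) => ({
    id,
    history: Array.from({ length: eventsEach }, (_, i) => ({
      kind: kinds[i % 3],
      amount: 30 + (i % 10), // roughly the 30-40 dollar range above
    })),
  }));
}

// Replaying every event for every customer: the very pattern that
// "grinds to a halt" when each history lives across FK-constrained tables
// and must be fetched and deserialized row by row.
function balances(customers: Customer[]): Map<number, number> {
  const out = new Map<number, number>();
  for (const c of customers) {
    let bal = 0;
    for (const e of c.history) {
      bal += e.kind === "refund" ? -e.amount : e.amount;
    }
    out.set(c.id, bal);
  }
  return out;
}

const customers = makeCustomers(5000, 200);
const t0 = performance.now();
const result = balances(customers);
console.log(`${result.size} balances in ${(performance.now() - t0).toFixed(1)} ms`);
```

In memory this is fast; the point of the exercise is to run the same pass against your real schema, where the history fetch and deserialization dominate, before you commit to the design.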