Hacker News | zachrip's comments

Fetch has also lacked support for features that xhr has had for over a decade now, for example upload progress. It's slowly catching up, though; upload progress is the only thing I'd still choose xhr for.

You can pipe through a TransformStream that counts how many bytes you've uploaded, right?

That would show how quickly the data is passing into the native fetch call, but doesn't account for any internal buffering it might do, network latency, etc.

That is a way to approximate it, though I'd be curious how the semantics compare to xhr - would they both report the same value at the same point in the network lifecycle of a given byte?
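A minimal sketch of the counting TransformStream idea, assuming a runtime with WHATWG streams (modern browsers, Node 18+); the `onProgress` callback and the upload URL below are illustrative, and note it measures bytes handed to fetch, not bytes acknowledged on the wire:

```typescript
// Count bytes as they flow from the source into fetch's body stream.
function progressStream(
  onProgress: (sentBytes: number) => void
): TransformStream<Uint8Array, Uint8Array> {
  let sent = 0;
  return new TransformStream({
    transform(chunk, controller) {
      sent += chunk.byteLength;
      onProgress(sent);
      controller.enqueue(chunk); // pass the chunk through unchanged
    },
  });
}

// Usage sketch -- streaming request bodies currently need `duplex: "half"`:
// const body = file.stream().pipeThrough(progressStream(n => render(n)));
// await fetch("/upload", { method: "POST", body, duplex: "half" });
```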

This is a pretty widely known acronym


OAuth with MCP is more than just traditional OAuth. It allows dynamic client registration (DCR) among other things, so any MCP client can connect to any MCP server without the developers on either side having to issue client IDs, secrets, etc. Obviously a CLI could use DCR as well, but afaik nobody really does that, and again, your CLI doesn't run in Claude or ChatGPT.
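For the curious, RFC 7591 dynamic client registration boils down to a single POST of client metadata. A rough sketch, where the metadata values and the registration endpoint are made-up placeholders, not any real server's:

```typescript
// Metadata field names are from RFC 7591; the values are illustrative.
const registrationPayload = {
  client_name: "example-mcp-client",
  redirect_uris: ["http://127.0.0.1:43110/callback"],
  grant_types: ["authorization_code"],
  response_types: ["code"],
  token_endpoint_auth_method: "none", // public client: PKCE instead of a secret
};

// POST the metadata to the server's registration endpoint (advertised in its
// OAuth metadata) and get back a server-issued client_id -- no human involved.
async function registerClient(registrationEndpoint: string): Promise<string> {
  const res = await fetch(registrationEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(registrationPayload),
  });
  if (!res.ok) throw new Error(`registration failed: ${res.status}`);
  const body = await res.json();
  return body.client_id;
}
```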


Can you clarify what you mean?


Title: "California’s New Bill Requires DOJ-Approved 3D Printers That Report on Themselves"

Actual fact: "California’s New Bill Requires That 3D Printers Get DOJ Approval as Firearm-Blocking"

(The "report on themselves" is fiction invented by Adafruit.)


"to be able to get a 3D printer" is implied in the "requires" wording. There's no problem with that part.


I actually think the title is misleading. I'm not sure existing deployments are affected? Seemingly only new ones aren't working?


Railway founder here. <3%.

That said, we treat this extremely seriously!

Any downtime is unacceptable, and we'll have a post-mortem up in the next couple of hours.


It seemed to have been all deployments that had a browser-facing interface. I'd say some Cloudflare DNS config mess-up.


I've been using Railway a while now, and I've basically never paid them, but I would. It's even better than Heroku. Super easy to use.


Their $5 monthly plan has been far more than enough for me to host my demos.


Would like to see the eval version - the dialogue version just seems like normal code with extra steps?


yeah, the previous example was quite basic. I will write a complete example for that, but here is how you can run dynamic code:

   import { task } from "@capsule-run/sdk";

   export default task({
     name: "main",
     compute: "HIGH",
   }, async () => {
     // Snippet to evaluate; eval returns the completion value of the
     // last statement (here 10 * 2 + 5 = 25)
     const untrustedCode = "const x = 10; x * 2 + 5;";
     const result = eval(untrustedCode);
     return result;
   });
Hope that helps!


Is the code in the eval also turned into wasm first then? Does this work as a JIT for wasm?


It actually works a bit differently. The eval is executed by the interpreter running inside the isolated wasm sandbox (StarlingMonkey). You can think of it as each sandbox having its own dedicated JavaScript engine.


Can you give a real world example?


I think the examples here are pretty good: https://boringsql.com/posts/beyond-start-end-columns/


This is kind of a complicated example, but here goes:

Say we want to create a report that determines how long a machine has been down, but we only want to count time during normal operational hours (aka operational downtime).

Normally this would be as simple as counting the time from when the machine was first reported down to when it was reported back up. However, since we're only allowed to count certain time ranges within a day as operational downtime, we need a way to essentially "mask out" the non-operational hours. This can be done efficiently by finding the intersection of various time ranges and summing the duration of each of these intersections.

In the case of PostgreSQL, I would start by creating a tsrange (timestamp range) that encompasses the entire time range that the machine was down. I would then create multiple tsranges (one for each day the machine was down), limited to each day's operational hours. For each of these operational-hour ranges I would then take its intersection with the entire downtime range, and sum the durations of these intersecting ranges to get the amount of operational downtime for the machine.

PostgreSQL has a number of range functions and operators that can make this very easy and efficient. In this example I would use the '*' operator to determine where two time ranges intersect, and then subtract the lower bound of that intersection (using the lower() range function) from its upper bound (using the upper() range function) to get the duration of only the "overlapping" part of the two time ranges.
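The same masking logic, sketched outside the database in plain TypeScript so the arithmetic is visible; in SQL, the `intersect` below corresponds to the `*` range operator and the duration to `upper(r) - lower(r)`:

```typescript
// Half-open [start, end) ranges in epoch milliseconds, like tsrange defaults.
type Range = { start: number; end: number };

// Equivalent of Postgres's `*` operator: the intersection, or null if disjoint.
function intersect(a: Range, b: Range): Range | null {
  const start = Math.max(a.start, b.start);
  const end = Math.min(a.end, b.end);
  return start < end ? { start, end } : null;
}

// Mask the downtime range against each day's operational window and
// sum the durations of the overlaps.
function operationalDowntimeMs(downtime: Range, windows: Range[]): number {
  return windows
    .map((w) => intersect(downtime, w))
    .reduce((sum, r) => sum + (r ? r.end - r.start : 0), 0);
}
```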

Here's a list of functions and operators that can be used on range types:

https://www.postgresql.org/docs/9.3/functions-range.html

Hope this helps.


64-bit ints have been a thing in JS for a while now


No, they aren't. You have to use BigInt, which will throw an error if you try to serialise it to JSON or combine it with ordinary numbers. If you happen to need to deserialise a 64-bit integer from JSON, which I sadly had to do, you need a custom parser to construct the BigInt from a raw string directly.
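A quick demonstration of those sharp edges, runnable in any modern JS engine; the example value is 2^53 + 1, the first integer a double can't represent exactly:

```typescript
const big = BigInt("9007199254740993"); // 2^53 + 1, one past Number.MAX_SAFE_INTEGER

// 1. JSON.stringify throws on BigInt rather than serialising it.
let stringifyThrew = false;
try { JSON.stringify({ id: big }); } catch { stringifyThrew = true; }

// 2. Mixing BigInt with ordinary numbers throws a TypeError at runtime.
let mixThrew = false;
try { void ((big as unknown as number) + 1); } catch { mixThrew = true; }

// 3. JSON.parse silently rounds: by the time a reviver runs, the exact digits
// are gone, so round-tripping needs raw-string access (e.g. quoting long
// digit runs before parsing -- a workaround, not a standard API).
const lossy: number = JSON.parse('{"id": 9007199254740993}').id; // 9007199254740992
```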


Why do they obfuscate if they're just going to provide the mappings?


Proguard can also apply optimizations while it obfuscates. I think a good JVM will eventually do most of them itself, but it can help code size and warm-up. I'm guessing as JVMs get better and everyone is less sensitive to file sizes, this matters less and less.


And there's no way to do only the optimisation part? Surely you could optimise without messing up class and method names..?


One of the biggest optimizations it offers is shrinking the size of the classes by obfuscating the names. If you're obfuscating the names anyway, there's no reason that the names have to be the same length.

"hn$z" is a heck of a lot smaller than "tld.organization.product.domain.concern.ClassName"


So we're not talking about runtime performance, but some minor improvement in loading times? I assume that once the JVM has read the bytecode, it has its own efficient in-memory structures to track references to classes, rather than a hash map with fully qualified names as keys.


Proguard was heavily influenced by the needs of early Android devices, where memory was at a real premium. Reducing the size of static tables of strings is a worthwhile optimisation in that environment.


Okay but we're talking about Minecraft on desktops and laptops, where the relevant optimizations would be runtime performance optimizations, no?


Probably, but Proguard tends to bundle the whole lot together.


Even a hash map with fully qualified names as keys wouldn't be so bad, because String is immutable in Java, so the hash code can be cached on the object.


The names need to be stored somewhere because they are exposed to the program that way


They have to be stored somewhere, but they don't have to be what the JVM uses when it, e.g., performs a function call at runtime. Just having the names in memory doesn't slow down program execution.


At runtime this is going to be a branch instruction, yes.


Yeah, in some ways the obfuscation and mappings are similar to minification and source maps in JavaScript.


And minification in JavaScript only reduces the number of bytes that have to be sent over the wire; it doesn't improve runtime performance.


According to the V8 devs, it can also increase parsing performance:

> Our scanner can only do so much however. As a developer you can further improve parsing performance by increasing the information density of your programs. The easiest way to do so is by minifying your source code, stripping out unnecessary whitespace, and to avoid non-ASCII identifiers where possible.

https://v8.dev/blog/scanner


Sure, but that's also just in the category improving loading a bit. It doesn't have anything to do with runtime performance.


Well, maybe that's why they're not obfuscating anymore.

