Hacker News | dr-ando's comments

Not exactly what you are asking for, but jcodec is a pretty readable codebase written in Java. (The readability part is often, ahh, lacking in the source for codecs, in my experience.) It might be a good candidate for rewriting in Rust. https://github.com/jcodec/jcodec


Funnily enough, I recently released 0.1.0 of "less-avc", a pure-Rust H.264 (AVC) video encoder: https://github.com/strawlab/less-avc/ . For now it only implements a lossless I_PCM encoder, but it supports a few features I need, such as high bit depth. If anyone has a codec-writing itch they want to scratch, I would welcome work towards the compression algorithms H.264 supports: context-adaptive variable-length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC). I'm also happy for constructive criticism or questions on this library. I think it is fairly idiomatic Rust, with no `unsafe`. While H.264 is an older codec now, as far as I can tell this also means any patents on it are about to run out, and it is very widely supported.
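For anyone curious what the entropy-coding work involves, the simplest building block is the unsigned Exp-Golomb code (ue(v)) that H.264 uses throughout its syntax. Here's a minimal sketch (illustrative only, not code from less-avc):

```rust
// Unsigned Exp-Golomb encoding (ue(v)), used pervasively in H.264 syntax
// elements. Returns the code as a String of '0'/'1' for clarity; a real
// encoder would pack the bits into bytes.
fn exp_golomb_ue(v: u32) -> String {
    let code_num = v as u64 + 1;
    let bits = 64 - code_num.leading_zeros(); // number of bits in code_num
    let mut s = String::new();
    // Prefix: (bits - 1) leading zeros.
    for _ in 1..bits {
        s.push('0');
    }
    // Suffix: code_num in binary, MSB first.
    for i in (0..bits).rev() {
        s.push(if (code_num >> i) & 1 == 1 { '1' } else { '0' });
    }
    s
}
```

So 0 encodes as "1", 1 as "010", 3 as "00100": small values get short codes, which is the whole point. CAVLC builds on this kind of variable-length coding with context-dependent tables.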


> as far as I can tell, this also means any patents on it are about to run out

Not for H.264; looks like the last patent expires in 2028:

https://scratchpad.fandom.com/wiki/MPEG_patent_lists#H.264_p...

On the other hand, the last patent on MPEG-4 ASP (Xvid/DivX/etc.) which preceded H.264 apparently just expired earlier this month:

https://meta.wikimedia.org/wiki/Have_the_patents_for_MPEG-4_...

...and IANAL but that means the patents for H.263 and everything older should've already expired too.


That's a great list of the H.264 patent claims--thanks. I had naively assumed that, since the first iteration of the standard was published in 2003, "obviously" all related patents (for features in the first iteration, anyway) would have had to be filed prior. Clearly, that is not the case.


"method of selecting a reference picture" sounds like an encoding patent, and that one was filed four years after the standard came out. I wouldn't worry about 2028.

It's harder to evaluate the blob of patents from 2004-2005.


Which original article? I would be interested to read it.


[1]. Upon re-reading this, they don't even have the actual laser rig, just a camera and a methodology to estimate future locations to compensate for [camera] latencies, with accuracy that conveniently fits within the average body sizes of specific species of bug. This is also funded by the "Moonshot R&D Program" and its sub-programs[2][3], run by the Cabinet Office of the Government of Japan.

Are you fluent in the language, by chance? Because I might be able to explain how I came to post the above comment, but it's hard; it just ... smells.

1: https://www.naro.go.jp/publicity_report/press/laboratory/nip...

2: https://www8.cao.go.jp/cstp/moonshot/index.html

3: https://www.affrc.maff.go.jp/docs/moonshot/moonshot.html


Thanks. I was able to read [1] with the help of Google Translate. From what I understand, this is an announcement of the project being funded within the Moonshot program and thus it is expected that the capabilities discussed are the goal, not what the researchers have already demonstrated.


The computational requirements are very modest. The magic is in the math. I'm not sure if it counts as "batteries included", but I wrote a Kalman filter implementation in "no-std" (no standard library) Rust called adskalman [1]. This means it can run on very modest bare-metal targets with no operating system. Of course, it can also run on targets where the standard library is available, and the examples [2] make use of this to do nice things like print the results, which can be piped to a file and plotted. The core runs fine on embedded targets, and we use this with ARM microcontrollers, but it should work on just about anything. Feedback is welcome.

[1] https://crates.io/crates/adskalman [2] https://github.com/strawlab/adskalman-rs/blob/main/examples/...
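To illustrate just how modest the compute really is, here is the scalar (1-D) form of the predict/update math. This is the textbook recursion, not the adskalman API:

```rust
// Scalar Kalman filter, predict step: propagate state estimate x and
// variance p through the transition model f, adding process noise q.
fn predict(x: f64, p: f64, f: f64, q: f64) -> (f64, f64) {
    (f * x, f * p * f + q)
}

// Update step: fold in a measurement z with observation model h and
// measurement noise variance r.
fn update(x: f64, p: f64, z: f64, h: f64, r: f64) -> (f64, f64) {
    let y = z - h * x;      // innovation
    let s = h * p * h + r;  // innovation covariance
    let k = p * h / s;      // Kalman gain
    (x + k * y, (1.0 - k * h) * p)
}
```

A handful of multiplies and one divide per step; the matrix version in the crate is the same recursion with `nalgebra` matrices, which is why it fits comfortably on a microcontroller.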


I am sympathetic to the point you make, but to be accurate, one can consume and create C-compatible dynamic libraries with Rust. So one is not “losing” something, because what you (and I) want - dynamic linking and shared libraries with a stable and safe Rust ABI - was never there to begin with.
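For the curious, the export side is a one-attribute affair. A minimal sketch (you'd add `crate-type = ["cdylib"]` in Cargo.toml to get an actual shared library; function name is just an example):

```rust
// A function callable from C (or anything with a C FFI).
// `#[no_mangle]` keeps the symbol name unmangled; `extern "C"` fixes the
// calling convention. Only C-compatible types should cross this boundary.
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}
```

The matching C declaration would be `uint32_t add_u32(uint32_t, uint32_t);`. The catch is exactly the parent's point: only the C ABI is stable, so generics, trait objects, and other Rust-native types stay on the Rust side of the fence.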


Why not use Let’s Encrypt, e.g. with DNS validation so they do not need to hit your HTTP server?


They won't validate my internal domains (obviously). I have all my infra on .lan and using this they all get ACME certs and I never have to see another "insecure connection" page.

Also had my old workplace on .dev until those bastards at Google stole it and added the entire tld to the hsts preload list!!


> Also had my old workplace on .dev until those bastards at Google stole it and added the entire tld to the hsts preload list!!

They didn't steal it. You'd hijacked it, and your hijacking failed. Go big or go home. The IETF hijacked the OID arc 1.3.6.1 and they succeeded because everybody accepted their control of that arc and it's now used everywhere, but if you hijack some namespace and then only use it on a few dozen machines nobody has heard of, that's not going to stick.

More seriously, what you've done is probably a bad idea. https://myprinter.lan/ seems unique to you, and then your new partner moves in; why doesn't the printer work? Oh right, his printer is also named myprinter.lan, because you don't have globally unique namespaces.

This happens on a bigger scale at a business or other organisation, of course, but it's annoying even in one household. Here's a metaphorical nickel, kid: get yourself a domain in the public DNS hierarchy.


Maybe it's time to ditch the printer you won't need for ecological reasons then?

Jokes aside, isn't this what .local and .localdomain are specified for?

Why not use nickname.local as your namespace?... it's probably unique enough at least on this planet.

Of course another way would be to register one gTLD for each person on the planet, which seems to be the trend as of late /s


Using .local conflicts with mdns


> Using .local conflicts with mdns

It's completely acceptable to use .local. in such a manner, however.

The "conflict resolution" process is outlined in the RFC [0] and is, well, pretty simple:

> ... the computer (or its human user) MUST cease using the name, and SHOULD attempt to allocate a new unique name for use on that link.

You can even set up your own DNS servers to be authoritative for the ".local." domain (zone), if you really want to.

RFC6762 states that "any DNS query for a name ending with '.local.' MUST be sent to" 224.0.0.251 (or ff02::fb) -- but it also explicitly allows sending them to your regular ol' (unicast) DNS servers, too. It's up to you to figure out how to manage that, of course.
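As a concrete (purely illustrative) example of that unicast setup, a dnsmasq resolver can be told to claim the zone and answer for hosts in it itself:

```
# dnsmasq snippet: treat .local as a local-only zone (never forwarded
# upstream) and answer for one host via plain unicast DNS.
# Hostname and address are made up for illustration.
local=/local/
address=/myprinter.local/192.168.1.50
```

Clients still have to be pointed at this server for .local queries instead of (or in addition to) multicast, which is exactly the "it's up to you to manage that" part.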

Now, that said... to avoid any potential issues, I'd only ever use .local for its intended purpose. There's just too much potential for "weirdness" to occur. Personally, however, I completely avoid any use of either (.local and Multicast DNS) regardless.

--

On a side note, ".localdomain" mentioned in the grandparent comment should actually be "localhost."

--

[0]: https://tools.ietf.org/html/rfc6762


Being the admin of my network, I control these things. I don't have a partner adding random devices without oversight.

I have plenty of public domains. .lan is short and easy, hence my preference for it.

Ideally there would be one or two private TLDs codified, just as there are private IP ranges (my hypothetical partner could also add random devices with conflicting IPs; businesses often have problems with conflicting IPs/subnets; these are just problems that need to be solved through proper organisation, so I fail to see why DNS is somehow different).


> Ideally there would be one or two private tlds codified just as there are private ip ranges ...

There are several, in fact.

RFC8375 [0] states:

> This document registers the domain 'home.arpa.' as a special-use domain name [RFC6761] [1] ... 'home.arpa.' is intended to be the correct domain for uses like the one described for '.home' in [RFC7788] [2]: local name service in residential homenets.

In addition to "home.arpa.", there are several other domain names listed in IANA's "Special-Use Domain Names" registry [3] that "users are free to use ... as they would any other domain names" -- even if they are technically intended/reserved for other uses.

For as long as I can remember, I've used a subdomain of one of my registered domain names for everything in my home network. That has the advantage of, if and/or when desired, allowing me to do some "fancy tricks" (involving some combination of DNS, VPN, and/or reverse proxying) to make specific internal/private resources available from the Internet.

--

[0]: https://tools.ietf.org/html/rfc8375

[1]: https://tools.ietf.org/html/rfc6761

[2]: https://tools.ietf.org/html/rfc7788

[3]: https://www.iana.org/assignments/special-use-domain-names/sp...


I use int.company.com for my internal domains. company.com is a real domain that I registered. If you did similar, as opposed to making up your own domain, you wouldn't have a problem.


With DNS validation it's fairly straightforward to get a wildcard cert for all your internal domains (*.my.domain)


You still need a public-facing domain to do this, though. You can't use Let's Encrypt on a my.lan domain name, because there's no way to create the public records required to validate it.


My go-to way is having a public-facing domain with Let's Encrypt certs, and the public-facing domain just CNAMEs to my internal domains. Public-facing domains are luckily not that expensive, and I didn't even go for the cheapest option (mine's about 10€/year).


I was looking into something like this for my homelab, but as a cert noob I got lost somewhere between trying to use intra.mydomain.com and not screwing up my public address.

Can you recommend a good book or blog series that covers this topic in depth?


How would one set this up? Why is DNS validation needed?


Those are simply the rules. You can do ACME with an HTTP challenge or a DNS challenge. The HTTP challenge is adequate for proving that you control x.example.com, but serving a website on x.example.com doesn't prove that you own y.example.com. But, being able to create example.com DNS records does, so that is what's required to get a wildcard certificate.

I imagine you are confused because the proposal above sounds like "just get *.example.com, then copy that cert to everything that will ever serve traffic for example.com", which doesn't sound like a great idea to me.
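For concreteness, the manual DNS-challenge flow with certbot looks roughly like this (example.com is a placeholder; most DNS providers have a certbot plugin that automates the TXT-record step):

```shell
# Request a wildcard cert via the DNS-01 challenge. Certbot prints a TXT
# record (_acme-challenge.example.com) to create at your DNS host, then
# verifies it before issuing.
certbot certonly --manual --preferred-challenges dns -d '*.example.com'
```

After that it's the deployment of the resulting cert to each internal service that's on you, which is the part the parent is (rightly) wary of.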


Any camera modules you might recommend for such tasks, ideally compatible with the nano?


I can't remember the name of the vendor, but they were a niche German manufacturer of professional machine-vision cameras. Each of those cost a few thousand euros.

Almost anything for industrial use costs you a leg and then some. If you want to experiment with CV at home, I would take a look at the High Quality camera module from the RPi.


Is the HQ camera module any better than the Veye Starvis (imx297/307/327) module?

Aside from being a third-party module, they do quite well in low-light situations compared to conventional Raspberry Pi camera modules.


The dev kit is just that - a dev kit. For production you could use the jetson nano module but probably want a different carrier board. I think the Nvidia license also prevents production use of the dev kit.


Seal up your electronics box and they'll never know. If it is not observable, it doesn't exist.

The alternative carrier boards all cost far too much, many more than the Nano itself, which violates the first law of carrier boards [0]. Only the OEM carrier board is priced reasonably.

[0] A grocery bag should never cost more than the things that fit in it; a phone case should never cost more than its phone; a camera bag should never cost more than a camera; likewise, a carrier board should never cost more than what it carries.


You must not be familiar with the luxury handbag market.


Correct, I don't touch luxury bags for the same reason I don't touch Connect-Tech and Leopard Imaging products. I pay for functionality and carrier boards are functionally just wrappers with some wires.


There's no NV license on this. Note however that the carrier board is non-production specification and its warranty applies to development use only.


Relatedly, a paper was posted to bioRxiv a few days ago describing the results of large scale fruit fly release-and-recapture experiments. They conclude the abstract with "Our field data do not support a Lévy flight model of dispersal, despite the fact that our experimental conditions almost perfectly match the core assumptions of that theory."

https://www.biorxiv.org/content/10.1101/2020.06.10.145169v1

Also, Viswanathan's 1996 Nature paper was stupendously wrong due to the key measurements being in error, but instead of retracting it, he and colleagues published a followup - in Nature 2007 - in which they say the albatross flight times are gamma distributed.


If this is interesting to you, I also suggest checking out my Rust crate bui-backend[0]. This is a library for building Browser User Interface (BUI) backends. The key idea is a datastore which is synced between the backend and the browser. The demo has several example frontends. Perhaps the most interesting is the yew[1] framework, which is somewhat like React but in Rust. This lets you share code directly between the backend (natively compiled) and the frontend (compiled to wasm). There are also a pure JavaScript frontend, an Elm frontend, and a Rust stdweb frontend in the demo.

0 - http://github.com/astraw/bui-backend 1 - https://github.com/DenisKolodin/yew

