(co-author of the article and Docker engineer here) I think WireGuard is a good foundation to build this kind of feature. Perhaps try the Tailscale extension for Docker Desktop which should take care of all the setup for you, see https://hub.docker.com/extensions/tailscale/docker-extension
BTW are you trying to avoid port mapping because ports are dynamic and not known in advance? If so you could try running the container with --net=host and in Docker Desktop Settings navigate to Resources / Network and Enable Host Networking. This will automatically set up tunnels when applications listen on a port in the container.
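For example (a sketch; `nginx` is just a stand-in image that listens on port 80):

```
:: With Host Networking enabled in Docker Desktop (Settings / Resources / Network):
docker run --rm --net=host nginx
:: nginx listens on port 80 inside the container, and Docker Desktop
:: tunnels localhost:80 on the Mac to it automatically.
```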
I'm basically using Docker on Mac as an alternative to VMWare Fusion with a much faster startup time and more flexible directory sharing.
I want to avoid port mapping because I already have things on the Mac using the ports that my things in the container are using.
I have a test environment that can run in a VM, a container, or an actual machine like an RPi. It has copies of most of our live systems, with customer data removed, and is designed so that, as much as possible, things inside it run with the exact same configuration they use live. The web sites in it are on ports 80 and 443, MySQL/MariaDB is on 3306, and so on. Similarly, when I'm working on something that needs to access those services from outside the test system, I want it to use, as much as possible, the same configuration it will use when live, so it wants to connect to those same port numbers.
Thus I need the test environment to have its own IP that the Mac can reach.
Or maybe not... I just remembered something from long ago. I wanted a simpler way to access things inside the firewall at work than using whatever crappy VPN we had, so I made a poor man's VPN with ssh. If I needed to access things on, say, ports 80 and 3306 on host foo at work, I'd ssh to some machine inside the firewall that I could reach, setting that up to forward, say, local 10080 and 13306 to foo:80 and foo:3306. I'd add an /etc/hosts entry for foo giving it some unused address like 10.10.10.1. Then I'd use ipfw to set things up so that any attempt to connect to 10.10.10.1:80 or 10.10.10.1:3306 would get forwarded to 127.0.0.1:10080 or 127.0.0.1:13306, respectively. That worked great until Apple replaced ipfw with something else. By then we had a decent VPN at work, so I no longer needed my poor man's VPN and didn't look into how to do this in whatever replaced ipfw.
Learning how to do that in whatever Apple now uses might be a nice approach. I'll have to look into that.
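For the record, ipfw's replacement on macOS is pf. A rough, untested sketch of the same trick with pf (host names and port numbers taken from the example above; `gateway.example.com` is a placeholder for the ssh jump host):

```
# Forward local ports over ssh to foo behind the firewall
ssh -L 10080:foo:80 -L 13306:foo:3306 gateway.example.com

# Give foo's fake address something local to answer on
sudo ifconfig lo0 alias 10.10.10.1

# pf redirect rules: add to /etc/pf.conf, then load with `sudo pfctl -f /etc/pf.conf -e`
rdr pass on lo0 inet proto tcp from any to 10.10.10.1 port 80 -> 127.0.0.1 port 10080
rdr pass on lo0 inet proto tcp from any to 10.10.10.1 port 3306 -> 127.0.0.1 port 13306
```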
I believe even with the visa it's still up to the immigration agent. I came close to trouble once when asked for my H-1B visa petition document (not the visa in the passport). I had a photocopy and was told that wasn't enough, and that although they'd let me in this time they expected to see the original in future. I also travelled with a letter from my employer explaining where I worked, my job title, etc. as extra documentation, just in case, to reduce the risk further.
I feel exactly the same way. College had just the right amount of private space, lots of shared spaces for social occasions / group working, and a maintenance department to look after all the tedious domestic repairs. Bliss.
TSO is Total Store Ordering, the memory model of x86_64. For Rosetta 2, Apple switches the M1 processor into a TSO memory-ordering mode when emulating x86_64.
(I work for Docker on the M1 support)
I'm glad it's working for you! There's a bug in the recent Docker Desktop on Apple Silicon RC build which affects some Vagrant users at the provisioning stage, when the new ssh key is copied into the machine. It turned out that the permissions of `/dev/null` inside `--privileged` containers were `0660` (`rw-rw----`) instead of `0666` (`rw-rw-rw-`). In case you (or someone else) run across this, there's an open issue with a link to a build with the fix: https://github.com/docker/for-mac/issues/5527#issuecomment-8...
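If you want a quick way to tell whether a container you're running is affected, here's a sanity check (my own sketch, not from the issue) that reads the permission bits of `/dev/null`:

```python
import os
import stat

# On a healthy system /dev/null is 0666 (rw-rw-rw-); the buggy
# privileged containers showed 0660 (rw-rw----).
mode = stat.S_IMODE(os.stat("/dev/null").st_mode)
print(oct(mode))  # 0o666 when unaffected
```

Run it inside the container (e.g. via `docker run --privileged ... python3 check.py`) rather than on the host, since the bug only affects the container's view of `/dev/null`.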
Hey, thanks for all your hard work, it's much appreciated!
Thanks for the tip, that's good to know. I'm running RC2 and haven't come across any issues like that, although I don't run my Docker containers in 'privileged' mode when using Vagrant.
Thank you for this -- I've been bothered by my Windows PC not sleeping properly for the best part of a year. `powercfg lastwake` indicated the Ethernet adapter and then disabling the option "Wake on Pattern Match" has allowed the computer to sleep soundly.
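For anyone else debugging the same thing, the two commands I found useful (double-check the exact option spelling on your Windows version) were:

```
:: Show what woke the machine most recently
powercfg /lastwake
:: List devices currently allowed to wake the machine
powercfg /devicequery wake_armed
```

Once the culprit device shows up, its wake options ("Wake on Pattern Match" in my case) are under that device's Power Management / Advanced tabs in Device Manager.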
Don't worry, we (at Docker) have been working on Apple Silicon support for a while. The command-line tools work under Rosetta 2 but the local VM inside Desktop will take a little bit longer to port. Just in case you haven't seen it there's some further info on Docker+M1 in the blog post: https://www.docker.com/blog/apple-silicon-m1-chips-and-docke...
Thousands of people (30,000) are in the trial; half were assigned randomly to the control group. So far 95 people in the trial have caught COVID and, when they unblinded the data, they discovered that 90 of those infections were in the control group. Since participants were randomly assigned to the test group vs the control group, both groups should have had the same amount of exposure, so this is a strong signal that the vaccine was effective. Here's an article about Moderna's trial with a link to their 135-page (!) design doc: https://www.livescience.com/moderna-vaccine-trial-protocol.h...
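To get a feel for how strong that signal is: under the null hypothesis that the vaccine does nothing, each of the 95 infections is equally likely to land in either (equal-sized) arm, so the count in the control arm is Binomial(95, 0.5). A back-of-the-envelope check (my own sketch, not the trial's actual statistical analysis, which is more sophisticated):

```python
from math import comb

# P(at least 90 of 95 infections fall in the control arm | each arm equally likely)
p = sum(comb(95, k) for k in range(90, 96)) / 2**95
print(p)  # ≈ 1.5e-21
```

So a 90/95 split is astronomically unlikely to happen by chance if the vaccine had no effect.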
I do OCaml programming on Windows and I found that it was a bit confusing at first with too many different ports and install options. However once I settled on https://github.com/fdopen/opam-repository-mingw I was fine. To my surprise I was able to extend existing C bindings to use Win32 APIs fairly painlessly (for example https://github.com/mirage/mirage-block-unix/commit/7cf658f8a... ) . I did have problems with I/O scalability at first but I fixed these by using libuv via https://github.com/fdopen/uwt . The core compiler and runtime are rock solid on Windows. Docker (where I work) ships OCaml/Windows binaries to lots and lots of desktops with no problem.
Apart from the too-many-ports problem, I think the main remaining problem is that too many 3rd party libraries require Unix-isms to build, like shell scripts. This necessitates the presence of cygwin for build (but not at runtime). However the ongoing "dune-ification" of the OCaml universe should help fix this since dune can do everything directly from OCaml code. I'm really looking forward to being able to open a powershell window and type "git clone"; "dune build" and have everything just work.
I'm looking forward to the day when I won't need Cygwin even in the build environment. Since the OCaml compiler itself works fine on Windows and modern build systems like "dune" are also Windows-friendly I'm fairly optimistic this can happen soon. I think it'll mostly be a matter of removing accidental Unix-isms (like unnecessary use of symlinks) in the build scripts.
Mostly, I'd like to ensure that I don't need the dll, so that I don't have to attempt to distribute it. More selfishly, I'd like to have a straightforward installation process where I pull down only a binary or two and can have a working environment and the ability to integrate additional packages.
I didn't know about dune. Looks neat. Is this meant to be used in conjunction with opam?
Yes -- opam and dune are complementary. I normally use dune (formerly known as "jbuilder") as the build system within my packages, which I then publish and install via opam. Dune does the fast incremental builds, while opam deals with version constraint solving, downloading and general package metadata.
There are some interesting experiments combining the two more closely -- take a look at "duniverse" https://github.com/avsm/duniverse which is a prototype system that would use opam to solve package version constraints and download the sources, allowing dune to build everything at once. The nice thing about this is that you could patch one of your dependencies and then dune would be able to perform an incremental build, as if all the code were in one big project. I'm personally hoping this will help speed up Mirage development, as it can be time-consuming to propose a change to an interface and then find all the places that need changing (a cost of having lots of small repos versus a big monorepo).
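For anyone curious what dune looks like in practice, a minimal project is just a couple of s-expression files (the names `main` and `myapp` here are placeholders):

```
; dune-project
(lang dune 1.0)

; bin/dune
(executable
 (name main)
 (public_name myapp))
```

With a `bin/main.ml` next to it, `dune build` compiles everything incrementally; opam then handles version solving and publishing via a separate `myapp.opam` metadata file.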
Thanks for the links, I'll dig into those!