Another contestant in the competition here. Really great stuff from Casey! I'd strongly recommend all those interested in more updates to join the discord server:
How much of that website is true and how much is fluffing his own project/research? It would be good to have at least some external or independent links on this topic. As it stands it reads more like marketing and less like research, which doesn't mean anything necessarily but does make me skeptical.
I am in no way qualified to assess the quality of his research, as it is completely outside my scope of practice. But I fail to detect any marketing in what is an interesting open-source disclosure (one that possibly forgoes the rules of the prize) of something he believes is a progression of the subject under discussion. It will either lead to further progress, or to dismissal if the disclosed methods turn up nothing of further benefit.
Seems like someone who is pretty into the scientific method, and, more importantly, spirit, to me?
My app [0] currently uses a mildly customized version of FastSpeech 2 [1] with LPCNet [2] vocoder, which I consider "good quality" @ 16kHz. Faster than realtime on mobile CPU (at least, on anything upwards of a mid-range 2017 device - I can stream practically instantly on my iPhone 11). Using a different vocoder with mobile GPU could probably get even faster (which I don't want to do, for various reasons), and desktop CPU is usually even faster.
There are various other flavours that can deliver faster synthesis (NixTTS comes to mind), but IMO they sacrifice quality even further.
"Good quality" is subjective, obviously. To me, it's perfectly audible, but there's definitely a noticeable difference in quality compared to the heavier diffusion-based models. It's much less crisp and loses some of the more subtle inflections, plosives, etc. For my purposes (language learning), it's fine for the time being but eventually it would be nice to move to a higher-end model.
I used to work at Resemble.ai and we used models that did real-time synthesis. I don’t think it’s particularly difficult anymore, even without sacrificing quality.
If this text were in an ebook, my phone could read it aloud in real time. I'm using Cool Reader and Samsung's voices. They feel like TTS, but it's OK.
I'm sure there are ways to select any text and make my phone read it in any app, but I don't need that and didn't investigate. Actually I don't need it in ebooks either, but I know it's there and I checked that it works.
While I don't agree with the anti-mars thesis of the article, I do think the article is hitting on something important: Robotic exploration is underrated.
Curiosity cost something like 2 billion to build and launch, but there's no reason building a car-sized rover has to be that expensive. With economies of scale + better design-for-manufacturing + reusable rockets the total cost would easily drop by several orders of magnitude. Why isn't NASA building factories upon factories that produce robotic probes?
I'm no expert but I don't think there's a shortage of experiments that people want to run on mars and the other solar system bodies, especially with sample return capabilities.
We've been launching things into low earth orbit for ~60 years and orbital launch demand still seems to outstrip supply. If we could get our Martian surface payload capacity to even 1% of our LEO payload capacity, I'm sure many organizations would want to send something.
Definitely! Just look at all the CubeSat payloads people fly these days - all of that is only really possible because the cost of an individual satellite mission has come down.
Before that one had to build and launch one big expensive satellite or beg another project to have their technology or experiment included on their mission, with a very limited number of available "slots".
Partially: you can have lots of cheap instruments driving/flying/floating around to find something interesting to look at, and save the fewer expensive shots for it.
I know. I am entertaining the OP's exercise in how one might make a thinner device. CR2032s are more than 4 times as thick as a credit card, even without a socket or a housing.
The thinnest commonly available coin cells are 1.2mm thick, which is better, but still will require a creative interpretation of "credit card size".
Not surprised. I support egalitarianism as much as the next person, but as YC has grown linearly the number of times I’ve thought “what on earth?” about a YC startup has increased exponentially. The sad reality is that there is a finite number of viable startups in the world.
There is a DHCP option, introduced fairly recently (~2 years ago), that can be used as an alternative, but it's not supported by any major OS except Android.
I am currently writing captive portal support for a big-name internet provider. This article only scratches the surface of how difficult it all is. Each OS is different, and each is painfully undocumented
Please don't build anti-features. They only happen because people like you and me agree to implement them. Push back against management. Tell them that you've done the research and it's a nightmare and you shouldn't move forward with it.
'They only happen because people like you and me agree to implement them.'
I'm sorry, but this is an absurd line of reasoning. This logic has never worked in the history of business. It can only work in licensed professions like law and accounting, where doing something nasty would lose you your licence and your boss knows that, so literally no one would agree to do it.
Secondly, the captive portal at Prague airport actually has a function: it provides you with up-to-date information about flights and dates.
The market for software engineers is very good. Many companies can't just cut you loose because you push back against an anti-feature. If they do, you have no shortage of other companies to work for.
You can provide flight information without a captive portal, just stick it on a normal web server. Maybe stick QR codes around the airport to help guide people to it. Bonus: you can access it outside of the airport.
As a thought experiment, if we want to use voluntary inaction to prevent bad ends, we have to go the distance. Standing alone won't work, because companies can raise their rates until someone takes up the offer, or they can lower their standards and pick up a dev without good prospects.
In order for this scheme to work, unions are needed: a professional union which, like doctors' associations, would enforce ethical standards on its members and use collective bargaining to freeze malefactors out of the industry.
It wouldn’t be as good as how doctors have it, with legal weight and governmental recognition, but it would be enough.
This has worked many times. I have personally said "no" several times and many of my colleagues have as well -- and none of us were fired and none of the anti-features were built.
Operating a good wifi network at scale is usually quite costly. It is a good idea in some settings to be able to monetize outsized users. For example, your first 30 minutes on the airport wifi can be free, and if you have to be there for longer than that, you can pay a fee for the next hour.
How does a captive portal help with throttling or locking out users? Is the idea that the user could just recycle their MAC and be let in again? Well, the same thing applies to captive portals.
In general, someone needs to take a quick look at whether the development time and customer support involved can really be justified by the extra earnings. The trend is that fewer and fewer bother, and just treat complimentary wifi as a value add instead.
Airports, cruise ships, and other places where it does make economic sense are better served by real mechanisms like 802.1X and per-user QR codes.
That's not free, that's paying for it in a different way.
I do think the internet should be free, but how we sustain that is a pretty valid question. APs need to be installed, configured, and monitored. When spaces grow, APs usually need to be reconfigured or moved.
I know that airlines are fairly low margin businesses, based on what I've read from the recent bankruptcy stuff. I am curious about who owns airports and what their margins look like.
On mobile OSes, the captive portal is opened in a sandboxed embedded browser. OS designers want to prevent the captive portal from being used maliciously, so they understandably block off a lot of functionality. Problem is, they don't tell you which features they turn off. E.g., as far as I can tell, iOS blocks external links and AJAX requests (!)
On iOS you can’t close the captive portal programmatically. The user must submit an HTML form (or similar) and navigate to a new page. Only then will the OS check /mobile-hotspot-detect, realize that the user is connected to the internet, and present the user a button to close the captive portal. This is very clunky and makes it impossible to build a sleek user experience.
Android automatically closes the captive portal when it detects a connection. This often confuses the user (why did my page suddenly disappear?) and makes it impossible to offer a consistent mobile captive portal experience between iOS and Android.
Android's kernel seems to have two separate, independent captive portal checks.

iOS only checks the content of the connectivity check endpoint, while Android also checks for any form of DNS redirect in its requests.

Microsoft checks against two different domains for a captive portal.

Many non-stock Android distros check against their own custom (and undocumented) endpoints.
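To make the probe mechanism concrete, here is a small sketch. The endpoint URLs and expected responses are the publicly known ones; the `classify()` helper is purely illustrative (my own naming and logic, not any vendor's actual code):

```python
# Each OS fetches a known URL over plain HTTP and compares the result to what
# the open internet would return. Anything else (a redirect, a wrong body,
# a wrong status) is treated as evidence of a captive portal.
PROBES = {
    "apple":     ("http://captive.apple.com/hotspot-detect.html", 200, "Success"),
    "android":   ("http://connectivitycheck.gstatic.com/generate_204", 204, ""),
    "microsoft": ("http://www.msftconnecttest.com/connecttest.txt", 200, "Microsoft Connect Test"),
}

def classify(vendor: str, status: int, body: str) -> str:
    """Return 'open' if the probe response matches the expected one, else 'captive'."""
    _, want_status, want_body = PROBES[vendor]
    if status == want_status and body.strip() == want_body:
        return "open"
    return "captive"
```

This is why a portal that intercepts the probe URL triggers the OS "sign in to network" prompt: the probe comes back with a redirect instead of the expected body.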
There was a DHCP option recently introduced to help clean up this mess. Problem is, nobody supports it. Not even Apple (who seem to have had a hand in the RFC) supports it.
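For reference, the option in question is DHCPv4 option 114 from RFC 8910: the DHCP server hands clients a URL for the portal's API. If you want to advertise it anyway (it costs nothing), in dnsmasq it's a one-liner; the URL below is a placeholder:

```
# RFC 8910 captive-portal identification (DHCPv4 option 114).
# The value is a URL pointing at the network's captive-portal API endpoint.
dhcp-option=114,https://portal.example.com/captive-api
```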
Linux is a lost cause
Figuring this all out took over a month of trial and error. Even then many of my conclusions are probably wrong. None of this is documented or standardized!
I've been there. I can relate to everything you said!
The DHCP standard was such a waste of time. Ignore it completely; there's no client support whatsoever.
Intercepting all plain HTTP traffic (just drop https) and responding with a 30x redirect to your captive portal web page seems to be the ad-hoc "standard". Your captive portal domain can be served over HTTPS just fine.
I fully agree about the sandboxed-browser pain and the absolute impossibility of getting a nice, consistent UX across platforms.
Totally agree. Like you said, the method we converged on is to just redirect DNS requests and 303 users depending on whether they've gotten through the portal yet. It seems to work fine. What's most frustrating is that most off-the-shelf FOSS DNS servers don't let you do DNS redirects on a per-MAC basis, which led us to write a decent amount of DNS code in-house.
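The spoofed-answer part of that DNS trick is simple in principle: for a not-yet-authorized client, answer every A query with the portal's IP. A stdlib-only sketch of building that answer (the function name is mine; a real server would first look the client's MAC/IP up in its session table, and would also have to handle AAAA, EDNS, truncation, etc.):

```python
import socket
import struct

def build_portal_answer(query: bytes, portal_ip: str) -> bytes:
    """Given a raw DNS query packet, build a response whose single A record
    points at the captive portal's IP address."""
    tid = query[:2]                            # transaction ID, echoed back
    flags = b"\x81\x80"                        # QR=1, RD+RA set, rcode NOERROR
    counts = struct.pack(">HHHH", 1, 1, 0, 0)  # 1 question, 1 answer, 0 auth, 0 extra
    # Walk the QNAME labels to find the end of the question section.
    i = 12
    while query[i] != 0:
        i += query[i] + 1
    i += 5                                     # root label + QTYPE + QCLASS
    question = query[12:i]
    # Answer: compression pointer to the QNAME at offset 12, then
    # type A (1), class IN (1), TTL 60s, RDLENGTH 4, and the IPv4 address.
    answer = (b"\xc0\x0c"
              + struct.pack(">HHIH", 1, 1, 60, 4)
              + socket.inet_aton(portal_ip))
    return tid + flags + counts + question + answer
```

The per-MAC gating the parent describes would sit in front of this: answer honestly (or forward upstream) for signed-in clients, and call something like this builder for everyone else.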
I mean, captive portals started out as a hack, then captive portal detection was a hack against that hack... It's effectively an antagonistic relationship.