
After 15 years in tech and a life-changing psychedelic trip, I decided to self-learn genetic engineering and share it with the world[1][2] while I look for a meaningful biotech venture to found.

What you can do today from your kitchen or home lab is remarkable. For instance, I taught myself PCR (polymerase chain reaction) and recently published my first genetic sequence to GenBank, from fungi that I cultured, extracted, sequenced and aligned myself. I'm planning to develop my first GMO yeast within the next 90 days.

If you're interested, I also host the EverymanBio Podcast (YouTube & iTunes) where I talk to folks doing incredible work in the biotech / startup / diybio community.

[1] http://everymanbio.com/

[2] https://www.instagram.com/everymanbio/

[3] https://www.linkedin.com/in/joshuamcginnis/


I’m genetically engineering yeasts to make subtly flavored breads. I’ve already done grape aroma, now working on wintergreen.

Also working on a red chamomile (using beet-red pigment biosynthesis). Just for fun. Red chamomile tea!

The idea is to have niche invite-only genetically engineered flavors that I can bring to parties around SF :) what’s more special than a genetically engineered organism that you can ONLY get if I’m there? Good calling card


Take no. 6 for example: Behave by Robert M. Sapolsky. Dr. Sapolsky is a professor at Stanford with nearly 300 publications [0] and has been teaching biology and neurology for years. I'm curious, why do you think this book is pop-pseudoscience? Have you read it? Are there certain things you disagree with or think aren't based on research or reproducibility?

[0] https://profiles.stanford.edu/robert-sapolsky?tab=publicatio...


toLower() with RVV[0] has been implemented (by brucehoult).

0. https://lobste.rs/s/bfgsh6/tolower_with_avx_512#c_wqhwtp
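For reference, the scalar per-byte operation those vectorized versions parallelize looks something like this (a minimal sketch, not the linked implementation; the branchless range check mirrors the unsigned-compare trick the SIMD code uses on whole registers at once):

```rust
// Scalar, branchless ASCII lower-casing: for bytes in b'A'..=b'Z',
// set bit 0x20; leave everything else alone. Vectorized versions
// (AVX-512, RVV) apply the same operation to many bytes per instruction.
fn to_lower_ascii(s: &mut [u8]) {
    for b in s.iter_mut() {
        // wrapping_sub folds the range check into a single unsigned
        // compare, so there is no branch per byte.
        let is_upper = (b.wrapping_sub(b'A') < 26) as u8;
        *b |= is_upper << 5; // 0x20 is the ASCII lowercase bit
    }
}

fn main() {
    let mut buf = *b"Hello, WORLD! 123";
    to_lower_ascii(&mut buf);
    assert_eq!(&buf, b"hello, world! 123");
    println!("{}", core::str::from_utf8(&buf).unwrap());
}
```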


I'm getting back into audio programming, starting off with Pd[1] and reading Miller Puckette's book[2]. I'm planning on writing some low-level C libraries afterwards, using The Audio Programming Book[3] as a guide.

[1] https://puredata.info

[2] https://msp.ucsd.edu/techniques.htm

[3] https://mitpress.mit.edu/9780262014465/the-audio-programming...


otrack et al.: Thank you and congratulations! It's gratifying seeing the wheels of research make progress.

My appreciation of formal and machine-checked proofs has grown since we wrote the original EPaxos paper; I was delighted at the time at the degree to which Iulian was able to specify the protocol in TLA+, but now in hindsight wish we (or a later student) had made the push to get the recovery part formalized as well, so perhaps we'd have found these issues a decade ago. Kudos for finding and fixing it.

Have you yourselves considered formalizing your changes to the protocol in TLA+? I wonder if the advances the formal folks have made over the last decade or so would ease this task. Or, perhaps better yet -- one could imagine a joint protocol+implementation verification in a system like Ironfleet or Verus, which would be tremendously cool and also probably a person-year of work. :)

Edited to add: This would probably make a great masters thesis project. If y'all are not already planning on going there, I might drop the idea to Bryan Parno and see if we find someone one of these years who would be interested in verifying/implementing your fixed version in Verus. Let me know (or if we start down the path I'll reach out).


For SW type people ...

GCC's impact was possible because it was (with GAS - the assembler) 100% feasible to have an open source toolchain. Yes more software was necessary for a complete system (linker, libc, etc), but GCC made it possible to build from the ground floor up.

Also, yes, the initial GCC was worse than any decent proprietary tool chain at the time, but it got better and better because each improvement built on all the earlier open-sourced efforts.

Think about how hard Linux kernel development would have been if it had to rely on different proprietary tool chains for every target architecture (and possibly chip version).

Hardware definition languages (Verilog/VHDL, etc) enable high level chip design like high level programming languages, but making the physical chip requires a PDK (process design kit) that encodes how each critical silicon feature is built.

So a chip built for TSMC 28nm contains TSMC proprietary material and is essentially unportable. It can take several years to move a major chip from one foundry to another (or even a shrink at the same foundry), and the proprietary tool chains preclude a development process that can incrementally improve portability.

This announcement is a major step toward a similar foundation being available for silicon design. It is very important that it is a large complex chip, rather than just a research development vehicle.

[disclaimer - past life as OpenPOWER participant]


Scott Alexander is not for everyone (as I've learned by unsuccessfully foisting him on friends for years) but for people who are even remotely interested he is probably the greatest internet writer alive.

I'd love to hear alternative GOATs.


One of my favorite books is The Mezzanine[0], which takes place entirely during a man's ride up a single escalator but spins off onto all kinds of tangents that comment on and express exuberance about the most mundane things.

There's an entire thread on the evolution of stapler design, elaborations on the invention of perforations, and abundant self-reflection. It's almost like a hybrid of Leonard Read's "I, Pencil" and Hegel.

There's something magical about paying close attention to the mundane, IMHO.

Praise dullness!

[0]: https://en.m.wikipedia.org/wiki/The_Mezzanine


I wrote real-time signal processing code in C for 5 years. I used lists and arrays. Indirectly I used queues as well, but those were buried down in the mailbox code in the middleware. Also was not allowed to malloc and free during runtime or use recursive calls without a very good reason. No DFS or BFS to be had anywhere in the code. No sorting, except for a couple calls to the Standard Library qsort function. My algorithm text of choice was Skolnik[1], not CLRS.

Guess I wasn't writing software for a living. Plenty of SV companies sure got that impression.

[1] http://www.amazon.com/Radar-Handbook-Edition-Merrill-Skolnik...
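The static-allocation style described above can be sketched as a fixed-capacity queue whose storage is a plain array, so nothing is allocated or freed after startup (shown in Rust for illustration; the type and names are mine, not from any real mailbox middleware):

```rust
// Fixed-capacity queue backed by a plain array: no malloc/free at
// runtime, the constraint common in real-time signal processing code.
struct RingBuffer<const N: usize> {
    buf: [i32; N],
    head: usize, // next slot to read
    len: usize,  // number of queued items
}

impl<const N: usize> RingBuffer<N> {
    const fn new() -> Self {
        Self { buf: [0; N], head: 0, len: 0 }
    }

    fn push(&mut self, v: i32) -> bool {
        if self.len == N {
            return false; // full: the caller decides the overflow policy
        }
        self.buf[(self.head + self.len) % N] = v;
        self.len += 1;
        true
    }

    fn pop(&mut self) -> Option<i32> {
        if self.len == 0 {
            return None;
        }
        let v = self.buf[self.head];
        self.head = (self.head + 1) % N;
        self.len -= 1;
        Some(v)
    }
}

fn main() {
    let mut q: RingBuffer<4> = RingBuffer::new();
    assert!(q.push(1) && q.push(2) && q.push(3));
    assert_eq!(q.pop(), Some(1));
    assert!(q.push(4) && q.push(5)); // wraps around the array end
    assert_eq!(q.pop(), Some(2));
    assert_eq!(q.pop(), Some(3));
    assert_eq!(q.pop(), Some(4));
    assert_eq!(q.pop(), Some(5));
    assert_eq!(q.pop(), None);
}
```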


Can anyone suggest a good textbook for Radio, Antenna theory, Ham radio in general?

I'm quite interested in learning these but most books I've come across either seem too basic, or too advanced. I am a math major so mathematics isn't a problem for me if it is supported with enough text. Thanks!


The Destroy All Software screencasts by Gary Bernhardt have some great content. Beyond the typical CS degree, these videos were the most pivotal thing in getting me to write better code.

https://www.destroyallsoftware.com/screencasts


So, I've read most of these. Here's a tour of what is definitely useful and what you should probably avoid.

_________________

Do Read:

1. The Web Application Hacker's Handbook - It's beginning to show its age, but this is still absolutely the first book I'd point anyone to for learning practical application security.

2. Practical Reverse Engineering - Yep, this is great. As the title implies, it's a good practical guide and will teach many of the "heavy" skills instead of just a platform-specific book targeted to something like iOS. Maybe supplement with a tool-specific book like The IDA Pro Book.

3. Security Engineering - You can probably read either this or The Art of Software Security Assessment. Both of these are old books, but the core principles are timeless. You absolutely should read one of these, because they are like The Art of Computer Programming for security. Everyone says they have read them, they definitely should read them, and it's evident that almost no one has actually read them.

4. Shellcoder's Handbook - If exploit development is your thing, this will be useful. Use it as a follow-on from a good reverse engineering book.

5. Cryptography Engineering - The first and only book you'll really need to understand how cryptography works if you're a developer. If you want to make cryptography a career, you'll need more; this is still the first book basically anyone should pick up to understand a wide breadth of modern crypto.

_________________

You Can Skip:

1. Social Engineering: The Art of Human Hacking - It was okay. I am biased against books that don't have a great deal of technical depth. You can learn a lot of what's in this book from online resources and, honestly, from common sense. A lot of this book is infosec porn, i.e. "Wow, I can't believe that happened." It's not a bad book, per se, it's just not particularly helpful for a lot of technical security. If it interests you, read it; if it doesn't, skip it.

2. The Art of Memory Forensics - Instead of reading this, consider reading The Art of Software Security Assessment (a more rigorous coverage) or Practical Malware Analysis.

3. The Art of Deception - See above for Social Engineering.

4. Applied Cryptography - Cryptography Engineering supersedes this and makes it obsolete, full stop.

_________________

What's Not Listed That You Should Consider:

1. Gray Hat Python - In which you are taught to write debuggers, a skill which is a rite of passage for reverse engineering and much of blackbox security analysis.

2. The Art of Software Security Assessment - In which you are taught to find CVEs in rigorous depth. Supplement with resources from the 2010s era.

3. The IDA Pro Book - If you do any significant amount of reverse engineering, you will most likely use IDA Pro (although tools like Hopper are maturing fast). This is the book you'll want to pick up after getting your IDA Pro license.

4. Practical Malware Analysis - Probably the best single book on malware analysis outside of dedicated reverse engineering manuals. This one will take you about as far as any book reasonably can; beyond that you'll need to practice and read walkthroughs from e.g. The Project Zero team and HackerOne Internet Bug Bounty reports.

5. The Tangled Web - Written by Michal Zalewski, Director of Security at Google and author of afl-fuzz. This is the book to read alongside The Web Application Hacker's Handbook. Unlike many of the other books listed here it is a practical defensive book, and it's very actionable. Web developers who want to protect their applications without learning enough to become security consultants should start here.

6. The Mobile Application Hacker's Handbook - The book you'll read after The Web Application Hacker's Handbook to learn about the application security nuances of iOS and Android as opposed to web applications.




The best lecture series I have seen to date (and I have seen lectures by top professors at great institutions in multiple countries) is Classical Physics by V. Balakrishnan of IIT Madras, India [1]. Only people who have thought deeply about these concepts over a lifetime can deliver such truly delightful lectures. If you have an hour to spare, just listen to the first lecture [2]; it will profoundly impact your outlook on science (and physics in particular).

[1] https://archive.nptel.ac.in/courses/122/106/122106027/

[2] https://youtu.be/Q6Gw08pwhws


If you don't mind missing the "M" part of MOOCs, you can learn a lot from university courses. Most of the top CS schools have slides and homeworks on their course websites.

For example, if you want to learn...

Artificial Intelligence (Stanford): https://stanford-cs221.github.io/autumn2021/

Programming Languages (UW): https://sites.google.com/cs.washington.edu/cse341spring2021/...

Distributed Systems (MIT): http://nil.csail.mit.edu/6.824/2021/schedule.html

These courses have all been through the test of time and are specifically designed to provide an in-depth education with material written by some of the best and brightest educators in the field.

You can simply follow along with the material at your own pace. Occasionally you'll come across something that you can't do (e.g. part of the assignment requires running tests on the school's private cluster), but most of the homeworks you can implement on your own. The only downside is there's nowhere to ask questions if you really get stuck (don't go contacting the course staff...), but with unlimited time you can usually figure it out eventually.


Yes, much better. ChatGPT/Claude/etc. are useful when I want extra explanation to help connect the dots, but Math Academy incorporates spaced repetition, interleaving, etc. the way a dedicated tutor would, in a better-structured environment/UI.

Their marketing website leaves a lot to be desired (a perk since they are all math nerds focused on the product), but here are two references on their site that explain their approach:

- https://mathacademy.com/how-it-works

- https://mathacademy.com/pedagogy

They also did a really good interview last week that goes in depth about their process with Dr. Alex Smith (Director of Curriculum) and Justin Skycak (Director of Analytics) from Math Academy: https://chalkandtalkpodcast.podbean.com/e/math-academy-optim...


Very cool!

Does anyone have a list of other similar texts?

There's:

- Geometry: Joyce's Java version of Euclid's _Elements_: https://mathcs.clarku.edu/~djoyce/java/elements/elements.htm...

- Physics: https://www.motionmountain.net/

- Chemistry: The Elements by Theodore Gray https://apps.apple.com/us/app/the-elements-by-theodore-gray/...

A nifty thing my kids enjoyed was the website version of the book, _Bembo's Zoo_ (which sadly is no longer on-line: https://soundeffects.fandom.com/wiki/Bembo%27s_Zoo_(Websites... )


It's funny you bring that up as it seems to be a common request from adults interested in leveling up their math. I'm sure we could come up with something reasonable.


Nice article! I used to work in this field, and there are an insane number of cool tricks you can pull off if you have enough signal processing available. The first (obvious) one is that, since you can switch beam direction as quickly as you can switch radio frequency (i.e. thousands of times per second), it is no longer necessary to give each target equal illumination time. It is, for example, possible to focus mostly on the missiles inbound to your vessel at Mach 3 while still not losing track of all the other traffic in the area. You can't really do that with older mechanical antennas, since the inertia would tear the assembly apart.
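The electronic steering that makes this possible boils down to recomputing one phase offset per antenna element. A minimal sketch for a uniform linear array (function name and parameters are illustrative; half-wavelength spacing is assumed for the example):

```rust
// Per-element phase shifts that steer a uniform linear array's beam to
// angle theta (radians from broadside): phi_n = -2*pi*n*d*sin(theta)/lambda.
// Re-computing this vector is all it takes to "move" the beam, which is
// why hopping between directions thousands of times per second is cheap.
use std::f64::consts::PI;

fn steering_phases(n_elems: usize, d_over_lambda: f64, theta: f64) -> Vec<f64> {
    (0..n_elems)
        .map(|n| -2.0 * PI * (n as f64) * d_over_lambda * theta.sin())
        .collect()
}

fn main() {
    // Broadside (theta = 0): every element fires in phase.
    let broadside = steering_phases(8, 0.5, 0.0);
    assert!(broadside.iter().all(|p| p.abs() < 1e-12));

    // Steer 30 degrees off broadside with half-wavelength spacing:
    // adjacent elements then differ by -pi*sin(30 deg) = -pi/2.
    let steered = steering_phases(8, 0.5, 30f64.to_radians());
    let delta = steered[1] - steered[0];
    assert!((delta + PI / 2.0).abs() < 1e-9);
}
```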

The second is "colored space" radar, where you arrange the phases and wavelengths in such a way that the same transceiver array can generate multiple beams with different frequencies in different directions, giving true parallel beamforming (not merely concurrent as in the first example).

Finally, though every textbook introducing these concepts uses transceivers arranged in a flat plane, that is not actually required. For example, the F-35 uses "conformal antennas" that are shaped like the rest of the airplane and uses a phenomenal amount of signal processing to convert the resulting signals back to "as if" the antenna had been a flat plane.

I have always wanted to experiment with these things and rectennas to power small UAVs from afar so that they never have to land, but never got around to it.


Oregon eliminated burdensome parking regulations in most larger cities and: it's fine.

Many home builders still add parking to new projects because there is market demand for it - and they are also competing for tenants or buyers against existing housing which has parking.

But there is now the flexibility to do some projects without parking, which really helps at the affordable end of the spectrum, and is a good fit for more walkable locations.

BTW, Nolan Gray, cited as the author, has a book out himself that's really approachable and good reading if you're interested in cities: https://islandpress.org/books/arbitrary-lines


I strongly disagree.

The problem with the Unix lowest-common-denominator model is that it pushes complexity out of the stack and into view: complexity that other designs _thought_ about and worked to integrate.

It is very important never to forget the technological context of UNIX: a text-only OS for a tiny, desperately resource-constrained, standalone minicomputer that was already obsolete when the system was written. And it shows.

No graphics. No networking. No sound. Dumb text terminals, which is why the obsession with text files being piped to other text files and filtered through things that only handle text files.

While at the same time as UNIX evolved, other bigger OSes for bigger minicomputers were being designed and built to directly integrate things like networking, clustering, notations for accessing other machines over the network, accessing filesystems mounted remotely over the network, file versioning and so on.

I described how VMS pathnames worked in this comment recently: https://news.ycombinator.com/item?id=32083900

People brought up on Unix look at that and see needless complexity, but it isn't.

VMS' complex pathnames are the visible sign of an OS which natively understands that it's one node on a network, that currently-mounted disks can be mounted on more than one network node even if those nodes are running different OS versions on different CPU architectures. It's an OS that understands that a node name is a flexible concept that can apply to one machine, or to a cluster of them, and every command from (the equivalent of) `ping` to (the equivalent of) `ssh` can be addressed to a cluster and the nearest available machine will respond and the other end need never know it's not talking to one particular box.

50 years later and Unix still can't do stuff like that. It needs tons of extra work with load-balancers and multi-homed network adaptors and SANs to simulate what VMS did out of the box in the 1970s in 1 megabyte of RAM.

Unix only looks simple because the implementors didn't do the hard stuff. They ripped it out in order to fit the OS into 32 kB of RAM or something.

The whole point of Unix was to be minimal, small, and simple.

Only it isn't any more, because now we need clustering and network filesystems and virtual machines and all this baroque stuff piled on top.

The result is that an OS which was hand-coded in assembler and was tiny and fast and efficient on non-networked text-only minicomputers now contains tens of millions of lines of unsafe code in unsafe languages and no human actually comprehends how the whole thing works.

Which is why we've built a multi-billion-dollar industry constantly trying to patch all the holes and stop the magic haunted sand leaking out and the whole sandcastle collapsing.

It's not a wonderful inspiring achievement. It's a vast, epic, global-scale waste of human intelligence and effort.

Because we build a planetary network out of the software equivalent of wet sand.

When I look at 2022 Linux, I see an adobe and mud-brick construction: https://en.wikipedia.org/wiki/Great_Mosque_of_Djenn%C3%A9#/m...

When we used to have skyscrapers.

You know how big the first skyscraper was? 10 floors. That's all. This is it: https://en.wikipedia.org/wiki/Home_Insurance_Building#/media...

The point is that it was 1885 and the design was able to support buildings 10× as big without fundamental change.

The Chicago Home Insurance building wasn't very impressive, but its design was. Its design scaled.

When I look at classic OSes of the past, like in this post, I see miracles of design which did big complex hard tasks, built by tiny teams of a few people, and which still works today.

When I look at massive FOSS OSes, mostly, I see ant-hills. It's impressive but it's so much work to build anything big with sand that the impressive part is that it works at all... and that to build something so big, you need millions of workers, and constant maintenance.

If we stopped using sand, and abandoned our current plans, and started over afresh, we could build software skyscrapers instead of ant hills.

But everyone is too focussed on keeping our sand software working on our sand hill OSes that they're too busy to learn something else and start over.


Consider watching Paul Schrader's _First Reformed_.

We're all nearly powerless but our choices do matter.


Good find, Dartmouth was the original BASIC for their mainframe timesharing, Apple and other micro variants came later.

Speaking of, John G. Kemeny's book "Man and the Computer" is a fantastic read, introducing what computers are, how time sharing works, and the thinking behind the design of BASIC.


Before my younger daughter headed off to college, I put her through a mini library studies class. It was designed to teach her how to identify authors with different positions on a topic, write summaries, and write brief analyses. It was very loosely based on a required prep school class I took in the early 1980s. She absolutely hated it. Not sure how much good it did her given her lack of interest in the lessons.

To this day I am grateful I had to take the library studies class. And while I was not the biggest fan of Mr. Hickock, he helped prepare me to learn better. We spent a ton o' time using encyclopedias and the card catalogue.

I am a huge fan of printed books. There's nothing quite like them and the almost magical way they work.


# Well-Kept Gardens Die By Pacifism

> Good online communities die primarily by refusing to defend themselves.

> Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)

Read the whole thing:

https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-...


Google didn't change it, it embodied it. The problem isn't AI, it's the pervasive culture of PR and advertising which appeared in the 50s and eventually consumed its host.

Western industrial culture was based on substance - getting real shit done. There was always a lot of scammery around it, but the bedrock goal was to make physical things happen - build things, invent things, deliver things, innovate.

PR and ad culture was there to support that. The goal was to change values and behaviours to get people to Buy More Stuff. OK.

Then around the time the Internet arrived, industry was off-shored, and the culture started to become one of appearance and performance, not of substance and action.

SEO, adtech, social media, web framework soup, management fads - they're all about impression management and popularity games, not about underlying fundamentals.

This is very obvious on social media in the arts. The qualification for a creative career used to be substantial talent and ability. Now there are thousands of people making careers out of performing the lifestyle of being a creative person. Their ability to do the basics - draw, write, compose - is very limited. Worse, they lack the ability to imagine anything fresh or original - which is where the real substance is in art.

Worse than that, they don't know what they don't know, because they've been trained to be superficial in a superficial culture.

It's just as bad in engineering, where it has become more important to create the illusion of work being done, than to do the work. (Looking at you, Boeing. And also Agile...)

You literally make more money doing this. A lot more.

So AI isn't really a tool for creating substance. It's a tool for automating impression management. You can create the impression of getting a lot of work done. Or the impression of a well-written cover letter. Or of a genre novel, techno track, whatever.

AI might one day be a tool for creating substance. But at the moment it's reflecting and enabling a Potemkin busy-culture of recycled facades and appearances that has almost nothing real behind it.

Unfortunately it's quite good at that.

But the problem is the culture, not the technology. And it's been a problem for a long time.


For being a relatively new language, there are almost no "entry level" positions.

Just take a look at https://rustjobs.dev/. Most of them are well-paid remote jobs, but they are asking for 3+ years of "professional experience" with Rust, "with a proven track record of building and deploying production-quality code", and more. Hell, IIRC, I saw one asking for proof of contributions to the Rust repo (e.g., being a core maintainer).

Edit: to be fair, I saw one position a while ago that only asked you to be willing to learn Rust.


> Now, there are issue threads like this, in which 25 smart, well meaning people spent 2 years and over 200 comments trying to figure out how to improve Mutex. And as far as I can tell, in the end they more or less gave up.

The author of the linked comment did extensive analysis on the synchronization primitives in various languages, then rewrote Rust's synchronization primitives like Mutex and RwLock on every major OS to use the underlying operating system primitives directly (like futex on Linux), making them faster and smaller and all-around better, and in the process, literally wrote a book on parallel programming in Rust (which is useful for non-Rust parallel programming as well): https://www.oreilly.com/library/view/rust-atomics-and/978109...

> Features like Coroutines. This RFC is 7 years old now.

We haven't been idling around for 7 years (either on that feature or in general). We've added asynchronous functions (which whole ecosystems and frameworks have arisen around), traits that can include asynchronous functions (which required extensive work), and many other features that are both useful in their own right and needed to get to more complex things like generators. Some of these features are also critical for being able to standardize things like `AsyncWrite` and `AsyncRead`. And we now have an implementation of generators available in nightly.

(There's some debate about whether we want the complexity of fully general coroutines, or if we want to stop at generators.)

Some features have progressed slower than others; for instance, we still have a lot of discussion ongoing for how to design the AsyncIterator trait (sometimes also referred to as Stream). There have absolutely been features that stalled out. But there's a lot of active work going on.

I always find it amusing to see, simultaneously, people complaining that the language isn't moving fast enough and other people complaining that the language is moving too fast.

> Function traits (effects)

We had a huge design exploration of these quite recently, right before RustConf this year. There's a challenging balance here between usability (fully general effect systems are complicated) and power (not having to write multiple different versions of functions for combinations of async/try/etc). We're enthusiastic about shipping a solution in this area, though. I don't know if we'll end up shipping an extensible effect system, but I think we're very likely to ship a system that allows you to write e.g. one function accepting a closure that works for every combination of async, try, and possibly const.

> Compile-time Capabilities

Sandboxing against malicious crates is an out-of-scope problem. You can't do this at the language level; you need some combination of a verifier and runtime sandbox. WebAssembly components are a much more likely solution here. But there's lots of interest in having capabilities for other reasons, for things like "what allocator should I use" or "what async runtime should I use" or "can I assume the platform is 64-bit" or similar. And we do want sandboxing of things like proc macros, not because of malice but to allow accurate caching that knows everything the proc macro depends on - with a sandbox, you know (for instance) exactly what files the proc macro read, so you can avoid re-running it if those files haven't changed.

> Rust doesn't have syntax to mark a struct field as being in a borrowed state. And we can't express the lifetime of y.

> Lets just extend the borrow checker and fix that!

> I don't know what the ideal syntax would be, but I'm sure we can come up with something.

This has never been a problem of syntax. It's a remarkably hard problem to make the borrow checker able to handle self-referential structures. We've had a couple of iterations of the borrow checker, each of which made it capable of understanding more and more things. At this point, I think the experts in this area have ideas of how to make the borrow checker understand self-referential structures, but it's still going to take a substantial amount of effort.

> This syntax could also be adapted to support partial borrows

We've known how to do partial borrows for quite a while, and we already support partial borrows in closure captures. The main blocker for supporting partial borrows in public APIs has been how to expose that to the type system in a forwards-compatible way that supports maintaining stable semantic versioning:

If you have a struct with private fields, how can you say "this method and that method can borrow from the struct at the same time" without exposing details that might break if you add a new private field?

Right now, leading candidates include some idea of named "borrow groups", so that you can define your own subsets of your struct without exposing what private fields those correspond to, and so that you can change the fields as long as you don't change which combinations of methods can hold borrows at the same time.
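A small illustration of the problem (types and names are mine): borrowing two fields directly is fine because the checker sees they are disjoint, but any `&mut self` method borrows all of `self`, so today's workaround is free functions over just the fields you need:

```rust
// The partial-borrow gap: field borrows can overlap in time, but
// &mut self methods cannot, because a method signature hides which
// fields it touches -- exactly what "borrow groups" would expose.
struct Counters {
    hits: u64,
    misses: u64,
}

impl Counters {
    // Workaround on stable Rust today: take only the field you need,
    // so two calls can hold disjoint borrows simultaneously.
    fn bump(field: &mut u64) {
        *field += 1;
    }
}

fn main() {
    let mut c = Counters { hits: 0, misses: 0 };
    // Two simultaneous &mut borrows of different fields: allowed.
    let h = &mut c.hits;
    let m = &mut c.misses;
    Counters::bump(h);
    Counters::bump(m);
    assert_eq!((c.hits, c.misses), (1, 1));
}
```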

> Comptime

We're actively working on this in many different ways. It's not trivial, but there are many things we can and will do better here.

I recently wrote two RFCs in this area, to make macro_rules more powerful so you don't need proc macros as often.

And we're already talking about how to go even further and do more programmatic parsing using something closer to Rust constant evaluation. That's a very hard problem, though, particularly if you want the same flexibility of macro_rules that lets you write a macro and use it in the same crate. (Proc macros, by contrast, require you to write a separate crate, for a variety of reasons.)

> impl<T: Copy> for Range<T>.

This is already in progress. This is tied to a backwards-incompatible change to the range types, so it can only occur over an edition. (It would be possible to do it without that, but having Range implement both Iterator and Copy leads to some easy programming mistakes.)
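A quick illustration of the current state: `Range` implements `Iterator`, so it is consumed by iteration and must be cloned to reuse, which is exactly the tension with making it `Copy`:

```rust
// Why Range isn't Copy today: iterating a Range mutates it, so reusing
// one requires an explicit clone on stable Rust.
fn main() {
    let r = 0..5;
    let first: Vec<i32> = r.clone().collect();
    let second: Vec<i32> = r.collect(); // moves r; the clone kept it usable
    assert_eq!(first, second);
    assert_eq!(first, vec![0, 1, 2, 3, 4]);

    // If Range were Copy, iterating would silently advance a copy and
    // leave the original untouched -- the easy mistake mentioned above,
    // which is why the change is tied to an edition.
}
```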

> Make if-let expressions support logical AND

We have an unstable feature for this already, and we're close to stabilizing it. We need to settle which one or both of two related features we want to ship, but otherwise, this is ready to go.
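For readers unfamiliar with the feature: on stable Rust today, combining a pattern match with another condition needs nesting, which let-chains would flatten (example names are mine):

```rust
// Stable Rust today: pattern + extra condition requires nesting.
// With let-chains this could be written as
//   if let Ok(n) = s.parse::<i32>() && n > 10 { ... }
fn classify(s: &str) -> &'static str {
    if let Ok(n) = s.parse::<i32>() {
        if n > 10 {
            return "big number";
        }
        return "small number";
    }
    "not a number"
}

fn main() {
    assert_eq!(classify("42"), "big number");
    assert_eq!(classify("7"), "small number");
    assert_eq!(classify("x"), "not a number");
}
```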

> But if I have a pointer, rust insists that I write (*myptr).x or, worse: (*(*myptr).p).y.

We've had multiple syntax proposals to improve this, including a postfix dereference operator and an operator to navigate from "pointer to struct" to "pointer to field of that struct". We don't currently have someone championing one of those proposals, but many of us are fairly enthusiastic about seeing one of them happen.

That said, there's also a danger of spending too much language weirdness budget here to buy more ergonomics, versus having people continue using the less ergonomic but more straightforward raw-pointer syntaxes we currently have. It's an open question whether adding more language surface area here would on balance be a win or a loss.
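The syntax in question, for anyone who hasn't hit it: references auto-deref on field access, but raw pointers require each dereference to be explicit and parenthesized (types here are illustrative):

```rust
// With references, field access auto-derefs: `myref.x`, `myref.p.y`.
// With raw pointers, every hop must be spelled out inside unsafe code.
struct Inner { y: i32 }
struct Outer { x: i32, p: *const Inner }

fn main() {
    let inner = Inner { y: 7 };
    let outer = Outer { x: 3, p: &inner };
    let myptr: *const Outer = &outer;

    // The two forms quoted above, exactly as stable Rust demands them:
    let (x, y) = unsafe { ((*myptr).x, (*(*myptr).p).y) };
    assert_eq!((x, y), (3, 7));
}
```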

> Unfortunately, most of these changes would be incompatible with existing rust.

One of the wonderful things about Rust editions is that there's very little we can't change, if we have a sufficiently compelling design that people will want to adopt over an edition.

> The rust "unstable book" lists 700 different unstable features - which presumably are all implemented, but which have yet to be enabled in stable rust.

This is absolutely an issue; one of the big open projects we need to work on is going through all the existing unstable features and removing many that aren't likely to ever reach stabilization (typically either because nobody is working on them anymore or because they've been superseded).

