shankys's comments | Hacker News

I'd be really interested in hearing more details about your experience.


Support for mixed Scala/Java projects improved a lot in Scala 2.7.2, which was released in late 2008, so mixing Scala and Java is no longer a problem.

Here's a description of what was added since this article was written:

"The compiler can now parse (but not translate) Java source files. This makes it possible to have mixed Java/Scala projects with recursive dependencies between them. In such a project, you can submit first all the Java and Scala sources to the Scala compiler. In a second step, the Java sources are compiled using the Scala generated .class files and the Scala sources are compiled again using the Java generated .class files."


I don't know Tony personally, but I'm very familiar with his work. He's a brilliant programmer.

HN is the only reason I learned about Tony's suicidal message on IRC within an hour of it being sent. Hopefully someone who knows him personally will see this and be able to help.


Being able to write a "hello, world" in Haskell certainly doesn't make you a good programmer. However, I would say that anyone who has already written a sizable Haskell program (5K+ LOC) is almost certainly a good programmer.

The reason is that it's more work to grok Haskell than almost any other existing language, and the job market for Haskell hackers is practically nonexistent. You have to really love programming to become a proficient Haskell hacker.


I think you meant "not over 40".


Version control is indeed always a good idea. However, I'd strongly advise choosing Git over SVN.

SVN's branching capability is a joke compared to Git, and you'll wonder how you ever lived without easy branching once you get used to having it. This is important even if you're the only developer on the project.


Since the release of SVN 1.5, branching / merging is trivial.


I know both Lisp and Haskell, and I'm pretty sure that Haskell for most people is much harder to learn.

The two biggest stumbling blocks people encounter when learning Haskell are monads and the (powerful but complex) type system.
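For readers who haven't met the first stumbling block yet, here's a minimal sketch using the Maybe monad (the function names are just illustrative, not from any particular tutorial):

```haskell
-- Safe division: Nothing signals division by zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- do-notation chains the steps; any Nothing short-circuits the rest.
calc :: Int -> Maybe Int
calc n = do
  a <- safeDiv 100 n
  b <- safeDiv a 2
  return (b + 1)

main :: IO ()
main = do
  print (calc 10)  -- Just 6
  print (calc 0)   -- Nothing
```

The conceptual hurdle is that the plumbing of failure (or state, or I/O) lives in the monad itself rather than in explicit checks after every call.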


In my experience, learning OCaml is a good way to become accustomed to the type system Haskell uses without having to also learn how to do everything with functional purity at the same time. The type systems are very similar, though Haskell has type classes and OCaml has a much more elaborate module system.
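To make the type-class difference concrete, here's a minimal sketch (class and instance names invented for illustration); in OCaml you'd typically pass a module or a record of functions explicitly to get the same effect:

```haskell
-- A type class is an interface; the compiler picks the instance
-- from the argument's type at each call site.
class Describable a where
  describe :: a -> String

instance Describable Bool where
  describe True  = "yes"
  describe False = "no"

instance Describable Int where
  describe n = "the number " ++ show n

main :: IO ()
main = do
  putStrLn (describe True)       -- yes
  putStrLn (describe (3 :: Int)) -- the number 3
```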


Interesting ideas.

I can't imagine myself using them for serious work, but am interested in learning these languages for mental exercise (ref http://news.ycombinator.com/item?id=112196). I attempted nostrademon's Haskell tutorial but had a hard time making sense of things, so forgetting what I'd learned was a problem. More so when you don't really spend time poking around with the language for its own sake.

Would either of you recommend OCaml as a gentle lead-in to Haskell, so things can be more easily remembered?

I was offered a good book in ML, but I have even less idea of where that would fit.


I'd recommend OCaml over Haskell for real use, actually. Where Haskell is experimental ("[...] Haskell has a sort of unofficial slogan: avoid success at all costs." -Simon Peyton Jones, http://www.techworld.com.au/article/261007/-z_programming_la...), OCaml is intended to be a practical multi-paradigm language, building on new ideas from SML. It has some faults (it's somewhat painful until you understand the type system, and the language seems to be all but designed to give a terrible first impression), but it's a very powerful language. It's not good as a glue language, but for stuff involving heavy numeric processing or complex data structures (e.g. compilers) it really shines. I think it's an excellent complement to Python.

IMHO, the best English-language book on OCaml is _Developing Applications with Objective CAML_, a French O'Reilly book. There's a free, complete translation available online (http://caml.inria.fr/pub/docs/oreilly-book/). I haven't read _OCaml for Scientists_, and I found _Practical OCaml_ by Joshua Smith to be a huge disappointment, FWIW.

_The Little MLer_ is a pretty good book on ML's type system. If memory serves, everything but the last chapter would also apply to Haskell, though you'll have to transliterate syntax.

Haskell is a really fun and mind-blowing language, and it will teach you a lot, but I haven't found it to be a practical language. (To be fair, you might feel otherwise. This book looks quite good: http://book.realworldhaskell.org/) You could probably get a lot out of see-sawing back and forth between Haskell and OCaml, switching when you get stuck with one or the other. They have a lot of common concepts, but they take them in different directions.


I disagree completely.

It is true that a few years ago, OCaml was more "practical" than Haskell. It was faster and had more libraries.

But today Haskell is superior to OCaml in nearly all aspects.

As a language, Haskell has always been cleaner and more elegant. It has simpler syntax, a more powerful type system, lazy evaluation, and in general more features.

In terms of performance, Haskell has caught up with OCaml and exceeded it. The leading Haskell compiler (GHC) has been adding optimizations one after another over the past few years: deforestation, pointer tagging, parallel garbage collection, and other techniques from various research papers. GHC is a top-quality professional compiler built by a bunch of geniuses. Haskell also has several implementations and compilers (GHC, Hugs, yhc, nhc, jhc...) while OCaml has only a single implementation.

In terms of tool support, Haskell has a debugger (the GHCi debugger), a profiler (GHC), an excellent documentation generation tool (Haddock), and a standard build system (Cabal). These are all superior to their OCaml equivalents (where they exist at all: OCaml build systems are usually a mess of makefiles or autohell).

Haskell is catching up to OCaml in the number of libraries available. Haskell now has a CPAN-like website called Hackage with hundreds of libraries for all application domains (see the list: http://hackage.haskell.org/packages/archive/pkg-list.html ). Libraries can be downloaded and installed with Haskell's build system using a single command, with dependencies also automatically taken care of. The Haskell standard library is also a lot better than OCaml's. OCaml has two different incompatible list types (one lazy and one strict). OCaml's file handles can be either written to or read from, not both (it's not possible to open a file for read/write in OCaml). And OCaml infamously requires special syntax for doing arithmetic with floating-point numbers (1 + 2 for integers, 3.0 +. 4.0 for floating point).
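For contrast with OCaml's `1 + 2` / `3.0 +. 4.0` distinction, here's a small sketch of how Haskell overloads numeric operators through the Num type class (the function names are illustrative):

```haskell
-- One (+) works at every numeric type via the Num class;
-- the type signatures select the instance.
addInts :: Int -> Int -> Int
addInts x y = x + y

addDoubles :: Double -> Double -> Double
addDoubles x y = x + y

main :: IO ()
main = do
  print (addInts 1 2)        -- 3
  print (addDoubles 3.0 4.0) -- 7.0
```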

In terms of community I think that both languages are about equal. Haskell has active and beginner-friendly mailing lists and an IRC channel (one of freenode's most crowded).

Overall, I believe that Haskell is more "practical" than OCaml in nearly every domain. The only area where OCaml might be preferable is high-performance numeric code, and that might not be the case for much longer once GHC gets its new native code generator.

Haskell's popularity has been exploding in the past few years, with tons of new libraries and books. There have always been myths about Haskell that have given it a perception of being impractical, similar to Lisp. But the truth is that Haskell is a practical language, and depending on what you need to do, it can even compete with other practical languages like Java and Python.


First off, thanks for disagreeing constructively. I was hoping somebody would argue with sources that Haskell has gotten better for practical use. It's a really cool language. You've convinced me to give it another look when I have time.

> OCaml build systems usually are a mess of makefiles or autohell

VERY true. OCamlMakefile (http://www.ocaml.info/ocaml_sources/ocaml-make-6.28.1/README) helps, somewhat.

> In terms of community I think that both languages are about equal.

Really? I've gotten the impression that the Haskell community is larger, or at least writes quite a bit more. (And dons on #haskell is really helpful.)

OTOH, OCaml seems to be more portable. Porting GHC to a new platform seems quite a bit more difficult than porting OCaml; its separation of byte- and native compilation helps considerably. (This may or may not be as important to you.) Also, OCaml doesn't need to do as many advanced things for optimization.

All other things aside, I still think that Haskell is probably easier to learn if you are familiar with OCaml first (which was the original question): Haskell requires you to understand several new ideas upfront before you can do much of anything, whereas in OCaml you can pick up functional programming and how to work with the type system before learning to work with lazy evaluation and monads / purity. (You can also use monads in OCaml, Scheme, etc., of course.)


> In my experience, learning OCaml is a good way to become accustomed to the type system Haskell uses

I'd agree with this. In addition I'd recommend "The Little MLer" as a great book on how to think in terms of types.

Edit: didn't read further down. Seems as if it is already mentioned :)


Definitely. I'd recommend the book even if you don't use OCaml or Haskell, really; it made me a lot more conscious of how types can help think out a problem. It's one more technique for your conceptual toolbox.

It's also a fairly fast read. It's in the same format as _The Little Schemer_, FWIW.


Try call/cc when you want a challenge with Lisp (i.e. Scheme).


Zones, ZFS, and DTrace are the main differentiating technical advantages for Solaris.


ZFS is now available on FreeBSD, which also features jails. I haven't kept up with its DTrace support in a while, though.


The version of FreeBSD I installed to try ZFS loses all data on the volume after every reboot. It's like having a 146g RAID 1 SCSI RAM disk. :/

Granted, I know I picked a random version (7.0 RC1), but the difference in maturity between the platforms is astonishing. This isn't a big deal when it's something that at worst takes a reboot to recover from (say, an SMP-related crash), but with filesystems it's a little more crucial that the data comes through.


Zones are inferior to almost every other OS's virtualization/isolation strategy, a fact that Sun seems to be recognizing now. If virtualization is a key part of your IT strategy --- and it is for most large enterprises --- Solaris isn't your OS.


Only someone who hasn't actually USED zones would say that.

Sun never said that zones were the ultimate in virtualization - which is why they are coming out with XVM (their Xen implementation, basically).

I have made plenty of actual greenbacks with zones.


I've done more than "use" zones, but I'm not going to go into details; you can infer what you'd like from my background.

On the other hand, you didn't actually make any arguments here. All you did was assert that I'd never used the zones feature, make a point about something unrelated to zones, and then say that you made money with zones. Nobody is disputing that there is money to be made selling people Solaris instead of Linux.

I would at this point be more comfortable running applications under FreeBSD jails than zones, but, for obvious reasons, I would be much more comfortable running those same applications under virtualized Linux.


OK, I will expand on my original comment to give you a better idea of my perspective.

Zones are a useful tool because they provide the needed amount of separation (for me anyways) without a lot of overhead. They are portable to whatever the Solaris kernel is ported to (x86, x64, SPARC, and there is a PPC port being worked on).

A zone with /usr, /opt, etc. mounted read-only in the zone, is more secure (assuming no security holes to bypass the read-only property) than a non-zone Solaris system, yet it works exactly the same way. I can compile something in the global (root) zone and when installed under /usr it is available in every zone, and if there is a security hole that involves writing to e.g. /usr/bin/ping , it will fail.

Note that the kernel only loads one copy of each library, no matter how many programs reference or use it; this saves RAM compared to e.g. VMWare, and may reduce disk accesses if you have short lived processes as the library may already be loaded and resolved by the link editor.

You could duplicate this, of course, under any OS with a combination of NFS read-only mounts (loopback or over ethernet) and jails, although the administration overhead would be higher.

My reference to XVM (Sun's customized Xen) was to point out that if you don't like zones, you can still use "full" virtualization from Sun; it is not an either/or choice.


NFS is a great example of something that has subtle, bad security interactions with zones.


In my experience, Solaris zones and VMWare's virtualization offerings are the most solid and reliable virtualization solutions available (VMWare is quite pricey though). Zones are the best game in town when you need OS level virtualization.

There are plenty of things that suck about Solaris -- zones aren't one of them.


You're almost making an apples-to-oranges comparison here, albeit a comparison I begged you to make.

Solaris Zones aren't virtualization. They're an isolation feature that tries to find all the shared kernel namespaces between applications to present the illusion of multiple machines. "Zoned" applications share a running kernel instance, and share a number of kernel namespaces that are not carefully isolated.

VMWare images do not share kernels. Their entire running state can be frozen and shipped across a network (or marshalled out to an iSCSI SAN) on demand.

I think Solaris Zones are a pretty crappy answer to "virtualization". It's basically just a stronger version of chroot. It's inferior to VMWare-style virtualization on security (all zones on a single Solaris instance are vulnerable to the same kernel flaws, and kernel flaws have been the majority of Solaris security issues over the past several years), and they're inferior on management and logistics.


As others have said, VMWare is virtualization and Zones is not. Solaris Zones provides a high degree of isolation that is sufficient for the vast majority of cases that Xen is being used for, with virtually ZERO runtime overhead, simple and fast configuration, and streamlined maintenance. If you need more isolation than Zones offers then you probably have to skip Xen and go with a fully virtualized solution like VMware or similar. The cost of that extra isolation is a notable increase in runtime overhead, setup effort, and maintenance cost.


Things an enterprise gets with Xen/VMWare that they don't get with Zones:

* A security model that extends through the kernel

* A performance and resource sharing model that extends through the kernel

* Push-button migration

* Support for anything other than Solaris

* "Hardware"-level suspend/resume

* Centralized management

I can go on and on about the security implications of Zones (and Jails) --- I don't think this model is well thought-through. But on the feature-list alone, Zones (and Jails) are a pale shadow of what the "mainstream" OS's offer today.


What do you mean by "security model that extends through the kernel" and "A performance and resource sharing model that extends through the kernel"?

I don't believe that most people need the suspend/resume/migration feature. If you have a cluster that can handle system failure then you can easily migrate a zone the same way you would deal with a failed system.

Anyway, I agree that VMWare/Xen offers important features for pausing and moving running applications. I use those features of VMWare every day. But, most people will do very well with Zones because they don't need and won't use and didn't learn and don't want to pay for the extra features that VMWare offers.


Again: any Solaris kernel vulnerability likely allows a non-root zone to compromise the root zone. There are other real and potential problems with pretending that kernel security is just about the filesystem namespace and some additional access control on the process table, but "one kernel memory corruption bug costs you the whole server" is a simple enough security problem to get your head around.

VMWare does not have this problem --- you need both a kernel fault (not rare) and a hypervisor fault (quite rare) to take over a whole VMWare server.

You can say "most people don't need" the features Zones don't offer, but I see my clients using them, and expect they'd mention them immediately if asked why they use VMWare.

Very few people will do well with Zones, because very few people still deploy Solaris. The choice between shelling out for Sun gear and shelling out for ESX is a no-brainer.


What are the major flaws in Buzzword? I find it vastly superior to all its competitors.


First off, it's slow. Not ultraslow, but slow enough to be considered impractical for when I feel the urge to write something.

Then, it's online and there's no good way to access it. I set it up to work with Prism, once upon a time, but Buzzword couldn't remember my password and I had to log in each time.

Formatting messes up when converting between any other format and Flash. That was an annoyance.

Finally, getting a hard copy of your documents is tedious if you have a lot of them. Docs has a gadget for mass downloading; Buzzword can't be modified in that way.


I'm now very curious -- what is Facebook using Erlang for?

