Hacker News | omarkj_'s comments

There are other reasons for rewriting, like getting to more easily maintainable code, or because a previous solution has simply outlived its usefulness.

Also, C is not the only language worth rewriting in.


"Also, C is not the only language worth rewriting in" Agreed!

I would really like to see someone write a node.js and Go back-end for the same front-end and do a head to head comparison.

As someone who spent a fair amount of time rewriting node.js prototypes in Go, I'm probably biased, but I feel like JavaScript is a much less maintainable language. Perhaps it was the original node.js implementations (I don't think it was), but the Go versions were always faster, used less memory, and IMHO were more readable.


Any particular lessons learned rewriting from node to Go?

I've got a small but growing node/socket.io app that I figured I'd have to rewrite in Go one day if I really wanted it to scale.


Nothing too unexpected. Off the top of my head, I've noticed:

* Use Go tip; you can grab a snapshot and review all the open issues for that snapshot (most are enhancements).

* Like any refactor, doing it sooner as opposed to later is less work :)

* Do some "from scratch" Go projects before doing refactor projects, to get your legs under you (if they are not already there).

* Write Go in Go, not C/Python/Java in Go. This is harder than you think when you're getting started, but if you ask for help and people tell you you're fighting the system, carefully consider their advice.

* A lot of the Go community likes to use single-letter variable names in contexts like receivers and struct state (just look at the stdlib). Buck the system: don't do that, use short camelCase names. The next guy (or you, in six months) will be glad you did.

* If you have a Java/C++ background you might often write a single-threaded version of a daemon first and multithread it later; this is generally an unnecessary step in Go.

* The Go versions really are not much larger (LOC)

* There is lots of useful Go code on GitHub (don't be afraid to try it).

* If you are doing front-endy kinds of stuff, supplement "net/http" with Gorilla where needed rather than rolling your own. http://www.gorillatoolkit.org/

* "go tool prof" is a great tool, know how to use it and its top20 / web commands. Even if you don't feel the pain, use it and you will learn what things you do are expensive and it will keep trouble from sneaking up on you.

* If you are using a SQL-based store, use a driver that implements the interfaces in "database/sql" rather than providing its own interface. This will make your life very simple if you need to migrate between MySQL <-> Postgres, etc.

* LiteIDE is a nice, lean, cross-platform Go IDE that includes syntax highlighting, autocomplete (with gocode) and debugging support. The only thing I had to do was write my own syntax highlighting theme, based on Solarized, because I thought the included ones were gross.


From R15B01 you can actually use "plugins" for process registration using the {via, Module, Name} syntax.

Gproc supports this, and using it you can do stuff like:

  gen_server:start_link({via, gproc, {n, l, {Your, "very", <<"complex">>, key}}}, Module, Args, Opts)

You can also roll your own registry module.

Docs: http://www.erlang.org/doc/man/gen_server.html#start_link-3


Are you crawling for this data? Also, since currency is a tricky thing to price (almost any bank/broker can trade currencies) how do you select the "most correct" price?

Great project btw!


Yeah, currently collecting from Yahoo! Finance, which has a fairly accurate but hard-to-use API. Advantage that OXR offers is that it's super easy, and responses are about 10x faster and 350x smaller.

Pretty soon it'll be collecting from other services too and taking averages, with a few slightly more complex moving parts, as well as calculations and statistics - that's where we need to beef up the server and start making some dough though.

Also very soon we're starting to collect other types of 'freely available' trade and economics data, and adding value to it in other ways!


Hi everybody,

It's time for the second Spawnfest (48 hour programming contest for Erlang, much like Rails Rumble and Node Knockout) and this year it is scheduled for 7th and 8th of July 2012.

Our committee is currently working on sponsors and prizes. We've secured some of the big names in the Erlang community as judges.

We're very excited about this opportunity to show the world what Erlang/OTP is capable of! The contest is not limited to web applications; in fact, we'll have nominees in different categories.

You can register your team at http://spawnfest.com/


Some financial services have already moved to the cloud; NASDAQ has used Amazon S3 for at least one product (NASDAQ Market Replay) since 2008.

When it comes to banking the picture is a bit different. I know that at least one small provider of banking middle-office solutions, Five Degrees (www.fivedegrees.nl), is working on using Azure for some of its services.

Interesting times. As an IT developer in the financial industry, I try to follow this closely.


According to the article no customer data will be stored at Google's datacenters.


It's hard to imagine an environment where users are using spreadsheets, documents and email on Google's servers and not leaking some customer data into those tools.


Yeah this is only about office software.


And since the article says "it's not about costs", it's for sure about cost ;-)


Google will probably be bankrolling a significant percentage of the transition project.

