Stories from May 13, 2014
1.Introducing the WebKit FTL JIT (webkit.org)
491 points by panic on May 13, 2014 | 96 comments
2.‘Alien’ creator H.R. Giger is dead (swissinfo.ch)
418 points by lox on May 13, 2014 | 67 comments
3.iMessage purgatory (adampash.com)
386 points by mortenjorck on May 13, 2014 | 170 comments
4.Octotree: the missing GitHub tree view (Chrome extension) (chrome.google.com)
362 points by yblu on May 13, 2014 | 106 comments
5.Computers are fast (jvns.ca)
315 points by bdcravens on May 13, 2014 | 154 comments
6.Source code of ASP.NET (github.com/aspnet)
291 points by wfjackson on May 13, 2014 | 99 comments
7.Big Cable says investment is flourishing, but their data says it's falling (vox.com)
290 points by luu on May 13, 2014 | 53 comments
8.Xeer (wikipedia.org)
246 points by mazsa on May 13, 2014 | 117 comments
9.Europe's top court: people have right to be forgotten on Internet (reuters.com)
247 points by kevcampb on May 13, 2014 | 205 comments
10.Pervasive Monitoring Is an Attack (tbray.org)
250 points by kallus on May 13, 2014 | 27 comments
11.MIT's Scratch Team releases Scratch 2.0 editor and player as open source (github.com/llk)
239 points by speakvisually on May 13, 2014 | 61 comments
12.Introducing Firebase Hosting (firebase.com)
220 points by jamest on May 13, 2014 | 75 comments
13.3D Video Capture with Three Kinects (doc-ok.org)
188 points by phoboslab on May 13, 2014 | 40 comments
14.Ask HN: Can we get a "Show HN:" section
191 points by pumpkinattwelve on May 13, 2014 | 34 comments
15.Babun – A new Windows shell (babun.github.io)
180 points by blearyeyed on May 13, 2014 | 108 comments
16.Today at 16:53:20 GMT, it'll be 1400000000 in Unix time. (epochconverter.com)
170 points by ozh on May 13, 2014 | 57 comments
17.Lectures Aren't Just Boring, They're Ineffective, Too (news.sciencemag.org)
163 points by robg on May 13, 2014 | 126 comments
18.GitHub Pages with a custom root domain is slow (instantclick.io)
156 points by dieulot on May 13, 2014 | 87 comments
19.Found after 500 years, the wreck of Columbus's flagship the Santa Maria (independent.co.uk)
146 points by adventured on May 13, 2014 | 52 comments
20.A Test for School Reform in Newark (newyorker.com)
141 points by savorypiano on May 13, 2014 | 232 comments
21.Metaprogramming for madmen (fgiesen.wordpress.com)
134 points by leeoniya on May 13, 2014 | 12 comments
22.Scaling SQL with Redis (cramer.io)
125 points by mclarke on May 13, 2014 | 35 comments
23.Understanding SaaS: Why the Pundits Have It Wrong (a16z.com)
123 points by moritzplassnig on May 13, 2014 | 27 comments

1/4 second to plow through 1 GB of memory is certainly fast compared to some things (like a human reader), but it seems oddly slow relative to what a modern computer should be capable of. Sure, it's a lot faster than a human, but that's only 4 GB/s! A number of comments here have mentioned adding some prefetch statements, but for linear access like this that's usually not going to help much. The real issue (if I may be so bold) is all the TLB misses. Let's measure.
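For reference, the inner loop in question is presumably something like this (a reconstruction on my part -- the actual bytesum source isn't shown here, but any straightforward byte-summing loop behaves the same way for this measurement):

```c
#include <stdint.h>
#include <stddef.h>

/* Sum every byte of a buffer, wrapping at 8 bits.  This is the kind
   of trivially simple sequential read that should be limited only by
   memory bandwidth -- which is what makes the 4 GB/s result suspicious. */
uint8_t bytesum(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}
```

With the file mmap'd in, every new 4 KB page touched by this loop costs a page fault the first time and a TLB miss thereafter, which is where the stalls below come from.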

Here's the starting point on my test system, an Intel Sandy Bridge E5-1620 with 1600 MHz quad-channel RAM:

  $ perf stat bytesum 1gb_file
  Size: 1073741824
  The answer is: 4
  Performance counter stats for 'bytesum 1gb_file':

  262,315 page-faults               #    1.127 M/sec
  835,999,671 cycles                    #    3.593 GHz
  475,721,488 stalled-cycles-frontend   #   56.90% frontend cycles idle
  328,373,783 stalled-cycles-backend    #   39.28% backend  cycles idle
  1,035,850,414 instructions              #    1.24  insns per cycle
  0.232998484 seconds time elapsed
Hmm, those 260,000 page-faults don't look good. And we've got 40% idle cycles on the backend. Let's try switching to 1 GB hugepages to see how much of a difference it makes:

  $ perf stat hugepage 1gb_file
  Size: 1073741824
  The answer is: 4
  Performance counter stats for 'hugepage 1gb_file':

  132 page-faults               #    0.001 M/sec
  387,061,957 cycles                    #    3.593 GHz
  185,238,423 stalled-cycles-frontend   #   47.86% frontend cycles idle
  87,548,536 stalled-cycles-backend     #   22.62% backend  cycles idle
  805,869,978 instructions              #    2.08  insns per cycle
  0.108025218 seconds time elapsed
It's entirely possible that I've done something stupid, but the checksum comes out right, and the 10 GB/s read speed is getting closer to what I'd expect for this machine. Using these 1 GB pages for the contents of a file is a bit tricky, since they need to be allocated off the hugetlbfs filesystem, which does not allow writes and requires that the pages be allocated at boot time. My solution was to run one program that creates a shared map, copy the file in, pause that program, and then have the bytesum program read the copy that uses the 1 GB pages.

Now that we've got the page faults out of the way, the prefetch suggestion becomes more useful:

  $ perf stat hugepage_prefetch 1gb_file
  Size: 1073741824
  The answer is: 4

  Performance counter stats for 'hugepage_prefetch 1gb_file':
  132 page-faults               #    0.002 M/sec
  265,037,039 cycles                    #    3.592 GHz
  116,666,382 stalled-cycles-frontend   #   44.02% frontend cycles idle
  34,206,914 stalled-cycles-backend     #   12.91% backend  cycles idle
  579,326,557 instructions              #    2.19  insns per cycle
  0.074032221 seconds time elapsed
That gets us up to 14.5 GB/s, which is more reasonable for a single-stream read on a single core. Based on prior knowledge of this machine, I'm issuing one prefetch 512B ahead per 128B double-cacheline. Why one per 128B? Because the hardware "buddy prefetcher" is grabbing two lines at a time. Why do prefetches help? Because the hardware "stream prefetcher" doesn't know that it's dealing with 1 GB pages, and otherwise won't prefetch across 4K boundaries.
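Concretely, that scheme looks something like this (a sketch; the 512 B distance and 128 B stride are tuned to this particular Sandy Bridge box, not universal constants):

```c
#include <stdint.h>
#include <stddef.h>

/* Byte-sum with one software prefetch issued 512 B ahead for every
   128 B (two cache lines) consumed.  Prefetching past the end of the
   buffer is harmless in practice -- the hint is simply dropped. */
uint8_t bytesum_prefetch(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i += 128) {
        __builtin_prefetch(buf + i + 512, 0, 0);  /* read, low locality */
        size_t end = (i + 128 < len) ? i + 128 : len;
        for (size_t j = i; j < end; j++)
            sum += buf[j];
    }
    return sum;
}
```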

What would it take to speed it up further? I'm not sure. Suggestions (and independent confirmations or refutations) welcome. The most I've been able to reach in other circumstances is about 18 GB/s by doing multiple streams with interleaved reads, which allows the processor to take better advantage of open RAM banks. The next limiting factor (I think) is the number of line fill buffers (10 per core) combined with the cache latency in accordance with Little's Law.


Thanks for the rational response, Tom. I hope this doesn't get buried (someone is going through and downvoting at least all of my comments to 0).

Blog posts are best when they are sensational, and I try not to overdo it. I think React has a lot of good ideas, but "revolutionary" is a strong word; "refreshing" is a better one. Regardless, I think React and Ember are the two best solutions out there right now, with quite different philosophies, and I'm happy that users have a choice.

Using rAF in my post was pretty much a hack. I think it was fun to take that and run with it. When you use React though, you don't actually do that: you use its `setState` method, or you use something like Cortex. If you look at my Cortex example, you do use setters and getters, which give you a way to be notified of data changes. Why don't we just use models like Ember? Because React still doesn't care how we model our data -- even if we have to call `set()` to trigger a paint update, what we get is the choice to use something like persistent data structures for our models.

=== I was completely wrong about Om: it does not continuously trigger rerendering/diffing every 16ms with rAF. It only uses rAF to batch rendering, so multiple repaints are throttled to a minimum of 16ms apart ===

The on-screen issue is interesting; I need to think about it more to see if we can actually leverage it in production apps. I think we can for large list views. You don't share scroll state, do you? I absolutely agree that too many JS apps are breaking the web, and I love that Ember has defaults to make that not happen. There is a grave danger in using React and not taking care to do things right.

I'm actually really, really happy about the idea of React and Ember being the 2 ways to choose to build webapps. I have the utmost respect and love for Ember; I think it does a lot of things right, and I wouldn't be surprised if things like routing were copied for libraries to use with React. <3


This is a really thoroughly researched post and jlongster has my gratitude for writing it up.

I have two concerns with this approach. Take everything I say with a grain of salt as one of the authors of Ember.js.

First, as described here and as actually implemented by Om, this eliminates complexity by spamming the component with state change notifications via requestAnimationFrame (rAF). That may be a fair tradeoff in the end, but I would be nervous about building a large-scale app that relied on diffing performance for every data-bound element fitting in rAF's 16ms window.

(I'll also mention that this puts a pretty firm cap on how you can use data binding in your app, and it tends to mean that people just use binding from the JavaScript -> DOM layer. One of the nicest things about Ember, IMO, is that you can model your entire application, from the model layer all the way up to the templates, with an FRP-like data flow.)

My second concern is that component libraries really don't do anything to help you manage which components are on screen in a way that doesn't break the URL. So many JavaScript apps feel broken because you can't share them, you can't hit the back button, you can't hit refresh and not lose state, etc. People think MVC is an application architecture, but in fact MVC is a component architecture: your app is composed of many MVCs, all interacting with each other. Without an abstraction to help you manage that (whether it's something like Ember or something you've rolled yourself), it's easy for the complexity of managing which components are on screen and what models they're plugged into to spin quickly out of control. I have yet to see the source code for any app that scales this approach out beyond simple demos, which I hope changes because I would love to see how the rubber hits the pavement.

It's always interesting to see different approaches to this problem. I don't think it's as revolutionary as many people want to make it out to be, but I've never been opposed to borrowing good ideas liberally, either. Thanks again, James!

27."If you're over 30, you're a slow old man" – Zuckerberg. He turns 30 tomorrow. (thedailywyatt.wordpress.com)
108 points by tfang17 on May 13, 2014 | 134 comments
28.‘No Place to Hide,’ by Glenn Greenwald (nytimes.com)
119 points by duck on May 13, 2014 | 41 comments
29.Popular fish oil study deeply flawed, new research says (cbc.ca)
104 points by fraqed on May 13, 2014 | 112 comments
30.How ACH works: A developer perspective – Part 2 (zenpayroll.com)
101 points by edawerd on May 13, 2014 | 34 comments
