Hacker News | optymizer1's comments

You could even go further and assume all user resources are logical extensions of sleepy.Resource:

  type HelloResource struct {
    sleepy.Resource
  }
This makes it clear that you're inheriting everything from sleepy.Resource, so you can provide reasonable default implementations for all resource methods, not just HTTP method handlers, e.g. getBodyAsJSON(), redirect(resource), etc.

Also, depending on how your mux works, you could store app-level data in sleepy.Resource, to be accessed by all resources:

  sleepy.Resource.DB
  sleepy.Resource.Session
  sleepy.Resource.Root
and other things I can't think of right now.


The Russian first names (transliterated, not Cyrillic) are very strange: "Innocent Korovin" - really? The Russian (Cyrillic) ones seem more realistic, though.


So... it's like prefetching.


Here's some advice: browse job posts you'd be interested in. Look at the required/preferred qualifications. Start learning those. When ready, apply for those kinds of jobs.

Also, find out what other libraries/APIs they might use. For example, the company could be using some niche software that's nice to know but wasn't important enough to list in the job description. The HR people won't care if you say you know X, but you might impress a fellow developer during the interview, and that's a plus.


This is, from my perspective, the only advice that matters when discerning what to learn while seeking employment. Look at businesses and people that you admire and see what tools they're using and see what they're looking for. It's one thing to see that there are a lot of (potentially boring and poorly paid) jobs out there using some technology and another to see that you can do the kind of work you want to and find a job you'll enjoy if you learn something else.


This is actually excellent advice. It will also drive you to recognize what you need to be able to demonstrate to actually get the job after you become proficient in the language / framework you select.


I'm not completely on board with #4 (initialization of variables). I agree that 'declaring and then assigning a value' _within the same block_ vs 'initialized declaration' has no speed gains, only downsides.

However, if the assignment can happen in a _different block_ (say, inside an 'if' block), you could save one memory write, depending on how often the if condition is satisfied.

This obviously optimizes for speed at the expense of maintainability, and it's up to the programmer to make intelligent trade-offs, but the fact is that one method is faster than the other.

Silly example that decrements 'a' repeatedly if condition is non-zero:

   //temp = 0 inside if block
   int dec(int a, int condition) {
       int temp;

       if (condition) {
           temp = 0;  //<-- memory write depends on condition

           //compute something here in a loop
           while (temp < a) {
               a -= (temp++);
           }
       }

       return a;
   }

   //temp = 0 at declaration
   int dec2(int a, int condition) {
       int temp = 0; // <-- memory write always executed

       if (condition) {
           //compute something here in a loop
           while (temp < a) {
               a -= (temp++);
           }
       }

       return a;
   }

We can disassemble the output to verify that dec() will only write to memory if the condition was satisfied (see 400458), while dec2() will always write into temp (see 400482):

    # <dec>
    400448:  push   %rbp
    400449:  mov    %rsp,%rbp
    40044c:  mov    %edi,0xffec(%rbp)
    40044f:  mov    %esi,0xffe8(%rbp)
    400452:  cmpl   $0x0,0xffe8(%rbp)  # if (condition)
    400456:  je     400473 <dec+0x2b>
    400458:  movl   $0x0,0xfffc(%rbp)  # temp = 0 <-- depends on condition
    40045f:  jmp    40046b <dec+0x23>  # while
    400461:  mov    0xfffc(%rbp),%eax
    400464:  sub    %eax,0xffec(%rbp)
    400467:  addl   $0x1,0xfffc(%rbp)
    40046b:  mov    0xfffc(%rbp),%eax
    40046e:  cmp    0xffec(%rbp),%eax
    400471:  jl     400461 <dec+0x19>  # loop
    400473:  mov    0xffec(%rbp),%eax  # return a
    400476:  leaveq
    400477:  retq

    # <dec2>
    400478:  push   %rbp
    400479:  mov    %rsp,%rbp
    40047c:  mov    %edi,0xffec(%rbp)   
    40047f:  mov    %esi,0xffe8(%rbp)
    400482:  movl   $0x0,0xfffc(%rbp)   # temp = 0 <-- always executed
    400489:  cmpl   $0x0,0xffe8(%rbp)   # if (condition)
    40048d:  je     4004a3 <dec2+0x2b>
    40048f:  jmp    40049b <dec2+0x23>  # while
    400491:  mov    0xfffc(%rbp),%eax
    400494:  sub    %eax,0xffec(%rbp)
    400497:  addl   $0x1,0xfffc(%rbp)
    40049b:  mov    0xfffc(%rbp),%eax
    40049e:  cmp    0xffec(%rbp),%eax
    4004a1:  jl     400491 <dec2+0x19>  # loop
    4004a3:  mov    0xffec(%rbp),%eax   # return a
    4004a6:  leaveq 
    4004a7:  retq
The above code was compiled with gcc 4.1.2 on amd64/linux. gcc -O2 and gcc -O3 completely do away with the 'temp' variable and generate fewer instructions.


I'd suggest that, if a variable with an undefined value is a valid state for the program, the declaration is in the wrong place. How about this alternative?

   int dec(int a, int condition) {
       if (condition) {
           int temp = 0;

           //compute something here in a loop
           while (temp < a) {
               a -= (temp++);
           }
       }

       return a;
   }


That's the obvious next step, but it's not valid C89 and that may or may not be relevant for people doing embedded programming. Personally, I'm all for using newer standards and declaring variables locally when it makes sense. Sometimes I like to see all variables at the top of the function though.


I'm not particularly familiar with x86-64 calling conventions. Where is the code that sets up the stack frame? Would this code look different if temp were created inside the if() block, not just initialized there?


In C, variables have function-level storage. I don't recall the actual term from the C standard, but the storage for _all_ variables declared in any blocks inside a function is allocated at the beginning of the function, when the stack frame is set up. So there is 'no way' to create 'temp' only inside the if block.


Okay. I knew variables had block-level scope. I've read some of the ISO C standard, but hadn't caught the part where variables have function-level storage. I was wondering because, years ago, I read some disassembled code that did lots of stack allocations in the middle of functions, and not just for the purpose of calling another function.

Can you point to a reference showing that variables have function-level storage? The closest I can find in the C standard draft I'm looking at is 6.2.4.6, which suggests block-level lifetimes for variables: "For such an object that does not have a variable length array type, its lifetime extends from entry into the block with which it is associated until execution of that block ends in any way."


I hate to be that guy, but what's the big deal? Where's the full disclosure? It looks like they're just documenting the API, which is not really disclosing much. Anyone can fire up the Burp Suite proxy and inspect the HTTP requests and responses from their phone.

Now onto their PoC. So they don't have rate limiting on some API requests. That's pretty dumb for a service with a public API, but in my experience most websites don't rate-limit requests, because it's always a "let's toughen up security" afterthought. I remember GAE having some anti-DDoS measures, so they may be relying on that while growing the business.

The bulk registering of user accounts is more serious, though, and could be fixed (to some extent) fairly easily with a captcha. This may be worthy of a tweet. Instead, Gibson listed all of SnapChat's APIs, even though most of them were irrelevant to the PoC, and slapped 'Full Disclosure' on it.

This is high-school-level security research. We were finding the same 'exploits' in high school. You could probably find these in any service that's just starting out. Glad to see that's the best Gibson could do. If I were Snapchat, I'd fix the two issues and then thank Gibson for spending the time to create an API page for SnapChat.


This comment makes you sound both arrogant and uninformed. You should rethink your tone.

First off, security exploits are not measured in how hard they are to pull off, they're measured in overall impact. This is because the point of security is to prevent such exploits, not to wave your dick around like an idiot. The point of this post is that there are very serious exploits in the service. That justifies the post being on the frontpage regardless of how hard they were to find. (Hint: the fact that you call it a "high school" exploit does not negate the fact that it's a serious vulnerability.)

Second off, Snapchat had a long time to fix this and they didn't. Maybe you would have "just fixed it" but the fact that they didn't is also newsworthy and totally justifies this post being here.


> This comment makes you sound both arrogant and uninformed. You should rethink your tone.

I see nothing wrong with the tone of the comment you replied to. The comment disputes the significance of the linked article and does so with examples explaining why. They could be wrong, but it's hardly arrogant.

Your tone, on the other hand, is inexplicably combative.


The original comment says things like "anyone" can do this, and it's a "high school" level exploit.

The discussion is about security vulnerabilities. Comments berating the people who worked on this because they didn't pick a problem that's manly enough are completely irrelevant to the discussion at hand. They only serve to reaffirm the commenter's belief that they are smart.

Furthermore, while it is obvious the commenter doesn't believe the article is worth discussing, they certainly did not give good reasons for believing this. The whole comment basically makes the point that the summary is too elementary to be taken seriously, which is ridiculous enough to deserve someone calling it out -- if it's so elementary, then why wasn't the find-friends problem fixed? That only strengthens the case for this article.


You are disregarding the fact that the vast majority of attacks ARE, in fact, this simple. Your post also fails to mention what is central to Gibson's disclosure, namely the instructions for finding the phone numbers of SnapChat users. So your post sounds really biased.


That's not an "exploit"; that's just how all these services work. For a large service there really isn't a good way to implement private set intersection, as would be required here - all the techniques that might work are deep into academic-paper-only territory; forget about finding a convenient open-source implementation lying around on GitHub, let alone a mature one.

The other one, bulk registration of accounts, is also not an exploit using any conventional definition of the word. I spent years fighting bulk account signup abuse at Google. When we failed it was not an exploit in our system, because that implies you can provide some kind of cast-iron security guarantees on par with cryptography; you can't, all you can do is rate limit and try to detect bogus accounts. It's like finding a way to send spam and calling it an exploit.

The poor crypto is disappointing but hardly unique: the field of crypto in general has given people poor tools to work with. Things like NaCl are barely known, whereas lower-level primitives are supported out of the box by basically every OS/platform out there, with little or no guidance on the best way to use them.


Are you really saying that it's not an exploit to be able to get a username from a phone number? They can brute-force every single possible cellphone number (and they proved that it doesn't even take that long). How is this not important?

Do you know what a dox is? Do you know how easy it is to put one together when you have some basic information about someone? With this new information you can now find a phone number based on a username... more information about someone = easier to dox them.

The bulk registration is not really an exploit, you are right, but it's a good way to hide the other exploit behind fresh accounts. Granted, they don't actually need to hide it, because even now that they've said they're doing it, Snapchat doesn't rate limit.


Totally agree. Discovering and documenting an API is a far cry from an exploit.

The document also concerns itself with SnapChat's relationship with investors and with the founder personally, which is odd in a security paper.

GibSec's other work is another SnapChat analysis, which I find odd. Maybe he/she wants to work there? :)


We don't :) (but we'd be happy to take Snapchat's money and help them out!)

We documented two exploits, which are exploits because we are exploiting code that has been incorrectly implemented.

We also noted that Snapchat must have lied to Goldman Sachs (is this what you were referring to?), as we noticed during our research that there is no mention of gender in the protocol.

Does that answer any questions?


On a related note, 10 years ago, I was asked to go to MIT and take a math exam instead of a student. Obviously, I said no, but I did not expect an MIT student to do that. He was more of the business type though.


I find the JSON benchmark a bit misleading. I've posted this before, but I'll say it again: JSON serialization in Go is slow (2.5x slower than Node.js, for example [1]). The web server, however, is very fast. When they measure webserver+JSON, Go wins because of its web server, not because it serializes JSON faster. If you want to parse a lot of JSON objects in one request (or one script), or if you have a large JSON object to parse, Node.js will outperform Go.

That said, I rewrote my app in Go and I'm very happy with the performance, stability and testability. The recently announced Go 'cover' tool is very useful and a breeze to use.

[1] Here are my benchmarks: https://docs.google.com/spreadsheet/ccc?key=0AhlslT1P32MzdGR... (includes codepad.org links to the source for each benchmark)


I optimized the Go JSON serialization in Go 1.2. See https://code.google.com/p/go/source/detail?r=5a51d54e34bb ... it went from 30% to 500% faster. It uses much less stack space now, so the hot stack splits are no longer an issue (also Go defaults to 8KB stacks for new goroutines now).


This is pretty neat. Functions that read from or write to http.Request.Body should be able to accept a Context object, assuming they take a Reader/Writer. If they take an http.Request, you're going to pass in Context.req, similar to how custom.Request would contain a custom.req field.


Interesting. Now, what does SetUser(r, "dave") do?

Since you're passing in 'r', it must keep a map of all requests and their data, i.e. map[*http.Request]string ("dave" being the string in this case).

This map should be protected by a mutex because it could be written from multiple goroutines.

Is this the case? If so, how do we avoid contention on this mutex?


No, 'r' is request-specific; it's owned by the goroutine that's dispatching the current request.

If you had actual shared state that needed to be mutated by concurrent handlers, the idiomatic solution would be to park it behind a channel on its own goroutine; that's why goroutines are so cheap: so you can allocate them to problems like this. If you want your solution to be general and unfussy, you'd make it a channel of closures; you'd just pass whatever mutating code you want to run to the goroutine.


Actually (https://github.com/gorilla/context/blob/master/context.go):

  var (
          mutex sync.Mutex
          data  = make(map[*http.Request]map[interface{}]interface{})
          datat = make(map[*http.Request]int64)
  )

  // Set stores a value for a given key in a given request.
  func Set(r *http.Request, key, val interface{}) {
          mutex.Lock()
          defer mutex.Unlock()
          if data[r] == nil {
                  data[r] = make(map[interface{}]interface{})
                  datat[r] = time.Now().Unix()
          }
          data[r][key] = val
  }
The map is needed because 'r' is not actually (and cannot be) modified, so for a later handler (for the same request) to access the data, it must be stored in this map, where 'Get' can retrieve it. And since the map is global, you need a mutex.

As a side note, my understanding is that it is not necessarily un-idiomatic to use a mutex in Go, particularly for shared state. See https://code.google.com/p/go-wiki/wiki/MutexOrChannel


> As a side note, my understanding is that it is not necessarily un-idiomatic to use a mutex in Go, particularly for shared state. See https://code.google.com/p/go-wiki/wiki/MutexOrChannel

It's definitely not unidiomatic (heck, one of the Go team members inspired gorilla/context). It's not ideal, but at the same time a context map a) is simple, b) does not impart significant complexity on middleware, and c) should still perform well, even with a fair bit of contention. Map access is pretty fast, and most of the time you're only storing small things in it.

I'd be curious to see/benchmark the results of a more complex, less map-reliant solution vs. a context map: my gut feel is that the map route wouldn't have any problems hitting 10,000req/s on a small 2GB VM. Most of the time, the context map won't be your bottleneck.

