
As a Googler (who does not work on Hangouts), my own personal opinion is that I fully agree with the EFF here:

"In public explanations of its dropping XMPP support, Google has said that it was a difficult decision necessitated by new technical demands. But even if this new protocol responds to different technical requirements, that shouldn't prevent the company from making it public and interoperable. Releasing the specifications for Google Hangouts would be a good first step. Releasing free/open source clients and servers should follow. It's clear that some of Hangouts' video features have been implemented in some very Google-specific ways. But that's no excuse for leading us toward a world where the only practical choices are proprietary chat clients and protocols."

I hope the specs are opened at some point. The Hangouts team probably has good reason at the moment to focus strictly on a core set of functionality and get it working with good UX on all platforms. People are complaining that features like voice calling and some Talk features are missing, but that's probably due to a focus on shipping something that works well first. It's inevitable that people will reverse engineer the client, as was done with MSN, Y! Messenger, and AOL. In the late 90s and early 2000s, I remember using reverse-engineered Java libraries that could talk to these services.

Google Wave was done in the opposite way, as was Open Social. They came with specs, but they did not focus on core user experience in the beginning. It's hard to win with open specs without getting consumer traction.



As another Googler, I mostly agree with you, but I also sympathize with the product and engineering teams.

They keep seeing very popular apps and messaging products that are completely proprietary and locked in, and that are able to move and innovate fast because they don't have to conform to an existing standard, or publish and support whatever internal protocol they do use.

Sure, Hangouts could publish its protocol like Wave did, but Wave was a complex mess of different protocol formats and transports, and publishing that as a "standard" was way premature. Who knows how good the current Hangouts protocols are, or how well they might play with real standards like WebRTC. It's probably too early for them.

I remain optimistic that these things can be standardized in the future.


I agree with you; history is littered with committee-designed specs that failed, but full of de facto standards from commercially successful products that were later standardized.

A lot of the features, like group video calling, first need their use cases polished by user trials; then, hopefully, they can either be implemented on top of existing standards or provide input to change those standards to satisfy the requirements.

OpenID/OAuth is an instructive example. The specs were very much driven by committee-oriented thinking, and the resulting UX implementations out of the gate were much more complicated than the Facebook Connect experience. I think focusing first on the "design" aspect of how users actually use it, and then extracting the relevant technical requirements into a spec, works better.


Here's how I see it: my company is currently using Skype, and for us Skype works really well. With Skype, for example, I can call any number in the U.S. for reasonable fees, something Google doesn't let me do, since Google Voice is not available in my country. Skype's clients for my Android phone and for my Ubuntu laptop also work well.

Basically, Google is late. And the only thing that would make us switch to Google's alternatives would be an open protocol (not necessarily approved by a standards body, but at the very least with the specs published). That's because we don't trust Skype, but we would trust a protocol that has public specs. Even more than that, we would trust an open P2P protocol that wouldn't drop our connections to our list of contacts if your servers die.

And no, you won't win us by shoving that awful chat box down our throats, like in GMail or in Google+. You have to do more and be more than Skype currently is. Because you're late. And you won't win with consumers either, because consumers are already using Facebook's video chats, which is integrated with Skype. And you're late to the party all over the board, even though putting moustaches on people's faces is kind of cool.

Speaking of Skype and open P2P protocols, since Skype was acquired by Microsoft, want to bet that they'll up-yours on this one?


> And no, you won't win us by shoving that awful chat box down our throats, like in GMail or in Google+. You have to do more and be more than Skype currently is. Because you're late. And you won't win with consumers either, because consumers are already using Facebook's video chats, which is integrated with Skype. And you're late to the party all over the board, even though putting moustaches on people's faces is kind of cool.

Free group video chat and Hangouts on Air (free YouTube exporting, basically) are enough for me to seriously consider Hangouts over Skype for most interactions.


Yeah, both are pretty cool to have; I was exaggerating a little, though for companies Skype Premium isn't that expensive.


Wave was built on XMPP, wasn't it? If I understand correctly, XMPP is fully extensible, which means Google can add whatever data they want and still interoperate on all the common parts, while defining the new parts separately.

Video seems like it could be out of band using WebRTC and initiated by new XMPP data.
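The extensibility being described is just namespaced child elements: a standard stanza carries an arbitrary payload, and clients that don't recognize the namespace ignore it, so the common parts stay interoperable. A minimal Python sketch (the `urn:example:hangout-ext:0` namespace and `session` attribute are purely hypothetical, for illustration):

```python
import xml.etree.ElementTree as ET

# A standard XMPP <message> stanza carrying a custom-namespace payload.
# Clients that don't recognize the namespace simply skip the child
# element, so plain chat still works everywhere.
message = ET.Element("message", {"to": "alice@example.com", "type": "chat"})
ET.SubElement(message, "body").text = "Want to video chat?"
ET.SubElement(message, "video-invite",
              {"xmlns": "urn:example:hangout-ext:0", "session": "abc123"})

xml_text = ET.tostring(message, encoding="unicode")
print(xml_text)
```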

Watching the Wave introduction presentation from Google I/O 2009 it's hard to buy that a new protocol was needed.

http://www.youtube.com/watch?v=v_UyVmITiYQ


I'm sorry, but I fail to see the argument. It's perfectly fine to have your own proprietary format, as long as you publish the specs so others can interact with it. Who said anything about the IETF having to sign off on it?


This is exactly right. It is a false choice to suggest that "open" means "slow". But that notion is fostered in part by folks who think "open" means "community driven."

It is perfectly reasonable to publish the specs of what you're doing with no commitment of support, simply for folks to see and possibly use at their own risk.

That said, companies usually don't do this for one of two reasons. Reason one is that it results in a 'noisy' side channel, as people who read the specs share their opinions, but then get bent out of shape if something they said, or think they said, appears in the spec later. The other is that there is always a sanitization/prep process to take essentially internal information about a service and create an external representation for consumption (even if that consumption is unsupported).

So the 'easy' choice is to not publish.


I certainly don't think open means community driven, and there are many more than two reasons companies don't publish unsupported specs.

One of the biggest is that they're just not ready for third-party clients in any fashion. The protocol may be _very_ rapidly changing. The auth workflow might be tied to some other system. And, regardless of use-at-your-own-risk messaging, if you publish, people will get mad when you break their clients. Finally, it's just a lot of work to go through if it's truly unsupported.


If your protocol is that rapidly changing, you shouldn't be launching anything to the public, period. Especially if you're replacing an existing service with it.


If it wasn't being used by the public, then the protocol wouldn't be rapidly changing. Internal use, even by a company as large as Google doesn't give enough variety of circumstance and use-case to drive the changes.


Why not?


Because as soon as you release the spec, people will start building stuff against it. This creates resistance to changing the protocol.


The problem is simple.

Once you've published it, you're under pressure to remain backwards compatible. But until you've got experience with it working in the real world, you don't know what really works. And then before publishing it, you need to go through a lot of work to make the published spec clear enough that someone else can reimplement from scratch and it will actually work.

For all of those reasons, there is pressure to not publish a spec until after the technology has matured.


This is mostly crap, to be honest. It's all about messaging.

I wrote most of the first version of the Wave federation spec. (I cannot remember how this happened, but I'm pretty sure it involved being in Sydney temporarily, alcohol, and Soren being Soren. I'm fairly sure I got the short end of the stick on this one.)

It was absolute and complete crap. But it kinda worked. We were clear it was a first draft, and folks understood it would completely and utterly change.

Within an hour of publishing it, the wonderful XMPP folks emailed me, asked me to fix a few things (like accidentally using some reserved namespaces/etc.), asked how they could help, pointed out the XEP process, and pointed out that we were doing some things others were working on as well.

With their help, it ended up as a "mostly sane" spec.

Realistically, pressure comes from improper messaging. If you tell people "here's our current thoughts, in spec form, this is all gonna change", you can do a lot of work in the open without too much customer pressure.

Now, will this slow adoption? Maybe, it depends on what kind of product it is.

But the argument that you get this pressure out of thin air is wrong. Pressure is mostly caused by self-inflicted wounds where people are not being clear about the state of the world as they see it.


That is an excellent point, and I'm glad it worked out for you. What I'd be more curious about is how well it would have aged had it not been discontinued. Comments below suggest that if you had longer to do it you would have done it differently. Which suggests to me that developers 5-10 years later would likely have wanted something quite different.

The reason that I'm cautious is that I've suffered through code that has to jump through hoops to remain compatible with something someone thought was a good idea 10 years ago. Things that seem like a good idea now don't always look that way a few years later. When you control both ends, you have a potential upgrade path to fix it. When you don't, you're stuck.


But for most of the APIs and protocols we are talking about here, 5-10 years later developers always want something different, regardless of whether you designed it right or wrong. That's just the nature of the speed of our industry.

I'm, of course, not suggesting that you should never try to design something that will last 5-10 years, but in most cases, you can only standardize what people want to use now, and hopefully design a way of extending the protocol to be able to standardize what people want to do in the future as well.

As you say, otherwise you have to try to remain compatible with something someone thought was a good idea 10 years ago. Usually that means it wasn't a good idea 10 years ago; it was something that, 10 years ago, they thought would be good in the future, and they predicted wrong.

Past that, sometimes you have to just accept that the useful design life span of some protocols is not going to be as long as some customers would like.


Are you talking about the base64-encoded protocol buffers here? The Google Wave Federation draft spec reminds me much more of XEP-0239[1] than anything else.

As a reminder, here's how to send a tiny update to a wave:

  <message type="normal" from="wave.initech-corp.com" id="1-1" to="wave.acmewave.com">
    <request xmlns="urn:xmpp:receipts"/>
    <event xmlns="http://jabber.org/protocol/pubsub#event">
      <items>
        <item>
          <wavelet-update xmlns="http://waveprotocol.org/protocol/0.2/waveserver" wavelet-name="acmewave.com/initech-corp.com!a/b">
            <applied-delta><![CDATA[CiIKIAoFCNIJEgASF2ZvenppZUBpbml0ZWNoLWNvcnAuY29tEgUI0gkSABgCINKF2MwE]]></applied-delta>
          </wavelet-update>
        </item>
      </items>
    </event>
  </message>
1. http://xmpp.org/extensions/xep-0239.html
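Out of curiosity: the applied-delta above is plain base64. Without the protobuf schema you can't fully decode it, but a quick peek at the raw bytes (a Python sketch) shows the participant address sitting right there as a length-prefixed string field:

```python
import base64

delta_b64 = ("CiIKIAoFCNIJEgASF2ZvenppZUBpbml0ZWNoLWNvcnAuY29t"
             "EgUI0gkSABgCINKF2MwE")
raw = base64.b64decode(delta_b64)

# Protocol buffers store strings as length-prefixed UTF-8, so they are
# legible in the raw bytes even without the .proto schema.
print(b"fozzie@initech-corp.com" in raw)  # → True
```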


This isn't necessarily true. Rust, for example (disclaimer: I work on Rust), was open very early on, and the language when it was released looks almost nothing like the language now. The various transitions have been painful, but the community is really understanding and helpful, and the language has a small but vibrant community as a result even though it's far from production ready.


It's this. You end up living with the consequences of your v1 design almost indefinitely, and that's a major tax on a team for what can sometimes be relatively minor benefits. Even in areas like Android where you _have_ to publish APIs, etc, I've seen the regret teams can have about a standard or API that seemed like a good idea at the time but turned into a major albatross.


IETF is probably hyperbole, but it's a valid concern. Publishing an open standard leaves you vulnerable to people expecting you to support it, especially in terms of backwards compatibility. If Google is agile and pushes a new version of the standard weekly (obviously supported by their infrastructure) that breaks backwards compatibility (which they can do because they control the entire ecosystem), they will be criticized for not taking openness seriously, because competing implementations will always be playing catch-up and thus never become viable competitors.

A completely good-faith (and perhaps naive) read on the situation is that they need to settle the protocol to the point where they can commit to some level of long-term support and give proper notice of breaking changes; then they'll release it.


The point isn't to get IETF to "sign it off". The idea is to get some extensive peer review from a wider audience, so that you don't end up with a widely-used protocol that's difficult to implement, missing important features, or both.

A single vendor working in a vacuum is how we end up with horrible kludges like the Public Suffix List. http://publicsuffix.org/

A lot of the failed IETF standards are ones where the IETF was constrained to trying to fix an already-deployed protocol after the fact.


We all remember Open Social, right?


I can sympathize with this notion, except that there are plenty of similar protocols that implement proprietary stuff on top of XMPP. Cisco UC, for example.


I'm not so optimistic. If their intention was to open and publish the protocol and specs, they'd announce this intention and avoid (or at least minimize) this PR backlash.


> They keep seeing very popular apps and messaging products that are completely proprietary and locked in, and that are able to move and innovate fast because they either don't have to conform to an existing standard, or worry about publishing and support whatever internal protocol they do use.

But the only feature that matters in a messaging app is knowing the person you want to contact also uses the app. In this case by choosing to "innovate fast" they are guaranteeing failure.


Relevant XKCD:

https://xkcd.com/927/


Google needs to bring WebRTC with VP9 and Opus to Hangouts already!


At least in the iOS app's license information, WebRTC and Opus are already mentioned.


I think this knee-jerk meme that "open standards" = "unable to innovate" is complete and utter hogwash.

Reminds me of a project posted to HN a while back that had taken someone a month. I replicated it in an evening using all open-source, open-protocol components instead of being stubborn and falling prey to NIH syndrome.


> ... necessitated by new technical demands

As a bystander, this comes across as "we aren't smart enough to figure it out", or "we are smart enough, but this is a business decision with ulterior ("evil") motives". Neither of those look positive for Google.

I don't expect multi-person video conferences to interoperate, but plain old text messages, which already work today, shouldn't be a problem.


Wanting to maintain a competitive advantage is not evil in and of itself. Google Hangouts is far from a monopoly and they are under no obligation to make every product completely open. Not only would it be competitively stupid, but it would also hinder product development to be so heavily tied to standards publication.


Google has a marketing approach of being different by not being "evil", being open & accessible, and having smart people work there. The steady drumbeat now is one of being no different than the other guys. Google held themselves up as taking the high road, which is why so many are calling them out on this (and why people don't call out other companies for the same antics).

The issue isn't even about making hangouts open, but rather about chat messages in hangouts also being able to relay with federated XMPP servers - something GTalk does today just fine and has done so for many years.


  | It's inevitable people will reverse
  | engineer the client, as was done with MSN,
  | Y! Messenger, and AOL
That's not the end-all be-all though. I remember that AOL was pretty aggressive about keeping unauthorized clients off of their OSCAR protocol (the original protocol was 'toc,' IIRC, and didn't support all of the 'advanced' features like Buddy Icons, idle times, etc). At one point, AOL's servers would require random chunks of the official AOL IM client's binary as a protection measure. Obviously you can get around this, but it's not something you can do out in the open (distributing AOL IM with, say, Pidgin/libpurple would be copyright infringement).


As an Android user, the new Hangouts app that replaced the Talk app feels like one step forward and four steps back (in terms of UX). The one step forward is that the chat windows look prettier. The four steps back, from most annoying to least annoying:

1) Chat windows give me no indication of the presence of the other participant(s) in a conversation. If they sign off, I have no idea they've signed off. They frequently get messages from the tail end of a conversation much later, which is confusing for them.

2) The contact list is in alphabetical order, regardless of their presence. Talk used to separate online contacts from offline ones. This was nice because I have a lot of contacts in my gmail account, and only a dozen or so ever sign in to chat. There's no obvious way to change the sort in Hangouts.

3) The UI is higher latency. If I touch a notification in my tray, I often have to wait upwards of two seconds for the Hangouts app to load. The various screens within the app also take longer to load.

4) I used to have a custom notification set when I received a message. Hangouts replaced it with its own and there's no obvious way to change it back.

I hadn't noticed Hangouts had abandoned XMPP. I find this rather unfortunate. Adium on my Mac still connects fine, but for how long?


> It's inevitable people will reverse engineer the client, as was done with MSN, Y! Messenger, and AOL.

I hope you're right, but to my knowledge, this has never been done with Skype, and to this day you can only call a Skype-using friend/colleague if you use the official, closed-source Skype client. So let's not be sanguine.


> I hope you're right, but to my knowledge, this has never been done with Skype

It was done, but Skype has a rule about this which they apply quickly: Press the cease and desist button.

I've been using IM clients with support for Skype, only to see it get dropped. Which makes it doubleplus sad that MSN (which was fairly interoperable) got ditched for Skype and not the other way around.

The only "supported" way for third-party IM clients to access Skype's network without risk of lawsuits is piggybacking on its COM API: have the user download and log into the official client, and have that handle all the actual communication.

You probably think "He's gotta be kidding", but sadly I'm not.

TLDR: not only does Skype not care to provide an open-source implementation, they are actively fighting one.


Or the closed-source plugin with an API (skypekit.exe).


I was so hopeful they would release that as a library so all the open-source IM clients could link to it and get Skype integration. They indicated, years ago, that it was going to be released, but as far as I know, you can still only get it via a developer account.


I was always under the impression that the core Hangouts protocol _was_ the Wave protocol, in which case it's _already_ an open standard. What would be awfully nice is if Google (which is a category mistake - what I really mean is "the Hangouts team") would clarify this, release any updates to the protocol that they've made, and perhaps define something more than the weak API provided for the current Hangouts platform.

In particular, the right test of openness is this: can I build an interoperable client that could participate in a Hangout as a first-class entity?


As a former Google Wave developer, I can only say that I hope this isn't the case. Wave's federation protocol embedded an XML-like data structure (wavelets) in protobufs (binary), which were then encoded using base64 and embedded in XMPP extensions (i.e., more XML).

That way lies madness.
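To make the layering concrete, here is a rough sketch of the nesting being described, with stand-in bytes in place of a real wavelet-delta protobuf (the real payload would come from Wave's unpublished schema):

```python
import base64
import xml.etree.ElementTree as ET

# Stand-in bytes for a serialized wavelet-delta protobuf.
delta_protobuf = b"\x0a\x05\x08\xd2\x09\x12\x00"

# Layer 1: protobuf bytes -> base64 text.
encoded = base64.b64encode(delta_protobuf).decode("ascii")

# Layer 2: base64 text -> an element in an XMPP extension namespace.
update = ET.Element("wavelet-update",
                    {"xmlns": "http://waveprotocol.org/protocol/0.2/waveserver"})
ET.SubElement(update, "applied-delta").text = encoded

# Layer 3: this XML then rides inside an XMPP (XML) <message> stanza.
stanza_text = ET.tostring(update, encoding="unicode")
print(stanza_text)
```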


How did Wave end up with that sort of mess? Isn't Google full of Very Smart People?


Here's a thought: what constitutes a very smart person? How often do those occur in the global population? Now, how many people are there on the planet and how big is the company?

The Earth may not be big enough for that to work out. This is before you remove people who aren't in this industry, people who are too young or too old to work there, people who don't want to work there, people who DID work there and left, and so on.

It might explain a lot, particularly when they go on a hiring bender. Where are all of these new people coming from?


That's been the case with Xooglers interviewing with my company. It seems like there are a lot of average to below-average people working at Google for some reason. Or at least the below-average people interview a lot.


The internal architecture of Wave was very abstract. A lot of this abstraction was exposed in the first federation spec, somewhat out of necessity, somewhat due to lack of time.


They do love their XML in Android layouts, though @.@


Thank you for giving me a new 'something' to contrast with some of our internal transport cruft.

I can now, at least, think positively: 'Well, at least it's not XML encapsulated in base64-encoded protobufs shipped over XML.'


What were the considerations which led to this design? Why not just stick with XML inside an XMPP extension directly?


> I was always under the impression that the core Hangouts protocol _was_ the Wave protocol

I've never heard this. Do you have anything that backs you up? I found a Quora answer that says the Hangouts API is based on the Wave API, but that's it.


Where oh where oh where did you get that idea?


Google Wave was not really done in the opposite way when it came to specs.

The specs were largely written by an intern and a person not on the Wave team who was helping Wave with other things.

Everyone else was focused on Wave.


I have to say I really do admire the way the EFF has taken to presuming that Google is a governmental body.


Where does the article conflate Google with a governmental body?


With the implicit assumption that there is any expectation or obligation for Google to conform to some level of openness, especially in (relatively) new products, as if it were paid for with taxpayer money and they owe us something in return. We can say that we wish it were more open, or that we won't use it because it isn't open, but I don't understand the attitude that they are committing some colossal ethical lapse or failing some social contract by not having it be more open at this stage.


The EFF is correct. Google is in lock step with the US government's plans for the complete erosion of our privacy rights. Google knows a lot about all of us, and the feds want that info.


It isn't accurate to state that the US government "plans the complete erosion of our privacy rights".


It certainly seems as if it does. They repeatedly pursue strategies to gain access to private data and actively work against compromise or safeguards. If that is too strong, I think everyone has to agree that shoring up privacy rights is something they are clearly hindering.


Poe's law.


It always seems to be my luck: Google looks like it's heading in the direction of a WebRTC-based messaging service/client, and then suddenly the product group veers off in another direction. WebRTC has been nearly here for several years now, but it never quite seems to arrive...



