I read reddit, but I would never buy a gold account. The site has too much hate speech, and it is hard to support that financially. Between the misogyny and the atheist anti-Christian bigotry, it's pretty bad. You don't have to be either a woman or a theist to see that's wrong.
The question of how to address this issue while maintaining free speech is a more complex one. The reddit founders, however, have shown no interest in finding ways to improve site quality from this perspective, and indeed, appear to support the bigotry.
Anyway, I bear a huge grudge against the site because they banned my account for sockpuppeting and harassment, so take all of the above with a grain of salt.
> The site has too much hate speech, and it is hard to support that financially. Between the misogyny and the atheist anti-Christian bigotry, it's pretty bad. You don't have to be either a woman or a theist to see that's wrong.
The site doesn't promote hate speech; it allows free speech. Also, I think you're overstating things.
> The question of how to address this issue while maintaining free speech is a more complex one.
Free speech and censorship are mutually exclusive. Reddit already has mechanisms to allow community censorship (report posts, post deletion by moderators, banning, invite-only subreddits, etc.), but paid reddit employees only step in when things get out of hand.
Also the community is very fickle. At any perceived censorship by reddit employees that seems suspect, they get out the pitchforks. This has happened a few times (e.g. the Sears incident).
I do think the front page could be better filtered, though. Right now the default reddit front page is pretty... weird.
I was not proposing censorship. Reddit, as well as slashdot, HN, digg, etc. have a community process developed with the intent that the best comments make it to the top. The structure of that process determines what makes it to the top. Changes to that process can, very substantially, cut down on the amount of certain classes of degenerate content. When bigotry began to appear on reddit, the reddit founders did nothing to combat it (and, indeed, slightly encouraged it). At the time, I think it would have been fairly easy to contain.
At this point, I'm not sure what can be done, since now bigots form a substantial portion of the reddit community.
For example, there was this guy named Lou Franklin who was probably the biggest bigot in the history of the site, and they took YEARS to finally ban his account.
I'd be interested in seeing any actual evidence you have for this. The specific subreddits (such as /r/atheism) might sometimes be construed as a circle-jerk, but I've yet to see outright hate speech against any sort of person that wasn't down-voted to hell.
That said, if you're a Christian on a site full of free-thinking and free speech such as reddit, you're going to have to be thick-skinned. Most, if not all, mainstream religions have never been kind to free-thinkers of their time, so you have to understand any resentment we may or may not have towards theists.
Notice how you automatically assume that the person pointing out bigotry is in the oppressed minority. This is not always the case.
If you expect a Christian on a site full of atheists to have to be thick-skinned, then by the same logic, you should expect an atheist in a high school full of Christians to be thick skinned. I don't think either of those conclusions is acceptable.
Actually, the similar situation would be an atheist in a high school full of Christians in a region that's predominantly atheist, and he's not forced to be there: he has thousands of high schools to choose from, and doesn't have to go to school at all unless he wants to. Also, the only reason he was interested in this particular Christian high school in the first place was the inquisitive, intellectual, creative atmosphere generated by all those Christians.
> inquisitive, intellectual, creative atmosphere generated by all those christians
Are you thinking of the same reddit? It's a web site of funny videos, pictures of cats, rage comics, memes, and stupid jokes, and the occasional groupthink. 18 of the 25 front page links are imgur. Of the remaining 7, four are dumb discussions on reddit. One is a news article about the reddit founder (random crime story; otherwise uninteresting). One is a news link on youtube of actual news (if somewhat biased), and one is a link to an interesting study. Depending on how you count, that's 4-8% useful content. When you get into comments, it's even dumber.
reddit had a community of inquisitive, intellectual, creative individuals when it was formed at Harvard. Over time, the idiots and the bigots moved in, and right now it's a web site for wasting time on stupid amusements.
I was listening to right-wing talk radio on a long car drive a few months back. What struck me was that the host, who mostly spewed venomous lies, would constantly refer to his listeners by some term (I forget the exact wording) like "the best and the brightest." I feel like reddit users are the left-wing equivalent of that. Clueless groupthink, combined with a very high opinion of themselves.
As an atheist, you're also free to move to a different country if Christianity bothers you. The US is 76.8% Christian. China is officially atheist. The only reason you're interested in this country is because it was founded on the Puritan work ethic, and has a very high level of social capital coming from Judeo-Christian values. But you shouldn't stay here and bitch about it. Same thing with blacks trying to go to the (better) white schools in the South prior to Brown v. Board of Education -- they had their own place where they'd be accepted. Have fun with that logic.
Bigotry doesn't belong anywhere, even if people are free to leave. Intolerance hurts atheists more than it does Christians, and you're an idiot for endorsing it.
If it's been said once it's been said a thousand times over: if you're looking at the reddit frontpage for anything beyond slightly humorous cat pics or rage comics, then you're doing it wrong.
The stuff that gets to the frontpage is there because it's the most commonly appealing thing in the largest original subreddit communities (r/pics, etc). The intelligent discussions happen behind the scenes, on the small to medium sized subreddits. Anyone that's been on reddit for more than a week understands this.
There certainly are intelligent people in any community beyond a given size. You'll find intelligent Christians, Muslims, Jews, atheists, and Hindus. You'll find intelligent evolutionists and creation scientists. You'll find intelligent Republicans and Democrats. There are smart people who listen to Mozart, and smart people who listen to Limbaugh. You'll also find idiots in each of those communities as well. The ratios will be a little different in some cases, but in all cases, there will be both smart and dumb.
By that token, there are, without a doubt, plenty of smart people on reddit. They are, as your comment implies, a tiny minority, confined to a few subreddits (and those are mostly characterized by groupthink -- e.g. any conservative comment on most liberal subreddits will get voted down, no matter how intelligent and well thought out). The front page is defined by what most people vote for, and that's rage comics and misogyny, with the occasional sprinkling of anti-Christian bigotry (this used to be more prominent, but the average IQ has dropped to the point where r/atheism is beginning to look smart). That's representative of the average reddit user.
All that said, I'm not looking for anything on the front page beyond a way to waste a bit of time. When I first joined reddit, I looked to it for intelligent articles. Later, I looked to it for amusement. Now, I look to it less and less, since memes aren't the same thing as wit.
I can't really agree with that. It's hard to really define Reddit's "community"; one of its strongest features is the 'subreddit' compartmentalisation. Personally, I have long since unsubscribed from the atheism and politics subreddits, so they aren't a part of my personal Reddit experience.
In case it's not obvious, someone logged into the cypherpunks account, edited the above comment, and changed the password.
cypherpunks is a generic account shared by many people on many websites. The username and password are cypherpunks/cypherpunks where allowed, and cypherpunks/cypherpunks1 where the username and password must be different. E-mail is usually cypherpunks@mailinator.com. It allows you to use websites while maintaining a semblance of anonymity.
This is now broken on HN. redditors rise to new heights in their debating ability.
I'm not going to repost all comments from the discussion, but the original comment was along the lines of:
I read reddit, but I would never buy a gold account. The site has too much hate speech, and it is hard to support that financially. Between the misogyny and the atheist anti-Christian bigotry, it's pretty bad. You don't have to be either a woman or a theist to see that's wrong.
The question of how to address this issue while maintaining free speech is a more complex one. The reddit founders, however, have shown no interest in finding ways to improve site quality from this perspective, and indeed, appear to support the bigotry.
I might go back to Firefox when it no longer consumes gigabytes of RAM after a few hours of operation. Until then, I'm with Chrome. I don't care about speed as much -- so long as it's reliable, and doesn't grind the rest of my machine to a halt, it's all good.
Chrome is very wasteful; it only starts to be better than Firefox after many hours of operation in which you've opened and closed dozens and dozens of tabs, since Firefox fragments memory and doesn't release it properly when closing a tab.
But for my usage patterns, with many tabs open after a few hours of operation, Firefox consumes less RAM for me.
Also, don't confuse memory waste (coming from bloat and fragmentation) with memory used for improving the browsing experience, like caching. These browsers do a lot of caching, and I personally don't like having 4 GB of memory and leaving it unused.
Memory footprint or not, Chrome is faster than Firefox, both in terms of page loading and raw operation. I might reconsider Firefox when it doesn't take 4 times longer to boot up.
I am on Firefox 5 and I don't see visible differences between them anymore. Yes, Firefox 3.x used to be visibly slower.
I also don't see visible differences in boot time, but this depends a lot on how many extensions you have -- I only use Firebug and the Web Developer toolbar. And at least Firefox has plugins that aren't totally useless.
I think it depends on what hardware you're running. On my laptop Chrome is so noticeably better I only keep Firefox around for testing. On my maxed out desktop I can't tell the difference.
Firefox 4 is visibly slower on a completely fresh install on my MBP. You may have a point about Firefox 5; I haven't tried it. My Firefox and Chrome reference has only been the past 2 years of major releases.
Starting up a browser is a relatively rare event, for me at least. The only time I restart Firefox is when it's consumed or leaked too much memory, about once a day.
I kinda wish Google had better versioning in their infrastructure. While I used neither of these products, their disappearance makes me reluctant to adopt Google products for any core infrastructure. If I buy a program from Microsoft, and Microsoft discontinues it, I get to keep using it forever. If I do the same with Google, since I'm not managing my infrastructure, at some point, it just goes away. It shouldn't be that hard to fork off sets of servers to run frozen, legacy applications, or older versions of non-legacy apps.
Google is a big organization in the business of developing web software. Most organizations are in other businesses. If I'm running a construction company, it doesn't make sense to have IT consultants come in each time a new version of something comes out. If it works, it's best to leave it alone, modulo security fixes.
Do you feel the need to upgrade your house's plumbing system and electrical each time an innovation happens? That's how most people feel about software.
When the electrical grid connecting to your house gets updated and no longer supports your installed system, you upgrade as well or you don't get electricity.
I don't. Why would I want to upgrade my fixtures twice a year if they're working and look fine? I'd be happy to keep the same set for 20 years, myself.
Is it really worth carrying a metaphor out to its absurd conclusion? A web browser does not exactly equate to a plumbing fixture, yet they both have maturation cycles that prompt people to upgrade over time. Browser technology moves faster than plumbing; I don't think anyone would be terribly surprised by that revelation. Yet when plumbing technology advances I will upgrade, and sooner rather than later, because the savings in time and energy pay off over time, just like with browser technology. The further behind the cycle I remain, the more it costs me to modernize in the future and the more it costs me to stick with the old. The literal time period is completely irrelevant.
As a web user, I'm not grateful. I don't think having more pixel-perfect control or slightly faster JavaScript makes the web any better. All websites look slightly better, but the client side dies a little bit every time. In the web circa '96, anyone could write a web spider or web browser, and everyone did. There was tremendous innovation both client-side and server-side. Client-side innovations included things like search engines and Google. With the hyper-AJAXed world, client-side innovation becomes impossible. I don't think this is a net win for the world.
I echo the comments others have made about the wide range of improvements offered by modern browsers. But even putting that aside, the rendering capabilities you refer to make a bigger difference to web users than you might think.
You say "I don't think having more pixel-perfect control or slightly faster JavaScript makes the web any better." But--no disrespect--you probably only feel that way because you're already benefitting from the huge amounts of effort invested in supporting multiple, incompatible browsers. The web looks OK for you now because, without you noticing it, web developers have slaved to make it look OK for your unique combination of OS and browser.
It may seem like that's just a cost we web developers have to bear, with little effect on you. But that's not so. The fact that we have to spend time supporting old browsers increases the cost of everything that's created on the web. And when innovation is more expensive, it happens more slowly. The costs imposed on our industry by older browsers do affect ordinary web users, because those costs translate into a slower pace of innovation.
> I don't think having more pixel-perfect control or slightly faster JavaScript makes the web any better.
If that's all it was, then you'd be right. You can't do geolocation, local storage, audio/video, canvas/svg/webgl, etc. etc. on old browsers. It's not about making things pretty. It's about creating software that can compete with desktop alternatives.
Unspoken assumption: that competing with desktop alternatives is good or desired.
Personally, I can't think of a single desktop application that I use that is better done in a browser with the possible exception of Google Maps, and I say "possible" because I haven't seen a desktop contender.
I use GMail's web interface solely because I can't stand any of the Mac clients (at work) and Outlook isn't available on Linux (which a couple of my PCs at home run). It is cross-platform, which is a plus; it is painfully ugly and slower than a desktop alternative, which is an overwhelming minus.
I greatly prefer my web browser to be for browsing the web. While I'm all for standards in HTML/CSS/JS, I find the "web browser as your OS!" crap to be disheartening. I like things that work, and for the most part, web applications don't.
I share your views. Making a web app is easier for the developer: just one platform to support, no restrictions on implementation, etc. But a desktop app is nicer for the users (the ones we're supposed to care about, right?). In the very best case a web app would be able to function as well as a desktop app. The best case is usually not achieved.
Imagine if a person at your job said they had just made a new client app for your business users' workflow. When you ask how it's implemented, the person says that the GUI itself is just a skin that reads everything from the database. The GUI layout, how the buttons behave, all of it is stored in the database, and the actual GUI is nothing more than a kind of platform for what's in the database. I've actually seen this done, and the team who did it were sacked, their application deprecated. As far as I know it's still running because the team in charge of replacing it still doesn't completely understand it. But this is what web apps are. Model, view, presenter, they're all stored in the same place.
Personally, I prefer having a back end RESTful server with native... shall we say "fit" clients (not fat but not thin either) using it. The browser gets a simplified, default version of the app which has a link to the appropriate native client somewhere visible.
Time = $$$. Universal constant. While your desktop app may be shinier, it takes much more time to create and maintain. This means fewer apps are published and fewer features are added.
Fewer choices and fewer features translate to a negative for the user.
On top of that, native desktop apps are compiled ... the web is open, with html/css/javascript/etc. and this creates an open environment that encourages free software.
Free is always good for the user.
Anyway, re-arguing this sort of thing is pointless. The desktop is in a death spiral. All this stuff has already been set in stone. It's a question of when, not if.
15 years ago, I could reasonably write a search engine. Myself. 1 person. In a few weeks (modulo bandwidth and server farm). I write a program that grabs a web page, and reads out keywords. Today, if I grab a web page, quite often, that web page has nothing except for JavaScript code. That code grabs the actual content from the server, lays it out, and animates it. To write a web search engine, I need to write a complete JavaScript library.
At the time, we were talking about developing all sorts of agents. Things that would shop for you. Things that would find parts for you. Things that would remember what web sites you visited, and let you search them. Things that would track where in a long set of pages you were (blog, comic, etc.), and let you keep reading from there. It happened for a while, and then it died when the web became too damn hard. Writing anything that can reasonably see and parse web pages now takes many, many man-years. There are only four or five organizations with that kind of resources (WebKit, Mozilla, Opera, IE, and internally, Google). There are countless things we just didn't even imagine.
It's like the DMCA. You notice all the innovations that happen, but you miss all the innovations it made impossible.
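For what it's worth, the '96-style "grab a web page and read out keywords" step described above really was small. A sketch in stdlib-only Python (the page is a hardcoded string here; a real spider would fetch it with urllib and follow links):

```python
import re
from collections import Counter
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> bodies."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def keywords(html, n=5):
    """Return the n most frequent words of 3+ letters on the page."""
    parser = TextExtractor()
    parser.feed(html)
    words = re.findall(r"[a-z]{3,}", " ".join(parser.parts).lower())
    return Counter(words).most_common(n)

# A JS-only page defeats this entirely: its static HTML has no text to read.
page = ("<html><body><h1>Comics</h1>"
        "<p>daily comics and more comics</p>"
        "<script>var x=1;</script></body></html>")
print(keywords(page, 1))  # [('comics', 3)]
```

The point of the sketch is the last comment: against a page that ships only JavaScript, this whole approach returns nothing, which is the complaint being made.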
>15 years ago, I could reasonably write a search engine.
No, 15 years ago you could reasonably write a search engine for 15 years ago. It would suck by today's standards.
You want to handle Javascript? Easy! There are plenty of tools to choose from now. Run a browser as your crawler, visit the sites, and read the generated source instead of the static source. Shove that into your 15-years-ago search engine, and there's no difference.
>Things that would track where in a long set of pages you were
You mean bookmarks? Add a scroll %, assuming they're not nice enough to use anchor tags / IDs meaningfully, and you're golden.
>Writing anything that can reasonably see and parse web pages...
has become a community effort, instead of a bunch of isolated silos where people reinvented the wheel out of necessity.
The resources required aren't so large just because it's so much more complex, it's large because it's so much faster, and you won't survive if you can't compete. How long did we languish with crappy Javascript engines? How much would you need to know to actively compete in that section alone now? It's easy to make a slow-but-functional browser, and if you looked around you'd see some people doing just that. Making a fast-and-resilient one is as hard as making a fast-and-resilient anything, especially where human input (ie, HTML) is expected to be consumed.
> You mean bookmarks? Add a scroll %, assuming they're not nice enough to use anchor tags / IDs meaningfully, and you're golden.
Bookmarks in books work okay; you move them. Bookmarks in browsers don't. You have to remove the old one and add the new one, and the overall process is too cumbersome to be useful for the application I mentioned.
We actually built a site to solve that problem. If you have a series of pages (blog, comic, book, etc) and want to mark your place in them with a bookmark that moves as you read, try Serialist (https://serialist.net/).
As to the auto-updating bookmarks, would it resolve the issue if I made an extension to do that for you? I can see the use, honestly, and I like it. (seriously, I'm offering, and I'd probably use it myself. It'd be an interesting project. Even if it doesn't resolve the issue - we might just fundamentally disagree here, I'm OK with that.)
But why should that be part of the browser, when modern browsers allow you to do damn near anything by simply leveraging it? Why should we rely on browser makers to tell us what's possible, when we can do it ourselves, because of the changes in the past 15 years?
I'd love to see that extension. If you write it, I will use it. I use Chrome too, so it should work here.
As to what should and shouldn't be part of the browser -- the way to figure that out is experimentation and competition. When you make technologies and standards simple and easy, people will make independent implementations and try things. The vast majority will be dumb, but some (often unanticipated ones) will turn out to be useful, clever, or brilliant. That's how the technology improves.
When you make standards big and cumbersome, progress stops.
If you want to move a bookmark to a different place on a blog / content site, it is probably because you want to read new entries. RSS does this fairly well.
If you want to read through a site's archives, what I do is keep it open in a tab. It is restored when I reopen my browser, saved if I reboot, etc. It's not as handy as a bookmark, but it comes close.
With all the headless Webkit tools coming out nowadays (and all the free and fast JS engines like V8), writing a spider that runs a JS engine and clicks on all kinds of non-<a> elements is not beyond the reach of somebody innovative and motivated enough to create new kinds of spidering robots.
You won't need to write a complete JavaScript library. Look at all the testing suites that automate browser instances, Selenium being the most well-known.
15 years ago the thing we call a "web application" hardly existed. If a web page "has nothing except JavaScript" (e.g. GMail), it is probably a web app, and indexing it makes little sense anyway. If someone misuses JS on a content site, that's another story.
And your comment about innovation makes no sense at all. Capabilities of modern browsers (Canvas, geolocation, local storage, offline apps, etc.) offer more opportunities for innovation than "old web" could even imagine.
I think you (and most people here) underestimate what the "old web" could imagine, though. We had all sorts of ideas for agents that would go out and grab and analyze data for us in all sorts of clever and interesting ways. Search engines got built, as did one or two other things, and then the web just got too complex.
Hell, even I had a simple app that went out and grabbed all my favorite comics and showed them to me, nicely formatted, and without ads.
You mean ad filtered RSS/Atom? I assume such a program would be much faster to write these days: have a set of newsfeeds, map() them with a filter function and merge the results.
While the web gets more complex, the tools at hand get better. Much better.
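As a sketch of that map-and-filter idea in stdlib-only Python (the feed contents and the ad heuristic are invented for illustration):

```python
import xml.etree.ElementTree as ET

def items(feed_xml):
    """Yield (title, link) pairs from a minimal RSS 2.0 feed string."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        yield item.findtext("title"), item.findtext("link")

def merged(feeds, is_ad=lambda title: "sponsor" in title.lower()):
    """Map each feed to its items, filter out ads, merge the results."""
    out = []
    for feed in feeds:
        out.extend(entry for entry in items(feed) if not is_ad(entry[0]))
    return out

feed = """<rss><channel>
  <item><title>Monday strip</title><link>http://example.com/1</link></item>
  <item><title>A word from our sponsor</title><link>http://ads.example.com</link></item>
</channel></rss>"""
print(merged([feed]))  # only the Monday strip survives the ad filter
```

A real version would fetch the feed URLs over HTTP, but the shape of the program is exactly the map/filter/merge described above.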
Gmail's HTML view works fine in Links. That team has been showing competence and diligence that's increasingly rare, and I wish people wouldn't tar them with the same brush as the clowns who write js-only crap.
> At the time, we were talking about developing all sorts of agents. Things that would shop for you. Things that would find parts for you. Things that would remember what web sites you visited, and let you search them. Things that would track where in a long set of pages you were (blog, comic, etc.), and let you keep reading from there.
The drive toward semantic markup in HTML5 is supposed to help the web get back to those original ideals. Over time, we'll increasingly expect web developers to conform to a subset of possible HTML arrangements, much like book publishers conform to a subset of the possible random arrangements and orientations of letters on a page (odd poetry excepted).
Most people would gladly make it harder for a single person to write a search engine if, in return, it makes it easier for them to make good web pages and web apps.
Your premise that the web is somehow less effective because you can't scrape data from pages easily doesn't make much sense to me.
Have you taken a look recently at the plethora of web APIs for just about every purpose? The modern way of collecting machine-friendly data from a server is through APIs and semantic content (RDFa, microformats, etc.).
Not through HTML / CSS / Javascript formatted pages which are made primarily for human consumption.
I would blame poor/lazy devs inappropriately using JS rather than the evolution of the browser for this. For the average web page, requiring JavaScript for any core functionality is unnecessary 90% of the time (not so much with web applications). I have a hard time understanding why people do this, as it's often much easier to test and develop when you're layering on JS unobtrusively.
Agreed that it's nearly impossible to generally parse web pages now, though if you're screen scraping it's still pretty easy (if not easier than before) to pull out data. Before you had to parse the DOM; now you can often get structured data via JSON APIs. It's more brittle, though.
I think he's saying that it makes scraping harder.
But today JS frameworks like jQuery give us the means to do anything we want javascript-related, in any browser that half-supports javascript. By deprecating IE7 they're just saying they're going to drop all of the extra hacks they had to use to keep IE7 working.
A lot of what newer browsers give us is just better rendering. You can replace a mess of tables and nested divs with things like border-radius, which means less client-side html to wade through.
Are you kidding? The semantic web and using more metadata are making it easier than ever. Nowadays in many cases not only do you have the content, it is also tagged with microformats or RDFa.
Try looking at Freebase or DBpedia and tell me where you had such a huge amount of easily parsable, semantic content in the 90s.
More and more content is being taken entirely off the open web and siloed behind a server that talks an unstable proprietary protocol, with exactly one blob of javascript in existence that knows how to tunnel requests over HTTP to access shreds of that content and cram them into an utterly non-semantic DOM. We are hurtling backwards into the client-server hell the web had saved us from.
Yeah, I don't see that. I see more and more accessible APIs[1] and pages having more and more an incentive to being semantic due to search engines now reading that data (hRecipe, for example).
Service architectures have also been moving from stuff like SOAP to REST, which is definitely more open and accessible.
And even Ajax-laden webpages are still just a Firebug Network tab away, since they all run over HTTP, and then you have a nicely structured data format instead of having to deal with messy HTML pages.
A JSON (or SOAP) backend is only usable by third parties if its API is kept stable. There are far too many devs who redesign their backend request and response formats at the drop of a hat because they think their js client is the only one that matters (a self-fulfilling prophecy) and they can replace it simultaneously. And their responses tend to look like "here's some more markup to stuff into an arbitrary location in the DOM we're using today", not semantically structured (e.g., Rails now has this built into JavaScriptGenerator). A given site can be reverse-engineered, but anything built on that is going to be fragile and short-lived, much more so than when the typical visual rendering desired for a page determined its structure.
I don't see how that's worse than the unstable, non-semantically structured HTML markup of yore. In the worst cases, we're no worse off, and we have much more semantic content nowadays.
With respect, I think this is the same innovation that occurs in every industry and it's silly to bemoan the increasing complexity of the web.
In transportation:
It used to be that everyone could buy a horse and build a buggy and get around. Then cars came along and it got a lot more complicated and expensive to build a vehicle that was state-of-the-art, but tinkerers could still do it.
Now there are only a few big players who are capable of innovating and building the best and newest vehicles.
Now that I think about it though, there is still space for tinkerers and inventors in the automobile space. But you can't expect those automobiles to compete with those made by, e.g., Toyota.
In the same way, it's still possible to write a spider without a javascript renderer. It just won't be able to compete with Google.
One last point: the state of the web is based on the collective decisions of all internet users. Ultimately, people building things on the web decided more often than not that ajax-ifying things benefited their users.
If users had wanted a web client that would spider the web and shop for them, they would have latched onto it during the time of great innovation that you think is now gone. But they didn't. The things that users wanted are the things we see today, assuming that there isn't some horrible inefficiency in the feedback loop between web-builders and their users.
> I don't think having more pixel-perfect control or slightly faster JavaScript makes the web any better.
I think the point of deprecating IE7 as legacy is more about its hideous bugs in parsing, internal document representation, and rendering.
Those are the things that make proper code impossible to display correctly without the tedious work of understanding the dysfunctions well enough to circumvent them. To me, the old IE rendering engines alone slowed web innovation by at least several years.
The Google Spider grabs pages from other servers. That's a web client. It's a web client that was easy to write when Google was started, but is almost impossible to write today. If search engines hadn't been invented 20 years ago, they'd be impossible to invent today. The only reason they still work is tremendous work on Google's end to make its spider able to handle complex AJAXy pages, and the fact that content creators engage in SEO and develop to Google.
It is not harder or easier. It is just different than it used to be. Things that used to be hard are easy now. Problems that didn't exist 10 years ago exist today. I develop a spider.
For the most part, the bulk of the web's content is as easily accessible as it was years ago. You make a request and you get a blob of HTML back. If you have special requirements and need to get into all the nooks and crannies you create a DOM implementation and embed a JavaScript engine. Then you parse the page into a DOM and start firing off events. There are quality open source JavaScript engines available. JavaScript and AJAX are a breeze.
Flash is a different story. If you have any requirement to follow links or process content in a Flash movie (you'd be surprised how many sites still have Flash nav) you pretty much have to write your own runtime. Unless you are big enough to have Adobe do it for you.
Depending on what you are doing with the data that your spider collects, chances are writing a spider is far easier than writing a browser. There are at least 4 widely used browser engines and plenty more toy browsers floating around.
I can guarantee that writing a spider that can deal with AJAX is not the biggest challenge of developing a search engine. Scaling it, fighting spam, understanding the content, indexing, and then being able to provide quick lookups are much, much harder.
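For a taste of the "indexing and quick lookups" part, here is a toy inverted index in Python. It is only a sketch of the core idea -- term to document-ID postings, intersected for AND queries -- and all the documents and names are made up for illustration; real engines add ranking, compression, sharding, and much more:

```python
from collections import defaultdict

def build_index(docs):
    """Toy inverted index: maps each term to the set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND-query: return doc ids containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())  # intersect postings
    return result

docs = {1: "spiders crawl the web", 2: "the web scales badly", 3: "spiders index the web"}
idx = build_index(docs)
print(sorted(search(idx, "spiders web")))  # [1, 3]
```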
I recently looked through the Google developer guidelines, and they still recommend not changing page contents significantly using JavaScript. Also, the #! in modern AJAX apps is there to avoid having to run the JavaScript in order to crawl the content. Does Google actually do that much with the JavaScript on a page, even today?
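The #! convention mentioned above worked by a URL rewrite: Google's (now-retired) AJAX crawling scheme asked crawlers to request an alternate URL carrying an `_escaped_fragment_` query parameter, which the server answered with pre-rendered HTML. A rough sketch of that mapping, assuming simple percent-encoding of the fragment:

```python
from urllib.parse import quote

def escaped_fragment_url(url):
    """Map a #! URL to the ?_escaped_fragment_= form used by Google's
    (now-retired) AJAX crawling scheme. Sketch only, assuming full
    percent-encoding of the fragment."""
    if "#!" not in url:
        return url
    base, fragment = url.split("#!", 1)
    sep = "&" if "?" in base else "?"
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="")

print(escaped_fragment_url("http://example.com/app#!/photos/42"))
# http://example.com/app?_escaped_fragment_=%2Fphotos%2F42
```

This is precisely why the scheme let Google crawl AJAX content without running any JavaScript: the work of rendering moved to the server.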
I don't really quite get what point you are making here. Are you saying that the innovation today owes more to the innovation of the mid-nineties? Are you saying that any innovation now with respect to AJAX and JS does not make your life better in any way?
Seemingly your argument could be made for cars: "15 years ago cars were easy to fix and understand. Now they are not, so wake me up when they are like the cars of the mid-nineties."
JS/AJAX helps programmers tremendously. It helps speed. It helps functionality.
To view these things through the lens of "I can't write a crawler for them" is a pretty limited view of what today's technology offers.
I'm glad. Thank you for taking the time to read and understand. Hacker News is starting to go down the decline that hit reddit 2 years ago, where people don't bother to try to understand different viewpoints, and just downvote anything they don't agree with. It's nice to see good people still on here...
With the hyper-AJAXed world, client-side innovation becomes impossible.
Are you serious? Have you heard of the canvas element? It allows modern browsers to do things that, two years ago, were only possible in Flash. Have you noticed how there are actual web applications these days, not just collections of linked pages? Have you noticed how, with ubiquitous JavaScript, the usability and ease of websites has improved greatly?
No. I haven't. I've noticed maybe 1 or 2 web apps I want to use (Google Docs and Google Maps). Beyond that, I don't see anything that couldn't be delivered more effectively without JavaScript that I want or need.
Usability is not up. Each web site has its own, custom, non-standard user interface. I could teach my mom to use the web circa '96. I cannot teach her to use it today. It's too damn complex.
Usability would be up if the browser knew more about what to expect. You can look at things like Readability. The browser ought to know more about the content, and be able to present it in a coherent, usable way. The server-side shouldn't dictate presentation.
A big part of usability is not sitting around waiting for entire pages to reload every time you interact with them. AJAX has done great things for users by minimizing this delay. You wouldn't like Google Maps as much if you had to click an arrow and wait for a page refresh for the map to move, like MapQuest circa 2003.
Yes, the proliferation of web apps has created a diversity of user interface paradigms. Some would say this is a good thing, however, since the web has spurred all kinds of new UI philosophies, and the fact that JavaScript and HTML aren't compiled lets people examine and rework others' code, so good ideas spread very quickly. I for one don't intend to wait for the HTML5 group to invent every new <input type=""> that I could conceivably need, and then wait some more for browser vendors to implement them all consistently. With JavaScript, you can currently build and deploy just about any kind of 2D client-side interaction imaginable.
In short, the vast majority of users on the internet probably have a different idea of usability than yours, and the numbers tell the rest of that story. You only need to look at the gross casserole of UI paradigms within the applications installed on your mom's PC to see how much users really care about UI standardization.
Did you read what I wrote? I mentioned Google Maps as one of the two places I found AJAX useful.
The applications on my mom's PC do have much better UI standardization than the web does. Microsoft releases UI guidelines. Alt-F4 does the same thing in every application I've used, and the menu structure is roughly the same too. Apple is even better.
> Did you read what I wrote? I mentioned Google Maps as one of the two places I found AJAX useful.
Yes, and I was dissecting why you may have found it useful, because the same principle applies to hundreds of other situations that you may not have recognized.
> The applications on my mom's PC do have much better UI standardization than the web does. Microsoft releases UI guidelines. alt-f4 does the same thing in every application I've used
Questionable. About the only key shortcuts you can rely on are the ones that will work in your browser too. Alt-F4 will close your browser--that's what you wanted, right? Cut/copy/paste, print, etc. all work there as well...
> the menu structure is roughly the same too
Ha, you mean the invisible menus on Explorer and IE>8, the mega "office button" menu in Office 2007, the delightfully inconsistent menu bars in WMP>9...
Microsoft and UI guidelines in the same sentence? Something doesn't compile. I would be happy if they used their own guidelines, though.
> the menu structure is roughly the same too
Too bad it keeps getting reinvented; it happened in Office 2010, it will happen again as people get tired of File -> Save, and yet again when touch screens on laptops become the norm.
So I'm sorry for your mom, but unless she never upgrades, then she's going to have to learn new things.
I do see a side of what you're saying. In those older days the web was a much simpler platform, so figuring out what to do and what to click was easy. This was true in Windows too, as MFC was the UI library of choice, meaning a lot of the software was easy to figure out as well.
Today a lot more software and websites have broken the mold and come up with some really different UX patterns (not saying better or worse, because both exist out there). There aren't standards for web UI anymore that are practiced across the board.
I do disagree with your statement that the client, not the server, should dictate how a site is presented. The browser should display the content in a standards-compliant way. The days of buttons looking like Windows buttons in IE and Mac buttons in Safari should be a thing of the past, never to return.
This is what really irks me about Google Apps. You don't have any control over this kind of thing. They can (and do) yank the rug out from under you whenever they feel like it. A while back, they forcibly transitioned all accounts to also be normal Google accounts, with no user input. They did this before they figured out how to transition users who already had Google accounts, so now my users are split into first-class and second-class citizens.
It really doesn't feel like it should be that hard for Google to have some infrastructure where the user can control version transitions. Google makes a new version but, barring security issues, keeps all old versions in its cloud. At that point, you transition when you're ready.
Really, that doesn't feel like it should be that hard? As a developer, that sounds like a nightmare. Services like these aren't just a piece of software that can be versioned; they're a system of systems, each with its own requirements, limits, supported APIs, etc. Wrangling all of that is hard enough just to support one user-facing site, much less keeping every version of the site available and working.
And in general, it's not. But certain platforms (e.g. OS X on PowerPC) have been artificially restricted to the point where no new browsers are being made for them, so the latest version you can get to on them is probably outside of Google's scope of support -- and if it isn't now, it will be shortly. For instance, sure, a PowerBook G4 isn't necessarily powerful by today's standards, but surely it's powerful enough for Google Apps, yet today I don't believe it's supported in the current version of any browser because of its inherent OS X 10.4 or 10.5 cap. Sure, it's possible to switch to Linux, but I don't personally believe the user should have to.
In essence, upgrades are not as simple as you may think due to forced platform incompatibility/vendor lock-in. Forcing an upgrade like this can cost users a non-trivial sum of money on top of what you're already charging for the service!
So while I appreciate the move into the future, I feel bad for the people whose wallets are going to feel the pain of such a move.
Didn't you also bring that upon yourself by buying a product from that particular vendor (Apple) as opposed to any other? Upgrade treadmills and forced obsolescence are always a risk with software, and users need to learn to account for it.
I recently acquired a Macbook Pro from work, but I'm using Windows 7 on it full-time. I'm far too wary of being forced to upgrade when Apple stops providing security updates to the version of OS X currently installed on it.
Banks and financial institutions are subject to extensive document retention and security regulations -- using Google Docs is out of the question. Of course, that is one of the primary reasons why their upgrade paths are so slow.
They have slow update paths because they can get away with slow update paths. If web developers stop catering to lazy IT departments, maybe they won't be so slow anymore.
They have slow update paths because they've spent a lot of money on in-house middleware that they don't feel a need to update, and sometimes those systems don't work so hot on a new browser. Faced with a choice between updating old middleware and not updating the browser, they choose to wait on updating the browser. The longer they do that, the more work it takes to bring the middleware up to date, and the worse the problem becomes. Definitely not isolated to banks.
Anyone know of the equivalent of Stroustrup for Python? If anyone's not familiar with Stroustrup, his book on C++ explains the language in-depth, and gives clear rationales for the design decisions made, so you understand not just the hows but also the whys. It's not a reference manual -- it emphasizes depth over breadth -- but it's also not a tutorial.
I'm not sure I trust much written by ESR. He's extremely eloquent, so what he writes always sounds convincing. In practice, he's a pretty bad developer, or at least he was at the time he wrote most of his stuff (at the time, the most serious technical accomplishment he had under his belt was a refactoring of fetchmail). When I've applied the things he's written, more often than not, they've led me astray. He has, more often than not, messed up in contexts involving technical interaction with other people (e.g. CML2). This is in stark contrast to e.g. RMS, who usually puts people off, but whose writing tends to be dead-on correct (if you don't believe me, read almost anything he wrote 10-30 years ago, and see how it panned out -- the guy's almost a prophet, cursed to know the truth but not be able to convince people). So I take everything ESR writes with a very large, maybe unhealthy degree of skepticism. It's too easy to be convinced of something incorrect (many eyes make bugs shallow, negative attitude towards RMS, etc.).
I'm looking at his page now, and he appears to have written or contributed to more software since, so I'm not sure if this still applies. I suspect it does, but I'm not ready to pass judgment until I read his more recent essays.
I never understood where his "claim to fame" really came from. Apart from publishing and then maintaining a pre-existing dictionary on his website, publishing one of MANY books on open source, and cursing admins with fetchmail, he hardly contributed to or founded any REALLY big projects, or coded a lot like Linus or RMS did, or did anything of real substance -- but ESR was always one of THOSE names during his heyday, and he would go to great lengths telling everyone how "the community" selected him to be some sort of open source Keanu Reeves/Neo...
He's an exceptionally good communicator and has very strong soft skills, and applies them to self-promotion. He's also good at politics (as in convincing crowds of people -- not so much at interacting and making long-term connections with individuals). A lot of this has to do with feeling out crowds, and telling them what they want to hear. He's also got a slightly underdeveloped moral compass, which always helps (he was willing to be a little bit sleazy and subtly undermine competitors). That's mostly what it takes to get famous. He was also the only person, after RMS, who was arrogant enough to take credit for the whole movement.