This site, http://hkrnews.com/, lets you view the article in an iframe so you can read comments and other headlines all on one page -- faster than clicking between tabs. You can also see the top comments more easily by collapsing threads with a click on the icon.
> Trump’s heretical denial of Republican dogma about government incapacity is exactly what we need to move the party — and the country — in a new direction.
It's contradictory for libertarians to be calling for more government spending on the economy. Most of these issues, like crumbling infrastructure, would be better addressed not by advocating for effective spending, but by privatization: letting people vote online on which firm to contract out various public works to, paid for with tax money that we vote on how to allocate.
> Most of these issues, like crumbling infrastructure, would be better addressed not by advocating for effective spending, but by privatization
I think the hundreds of billions of tax dollars we've spent on privatizing the expansion of internet hardware/fiber are iron-clad evidence that this is flat-out wrong. It's been a complete and total failure with no accountability.
> letting people vote online on which firm to contract out to various public works
This is getting into an entirely different argument about direct democracy. How about just having an election and voting system that is better for voters and the public will, and actually being able to elect officials who are willing to implement a fair and balanced bidding system for government contracts?
Your next-door neighbor Jim, who listens to Rush Limbaugh 2 hours a day, watches mainstream news outlets another 2 hours every evening, and makes his voting decisions based on television commercials, is not the person I want selecting a government contractor. It will become a marketing contest and needlessly increase the cost to taxpayers as a result.
> I think the hundreds of billions of tax dollars we've spent on privatizing the expansion of internet hardware/fiber are iron-clad evidence that this is flat-out wrong.
Well, it would be evidence if it had actually happened, but the claim of hundreds of billions of tax dollars spent on this is very questionable [1].
Ok, not literal tax dollars, but dollars from the American population at large: essentially tax dollars that never pass through the government and instead go straight into the pockets of telecom providers.
Here is my understanding, which largely disagrees with the person in your link arguing against the $200 billion figure:
It's 1991 and the US government says "our telecom infrastructure sucks". They knew there would be tremendous economic impact from the quality (or lack thereof) of this infrastructure. The monopolistic nature of a system involving hardware infrastructure installed literally everywhere on essentially public land (easements) meant that the existing companies owning this infrastructure had no incentive to expand it.
The options we faced were either:
1. Using taxpayer money and having the government directly build out this infrastructure, to the tremendous and direct benefit of everyone
2. Changing the regulations around the existing infrastructure so that the barriers to entry were lower and more companies could theoretically participate in expanding the infrastructure
These two options are the subtext for the 1996 Telecommunications Act. The person in the thread you posted tries to argue that the act wasn't about quid pro quo, but about deregulation. He's correct but also misleading. The act itself was marketed as being about deregulation, but the entire reason the act existed was quid pro quo. This is blatant and obvious: the most visible part of the entire act is the set of expansion goals the telecoms were required to reach. It's also funny that this is marketed as deregulation when there's a huge list of requirements regulating what has to be done by certain dates.
Of course, as often happens, immediately after this passed we got the exact opposite of what was expected. There was tremendously less competition, and constant mergers left only a few massive telecoms. Today, according to Wikipedia, literally only three companies provide long distance: AT&T, Sprint and Verizon. Instead of a government-led initiative to expand what is inarguably a utility service, we naively trusted private companies to fulfill their end of the bargain and meet the legally required goals.
Obviously this didn't pan out. Our infrastructure is shit and the progress was/is laughably short of the specified requirements.
The short summary is: we allowed private companies to continue operating in a space they really should not have, or at least in a manner they should not have. We did this because they said they would, as legally required, expand the infrastructure. That never happened, and there has never been any accountability for it. As a result, Americans have paid these companies hundreds of billions of dollars in excess of what they would otherwise have paid, and that excess was legally required to be spent on infrastructure but never was to any significant extent.
So yes, Americans were scammed out of hundreds of billions of dollars, even if it wasn't money that went to the government first as tax income. This is also entirely ignoring the economic impact from our lack of infrastructure, which could be estimated anywhere from hundreds of billions to trillions of dollars.
This contributes nothing more than <div contenteditable>, and presenting it as a document editor is counter-productive for users searching for something reliable. The link below shows the many shortcomings of just treating the contenteditable browser API as an editor. For example, Google Docs does not use contenteditable at all, for the many reasons outlined below. Building a real doc editor that functions as a widget in blog and forum sites requires a lot more support -- of the open source alternatives, Medium's is one of the best.
You might not have looked at this closely enough, but "contenteditable" is really only one line that has been added to make the demo more useful.
The real thing this is about is the CSS. There are ~250 lines that produce the correct page sizes, margins and paged content and ensure that prints are as expected.
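For reference, paged output of this kind usually revolves around @page rules and page-break properties. The fragment below is purely illustrative and is not the demo's actual stylesheet; the .page and .toolbar class names are assumptions:

```css
/* Illustrative print CSS, not the demo's actual stylesheet. */
@page {
  size: A4;            /* paper size */
  margin: 20mm 25mm;   /* page margins on printed output */
}
.page {                /* hypothetical per-page container */
  width: 210mm;
  min-height: 297mm;
  page-break-after: always;  /* each container starts a new printed page */
}
@media print {
  .toolbar { display: none; }  /* hide editor chrome when printing */
}
```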
Here's my approach when I built my text-summary app with TensorFlow's SyntaxNet.
SyntaxNet (Parsey) gives the part of speech for each word and a parse tree showing which less-important words/phrases describe higher-level words. In "She sells sea shells, down by the sea shore", the phrase "down by the sea shore" is tagged by SyntaxNet as a lower-level modifier of "sells", so it can be removed from the sentence. Removing adjectives and prepositional phrases gives us simpler sentences easily.
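That pruning step can be sketched roughly like this. The dependency parse below is hand-coded to stand in for SyntaxNet's output, and the labels are simplified; a real pipeline would get them from the parser:

```python
# Toy sketch of parse-tree pruning. The parse is hand-coded to stand in
# for SyntaxNet/Parsey output; dependency labels are simplified.

# Each token: (index, word, head_index, dep_label). The root has head -1.
SENTENCE = [
    (0, "She",    1,  "nsubj"),
    (1, "sells",  -1, "root"),
    (2, "sea",    3,  "amod"),   # modifier of "shells"
    (3, "shells", 1,  "dobj"),
    (4, "down",   1,  "prep"),   # prepositional phrase attached to the verb
    (5, "by",     4,  "prep"),
    (6, "the",    8,  "det"),
    (7, "sea",    8,  "amod"),
    (8, "shore",  5,  "pobj"),
]

PRUNE_LABELS = {"prep", "amod"}  # drop prepositional phrases and adjectives

def descendants(tokens, idx):
    """Indices of idx plus everything below it in the tree."""
    out = {idx}
    for t in tokens:
        if t[2] == idx:
            out |= descendants(tokens, t[0])
    return out

def simplify(tokens):
    drop = set()
    for idx, _word, _head, dep in tokens:
        if dep in PRUNE_LABELS:
            drop |= descendants(tokens, idx)  # remove the whole subtree
    return " ".join(w for i, w, _h, _d in tokens if i not in drop)

print(simplify(SENTENCE))  # -> "She sells shells"
```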
Next, we find key words (central to sentences) for the news article based on n-grams, and then score the key sentences in which they appear. We use MIT ConceptNet for common-sense linking of nouns, the most likely relations between them, and similar words based on vectors. Finally, we generate the article summary from the grammatically simplified sentences.
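A minimal frequency-based sketch of the keyword and sentence-scoring steps (unigram counts instead of the full n-gram approach, with an illustrative stopword list; the ConceptNet linking is omitted):

```python
# Simplified keyword extraction and sentence scoring.
# Stopword list and example article are illustrative only.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "in", "of", "to", "and", "is", "it", "by"}

def tokenize(text):
    return [w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOPWORDS]

def summarize(sentences, n_keywords=5, n_sentences=2):
    # 1. Key words: the most frequent non-stopwords across the article.
    freq = Counter(w for s in sentences for w in tokenize(s))
    keywords = {w for w, _ in freq.most_common(n_keywords)}
    # 2. Score each sentence by how many keyword occurrences it contains.
    scored = sorted(sentences,
                    key=lambda s: sum(w in keywords for w in tokenize(s)),
                    reverse=True)
    # 3. Keep the top sentences, in their original order.
    top = set(scored[:n_sentences])
    return [s for s in sentences if s in top]

article = [
    "Australian wine exports hit a record high in September.",
    "Exports reached 52.1 million liters, the wine board said.",
    "The weather was mild.",
]
print(summarize(article))  # keeps the two export sentences, drops the weather
```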
My question is how well the trained models interpret human meaning in joined sentences. I discovered that by simplifying sentences you lose the original meaning when a grammatically-low-importance word is central to it. "Clinton may be the historically first nominee, who is a woman, from the Dem or GOP party to win the presidency" means something very different if you remove "who is a woman". I am also interested in how it makes sense to join up nouns/entities across sentences. This will produce the wrong meaning unless you build human-meaning structures, as ConceptNet does, by learning from the article itself, as opposed to using pretrained models based on grammar or on word vectors from Gigaword.
My future work is a tf-idf-style approach for deciding the key words in a sentence, which I would recommend over relying on grammar/vectors alone. In the example in your blog post ("australian wine exports hit record high in september") you left out that it's 52.1 million liters; but if the article went on to mention or attach importance to that number, say by comparing it to past records or giving its price, then the phrase "52.1 million liters" in this one sentence would score higher relative to the collection of all sentences. As opposed to probabilistic word cherry-picking based on prior data, this approach lets you extract named entities and phrases, and build sentences from the phrases in any sentence that grammatically refer to them.
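A minimal tf-idf sketch of that idea, treating each sentence of one article as a "document": a term like "52.1" that appears in only one sentence scores high there, while a term repeated in every sentence scores zero. The sentences and stopword list are illustrative:

```python
# Minimal tf-idf over the sentences of a single article.
import math
import re
from collections import Counter

STOPWORDS = {"of", "the", "in", "was"}  # illustrative, not a real list

def tokenize(text):
    return [w for w in re.findall(r"[a-z0-9.']+", text.lower())
            if w not in STOPWORDS]

def tfidf_scores(sentences):
    """Treat each sentence as a 'document'; score terms by tf * idf."""
    docs = [tokenize(s) for s in sentences]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))   # sentence frequency
    idf = {w: math.log(n / df[w]) for w in df}      # rarer terms score higher
    return [{w: c / len(d) * idf[w] for w, c in Counter(d).items()}
            for d in docs]

sents = [
    "australian wine exports hit record high in september",
    "exports of 52.1 million liters beat the previous record",
    "the previous record was set in 1997",
]
scores = tfidf_scores(sents)
# "record" appears in every sentence, so its idf (and score) is 0;
# "52.1" appears in only one sentence, so it scores highest there.
print(max(scores[1], key=scores[1].get))  # -> 52.1
```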
You're pointing out what's already obvious. You still need some way to find what's "less important", which is what the topic is all about, such as by using grammar dependencies or keyword infrequency.
I'm not trying to be hostile, I just don't understand what you mean.
> You still need some way to find what's "less important", which is what the topic is all about, such as by using grammar dependencies or keyword infrequency.
I have some experience in this area[1]. I found keyword frequency worked quite well.
Bad programming advice. jQuery uses best practices for many utility functions. Developers should not have to reinvent the wheel or code every basic call from scratch in every project. Don't act like using pre-built common frameworks is somehow a bad idea for ideal programming. They are reliable and fast, and it's one less thing to worry about going wrong.
This isn't good advice. jQuery will fetch those elements just as fast. You're really splitting hairs just for the sake of criticizing. http://www.stoimen.com/blog/2010/06/19/speed-up-the-jquery-c...
Most developers should not care about such minor differences; they should write code they can read and maintain. $("") stands out as a DOM lookup, and who wants to add an extra line for variables? It's unneeded and tedious. Developers should not have to worry, "oops, did I declare and cache everything beforehand", or act as if it's a bad coding practice that 'takes points' off the final code.
Have you even read the article you linked to? Maybe you should read the comments on the article as well...
I've experienced this in many web apps, which benefited greatly from these changes. The same applies to managed code (C#): just being mindful about how and what you write makes everything faster.
Install useful .bashrc shortcuts:
u: check updates
l: detailed file list
..: go to parent dir
i [appname]: install package
x [file]: uncompress file
own [dir]: get access to folder
p [procname]: find process by name
f [string]: find string in this folder's files
gg: git commit and push