Hacker News | jonathanaird's comments

The canonical way of doing things in Flutter is composition NOT inheritance. This is the whole point of Widgets. Inheritance is used in ways that generally make sense and don’t result in huge complex inheritance trees.


Dart has top-level, first-class functions that don't require wrapping everything in a class. It then proceeds to waste this by wrapping everything up into classes and hoping nobody will use inheritance.


Widgets are essentially data classes, simple wrappers for configuration information. They need to be classes because of the way the internals of the framework work. And no one is hoping you don’t use inheritance; the recommended style is very clear. You’re free to do things however you want, but you have to take responsibility for doing things in a weird way.
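To make the composition-over-inheritance point concrete, here is a minimal sketch (in Python rather than Dart, with invented names) of widgets as plain configuration wrappers composed into a tree instead of a subclass hierarchy:

```python
# Sketch of Flutter-style composition: a widget *has* child widgets,
# it does not extend them.
class Text:
    def __init__(self, content):
        self.content = content

    def render(self):
        return self.content

class Padded:
    def __init__(self, child, pad=1):
        self.child, self.pad = child, pad

    def render(self):
        return " " * self.pad + self.child.render() + " " * self.pad

class Button:
    # Composition: Button wraps a Padded(Text(...)) tree; it is not a
    # Text subclass, so no inheritance chain builds up.
    def __init__(self, label):
        self.child = Padded(Text(label))

    def render(self):
        return "[" + self.child.render() + "]"

print(Button("OK").render())  # [ OK ]
```

Each class is just configuration plus a small render step; new behaviour comes from wrapping, not from overriding.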


I think the conversation around OO suffers from the same problem that any major engineering trend suffers from: namely that eventually, the concept gets conflated with the way that enterprise software and overpaid consultants completely fudge the implementation of the concepts. Consultants get paid big bucks trying to convince your company that you need to be doing microservices or OO or nosql or whatever the latest fad is and that you need them to help you implement it. It’s not based on any real technical need. It’s just institutional FOMO.

OO has its place in the space of possible patterns to choose from, based on the kind of solution you need. I think these overarching claims of superiority are missing the point completely. I personally like a combination of functional and OO concepts that can work synergistically together and play the kind of roles that they excel in individually.


Off topic I know, but it is not necessarily enterprise consultants who push for buzzword-compliant fads. Individual developers generally like to work on exciting new stuff and might also consider what looks good on the CV. Developer-driven startups certainly seem to be just as fad-driven as enterprises, which are generally more conservative.

That said, it is an important observation that widely used technologies are judged on the reality of their use, while less popular technologies are judged on their potential. For example Java is judged on the quality of the code people see in the real world, developed and maintained over long time by developers of varying skill, while Haskell is judged on examples written by experts to showcase the benefits of the language.


The hardest part, I think, is knowing that you should ignore all of the noise around different practices, paradigms, and frameworks and focus on fundamentals. The best way is to look through and try to understand some very well written open source projects.

I feel like I stumbled into doing this, but many probably just bumble along building web apps, taking much longer than necessary to understand the fundamentals. You have to expose yourself to high-level professional code and absorb as much as you can from it, otherwise you’re in danger of plateauing.


I'm inclined to agree - there's a lot of cargo-cult practices in software development and it's hard to know that it's often over-generalised until you deeply understand the core of programming, which is essentially just breaking down big problems into small forward steps.


I agree. I came into programming via the technical side, and there middle-out development was how I normally worked.

By that I mean: work out the core algorithmic parts first, then interface to the lower level (normally experimental rigs), and finally polish the interface.


Yes, it took me two or three years to have any sort of valid opinion on TDD or SPA vs plain html and where each would be appropriate and in what quantity. Watching talks helped me a lot. Programmers really like giving talks.


I hear this a lot, and I understand people make mistakes trying to rush, but this is not accurate for me. I can absolutely will myself to think faster. Maybe this comes from some experience with meditation, but when I have a difficult problem where the solution is not immediately obvious, I can sit in my chair or take a walk and focus very intensely on the problem. I get my whole mind to focus on this one thing, and I keep it there without getting distracted or daydreaming. I don’t try to have any kind of thought in particular; I just focus on the problem. After doing this for a certain amount of time, I come up with my conclusion.

I could do this over a longer period of time and create much less mental strain but when I’m in a time crunch, it’s very useful.

That said if the problem is actually beyond my reach and capabilities I won’t get much of an answer.


Perhaps you are not thinking faster; instead you're doing an excellent job of focusing all of your resources on one problem.

It is the sum of all your thought that is (apparently) fixed, so moving all that thought to one task will have that single task complete faster. It doesn't change how much "thought" you could do at once overall though.


Then this article becomes trivial. Imagine we discussed running: no one would argue that your running speed is fixed. Most people agree that one can vary the speed at which they run, even though it is perfectly well understood that there is a maximum and a minimum speed. And just like running, the maximum speed at which I can run is not a single well-defined value. It depends on whether I am running a marathon or a sprint.

Similarly with thinking, I can vary the speed at which I'm thinking depending on whether I am playing bullet chess, speed chess or standard chess. Certainly when playing bullet chess, just like running a sprint, I am operating at my peak speed, but that speed is not sustainable for long periods of time and it's inefficient in terms of energy use, so I get burned out if I have to engage it for a long time.

If I'm running a marathon or thinking about a problem that requires deep and intense focus, I stop operating at my peak speed and instead operate at a long term sustainable speed.

This is the kind of variation that this article misses when it says that our thinking is fixed. It's anything but fixed, it's a fairly complex and poorly understood trade-off.


Exactly. I did indeed see the article as trivial. From the article itself:

"If you’re a knowledge worker, you can’t pick up the pace of mental discriminations just because you’re under pressure. Chances are good that you’re already going as fast as you can. Because guess what? You can’t voluntarily slow down your thinking, either."

---

Now we both know that what you say must be true. But it goes beyond anecdotal evidence. We can scan the brain with an fMRI and we see different parts of the brain light up as we think. We're not seeing thinking here - we're seeing the brain cells consume energy.

Wikipedia - "Functional magnetic resonance imaging or functional MRI (fMRI) measures brain activity by detecting changes associated with blood flow.[1][2] This technique relies on the fact that cerebral blood flow and neuronal activation are coupled. When an area of the brain is in use, blood flow to that region also increases.[3]"

If the brain was always at 100% capacity, your entire fMRI would always show complete engagement.

Like a muscle, you can overexert your brain, so like you say sustainable pace is important.

---

However, if you average over days and weeks, you could say that your level of thought IS fixed at this long term sustainable level. I think that the article is considering a project that is "man-months" long and not just thinking over the course of a minute or a day.

-- Wikipedia link for fMRI: https://en.wikipedia.org/wiki/Functional_magnetic_resonance_...


It’s happening in the tools for thought community on Twitter. It’s more about the software layer and innovations in human computer interaction design. A lot of the ideas from the 60’s and 70’s are having a resurgence like Memex, backlinks, moldable dev environments etc.


Yes, it’s just old companies like IBM trying to ride a buzz wave. The whole point of a blockchain is that it’s public and verifiable.


IMHO they've correctly assessed the situation that the buzz is the most important feature.


Touché. It’s so hard to get past the buzz in crypto because people are making way too much money on the price of tokens. It’s so distracting. I think we’ll get there eventually.


I really want something that’s strongly typed but doesn’t require code generation like protobufs do. Yaml doesn’t do it for me. The closest I can get is putting the type guarantees in the database and using GraphQL.


I made https://concise-encoding.org/ to deal with this:

- strongly typed

- ad-hoc or schema (your choice)

- no code generation step

- edit in text, send in binary


Why would someone choose this rather than msgpack or CBOR or protobuf or any of the other existing things in that space?


Because there isn't anything else in this space that:

- supports ad-hoc data structures or schemas per your preference

- supports all common types natively (doesn't require special string encoding like base64 or such nonsense)

- supports comments, metadata, references (for recursive/cyclical data), custom types

- doesn't require an extra compilation step or special definition files

- has parallel binary and textual forms, so that you're not wasting CPU and bandwidth serializing/deserializing text. Everything stays in binary except in the rare cases where humans want to look at or edit it.


That looks pretty good, actually.


I use JSON Schema to validate JSON documents.

https://json-schema.org/
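As a rough illustration of what that validation catches, here is a hand-rolled sketch in Python covering only the `type`, `properties`, and `required` keywords; in practice you would use a real implementation (e.g. the `jsonschema` library) rather than this toy:

```python
# Toy subset of JSON Schema: checks "type", recurses into "properties",
# and enforces "required". Not a full implementation.
TYPE_MAP = {"object": dict, "array": list, "string": str,
            "integer": int, "number": (int, float), "boolean": bool}

def validate(instance, schema):
    """Return True if instance satisfies this tiny schema subset."""
    expected = schema.get("type")
    if expected and not isinstance(instance, TYPE_MAP[expected]):
        return False
    for key, subschema in schema.get("properties", {}).items():
        if key in instance and not validate(instance[key], subschema):
            return False
    return all(key in instance for key in schema.get("required", []))

schema = {"type": "object",
          "properties": {"tags": {"type": "array"}},
          "required": ["tags"]}

print(validate({"tags": ["a", "b"]}, schema))  # True
print(validate({"tags": "oops"}, schema))      # False: String, not Array
```

The second call shows the failure mode discussed elsewhere in the thread: a String arriving where an Array was expected gets rejected at the edge instead of blowing up later.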


Imho, statically typed languages are the ones that benefit most from schema. The current schema version is 12 but the implementations for Go, Rust, C++ and Java are all listed as draft 7. None of them support codegen either, just validation, so not exactly compelling.


> The current schema version is 12 but the implementations for Go, Rust, C++ and Java are all listed as draft 7

It's actually 2020-12, which is two versions after Draft 7 (they shifted from Draft n to YYYY-MM after Draft 7, and since then have had 2019-09 and 2020-12.)

And that's true of most languages, though there is some 2019-09 support. (It really doesn't help that there is also OpenAPI which baked in a variant—“extended subset”—of Draft 5 JSON Schema.)


OpenAPI 3.1, which was released recently, uses JSON Schema 2020-12 as the primary schema format. As a result, we can expect further consolidation of tooling, etc., in the community.


I benefit greatly from schema validation in Ruby, ensuring that ingress-processing code does not receive e.g. a String or Hash instead of an Array. That would have things blow up well past the ingress edge, when a call to an Array method fails, or worse, silently produce broken behaviour that may or may not blow up even farther down the road, because both String and Hash respond to e.g. #[](Integer).


Yeah but Ruby is a dynamically typed language. There's not much benefit to codegen since nothing is checked at compile time anyway.


I found code generation to be useful in Ruby with protobuf. This:

https://github.com/lloeki/ruby-skyjam/blob/master/defs/skyja...

gives that:

https://github.com/lloeki/ruby-skyjam/blob/master/lib/skyjam...

I would certainly enjoy having a DSL to write descriptive code to validate using JSON schema, but it would be even better if the Ruby definitions could be generated and persisted in Ruby files using that DSL.

Also, storing things in basic hash/array types works, but having dedicated types is useful, so that one can't shove one kind of hash in place of another, unrelated kind of hash.

As for types themselves in general, there's RBS and Sorbet. One could have type definition generation as well for even deeper static and runtime checks.
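The "dedicated types instead of interchangeable hashes" point can be sketched like this (in Python with dataclasses rather than Ruby, names invented for illustration):

```python
from dataclasses import dataclass

# Two records with identical shape ({"id": ...}) but distinct types.
@dataclass(frozen=True)
class UserRef:
    id: int

@dataclass(frozen=True)
class OrderRef:
    id: int

def cancel_order(ref: OrderRef) -> str:
    # A static checker (mypy here; Sorbet/RBS on the Ruby side) rejects
    # cancel_order(UserRef(1)) even though both carry just an id, where
    # plain {"id": 1} hashes would be silently interchangeable.
    return f"cancelled order {ref.id}"

print(cancel_order(OrderRef(42)))  # cancelled order 42
```

The runtime data is the same either way; the dedicated type is what lets tooling distinguish one kind of record from another.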


Do you really want generated code to manipulate JSON? I'm not sure there is a demand for that.


Manipulating anything dynamic in a statically typed language is generally tedious and not type safe, so yes.


In any language I eventually need to validate. Whether I do it early, using a validator, or later while processing the data, is a choice that depends on the problem.

The existence of a schema definition file, and checking responses against it, signals that I can trust an API vendor to be at least aware of the requirements for clients. (Whether they randomly change the schema definition or ignore it is a second question, but at least somebody once thought about formalising it, and it's not a complete ad-hoc dump of today's internal data representation.)


And there's Amazon Ion too - https://amzn.github.io/ion-docs/

"Amazon Ion is a richly-typed, self-describing, hierarchical data serialization format offering interchangeable binary and text representations. The text format (a superset of JSON) is easy to read and author, supporting rapid prototyping. The binary representation is efficient to store, transmit, and skip-scan parse. The rich type system provides unambiguous semantics for long-term preservation of data which can survive multiple generations of software evolution."


Strongly typed + no code generation is obviously doable in any dynamically typed language.

Apache Avro has support for parsing and utilizing schemas at runtime, even in C++.

For Apache Thrift you have things like thriftpy: https://thriftpy.readthedocs.io/en/latest/

I'm not aware of a type-safe mechanism for Flatbuffers or Protocol Buffers.


There are protobuf libraries without code generation, for instance: https://github.com/cloudwu/pbc (you lose the connection to the language's type system though).


Exactly. This is not fundamentally different from a traditional time-stamping service. If the time-stamping service is compromised, all of the previous timestamps are invalid.

The only alternative to a blockchain is to publish the latest hash widely in a verifiable place such as in a well-known newspaper.
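The mechanics behind that claim can be sketched with a simple hash chain (Python, illustrative names): each timestamp commits to the previous one, so verifiers can replay the chain, but the service itself can rewrite everything unless the latest hash is published out of its control.

```python
import hashlib

def stamp(prev_hash: str, document: bytes) -> str:
    """Link a document into the chain by hashing it together with the
    previous entry's hash."""
    return hashlib.sha256(prev_hash.encode() + document).hexdigest()

h0 = stamp("genesis", b"doc one")
h1 = stamp(h0, b"doc two")

# Tampering with "doc one" changes h0, which changes h1 and every later
# hash -- detectable by anyone replaying the chain, UNLESS the service
# re-signs the whole chain. Hence the need to anchor the latest hash
# somewhere widely visible (a newspaper, or a public blockchain).
assert stamp(stamp("genesis", b"doc one"), b"doc two") == h1
```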


It has no relation to transaction volume. It is related to the price of BTC and the block reward every 10 minutes. The higher the price, the more miners compete to find a block. Transaction volume can never increase because the block size is capped at 1 MB.
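The arithmetic behind this is simple enough to sketch (Python, with the post-2020 block reward of 6.25 BTC; the price figures are hypothetical):

```python
# Miner revenue scales with price, not with transaction volume:
# the protocol pays a fixed reward roughly every 10 minutes.
block_reward_btc = 6.25          # reward per block after the 2020 halving
blocks_per_day = 24 * 60 // 10   # one block roughly every 10 minutes -> 144

def daily_network_revenue(btc_price_usd: float) -> float:
    """Total USD paid to miners per day via block rewards (fees ignored)."""
    return block_reward_btc * blocks_per_day * btc_price_usd

# Doubling the price doubles the revenue available to spend on
# electricity, so more hashpower becomes profitable -- regardless of
# how many transactions the blocks contain.
assert daily_network_revenue(40_000) == 2 * daily_network_revenue(20_000)
```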


> The higher the price, the more miners compete to find a block.

But only if it's profitable for the miners coming in. Ultimately, mining pressures miners into finding cheap sources of electricity. Yes, many just take advantage of subsidies (governmental or environmental) to lower their costs, but as mining difficulty and competition increase, inefficient miners are kicked out.


Good to know, appreciate the explanation. So these estimates that it uses about the same energy as Argentina are probably based on a rather old price and we could get to many multiples of that if the price keeps increasing?


It also depends on the available market for asics. In bear markets, asics are very affordable as miners go out of business but the price can go up quite a lot in bull markets which affects the profitability of buying new machines.


I wonder what the observed relationship is if we were to plot energy usage vs price over the last 3 years


In case you’re interested, the two Bitcoin forks BCH and BSV have unlimited block sizes, so energy per validated transaction can scale very well. Eventually all forks will have to rely on transaction fees, as the block reward has a halving schedule. If something like BSV succeeds, the hash rate, and thus energy consumption, will be a complicated combination of transaction volume and variable transaction fees. It will essentially be a race to the bottom to see who can provide the cheapest transactions while still being profitable. I think one should expect much higher energy efficiency than something like Visa, just from pure scale and the intense competition.


They’re putting a lot of resources behind it, and it actually is thriving. They have a real financial interest, as there’s pretty thorough Firebase support. It’s also reviving their Dart language.

As someone that’s using it, it is improving very quickly. Web support is getting quite good.

