I’ve tried the mechanical (red switch) version, but actually prefer their membrane version because it’s quieter and I like the tactile feedback better (I haven’t tried the brown/blue switch models though).
There are definitely differences in the languages, but I'm not sure that Go really had anything to figure out or invent. I don't buy that argument.
In C# most objects are 'class' objects, which are implicitly passed by pointer, although some are 'struct' objects passed by value. In Go rather than making the decision once at the type level, the decision to pass by value or pointer is made explicitly every time an object is used.
> I'm not sure that Go really had anything to figure out or invent
If you look at the discussions of generics for Go (which started as early as 2009 [1]) [2] [3] [4] [5], generics seems to have been as big a project as, or a bigger one than, the other improvements made to the language over the last decade. My impression is that there is a broad spectrum of potential trade-offs across compile speed, execution speed, convenience, etc.
The totality of the prior art here is above my pay grade, but I can quote the Haskell people they roped in [4]:
> We believe we are the first to formalise computation of instance sets and determination of whether they are finite. The bookkeeping required to formalise monomorphisation of instances and methods is not trivial ... While the method for monomorphisation described here is specialised to Go, we expect it to be of wider interest, since similar issues arise for other languages and compilers such as C++, .Net, MLton, or Rust
I think C# occupies a fairly nice design spot also (and takes advantage of its class/struct system to get fast compiles but also performance when needed). But it's not like the C# design couldn't be improved on. As far as I know you still can't write math code that works across 32 and 64 bit floats. And array covariance is implemented in a way that isn't type-safe and relies on run-time checks and exceptions. [6]
[1] https://research.swtch.com/generic
[2] https://docs.google.com/document/d/1vrAy9gMpMoS3uaVphB32uVXX...
[3] https://go.googlesource.com/proposal/+/master/design/go2draf...
[4] https://arxiv.org/pdf/2005.11710.pdf
[5] https://go.googlesource.com/proposal/+/refs/heads/master/des...
[6] https://codeblog.jonskeet.uk/2013/06/22/array-covariance-not...
There are also other differences that would prevent using C#'s system as-is: Go doesn't have the reference/value type dichotomy or inheritance. I think they also wanted to be able to abstract across floats/doubles etc. (unless C# added that recently).
Anyway, I like both languages and I'm not trying to claim one is better than the other, just trying to explain the design space Go seems to occupy.
A few years later we got go generate, which I argued was a glorified C macro system to get around the lack of generics. Oh, and everybody forgot about the single-pass thing, I guess.
Now in this thread there are a bunch of people commenting that they can't wait to combine all their copy-pasted code using generics, which are implemented via a tree traversal that actually kind of complicates the idea of what a pass even is (I think memoization could make this linear, but at the cost of memory?).
This is a language that seemingly insists on breaking the wheel. Notice I didn't say reinventing, because in over a decade they haven't actually gotten a working version of the wheel out yet. I mean seriously, Go was released in 2009: if I'm not mistaken, C# had a pretty good generic system already, and there were a bunch of other workable options to copy from less-mainstream languages. Not that copying has helped: they copied coroutines from other languages, but failed to present a coherent memory sharing model (just copy Erlang FFS), which severely limits the usefulness of coroutines.
Every few years someone gets me to try Go out again and I discover, yet again, that it's still struggling with problems that were solved before it existed. It's a shame that this language has stolen so much mind share from more deserving languages.
The first thing to understand is that Go occupies a space lower level than Java but higher level than C. It includes things like explicit pointers and more control of memory layout.
Many of the design decisions make no sense without that context. Go can’t simply copy generics from Java or memory sharing from Erlang, since the design decisions made by those languages would introduce too much overhead in a language intended to be lower level. In a lower level language more aspects of how a program executes are specified explicitly, rather than accepting more overhead or guessing what a complex optimizer will do.
On generics, I think the team’s position has mostly been that it’s a big project and there’s been other priorities like rewriting the compiler in Go, improving the GC, etc. And that in the spectrum of trade-offs for generics systems they didn’t want to go all the way to Java or C++.
I feel like a lot of HN commenters compare Go to higher level languages and find it lacking, which may be a fair assessment for certain problems but isn’t really understanding the niche it tries to occupy.
The end goal is probably to allow neater refactoring opportunities and raise the level of efficiency by standardising on an easier-to-parse-into-AST-like-structure syntax, enabling generic tools to be built that will deal with such structures directly.
While I sense that there is something smart that could be done in that area, there are already plenty of de facto standardized languages for structured data (most notably XML), and already a bunch of real world programming languages that work.
So I think it'd be easier to grasp their goals if they started off of an existing language ecosystem (that language spec is the definition for their AST parser), and attempted to build the tools they want: this would have more quickly formalized what's missing in the source code format for it to remain human editable, but at the same time easier and richer to parse.
Again, it's all a bit vague since you can build a development environment based on text files where you are not seeing all of that generated text at all times (if ever).
As the goal seems to be to find improvements to the process of program development while being unconstrained by complexity of (re)parsing text files repeatedly, I would have started with trying to add those improvements and ignoring the slowness/cost of parsing (or simply used an existing language that's cheap to parse, like Lisp).
There's pretty far you can go with text-based formats since you aren't obligated to display the file exactly as it's stored on disk (and many current IDEs do minor code folding things). For example embedded images can be displayed inline in the source code, but be stored as some loadImage() function on disk. You could even have some comments with base64 binary data if you really needed - at that point binary vs text is mostly a performance issue, but parsing is usually pretty fast so being text-based might still have an advantage because of better interoperability with source control etc.
For example:
> Also, keep in mind that LVT would see the elimination of the portion of property tax that falls on buildings. I just checked my own property tax records (I live in the suburbs of a medium-sized town far from any major urban cores). If the assessed land share more than doubled to 40%, under a 100% LVT regime I'd actually save $545.05 on my property taxes every year–and that's without a Citizen's Dividend.
Picking the author’s house, applying arbitrary land value tax numbers to it, and declaring a net tax savings for the author as a victory for LVT is not really a useful data point. It seems that actually analyzing the winners and losers of a LVT would require a more thorough simulation, but that’s outside the scope of the article.
The article also had an unsettlingly high number of “If we assume that X is true, then Y is a reasonable conclusion” while completely sidestepping an examination of whether or not “X” was true or even reasonable.
The second part looks to be more interesting: I’d like to see some arguments for why a LVT wouldn’t just be passed along to the renters like every other expense is currently.
I think the general idea goes something like: a new apartment can collect $10k/mo rent, and so after construction expenses a developer can pay say $5k/mo for the land. These numbers don't change with a LVT or not.
Without the LVT, the $5k/mo that they can afford translates to say a $500k mortgage, which roughly determines the land price (as multiple developers bid for the same land).
With a LVT, there's say a $2k/mo tax, so they only have $3k/mo to pay the mortgage, which translates to a $300k land price using the same logic.
So by working backwards in this simple thought experiment the LVT is just reducing the value of the land without changing much else about the housing market. Intuitively it also sounds reasonable that taxing land values might reduce land values as a result.
I'd be interested to see the upcoming detailed analysis though.
This is not recursive, really. It is not written to imply that experience is universal; in fact, quite the opposite. Most of us generally posit that rocks do not have experiences. Most of us posit that humans do, and likewise (famously) so do bats. But what are the experiences that a bat has? It's not unreasonable to assume that they are quite different from the ones a human has.
Ergo, we cannot really compare the experiences of a bat with those of a human. But we can note that they both have experiences, and that is the magical part. I was just being flowery by using "experience" twice in a row, but in another sense I was trying to differentiate between the questions of "why are our experiences as they are?" and "why do we have any experience at all?" This is something that I think Dennett in particular fails spectacularly at (and he agreed with me, once :)
As for the "wetness is a qualia" point, yes, that is a good summary of my point, but I am sorry, I do not understand the notation you are using after that.
Certainly physically ingesting a small amount of certain psychedelic chemicals can have profound effects on subjective experience, so it can't be too disconnected from physical processes.
The whole issue of qualia / experiences seems like it is an extension of the dualist/materialist perspectives rather than something that sheds light on it one way or another - dualists will view qualia as something non-material and so something that materialism can't explain, and materialists will view qualia as something material along with consciousness itself.
The overwhelming majority of human beings both now and historically are dualists, so this isn't surprising to me at all.
Materialists always lose me when they start trying to explain things like how exactly inanimate matter started thinking. I've never seen any recourse other than handwaving of the highest order. Usually it's something along the lines of emergent behavior, but no mechanism is ever given for how or why said emergence occurs.
At which level do the existing rules of the material world start to seem less plausible to you than a second unseen world with different rules, that interacts with this one under specific circumstances (if that's what you mean by dualism)?
As we can presently see with computers and machine learning, pretty complex behavior is possible through systems that operate via known physical processes.
If you work on Android games, for example, would you get any kind of control over memory layout, irrespective of the stack you will be using?
I've only used it in passing, but every time I see examples they're verbose and look clunky.
For example, the chainable methods are nice, but compared to JS it looks more like ES5 than modern code.
Do you find Go preferable to other languages for solo projects?
To me it feels like C with some extra features, kind of like Objective-C. If you need to do some C-like things but can afford the small latency etc overhead introduced by Go, what Go provides (GC, large standard library, green threads, package management, etc) feels like absolute luxury compared to using C. C# is another GC language with the ability to go lower level, but the syntax can also be pretty verbose.