Readit News
norir commented on I'm too dumb for Zig's new IO interface   openmymind.net/Im-Too-Dum... · Posted by u/begoon
Galanwe · 6 days ago
The Zig language is really good, but the standard library is a big work in progress: constantly shifting, missing a lot of bits, overly abstracted in some places and too low-level in others.

I would say just stay away from the standard library for now and use your OS API, unless you're willing to be a beta tester.

norir · 6 days ago
From my perspective, Zig is trying to do far too many things to ever reach a baseline of goodness that I consider acceptable. They are in my view quite disrespectful to their users, whom they force to endure churn at the whims of their dictator. Now that enough people have bought in and accepted that a broken tool is ok so long as it hasn't been blessed with a 1.0, all of its clear flaws can be overlooked in the hope of the coming utopia (spoiler alert: that day will never arrive).

Personally, I think it is wrong to inflict your experiments on other people and then, when you pull the rug out from underneath them, say: well, we told you it was unstable, you shouldn't have depended on us in the first place.

I don't even understand what zig is supposed to be. Matklad seems to think it is a machine level language: https://lobste.rs/s/ntruuu/lobsters_interview_with_matklad. This contrasts with the official language landing page: Zig is a general-purpose programming language and toolchain for maintaining robust, optimal and reusable software. These two definitions are mutually incompatible. Moreover, zig is clearly not a general purpose language because there are plenty of programming problems where manual memory management is neither needed nor desirable.

All of this confusion is manifest in zig's instability and bloated standard library. Indeed a huge standard library is incompatible with the claims of simplicity and generality they frequently make. Async is not a feature that can be implemented universally without adding overhead and indirection because of the fundamental differences in capabilities exposed by the various platforms. Again, they are promising a silver bullet even though their prior attempt, in which they publicly proclaimed function coloring to be solved, has been abandoned. Why would we trust them to get it right a second time?

There are a very small number of assembly primitives that every platform provides that are necessary to implement a compiler. Load/store/mov/inc/jeq/jump and perhaps a few others. Luajit implements its interpreter in pure assembly, and I am not aware of an important platform that zig supports that luajit does not. I do the vast majority of my programming in lua and _never_ run into bugs in the interpreter. I truly cannot think of a single problem that I think zig would solve better than luajit. Even if such a problem did exist, I could embed the zig code in my lua file, use lua to drive the zig compiler, and then call into the specialized code using the lua ffi. But the vast majority of code does not need to be optimized to the level of machine code where it is worth putting up with all of the other headaches that adopting zig will create.

The hype around zig is truly reaching llm levels of disconnection from reality. Again, to believe in zig, one has to believe it will magically develop capacities that it does not presently have and for which there is no concrete plan to execute beyond vague assurances of "just wait."

norir commented on Fennel libraries as single files (2023)   andreyor.st/posts/2023-08... · Posted by u/todsacerdoti
fredrikholm · 16 days ago
Single file dependencies are amazing. I've never understood why it's so unpopular as a distribution model.

They can be a bit clunky in some languages (eg. C), but even then it's nothing compared to the dystopian level of nightmare fuel that is a lot of dependency systems (eg. Maven, Gradle, PIP). Free vendoring is a nice plus as well.

Lua does it right; happy to see some Fennel follow in that tradition.

norir · 16 days ago
I suspect much of this is a historical result of c's compilation model. Since c compilers define a compilation unit as a file, there is no good way to do incremental compilation in c without splitting the project into multiple files. In an era of much slower computers, incremental compilation was necessary, so this was an understandable choice.

For me, today this split is almost always a mistake. Having everything in one file is superior the vast majority of the time. Search is easy and it is completely clear where everything is in the project. Most projects written by a single individual will be fewer than 10K lines, which many compilers can clean compile in less than one second. And I have reached the stage of my programming journey where I would rather not ever work on sprawling 100K+ line projects written by dozens or hundreds of authors.

If the single file gets too unwieldy, splitting it in my opinion usually makes the problem worse. It is like cleaning up by sweeping everything under the rug. The fact that the file got so unwieldy was a symptom that the design got confused and the single file no longer is coherent. Splitting it rarely makes it more coherent.

To make things more concrete and simple: for me, the following structure is strictly better (in lua, like the op)

    foo.lua:
      local bar = function() print "bar" end
      return function()
        print "foo"
        bar()
      end

compared to

    bar.lua:
      return function() print "bar" end
    foo.lua:
      local bar = require "bar"
      return function()
        print "foo"
        bar()
      end
In the latter, I now both have to keep track of what and where bar is and switch files to see its contents, or rely on fancy editor tricks. With the former, I can use vim and if I want to remind myself of the definition of bar, it is as easy as `?bar =`. I end up with the same code either way, but it is much easier to view in a single file and I can take advantage of lua's scoping rules to keep module details local to the module, even from other modules defined in the same file.

For me, this makes it much easier to focus and I am liberated from the problem of where submodules are located in the project. I can also recursively keep subdividing the problem into smaller and smaller subproblems that do not escape their context so that even though the file might grow large, the actual functions tend to be reasonably small.

That this is also the easiest way to distribute code is a nice bonus.

norir commented on Let's get real about the one-person billion dollar company   marcrand.com/p/lets-get-r... · Posted by u/bizgrayson
AndreLock · 17 days ago
Using valuation as a measuring stick is a bit silly given how much valuations can be inflated based on a pre-revenue idea and the pedigree of the company founder(s). When Mira Murati and Ilya Sutskever left OpenAI and walked into a room of investors saying, "I'm starting my own AI company", they arguably both created one-person, billion dollar companies right then and there.
norir · 17 days ago
You can argue it, but I disagree. Whatever their skills and experience, they did nothing alone.
norir commented on Zig's Lovely Syntax   matklad.github.io/2025/08... · Posted by u/Bogdanp
norir · 19 days ago
From my vantage point, Zig's syntax perfectly matches the language: it is ad-hoc, whimsical and serendipitous. It is lacking in grace, elegance and compassion.
norir commented on Zig's Lovely Syntax   matklad.github.io/2025/08... · Posted by u/Bogdanp
IshKebab · 19 days ago
Maybe if you've never tried formatting a traditional multiline string (e.g. in Python, C++ or Rust) before.

If it isn't obvious, the problem is that you can't indent them properly because the indentation becomes part of the string itself.

Some languages have magical "removed the indent" modes for strings (e.g. YAML) but they generally suck and just add confusion. This syntax is quite clear (at least with respect to indentation; not sure about the trailing newline - where does the string end exactly?).

norir · 19 days ago
Significant whitespace is not difficult to add to a language and, for me, is vastly superior to what zig does, both for strings and for the unnecessary semicolon that zig imposes by _not_ using significant whitespace.

I would so much rather read and write:

    let x = """
      a
      multiline string
      example
    """
than

    let x =
      \\a
      \\multiline string
      \\example
    ;
In this particular example, zig doesn't look that bad, but for longer strings, I find adding the \\ prefix onerous, and it makes moving strings between contexts needlessly painful. Yes, I can automatically add them with vim commands, but I would just rather not have them at all. The trailing """ is also unnecessary in this case, but it is nice to have clear bookends. Zig by contrast lacks an opening bracket but requires a closing bracket, and the bracket it uses, `;`, is ambiguous in the language. If all I can see is the last line, I cannot tell that a string precedes it, whereas in my example, you can.

Here is a simple way to implement the former case: require tabs for indentation. Parse with recursive descent where the signature is

    (source: string, index: number, indent: number, env: comp_env) => ast
Multiline string parsing becomes a matter of bumping the indent parameter. Whenever the parser encounters a newline character, it checks the indentation and either skips it or, if the indentation is less than the current level, requires a closing """ on the next line at one level less indentation.

This can be implemented in under 200 lines of pure lua with no standard library functions except string.byte and string.sub.
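As a toy sketch of that scheme (my own illustration, not the commenter's code; `parse_multiline` and `count_tabs` are made-up names, tab-only indentation is assumed as described, and `table.concat` is used for brevity):

```lua
-- Toy sketch: parse the body of a """ string whose lines are indented
-- `indent` tabs, using string.byte and string.sub for the scanning.
local byte, sub = string.byte, string.sub
local NL, TAB = byte("\n"), byte("\t")

local function count_tabs(source, i)
  local n = 0
  while byte(source, i + n) == TAB do n = n + 1 end
  return n
end

-- Returns the string value and the index just past the closing """.
local function parse_multiline(source, index, indent)
  local lines = {}
  while true do
    local tabs = count_tabs(source, index)
    if tabs < indent then
      -- Dedent: require the closing """ at the reduced indentation.
      assert(sub(source, index + tabs, index + tabs + 2) == '"""',
             'expected closing """')
      return table.concat(lines, "\n"), index + tabs + 3
    end
    index = index + indent -- skip the required indentation
    local j = index
    while byte(source, j) and byte(source, j) ~= NL do j = j + 1 end
    lines[#lines + 1] = sub(source, index, j - 1)
    index = j + 1 -- step past the newline
  end
end
```

If the parser's other entry points share the `(source, index, indent)` shape, bumping `indent` before the recursive call is all the extra machinery multiline strings need.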

It is common to hear complaints about languages that have syntactically significant whitespace. I think a lot of the complaints are fair when the language does not have strict formatting rules: python and scala come to mind as examples that do badly with this. With scala, practically everyone ends up using scalafmt which slows down their build considerably because the language is way too permissive in what it allows. Yaml is another great example of significant whitespace done poorly because it is too permissive. When done strictly, I find that a language with significant whitespace will always be more compact and thus, in my opinion, more readable than one that does not use it.

I would never use zig directly because I do not like its syntax, even if many people do. If I was mandated to use it, I would spend an afternoon writing a transpiler that would probably be 2-10x faster than the zig compiler for the same program, so the overhead of avoiding the decisions I disagree with is negligible.

Of course from this perspective, zig offers me no value. There is nothing I can do with zig that I can't do with c, so I'd prefer c as a target language. Most code does not need to be optimized, but for the small amount that does, transpiling to c gives me access to almost everything I need in llvm. If there is something I can't get out of llvm via c (which seems highly unlikely), I can transpile to llvm instead.

norir commented on Show HN: Dlg – Zero-cost printf-style debugging for Go   github.com/vvvvv/dlg... · Posted by u/0xFEE1DEAD
dspillett · a month ago
> it doesn't appear to truly be zero cost if you log variables that can't be eliminated.

I'd say it is fair to call it zero cost, if the costs you are seeing are due to the way you are using it. If the values being logged are values you are already computing and storing, constants, or some mix of the two (concatenated via its printf function), by my understanding (caveat: I've never actually used Go myself) all the logging code should be stripped out as dead code in the link stage.

Obviously if you are performing extra computations with intermediate values to produce the log messages, your code there might produce something that is not reliably detected as eliminable dead code so there would be cost there.

> The only way (I believe) to implement zero cost is some type of macro system which go does not support.

That looks to be effectively what it is doing, just at the link stage instead of in a preprocessor. Where C & friends would drop the code inside #ifdef when the relevant value is not set (so it won't be compiled at all) this should throw it out later in the process if DLG_STACKTRACE isn't set at compile time.

So there will always be a compile-time cost (unlike with a preprocessor macro, something will be produced then thrown away and the analysis of what to throw away will take longer with more code present), but not a run-time cost if the logging is not enabled, assuming you don't have any logic in the trace lines beside the calls to this module.

norir · a month ago
The problem is that calling anything zero cost is inherently wrong. Nothing in life has no cost. I know I am being pedantic, but I think a more accurate description is "zero production runtime cost," which is also how I interpret rust's zero cost abstractions. In that case too, I find the terminology grating because there are very obviously costs to using rust and its abstractions.

One obvious cost to code that is removed in production is that there is now a divergence between what your source code says it does and what it actually does. I now need to keep in mind the context the program is running in. There is a cost to that. It might be worth it, but it isn't zero.

norir commented on Futurehome smart hub owners must pay new $117 subscription or lose access   arstechnica.com/gadgets/2... · Posted by u/duxup
norir · a month ago
It is hard to overstate how badly they are doing things if it costs more than a dollar or so per year for managed configuration. I previously worked for a cloud-managed device company, and it was obscene how high the margin was on the mandatory software licenses we bundled with the hardware, and we were also collecting a huge amount of data, not just providing configuration.
norir commented on Pony: An actor-model, capabilities-secure, high-performance programming language   ponylang.io/... · Posted by u/RossBencina
swiftcoder · a month ago
> in a world where there are many languages that do the same thing it really boils down to using the one with the syntax that you like the most

Wat? If all languages were just syntax re-skinning, we really wouldn't need more than one compiler backend...

Generally the semantic differences are much more important. Rust isn't interesting for its syntax, it's interesting for its ownership rules and borrow checker. Erlang isn't interesting because of its syntax, it's interesting for its actor model concurrency. And so on...

norir · a month ago
I agree and disagree completely with this statement. Syntax is superficial. It is the first thing that people will notice about a language (unless you hide it from them). One quickly notices that if you don't like a language's syntax, you can always write a compiler that operates at a purely syntactic level to transform your desired syntax into the actual target language.

But just because syntax is superficial doesn't mean that it isn't important. If a language has such poor syntax that I feel the need to write my own compiler to work around its syntax, I have to seriously question the skills and/or motivations of the author. If I am capable of writing a compiler at the syntactic level, why not just go all in and write my own compiler that implements _my_ desired semantics? A language that I find subjectively distasteful at the syntactic level is nearly guaranteed to be filled with semantic and architectural decisions that I also dislike. Consider Rust: I do not think that its syntax and abysmal compilation times can be decoupled. I would rather write my own borrow checker than subject myself to writing rust. And the reason is not the syntax, which I do strongly dislike, but the semantic properties of the language, such as horrible compilation times and compiler bugs (if a language has more than 100 open issues on github, I consider it broken beyond repair).

norir commented on Why I write recursive descent parsers, despite their issues (2020)   utcc.utoronto.ca/~cks/spa... · Posted by u/blobcode
norir · a month ago
There is a straightforward technique for writing unambiguous recursive descent parsers. The high level algorithm is this: parsing always consumes one character, never backtracks and is locally unambiguous.

You then construct the parser by combining unambiguous parsers from the bottom up. The result ends up unambiguous by construction.

This high level algorithm is much easier to implement without a global lexer. Global lexing can be a source of inadvertent ambiguity. Strings make this obvious. If instead, you lex in a context specific way, it is usually easy to efficiently eliminate ambiguities.
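A minimal lua sketch of the technique (my own, with hypothetical names like `parse_atom`): each sub-parser consumes at least one character, never backtracks, and lexes in a context-specific way, so there is no global token stream to introduce ambiguity:

```lua
local byte, sub = string.byte, string.sub
local QUOTE = byte('"')

-- In string context, a quote ends the string; elsewhere it starts one.
local function parse_string(source, i)
  assert(byte(source, i) == QUOTE, "expected opening quote")
  local j = i + 1
  while byte(source, j) and byte(source, j) ~= QUOTE do j = j + 1 end
  assert(byte(source, j) == QUOTE, "unterminated string")
  return { kind = "string", value = sub(source, i + 1, j - 1) }, j + 1
end

local function parse_ident(source, i)
  local j = i
  while byte(source, j) and sub(source, j, j):match("[%w_]") do j = j + 1 end
  assert(j > i, "expected identifier")
  return { kind = "ident", name = sub(source, i, j - 1) }, j
end

-- Dispatch on one character of lookahead; each branch always consumes
-- at least that character, so the combined parser is locally unambiguous.
local function parse_atom(source, i)
  if byte(source, i) == QUOTE then
    return parse_string(source, i)
  end
  return parse_ident(source, i)
end
```

Because each branch owns its own lexing, string syntax cannot leak ambiguity into other contexts the way a global lexer can.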

norir commented on Why concatenative programming matters (2012)   evincarofautumn.blogspot.... · Posted by u/azhenley
mikewarot · a month ago
Long, long ago I wrote a Forth for OS/2 out of spite. I've had a long-standing interest in the language. It's been my opinion for a while that Forth is a less powerful language than LISP, even though they initially appear to just be opposites of each other in terms of argument handling.

The thing that makes Forth and other similar languages write-only is that you can easily lose track of the variables when you are forced to toss them all on a stack. When you add the ability to use local variables, you get some of that power back, but then you've added a huge impedance mismatch with the rest of the system.

As much as I hate all the () in Lisp, it's the source of much of its power.

norir · a month ago
This is mostly a problem of concatenative language design. There is nothing preventing function definitions from having type signatures that are statically checked. In my concatenative language compilers, I always provide a function that causes the compiler to halt and print out all of the types on the stack at the call site during compilation, as well as a runtime function that prints out the runtime values. I have found this to be quite effective for debugging. Because I don't support global variables, these functions give a complete view of ALL of the live values at any point in the program, which is quite hard to get in most languages.

Local variables are also not necessary to have reasonable ergonomics, even in mathy contexts. Instead, the language can define special operators that access fields on the stack by type name and move or copy them to the top of the stack. For some reason, in the math example, the author deliberately added an unnecessary parameter that had to be dropped and put the arguments in the wrong order. Cleaning up the signature, there are several reasonable ways to solve the problem in a concatenative language with the features I've described.

    type n number
    type y n
    type x n
    def op_math_example [(y x) -> (n)] \\x * \\y \\y * + \y abs -
where \FOO moves a value of type FOO to the top of the stack and \\FOO copies the value to the top, leaving the original where it was. Yes, the \\ is a little ugly, but these should generally be used sparingly.

Even if we restore the original signature and use the author's syntax, there is a much cleaner solution to the problem without the move/copy operators.

    def sq dup *
    def op_math_full drop swap sq over sq + swap abs -
This is extremely cryptic. I suspect most forth programmers would have added a stack signature comment, but it is even better if the compiler statically verifies the signature for us, which is how the language I am describing differs from vanilla forth (where stack comments also have the risk of being wrong in an evolving system). If we restore the type system, it becomes:

    type n number
    type x n
    type y n
    type z n
    def sq [(n) -> (n)] dup *
    def op_math_impl [(y x) -> (n)] sq over sq + swap abs -
    def op_math_full [(x y z) -> (n)] drop swap op_math_impl
Of course it is still not as obvious to the reader what op_math_full actually does as (y * y) + (x * x) - abs(y) is. But in practice it would have some name like projection_size, so once it has been written correctly, it doesn't really matter that it is a bit obscure to read at a glance. You also get really good at simulating the stack in your head, and the type signature makes this much easier. Still, when you are confused, you can always add stack debugging like so:

    def projection_size [(x y z) -> (n)] drop swap PRINT_STACK sq over sq + swap abs -
    1 2 3 projection_size
and the output would be something like:

    ({ y 2 } { x 1 })
Writing programs in this style is very nice so long as you have enough tooling to facilitate it.

I have only used concatenative languages that I have implemented myself, however. I have read quite a bit about forth, but for me, there are clear problems with vanilla forth that are solved for me with the meta operators like [, [[, \ and \\ as well as good debug tools like PRINT_STACK above. Forth was an incredible discovery for its time. We can do a lot better now.
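The statically checked stack signatures can be sketched in lua too (my own toy, assuming nothing about the actual compilers; every value here has the single type n, so a word's effect is just how many values it pops and what it pushes):

```lua
-- Toy static checker: simulate the type stack through a word's body
-- and verify it against the declared signature. All values are type "n".
local effects = {
  dup   = { pops = 1, pushes = { "n", "n" } },
  drop  = { pops = 1, pushes = {} },
  swap  = { pops = 2, pushes = { "n", "n" } },
  over  = { pops = 2, pushes = { "n", "n", "n" } },
  abs   = { pops = 1, pushes = { "n" } },
  ["*"] = { pops = 2, pushes = { "n" } },
  ["+"] = { pops = 2, pushes = { "n" } },
  ["-"] = { pops = 2, pushes = { "n" } },
}

-- check("dup *", {"n"}, {"n"}) errors out on underflow or arity mismatch.
local function check(body, inputs, outputs)
  local stack = {}
  for _, t in ipairs(inputs) do stack[#stack + 1] = t end
  for word in body:gmatch("%S+") do
    local eff = assert(effects[word], "unknown word: " .. word)
    assert(#stack >= eff.pops, "stack underflow at: " .. word)
    for _ = 1, eff.pops do table.remove(stack) end
    for _, t in ipairs(eff.pushes) do stack[#stack + 1] = t end
  end
  assert(#stack == #outputs, ("signature mismatch: %d values left, %d declared")
         :format(#stack, #outputs))
  return true
end

-- op_math_impl [(y x) -> (n)], with sq expanded to dup *:
check("dup * over dup * + swap abs -", { "n", "n" }, { "n" })
```

A PRINT_STACK word would be the same simulation halted mid-body with the current `stack` dumped, which is all the debugging story described above really requires.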
