I wish more programming languages provided interfaces for building libraries around the std library like what Go does. It makes using libraries a lot better because you aren't dependent on that library as long as it uses the std library interface.
This is huge for things like database drivers which might become outdated or not support certain features. In Go switching database drivers is as simple as importing the new library. You don't have to change your code.
In Rust, for example, you have to go out and pick a database driver, and no two libraries work the same. If you pick one Postgres library and it becomes outdated, you have to rewrite your code to support the next one you move to.
This is why I would never use Rust or Zig for things like HTTP servers.
> I wish more programming languages provided interfaces for building libraries around the std library like what Go does. It makes using libraries a lot better because you aren't dependent on that library as long as it uses the std library interface.
> This is huge for things like database drivers which might become outdated or not support certain features. In Go switching database drivers is as simple as importing the new library. You don't have to change your code.
The effect is that every database driver becomes outdated and doesn't support certain features, instead of just a few. Python people have a saying that the standard library is where modules go to die. Java's database drivers massively lag behind (e.g. there's still no good async support for most of them), because JDBC is a lowest common denominator that every driver has to be dumbed down to, but is established enough that it sucks all the oxygen away from any efforts to write better drivers.
Far better to keep the standard library small and allow modules to update on their own terms and their own schedule. If the library you're using does what you need, you can stick with it, but a better alternative (which always means a new interface in practice - the idea that you can make substantial improvement without changing the interface is mostly a myth, because the interface is the most important part of a library) can appear and compete on its merits, and if the improvements are worth migrating to then it will win out.
All those complaints always forget that the standard library happens to be available everywhere the language has a full implementation, while third-party libraries are hit and miss.
I'd rather have outdated code that works than nothing at all.
One approach in Go is to make an interface that can be "leaked" if you really want: you use the standard interface 99% of the time, but if you need something not provided by the abstraction, you can ask for the underlying implementation via `db.Driver()` and cast it to the appropriate type.
While in general I agree, database libs are probably the worst example of this - in practice you can't just swap sqlite for postgres or mysql. I usually avoid the generic driver wherever possible and use one specific to my database.
Agreed. Wrapping all db access in its own class or module is one of the easiest architectural decisions to be made in a project. Allows you to use specific drivers but isolates changes to a few easily found locations. Makes it dead simple to test too.
> In Go switching database drivers in as simple as importing the new library.
Neat!
Kinda like Java. (JDBC)
Kinda like C#. (IDb)
Kinda like Python. (DB-API)
Kinda like PHP. (PDO)
Kinda like Perl. (DBI)
Kinda like C/C++. (ODBC)
Of course, you work at the lowest common denominator and have abstractions that don't quite match the implementation. And let's be honest: how often do you switch databases?
But yeah this is very common.
---
I'd argue in practice, Java follows your advice the most of any community:
* JDBC
* Servlets
* SLF4J (not technically stdlib but very common)
> I wish more programming languages provided interfaces for building libraries around the std library like what Go does.
Java does, but apart from JDBC for databases, and the servlet-API, I haven't seen many cases where that really improved the world of programming. Java-EE relied heavily on such abstractions, but nowadays those would be replaced by simpler, less abstract constructs with less redirections.
> This is huge for things like database drivers which might become outdated or not support certain features. In Go switching database drivers in as simple as importing the new library. You don't have to change your code.
Database drivers are an abstraction that tends to leak its underlying technology very fast.
The idea of just being able to swap your database out under the hood is neat, but generally holds only for simple projects.
Also, the abstraction that helps you here is really SQL, not the driver.
> I wish more programming languages provided interfaces for building libraries around the std library like what Go does. It makes using libraries a lot better because you aren't dependent on that library as long as it uses the std library interface.
So, standard interfaces, or dynamic or structural typing? Lots of languages have those (some all three.)
As someone who only has some minor experience with Go, can you maybe elaborate a bit on what you mean by the stdlib providing such interfaces? I don’t recall any such paradigm being called out when first learning the language and it sounds pretty interesting.
Notice that the SQLite package isn't used directly; it's only imported to handle the database behind the scenes. All the actual code is written using only the standard library.
In Go if you'd like to make an interface-compatible database driver, you have to construct structs with non-exported fields from another package using unsafe.
database/sql provides a generic interface for SQL which drivers use to build their libraries. You can switch from one postgres driver like pq to pgx easily because they both support database/sql.
I just tried it out and it's very slow. Using `wrk` to hit a basic endpoint that just prints "hello", I got ~500 req/sec. Using a third party zig implementation, I got ~175000 req/sec.
It's also cumbersome both to set up and to use.
Before learning Zig, I used to think Zig needed an http server in the standard library. After using it for a few months, and watching this implementation get added, I think it's a mistake - there just isn't enough bandwidth to support a _quality_ kitchen-sink included stdlib.
I do not think fast needs to be a goal? Great to have sure, but a slow, stable, compliant implementation is perfectly fit for purpose in the standard library.
If you are getting 500 requests per second from a statically typed, compiled systems programming language with no garbage collector on a modern machine, you have bigger problems than standards compliance.
1 GHz / 500 requests is ~2 million cycles per request. Taking 2 million cycles to respond to a request in a benchmark environment on localhost means something has gone horribly wrong.
One of the things that made Go good is that the stdlib may not have everything, but what it has was production grade in terms of performance. I don't see value in spreading work out thin just to have something in the stdlib.
I wish Zig had Go-like interfaces. I understand why they don't as they have decided control flow must be explicit but having to include a heap of boilerplate to get the same result isn't a win for readability or maintenance. Given my limitations and tastes I would not want to write or maintain web backend code in Zig but there is a lot to like about Zig as a C replacement and having batteries included for things like HTTP is a win for any language.
> having batteries included for things like HTTP is a win for any language
I actually kind of disagree with this.
I was a super early Go contributor and helped a tiny bit on the HTTP library (back then it was the core `http` package; now it's under `net/http`). Imo, a lot of HTTP stuff tends to be very "grey area" (what are sensible timeouts? how do you handle different socket errors? should we allow self-signed certificates?), so a lot of opinionated design debate ends up happening. There's also a lot of scope creep: if you really want "batteries" to be included, you have to include things like SSL and proxying, which is a lot of non-trivial work (all of a sudden, you also need an elliptic curve cryptography lib) that basically has nothing to do with the language itself.
I know we live in an "HTTP world" and it's a lot easier to sell a language that can also do stuff on the web, but it would be pretty far down my list.
Agreed about Go-like interfaces. I don’t think it’d fit Zig to make them dynamic at runtime the way Go does, but even just as a compile-time constraint it would make building a composable ecosystem like Go’s much easier. See writer: anytype.
There have been many arguments that Zig's try/catch is hidden control flow. Though many would argue that trying to stick to catchphrases can result in painting oneself into a corner, versus doing what's best to accomplish the task.
Lots of people love interfaces (or similar like "traits"). Helps them get what they need done and there are lots of examples of their usage in different languages. Accomplishing the task, is what's most important.
My understanding is http/2 is out of scope. The purpose is to have functional http to implement the package manager. I would expect a high performance http client/server to be a third party library, not std lib.
Regarding 1, do you mean that the server would send the response before the request has been received completely? Or is the response for a different request?
Many people are not aware that - currently - Zig completely rebuilds your entire application with every compilation - including any parts of the standard library you use. There is not yet any incremental compilation.
It takes a lot of time investment to make compilation fast, and we have gone all in on this investment. There was almost an entire year when not much happened besides the compiler rewrite (which is now done). Finally, we are starting to see some fruits of our labor. Some upcoming milestones that will affect compilation speed:
* x86 backend (90% complete) - eliminates the biggest compilation speed bottleneck, which is LLVM.
* our own self-hosted ELF linker - instead of relying on LLD, we tightly couple our linker directly with the compiler in order to speed up compilation.
* incremental compilation (in place binary patching) - after this Zig will only compile functions affected by changes since the last build.
* Interned types & values. Speed up Semantic Analysis which is the remaining bottleneck of the compiler
* Introduce a separate thread for linking / machine code generation, instead of ping-ponging between Semantic Analysis & linking/mcg
* Multi-threaded semantic analysis. Attack the last remaining bottleneck of the compiler with brute force
Anyway, I won't argue with cold hard performance facts if you have some to share, but I won't stand accused of failing to prioritize compilation speed.
Zig is working on incremental linking, and even hot code swapping while your program is running. I would suggest Zig cares more about compilation speed than most other languages; it just hasn't gotten there yet.
The Zig compiler is significantly limited in performance by the llvm backend. Even with a different backend, I also suspect that the language is already complex enough that it is difficult to write a genuinely fast compiler (by which I mean a compiler that can produce good machine code from most files less than say 10k lines of code in under 50ms, which I am quite certain is possible -- but llvm takes about 50ms just to start up and is also slow once it actually starts doing work.)
To write a fast compiler, it has to be fast from the start. It is extremely difficult to make a slow compiler fast because usually there are pervasive design issues that cannot be eliminated by hotspot optimization. These design decisions often reflect the design of the language itself, which is to say that some languages are more amenable to fast compilation than others and I suspect that Zig is at best average or slightly above average for languages of similar expressivity.
Zig is plenty fast for a compiled language with optimizations. The Go compiler has no debug/release profiles and the generated machine code isn’t as optimized as say a language using llvm. If you compare it to C++ or Rust, Zig actually has better compile times, and it keeps improving.
It's not literally every line, but `try` means "unwrap the error union result or pass the error back up to the caller". It's roughly equivalent to Rust's `?` operator (although Rust's can also unwrap optionals or return `None`)
It makes it obvious that those calls can fail. Every single write to disk, network or malloc can in theory fail. This just makes it obvious that some code needs to handle those edge cases (or, y'know, you can pass them on to the human user).
In addition to the other answers, this is the most logical way to handle most errors in this specific type of code. The http code in the Zig standard library doesn’t know how the programmer wants to handle all the errors, so it uses try to bubble up any error it doesn’t know how to handle.
Why so many uses of try? Because Zig is all about readability, and anytime you see try you _know_ that the called function can possibly fail.
Rust used to use the word try in a similar context, but it was considered too noisy, and Rust opted for ? instead, which I think worked out very well in practice.
Rust's ? is the Try operator, which is currently contemplating Try::branch() to produce a ControlFlow and then if the ControlFlow is Break it returns whatever is inside the Break.
Historically, ? just did what try! did. While that's equivalent to the current behaviour for Result, there are several more implementations of the Try trait today (and in nightly you can implement it on your own types).
What are you doing with lines 1 and 2? Most people just `return fmt.Errorf("stuff the caller can't know: %w", err)` and call it a day. To me, this beats exceptions because a human can provide context; what iteration of the loop were we on, what specific sub-operation failed, etc.
Spending a lot of time typing in error handling code seems OK to me. You are never going to get paged when your software is working. So you'll probably spend more of your time dealing with rare error conditions. Thus, the code spends a lot of lines capturing the circumstances around the error conditions, so you can go change a line of code and redeploy instead of spending 6 months debugging.
> The effect is that every database driver becomes outdated and doesn't support certain features, instead of just a few. Python people have a saying that the standard library is where modules go to die. Java's database drivers massively lag behind (e.g. there's still no good async support for most of them), because JDBC is a lowest common denominator that every driver has to be dumbed down to, but is established enough that it sucks all the oxygen away from any efforts to write better drivers.
> Far better to keep the standard library small and allow modules to update on their own terms and their own schedule.
Very hard disagree. JDBC is pretty amazing, actually. That's why you find so many database tools and wizards written in Java as opposed to Rust.
Regarding async: with Java 19 virtual threads, newer versions of JDBC drivers support async with minimal to no changes. Example: https://medium.com/oracledevs/introduction-to-oracle-jdbc-21...
I am very, very glad that human civilisation standardised on one system for gears instead of every manufacturer making their own special snowflake.
That's just a quote from a 10-year-old talk by Kenneth Reitz, and it's quite an overstatement.
> Java does, but apart from JDBC for databases, and the servlet-API, I haven't seen many cases where that really improved the world of programming. Java-EE relied heavily on such abstractions, but nowadays those would be replaced by simpler, less abstract constructs with less redirections.
Spring, Quarkus, Helidon, Liberty, Wildfly, TomEE, Payara all support it.
It turns out such standards are quite valuable for the industry at scale, instead of playing tetris with third party library stuff of various quality.
Server: https://ziglang.org/documentation/master/std/#A;std:http.Ser...
Client: https://ziglang.org/documentation/master/std/#A;std:http.Cli...
How do Go-like interfaces make control flow non-explicit?
I couldn't grok from this test suite alone, but I do have questions.
1. Do either the server or client support 'full duplex', i.e. streaming the response body while streaming the request body in parallel?
2. Are there provisions for HTTP/2?
I'll answer my own questions if I find the answers first, but I'm not very good at Zig.
Everyone wants to go after Go, but they fail to understand how important compile speed is
Based on your Milan presentation as well, I have a lot of hope that Zig is seriously pushing on this.
VC++ does import std in a fraction of time it takes to #include <iostream>.
541 != 114
Did you mean figuratively*?
Don't be a pedant.
I find the anti-exception attitude of newer languages questionable.