> In general everything about it feels like it makes projects easy to work on for 5 days, abandon for 2 years, and then get back into writing code without a lot of problems.
To me this is one of the most underrated qualities of go code.
Go is a language that I started learning years ago, and it hasn't changed dramatically since. So my knowledge is still useful, even almost ten years later.
I've picked up some Go projects after years without development, including, as a contractor, some I didn't write myself. It's typically been a fairly painless experience. Dependencies go from "1.3.1" to "1.7.5" or something, and generally it's a "read changelogs, nothing interesting, updating just works"-type experience.
On the frontend side it's typically been much more difficult. There are tons of dependencies, everything depends on everything else, there are typically many new major releases, and things can break in pretty non-obvious ways.
It's not so much the language itself, it's the ecosystem as a whole. There is nothing in JavaScript-the-language or npm-the-package-manager that says the npm experience needs to be so dreadful. Yet here we are.
Arguably it's just the frontend. You can keep using old Node on the backend as much as you please. Frontend UI expectations evolve quickly, while APIs and backends can just stay the same; if it works, it works.
I agree, but those first 5 days are going to be a mixed bag as you pick through libraries for logging, database drivers, and migrations, and settle on project organization, dependency injection patterns for testing, a testing structure, and more.
If you have a template to derive from or sufficient Go experience you'll be fine, but selecting from a grab bag of small libraries early on in a project can be a distraction that slows down feature development significantly.
I love Go but for rapid project development, like working on a personal project with limited time, or at a startup with ambitious goals, Go certainly has its tradeoffs.
I think 80% of this is people coming to Go from other languages (everybody comes to Go from some other language) and trying to bring what they think was best about that language to Go. To an extent unusual in languages I've worked in, it's idiomatic in Go to get by with what's in the standard library. If you're new to Go, that's what you should do: use standard logging, just use net/http and its router, use standard Go tests (without an assertion library), &c.
I'm not saying you need to stay there, but if your project environment feels like Rails or Flask or whatever in your first month or two, you may have done something wrong.
I really think the library search is more of something you inherit from other languages, though database drivers are something you need to go looking for. The standard library has an adequate HTTP router (though I prefer grpc-gateway as it autogenerates docs, types, etc.) and logger (slog, but honestly plain log is fine).
For your database driver, just use pgx. For migrations, tern is fine. For the tiniest bit of sugar around scanning database results into structs, use sqlx instead of database/sql.

I wouldn't recommend using a testing framework in Go: https://go.dev/wiki/TestComments#assert-libraries

Here's how I do dependency injection:
func main() {
    foo := &Foo{
        Parameter: goesHere,
    }
    bar := &Bar{
        SomethingItNeeds: canJustBeTypedIn,
    }
    app := &App{
        Foo: foo,
        Bar: bar,
    }
    app.ListenAndServe()
}
If you need more complexity, you can add more complexity. I like "zap" over "slog" for logging. I am interested in some of the DI frameworks (dig), but it's never been a clear win to me over a little bit of hand-rolled complexity like the above.
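If it helps to see the shape of it, here's a minimal sketch of definitions that would make the main above compile. Foo, Bar, App, and their fields are placeholders carried over from the example, not a prescribed structure:

package main

import "net/http"

type Foo struct {
    Parameter string // whatever configuration Foo needs
}

type Bar struct {
    SomethingItNeeds int
}

type App struct {
    Foo *Foo
    Bar *Bar
}

func (a *App) ListenAndServe() error {
    // Handlers registered here can close over a.Foo and a.Bar,
    // so dependencies flow in from main with no framework at all.
    return http.ListenAndServe(":8080", nil)
}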
A lot of people want some sort of mocking framework. I just do this:
// Assumes an interface like: type Whateverer interface { Whatever() }
type testWhateverer struct {
    n int
}

var _ Whateverer = (*testWhateverer)(nil)

func (w *testWhateverer) Whatever() { w.n++ }

Then in the tests:
func TestFoo(t *testing.T) {
    x := &testWhateverer{}
    foo(x)
    if got, want := x.n, 1; got != want {
        t.Errorf("expected Whatever to have been called: invocation count:\n got: %v\n want: %v", got, want)
    }
}
It's easy. I typed it in an HN comment in like 30 seconds. Whether a test that counts how many times you called Whatever is worth having is up to you, but if you need it, you need it, and it's easy to do.
> I agree, but those first 5 days are going to be a mixed bag as you pick through libraries for logging, database drivers, and migrations, and settle on project organization, dependency injection patterns for testing, a testing structure, and more.
So the same as every other language that lacks these in the standard lib?

https://github.com/mikestefanello/pagoda
About 50% of the time when learning a new language, I find that the consensus framework/library choice is not quite to my taste. It isn't that it's bad; it just often feels like one thing went viral and then ended up boxed in by its own success, such that people double down and put up with it rather than evolve it further.
Point being you’re probably going to spend those first five days evaluating the options. The “community” doesn’t know your taste or your needs. You have no idea what their goals are, or what the average skill level is. All of those things can make a big difference in selecting tech to build atop of.
I had this exact same experience with Go. I picked up .NET (and ASP.NET for web stuff) on Linux recently and found it much easier to get started with, batteries included, than Go. You don't need external libraries for any of the things you mentioned (ORM, logging, metrics, DI, etc.).
Razor pages are very interesting as well. I haven't used them enough to have a solid opinion yet but I really liked them for quick server rendered pages.
I would also say library quality can be generally low: e.g. there are numerous flag-parsing libraries, but not a single one comes even close to clap in Rust.
This used to be true of PHP as well, though the language finally picking off its hairy bits and bad assumptions has eroded that a fair bit. For a long time, from PHP 4-5 or so onward, you didn't usually need to concern yourself with which version you were running.
> Go is a language that I started learning years ago, and it hasn't changed dramatically since.
A lot of people underrate that quality, I feel. C had it too, but pushed it to neurotic levels, changing slowly even when it needed to move faster. Other languages, in contrast, change too fast and try to do too many things at once.
You do not often get credit for providing boring, stable programs that work. I hope they continue to do it this way and do not get seduced into overloading the language.
We have a lot of fast moving languages already, so having one that moves slowly increases potential choice.
As a teacher, I have a program I've been using for the last 8 years or so. I distribute the compiled version to students for them to compare their own results with what's expected. The program's written in Go. I update it every other year or so. So I'm exactly in the case you describe.
I never had any issue. The program still compiles perfectly, cross-compiles to Windows, Linux and macOS, no dependency issues, no breaking changes in the language, nothing. For those use cases, Go is a godsend.
There's just one static binary; all I need to do to deploy it is copy the binary. If there are static files, I can just embed them in the binary with embed.
The embed package lets you embed assets directly into the binary: the files are read once at build time, and you can then access them as e.g. a byte slice, a string, or a special FS object that acts like an in-memory file system.

Having a true single binary bundling your static resources is so convenient.

https://github.com/golang/pkgsite
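For the curious, a minimal sketch of how embed is used — the static directory name is just an example, and it has to exist next to the source at build time:

package main

import (
    "embed"
    "net/http"
)

//go:embed static
var staticFiles embed.FS

func main() {
    // The files under static/ live inside the compiled binary;
    // http.FS adapts the embedded FS for the file server.
    http.Handle("/static/", http.FileServer(http.FS(staticFiles)))
    http.ListenAndServe(":8080", nil)
}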
As for sqlc, I really wanted to like it, but it had some major limitations and minor annoyances last time I tried it a few months ago. You might want to go through its list of issues[1] before adopting it.
Things like no support for dynamic queries[2], one-to-many relationships[3], embedded CTEs[4], composite types[5], etc.
It might work fine if you only have simple needs, but if you ever want to do something slightly sophisticated, you'll have to fall back to the manual approach. It's partly understandable, though: it cannot realistically support every feature of every DBMS, and it's explicitly not an ORM. But I still decided to stick with the manual approach for everything, instead of wondering whether something is or isn't supported by sqlc.
One tip/gotcha I recently ran into: if you run Go within containers, you should set GOMAXPROCS appropriately to avoid CPU throttling. Good explanation here[6], and solution here[7].

[1]: https://github.com/sqlc-dev/sqlc/issues/

[2]: https://github.com/sqlc-dev/sqlc/issues/3414

[3]: https://github.com/sqlc-dev/sqlc/issues/3394

[4]: https://github.com/sqlc-dev/sqlc/issues/3128

[5]: https://github.com/sqlc-dev/sqlc/issues/2760

[6]: https://kanishk.io/posts/cpu-throttling-in-containerized-go-...

[7]: https://github.com/uber-go/automaxprocs
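For reference, wiring in the fix from [7] is just a blank import; a minimal sketch:

package main

import (
    "fmt"
    "runtime"

    // Side-effect import: on startup it sets GOMAXPROCS to match the
    // container's CPU quota (it is a no-op outside a container).
    _ "go.uber.org/automaxprocs"
)

func main() {
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // 0 queries without changing
}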
I agree that sqlc has limits, but for me it is great because it takes care of 98% of the queries (made up number) and keeps them simple to write. I can still write manual queries for the rest of them so it's still a net win.
It gets mentioned a lot in the context of database/sql and sqlc, but Jet has been a great alternative so far, most notably because dynamic queries are a non-issue for it.

https://github.com/go-jet/jet/
OK, here's a potentially controversial opinion from someone coming into the web + DB field from writing operating systems:

1. Database transactions are designed to fail

Therefore

2. All database transactions should be done in a transaction loop

Basically something like this:

https://gitlab.com/martyros/sqlutil/-/blob/master/txutil/txu...

That loop function should really have a Context so it can be cancelled; that's future work. But the idea stands -- it should be considered normal for transactions to fail, so you should always have a retry loop around them.
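To make the idea concrete, here's a rough sketch of such a loop using database/sql. This is not the linked txutil code; isRetryable is a hypothetical hook you'd implement per driver (e.g. matching SQLITE_BUSY or a Postgres serialization failure):

package main

import "database/sql"

// retryTx runs fn inside a transaction and retries when the error
// looks transient. Sketch only: no Context, no backoff.
func retryTx(db *sql.DB, maxAttempts int, fn func(*sql.Tx) error) error {
    var err error
    for attempt := 0; attempt < maxAttempts; attempt++ {
        err = func() error {
            tx, err := db.Begin()
            if err != nil {
                return err
            }
            defer tx.Rollback() // harmless no-op after a successful Commit
            if err := fn(tx); err != nil {
                return err
            }
            return tx.Commit()
        }()
        if err == nil || !isRetryable(err) {
            return err
        }
    }
    return err // retry budget exhausted
}

// isRetryable is a stand-in: real code would check driver-specific
// error codes (SQLITE_BUSY, serialization_failure, ...).
func isRetryable(err error) bool { return false }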
It's controversial for many good reasons. You make the general claim that retrying a DB transaction should be the rule, when most experts agree that it should be the exception. Just in the context of web development, it can be disputed on the grounds that a DB transaction is just one part of a bigger contract that includes a user at the other end of a network, a request, a session, and a slew of other possibly connected services. If one thing shows signs of being unstable, everything should fail. That's the general wisdom.
More specific to the code that you linked to, the retry happens in only two specific cases. Even then, I personally don't find what it's doing to be such great engineering. It hacks its way around something that should really be fixed by properly setting the db engine. By encroaching like this, it effectively hides the deeper problem that SQLite has been badly configured, which may come to bite you later.
Failing transactions would raise a stink earlier. Upon inquiry, you'd find the actual remedy, resulting in tremendous performance. Instead, this magic loop is trying to help SQLite be a database and it does this in Go! So you end up with these smart transactions that know to wait in a queue for their turn. And for some time, nobody in the dev team may be aware that this can become a problem, as everything seems to be working fine. The response time just gets slightly longer and longer as the load increases.
Code that tries to save failing things at all cost like this also tends to do this kind of glue and duct tape micromanaging of dependencies. Usually with worse results than simply adjusting some settings in the dependencies themselves. You end up with hard to diagnose issues. The code itself becomes hard to reason about as it's peppered with complicated ifs and buts to cover these strange cases.
Transactions are hard, and in reality there's a shit-ton of things people do that have no right to be anywhere near a transaction (but still are). Transactions were a good imperative kludge at the time that has just warped into a monster people kinda accept over the years.
A loop is a bad construct imho. Something I like far better is the Mnesia approach, which simply decides that transactional updates are self-contained functional blocks and the database manages the transactional issues (yes, this eschews the regular SQL interfaces and DB-application separation, but could probably be emulated to a certain degree).

https://www.erlang.org/doc/apps/mnesia/mnesia_chap4.html
You'll just end up looping until your retry limit is reached. SQLite just isn't very good at upgrading read locks to write locks, so the appropriate fix really is to prevent that from happening.
I've needed the exact same loop on (an older) Postgres to stop production from hitting transient errors. It's fundamental to the concept of concurrent interactive transactions.
Yes, that's exactly what the linked code does: it calls your function, and if that returns an error, it checks through the wrapped errors to see if it's one of the SQLite errors that should be retried. If it is, it tries the transaction again; if not, it passes the error up.
OK, a bunch of the replies here seem to be misunderstanding #1. In particular, the assumption is that the only reason a transaction might fail is that the database is too busy.
I come from the field of operating systems, and specifically Xen, where we extensively use lockless concurrency primitives. One prime example is a "compare-exchange loop", where you do something like this:
y = shared_state_var;
do {
    oldx = y;
    newx = f(oldx); // f may be arbitrarily complicated
} while ((y = cmpxchg(&shared_state_var, oldx, newx)) != oldx);
Basically this reads oldx and mutates it into newx (using perhaps a quite complicated set of logic). Then the compare-exchange will atomically:
- Read shared_state_var
- If and only if this value is equal to oldx, set it to newx
- In any case, return the value it read
In the common case, when there's no contention, you read the old value, see that it hasn't changed, and then write the new value. In the uncommon case, you notice that someone else has changed the value, and so you'd better re-run the calculations.
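For Go readers, the closest analogue is a CompareAndSwap loop with sync/atomic (Go 1.19+ atomic types assumed; note that Go's CompareAndSwap returns a success bool rather than the observed value):

package main

import "sync/atomic"

var sharedState atomic.Int64

// update applies f atomically: read, compute, and publish only if
// nobody else wrote in between; otherwise re-read and redo the work.
func update(f func(int64) int64) {
    for {
        oldx := sharedState.Load()
        newx := f(oldx) // f may be arbitrarily complicated
        if sharedState.CompareAndSwap(oldx, newx) {
            return // no contention: our write landed
        }
        // Contention: someone changed sharedState; loop and recompute.
    }
}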
From my perspective, database transactions are the same thing: you start a transaction, read some old values, and make some changes based on those values. When you commit the transaction, if some of the things you've read have been changed in the meantime, the transaction will fail and you start over again.
That's what I mean when I say "database transactions are designed to fail". Of course the transaction may fail because you have a connection issue, or a disk issue, or something like that; that's not really what I'm talking about. I'm saying specifically that there may be a data race due to concurrent accesses. Whenever there is more than one thing accessing the database, there is always a chance of this happening, regardless of how busy the system is -- even if in an entire week you only have two transactions, there's still a chance (no matter how small) that they'll be interleaved such that one transaction reads something that is then written to before the transaction is done.
Now SQLite can't actually have this sort of conflict, because it's always single-writer. But essentially what that means is that there's a conflict every time there are two concurrent writes, not only when some data was overwritten by another process. Something that happens at a very, very low rate when you're using a proper RDBMS like Postgres now happens all the time. But the problem isn't with SQLite, it's with your code, which has assumed that transactions will never fail due to concurrency issues.
I always see SQLite recommended, but every time I look into it there are some non-obvious subtleties around transaction locking, retry behavior, and WAL mode. By default, if you don't tweak things right, frequent SQLITE_BUSY errors seem to occur at non-trivial QPS.
Is there a place that documents what the set-and-forget setting should be?
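Not a canonical reference, but the settings that come up most often are WAL mode plus a busy timeout. As a sketch, assuming the mattn/go-sqlite3 driver (the DSN parameter names are specific to it):

package main

import (
    "database/sql"

    _ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func openDB() (*sql.DB, error) {
    // _journal_mode=WAL lets readers proceed during a write;
    // _busy_timeout=5000 makes writers wait up to 5s instead of
    // failing immediately with SQLITE_BUSY.
    return sql.Open("sqlite3", "file:app.db?_journal_mode=WAL&_busy_timeout=5000")
}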
You shouldn't blindly retry things that fail as a default, and you should really not default to making the decision of what to do on a server that sits in the middle between the actual user and the database.

Handling errors in the middle is a dangerous optimization.
Others have said much about a transaction loop, but I also don't think that database transactions are necessarily designed to fail in the sense that the failure is a normal mode of operation. Failing transactions are still considered exceptional; their sole goal is to provide logical atomicity.
You're the first person I've heard say so. When I was learning DB stuff, there were loads of examples of things that looked at the error from a transaction. Not a single one of them then retried the transaction as a result.
The OP's comment is a symptom of this -- they did some writes or some transactions and were getting failures, which means they weren't retrying their transactions. And then when they searched to solve the problem, the advice they received wasn't "oh, you should be retrying your transactions" -- rather, it was some complicated technical thing to avoid the problem by preventing concurrent writes.
GOMEMLIMIT has really cut down on the amount of time I’ve had to spend worrying about the GC. I’d recommend it. Plus, if you’re using kubernetes or docker, you can automatically set it to the orchestrator-managed memory limit using something like https://github.com/KimMachineGun/automemlimit — no need to add any manual config at all.
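For what that looks like in practice — a minimal sketch, with the blank import being the entire integration (assuming the automemlimit module linked above):

package main

import (
    "fmt"
    "runtime/debug"

    // Side-effect import: sets GOMEMLIMIT from the cgroup memory limit
    // at startup, so the GC respects the container's allocation.
    _ "github.com/KimMachineGun/automemlimit"
)

func main() {
    // A negative argument reads the current limit without changing it.
    fmt.Println("GOMEMLIMIT:", debug.SetMemoryLimit(-1))
}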
Sooner or later you will hit html/template, and realize it's actually very weird and has a lot of weird issues.

Don't use html/template.

I grew to like Templ [1] instead.

Another Go module that helps a lot when massaging JSON (something most web servers end up doing sooner or later) is GJSON [2].

--

1: https://github.com/a-h/templ

2: https://github.com/tidwall/gjson

stdlib templates are a bit idiosyncratic and probably not the easiest to start with, but they do work and don't have "weird issues" AFAIK. What issues did you encounter?
I don't know what issues others have had with it, but for me one notable thing is that html/template strips all comments out. This is by design, but it's not documented anywhere. I've proposed making this configurable, but my proposal has gotten no traction so far.
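It's easy to see for yourself — a minimal sketch:

package main

import (
    "html/template"
    "os"
)

func main() {
    t := template.Must(template.New("page").Parse("<!-- a note --><p>{{.}}</p>"))
    t.Execute(os.Stdout, "hi") // prints just <p>hi</p>; the comment is gone
}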
I am just trying Templ, and I like what I am seeing for the most part. There are some tooling ergonomics to work out: lots of "suddenly the editor thinks everything is an error and nothing will autoimport or format" before going back to mostly working; click-to-definition goes to the autogenerated code instead of the templ file; a couple of things like that. But it's soooooooooo much better to deal with code gen than html/template. That thing is a pita.
Good to see the author's mention of routing. I was mentally stuck with mux for a long time and didn't pay attention to the new release's features. Happy that I always find things like these on HN.

Big fan of Echo, and it has much better docs.

https://echo.labstack.com/
What I love about Go is its simplicity and freedom from framework dependency. Go is popular because it has no dominating framework. Nothing wrong with frameworks when they fit the use case, but I feel we have become over-dependent on frameworks, and Go brings that freshness of just using the standard library, along with some decent battle-tested 3rd-party libraries, to create something decent.
I personally love "library over framework" mindset and I found Go to do that best.
Also, whether you want to build a web app or cli tool, Go wins there (for me at least). And I work a lot with PHP and .NET as well and love all 3 overall.
Not to mention how easy it was for someone like me, who had never written Go before, to get up and running with it quickly. Oh, and did I mention that I personally love the explicit error handling, which gets a lot of hate? (Never understood why.) I can do if err != nil all day.

A big Go fan.
I like Go for this reason as well. In Python I found the Flask framework unobtrusive enough to be nice to use (never liked Django), but deploying Python is a hassle. Go is much better in that area. The error handling never bothered me either.
I think if Go shipped better support for auth/sessions in the standard library more people would use it. Having to write that code yourself (actually not very hard, but intimidating if you've never done it before) deters people and ironically the ease of creating Go packages makes it unclear which you should use if you're not going to implement it yourself.
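For what it's worth, the core of a hand-rolled session really is small. A sketch using only the standard library — the storage step is elided, and all names here are illustrative:

package main

import (
    "crypto/rand"
    "encoding/base64"
    "net/http"
)

// newSessionToken returns 256 bits of randomness, URL-safe encoded.
func newSessionToken() (string, error) {
    b := make([]byte, 32)
    if _, err := rand.Read(b); err != nil {
        return "", err
    }
    return base64.RawURLEncoding.EncodeToString(b), nil
}

func login(w http.ResponseWriter, r *http.Request) {
    token, err := newSessionToken()
    if err != nil {
        http.Error(w, "internal error", http.StatusInternalServerError)
        return
    }
    // Persist token -> user server-side (memory, SQL, Redis) before
    // trusting it on later requests; that lookup is the actual session.
    http.SetCookie(w, &http.Cookie{
        Name:     "session",
        Value:    token,
        Path:     "/",
        HttpOnly: true, // not readable from JavaScript
        Secure:   true, // HTTPS only
        SameSite: http.SameSiteLaxMode,
    })
}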
I am a Django apologist because I grew up with Django. So, with that being said, I'm not out to convert you, but I am genuinely curious what you don't like about it. Promise I won't refute anything; I just like to try to understand where it turned folks off.
I don't like flask because it seems just easy enough to be really productive in the beginning but you eventually need most of the things Django gives you. So I would rather pick up Django Rest Framework or Django Ninja than Flask or Fast API. In those cases I jump straight to Go and use it instead because the same library decisions on the Go side give me a lot more lift on the operations end (easy to deploy, predictable performance into thousands of requests per second if built correctly).
My main gripe about Go is that it's decent for the middle and late stages and really, really bad to start with. You'll spend way too much time rewriting stuff you literally get for free by running "rails new" or "bundle add devise".
> (actually not very hard, but intimidating if you've never done it before)
This stuff is never really hard, but you will create countless vulnerabilities that way. The most important job of a good framework is cutting down significantly on the ways you can get things wrong, by exposing you to already-safe APIs.
I'm curious in what sense you find Python difficult to deploy? My company has tons of Python APIs internally and we never have much trouble with them. They are all pretty lightly used services, so is it something about doing it at a larger scale?