Readit News
voigt · a year ago
> In general everything about it feels like it makes projects easy to work on for 5 days, abandon for 2 years, and then get back into writing code without a lot of problems.

To me this is one of the most underrated qualities of go code.

Go is a language that I started learning years ago, but didn't change dramatically. So my knowledge is still useful, even almost ten years later.

arp242 · a year ago
I've picked up some Go projects after no development for years, including some I didn't write myself as a contractor. It's typically been a fairly painless experience. Typically dependencies go from "1.3.1" to "1.7.5" or something, and generally it's a "read changelogs, nothing interesting, updating just works"-type experience.

On the frontend side it's typically been much more difficult. There are tons of dependencies, everything depends on everything else, there are typically many new major releases, and things can break in pretty non-obvious ways.

It's not so much the language itself, it's the ecosystem as a whole. There is nothing in JavaScript-the-language or npm-the-package-manager that says the npm experience needs to be so dreadful. Yet here we are.

mewpmewp2 · a year ago
Arguably it's just the frontend. You can use old node in backend as much as you please. Frontend UI expectations evolve so quickly while APIs & backend can just stay the same, if it works it works.
tmpz22 · a year ago
I agree, but those first 5 days are going to be a mixed bag as you pick through libraries for logging, database drivers, and migrations, and sort out project organization, dependency injection patterns for testing, your testing structure, and more.

If you have a template to derive from or sufficient Go experience you'll be fine, but selecting from a grab bag of small libraries early on in a project can be a distraction that slows down feature development significantly.

I love Go but for rapid project development, like working on a personal project with limited time, or at a startup with ambitious goals, Go certainly has its tradeoffs.

tptacek · a year ago
I think 80% of this is people coming to Go from other languages (everybody comes to Go from some other language) and trying to bring what they think was best about that language to Go. To an extent unusual in languages I've worked in, it's idiomatic in Go to get by with what's in the standard library. If you're new to Go, that's what you should do: use standard logging, just use net/http and its router, use standard Go tests (without an assertion library), &c.

I'm not saying you need to stay there, but if your project environment feels like Rails or Flask or whatever in your first month or two, you may have done something wrong.

jrockway · a year ago
I really think the library search is more of something you inherit from other languages, though database drivers are something you need to go looking for. The standard library has an adequate HTTP router (though I prefer grpc-gateway as it autogenerates docs, types, etc.) and logger (slog, but honestly plain log is fine).

For your database driver, just use pgx. For migrations, tern is fine. For the tiniest bit of sugar around scanning database results into structs, use sqlx instead of database/sql.

I wouldn't recommend using a testing framework in Go: https://go.dev/wiki/TestComments#assert-libraries

Here's how I do dependency injection:

   func main() {
       foo := &Foo{
           Parameter: goesHere,
       }
       bar := &Bar{
           SomethingItNeeds: canJustBeTypedIn,
       }
       app := &App{
           Foo: foo,
           Bar: bar,
       }
       app.ListenAndServe()
   }
If you need more complexity, you can add more complexity. I like "zap" over "slog" for logging. I am interested in some of the DI frameworks (dig), but it's never been a clear win to me over a little bit of hand-rolled complexity like the above.

A lot of people want some sort of mocking framework. I just do this:

    - func foo(x SomethingConcrete) {
    -     x.Whatever()
    - }
    + type Whateverer interface{ Whatever() }
    + func foo(x Whateverer) {
    +     x.Whatever()
    + }
Then in the tests:

    type testWhateverer struct {
       n int
    }
   var _ Whateverer = (*testWhateverer)(nil)
   func (w *testWhateverer) Whatever() { w.n++ }
   func TestFoo(t *testing.T) {
       x := &testWhateverer{}
       foo(x)
       if got, want := x.n, 1; got != want {
           t.Errorf("expected Whatever to have been called: invocation count:\n  got: %v\n want: %v", got, want)
       }
   }
It's easy. I typed it in an HN comment in like 30 seconds. Whether a test that counts how many times you called Whatever is useful is up to you, but if you need it, you need it, and it's easy to do.

tacticus · a year ago
> I agree, but those first 5 days are going to be a mixed bag as you pick through libraries for logging, database drivers, and migrations, and sort out project organization, dependency injection patterns for testing, your testing structure, and more.

So the same as every other language that lacks these in the standard lib?

joshlemer · a year ago
I stumbled on this Go starter project that has enough batteries included to get you started I think. You might find it useful

https://github.com/mikestefanello/pagoda

mattgreenrocks · a year ago
About 50% of the time when learning a new language, I find that the consensus framework/library choice is not quite to my taste. It isn't that it's bad; it just often feels like one thing went viral and then ended up boxed in by its success, such that people double down on it and put up with it rather than evolve it further.

Point being you’re probably going to spend those first five days evaluating the options. The “community” doesn’t know your taste or your needs. You have no idea what their goals are, or what the average skill level is. All of those things can make a big difference in selecting tech to build atop of.

DarkCrusader2 · a year ago
I had this exact same experience with Go. I picked up .NET (and ASP.NET for web stuff) on Linux recently and found it much easier to get started with, batteries included, than Go. You don't need external libraries for any of the things you mentioned (ORM, logging, metrics, DI, etc.).

Razor pages are very interesting as well. I haven't used them enough to have a solid opinion yet but I really liked them for quick server rendered pages.

sureglymop · a year ago
I would also say library quality is generally low. E.g. there are numerous flag parsing libraries, but not a single one comes even close to clap in Rust.
ocdtrekkie · a year ago
This used to be true of PHP as well, though PHP finally filing off the hairy bits of bad assumptions has eroded that a fair bit. For a long time you didn't usually need to concern yourself with which version of PHP 4-5 you were running on.
almatabata · a year ago
> Go is a language that I started learning years ago, but didn't change dramatically.

A lot of people underrate that quality, I feel. C had that quality but pushed it to neurotic levels, where it would change slowly even when it needed to move faster. Other languages, in contrast, change too fast and try to do too many things at once.

You do not often get credit for providing boring, stable programs that work. I hope they continue to do it this way and do not get seduced into overloading the language.

We have a lot of fast moving languages already, so having one that moves slowly increases potential choice.

aczerepinski · a year ago
It’s why I moved my personal site from Phoenix to Go, and it has proven to be a great choice. I have zero dependencies, so they never go out of date.
WuxiFingerHold · a year ago
Absolutely, but not really underrated. The large and useful std lib plays an important role in the long term stability.
sacado2 · a year ago
As a teacher, I have a program I've been using for the last 8 years or so. I distribute the compiled version to students for them to compare their own results with what's expected. The program's written in go. I update it every other year or so. So I'm exactly in the case you describe.

I never had any issue. The program still compiles perfectly, cross-compiles to windows, linux and macos, no dependency issue, no breaking change in the language, nothing. For those use-cases, go is a godsend.

yegle · a year ago
It's sad https://pkg.go.dev/embed was not mentioned in a post about web development in Go :-)

Having a true single binary bundling your static resources is so convenient.

linhns · a year ago
Massively underrated. It's actually used to build the pkg.go.dev website itself.

https://github.com/golang/pkgsite

avinassh · a year ago
it does mention it:

> there’s just 1 static binary, all I need to do to deploy it is copy the binary. If there are static files I can just embed them in the binary with embed.

dullcrisp · a year ago
It is mentioned now.
okibry · a year ago
When does Go read the file, at build time or run time?
nasretdinov · a year ago
The embed package allows you to embed assets directly into the binary, so the files are read once at build time; you can then access them as e.g. a byte slice, a string, or a special FS object that acts like an in-memory file system.
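A tiny sketch of the mechanism (embedding this package's own .go files, since the pattern must match files that exist at build time):

```go
package main

import (
	"embed"
	"fmt"
)

// The pattern is resolved when the compiler runs; matched files are
// baked into the binary and exposed as a read-only file system.
//
//go:embed *.go
var src embed.FS

func main() {
	entries, err := src.ReadDir(".")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```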
catlifeonmars · a year ago
Build time
estebarb · a year ago
Build time. It literally embeds the file in the binary.

imiric · a year ago
There are some good tips here.

As for sqlc, I really wanted to like it, but it had some major limitations and minor annoyances last time I tried it a few months ago. You might want to go through its list of issues[1] before adopting it.

Things like no support for dynamic queries[2], one-to-many relationships[3], embedded CTEs[4], composite types[5], etc.

It might work fine if you only have simple needs, but if you ever want to do something slightly sophisticated, you'll have to fall back to the manual approach. It's partly understandable, though. It cannot realistically support every feature of every DBMS, and it's explicitly not an ORM. But I still decided to stick to the manual approach for everything, instead of wondering whether something is or isn't supported by sqlc.

One tip/gotcha I recently ran into: if you run Go within containers, you should set GOMAXPROCS appropriately to avoid CPU throttling. Good explanation here[6], and solution here[7].

[1]: https://github.com/sqlc-dev/sqlc/issues/

[2]: https://github.com/sqlc-dev/sqlc/issues/3414

[3]: https://github.com/sqlc-dev/sqlc/issues/3394

[4]: https://github.com/sqlc-dev/sqlc/issues/3128

[5]: https://github.com/sqlc-dev/sqlc/issues/2760

[6]: https://kanishk.io/posts/cpu-throttling-in-containerized-go-...

[7]: https://github.com/uber-go/automaxprocs

bornfreddy · a year ago
I agree that sqlc has limits, but for me it is great because it takes care of 98% of the queries (made up number) and keeps them simple to write. I can still write manual queries for the rest of them so it's still a net win.
0x_rs · a year ago
It gets mentioned a lot in the context of database/sql and sqlc, but Jet has been a great alternative so far, most notably because dynamic queries are a non-issue with it.

https://github.com/go-jet/jet/

CBarkleyU · a year ago
Unfortunately it relies on CGO for SQLite, which is a bummer
gwd · a year ago
> I learned the hard way that if I don’t do this then I’ll get SQLITE_BUSY errors from two threads trying to write to the db at the same time.

OK, here's a potentially controversial opinion from someone coming into the web + DB field from writing operating systems:

1. Database transactions are designed to fail

Therefore

2. All database transactions should be done in a transaction loop

Basically something like this:

https://gitlab.com/martyros/sqlutil/-/blob/master/txutil/txu...

That loop function should really have a Context so it can be cancelled; that's future work. But the idea stands -- it should be considered normal for transactions to fail, so you should always have a retry loop around them.

mekoka · a year ago
It's controversial for many good reasons. You make the general claim that retrying a db transaction should be the rule, when most experts agree that it should be the exception. Just in the context of web development it can be disputed on the grounds that a db transaction is just a part of a bigger contract that includes a user at the other end of a network, a request, a session, and a slew of other possibly connected services. If one thing shows signs of being unstable, everything should fail. That's the general wisdom.

More specifically, in the code you linked to, the retry happens in only two specific cases. Even then, I personally don't find what it's doing to be such great engineering. It hacks its way around something that should really be fixed by properly configuring the db engine. By encroaching like this, it effectively hides the deeper problem that SQLite has been badly configured, which may come back to bite you later.

Failing transactions would raise a stink earlier. Upon inquiry, you'd find the actual remedy, resulting in tremendous performance. Instead, this magic loop is trying to help SQLite be a database and it does this in Go! So you end up with these smart transactions that know to wait in a queue for their turn. And for some time, nobody in the dev team may be aware that this can become a problem, as everything seems to be working fine. The response time just gets slightly longer and longer as the load increases.

Code that tries to save failing things at all cost like this also tends to do this kind of glue and duct tape micromanaging of dependencies. Usually with worse results than simply adjusting some settings in the dependencies themselves. You end up with hard to diagnose issues. The code itself becomes hard to reason about as it's peppered with complicated ifs and buts to cover these strange cases.

whizzter · a year ago
Transactions are hard, and in reality there's a shit-ton of things people do that have no right to be close to a transaction (but still are), and transactions were a good imperative kludge at the time that has just warped into a monster that people kinda accept over the years.

A loop is a bad construct imho, something I like far better is the Mnesia approach that simply decides that transactional updates are self-contained functional blocks and the database manages the transactional issues (yes, this eschews the regular SQL interfaces and Db-application separation but could probably be emulated to a certain degree).

https://www.erlang.org/doc/apps/mnesia/mnesia_chap4.html

tedunangst · a year ago
You'll just end up looping until your retry limit is reached. SQLite just isn't very good at upgrading read locks to write locks, so the appropriate fix really is to prevent that from happening.
yencabulator · a year ago
I've needed the exact same loop on (an older) Postgres to stop production from hitting transient errors. It's fundamental to the concept of concurrent interactive transactions.
Thaxll · a year ago
You should never do blind retries in an infinite for loop; ideally it should be a generic, bounded retry function that type-checks the error.
gwd · a year ago
Yes, that's exactly what the linked code does: Calls your function, and if it returns an error, check through the wrapped errors to see if it's one of the SQLite errors which should be retried. If it is, try the transaction again; if not, pass the error up.
gwd · a year ago
OK, a bunch of the replies here seem to be misunderstanding #1. In particular, the assumption is that the only reason a transaction might fail is that the database is too busy.

I come from the field of operating systems, and specifically Xen, where we extensively use lockless concurrency primitives. One prime example is a "compare-exchange loop", where you do something like this:

    y = shared_state_var;
    do {
        oldx = y;
        newx = f(oldx); // f may be arbitrarily complicated
    } while((y = cmpxchg(&shared_state_var, oldx, newx)) != oldx);
Basically this reads oldx, mutates it into newx (using perhaps a quite complicated set of logic). Then the compare exchange will atomically:

- Read shared_state_var

- If and only if this value is equal to oldx, set it to newx

- In any case, return the value that was read

In the common case, when there's no contention, you read the old value, see that it hasn't changed, and then write the new value. In the uncommon case, you notice that someone else has changed the value, and so you'd better re-run the calculations.

From my perspective, database transactions are the same thing: You start a transaction, read some old values, and make some changes based on those values. When you commit the transaction, if some of the things you've read have been changed in the meantime, the transaction will fail and you start over again.

That's what I mean when I say "database transactions are designed to fail". Of course the transaction may fail because you have a connection issue, or a disk issue, or something like that; that's not really what I'm talking about. I'm saying specifically that there may be a data race due to concurrent accesses. Whenever there are more than one thing accessing the database, there is always the chance of this happening, regardless of how busy the system is -- even if in an entire week you only have two transactions, there's still a chance (no matter how small) that they'll be interleaved such that one transaction reads something which is then written to before the transaction is done.

Now SQLite can't actually have this sort of conflict, because it's always single-writer. But essentially what that means is that there's a conflict every time there are two concurrent writes, not only when some data was overwritten by another process. Something that happens at a very very low rate when you're using a proper RDBMS like Postgres now happens all the time. But the problem isn't with SQLite, it's with your code, which has assumed that transactions will never fail due to concurrency issues.

krackers · a year ago
I always see SQLite recommended, but every time I look into it there are some non-obvious subtleties around transaction locks, retry behavior, and WAL mode. By default, if you don't tweak things right, frequent SQLITE_BUSY errors seem to occur at non-trivial QPS.

Is there a place that documents what the set-and-forget setting should be?

arp242 · a year ago
_journal_mode=wal&_busy_timeout=200 seems to work well enough.
marcosdumay · a year ago
You shouldn't blindly retry things that fail as a default, and you should really not default to making the decision of what to do on a server that is just in the middle between the actual user and the database.

Handling errors in the middle is a dangerous optimization.

lifthrasiir · a year ago
Others have said much about a transaction loop, but I also don't think that database transactions are necessarily designed to fail in the sense that the failure is a normal mode of operation. Failing transactions are still considered exceptional; their sole goal is to provide logical atomicity.
returningfory2 · a year ago
I don't think this controversial. Retrying failed transactions is a common strategy.
gwd · a year ago
You're the first person I've heard say so. When I was learning DB stuff, there were loads of examples of things that looked at the error from a transaction. Not a single one of them then retried the transaction as a result.

The OP's comment is a symptom of this -- they did some writes or some transactions and were getting failures, which means they weren't retrying their transactions. And then when they searched to solve the problem, the advice they received wasn't "oh, you should be retrying your transactions" -- rather, it was some complicated technical thing to avoid the problem by preventing concurrent writes.

rad_gruchalski · a year ago
Wouldn't you need two contexts? One for retry cancellation, one for underlying resources to be passed on to?
physicles · a year ago
GOMEMLIMIT has really cut down on the amount of time I’ve had to spend worrying about the GC. I’d recommend it. Plus, if you’re using kubernetes or docker, you can automatically set it to the orchestrator-managed memory limit using something like https://github.com/KimMachineGun/automemlimit — no need to add any manual config at all.
nickzelei · a year ago
Oh this is a good find. Thank you for sharing that link!
arccy · a year ago
https://pkg.go.dev/go.uber.org/automaxprocs is another useful one if you set CPU limits
trustno2 · a year ago
Other note

Sooner or later you will hit html/template, and realize it's actually very weird and has a lot of weird issues.

Don't use html/template.

I grew to like Templ instead

emmanueloga_ · a year ago
Templ [1] is great!

Another go mod that helps a lot when massaging JSON (something most web servers end up doing sooner or later) is GJSON [2].

--

1: https://github.com/a-h/templ

2: https://github.com/tidwall/gjson

arp242 · a year ago
stdlib templates are a bit idiosyncratic and probably not the easiest to start with, but they do work and don't have "weird issues" AFAIK. What issues did you encounter?
kbolino · a year ago
I don't know what issues others have had with it, but for me one notable thing is that html/template strips all comments out. This is by design, but it's not documented anywhere. I've proposed making this configurable, but my proposal has gotten no traction so far.
sethammons · a year ago
I am just trying Templ. I like what I am seeing for the most part. There are some tooling ergonomics to work out. Lots of "suddenly the editor thinks everything is an error and nothing will autoimport or format" before getting back to mostly working. Click-to-definition goes to the autogenerated code instead of the templ file. A couple of things like that. But soooooooooo much better to deal with code gen than html/template. That thing is a pita.
coffeeindex · a year ago
What’s so bad about html/template?
srameshc · a year ago
Good to see the author mention routing. I've been mentally stuck with mux for a long time and didn't pay attention to the new release features. Happy that I always find things like these on HN.
JodieBenitez · a year ago
Nice new feature, would actually make me want to use Go without Gin.
leetrout · a year ago
I am over Gin and have been for years yet everyone keeps using it because it has inertia. The docs are garbage.

Big fan of Echo and it has much better docs.

https://echo.labstack.com/

rwdf · a year ago
I've grown to prefer go-chi over Gin (or Echo), since it's just the standard library with some QoL features on top.
codegeek · a year ago
What I love about Go is its simplicity and lack of framework dependency. Go is popular because it has no dominating framework. Nothing wrong with frameworks when they fit the use case, but I feel that we have become over-dependent on frameworks, and Go brings that freshness of just using the standard library, with some decent battle-tested 3rd party libraries, to create something decent.

I personally love "library over framework" mindset and I found Go to do that best.

Also, whether you want to build a web app or cli tool, Go wins there (for me at least). And I work a lot with PHP and .NET as well and love all 3 overall.

Not to mention how easy it was for someone like me, who never wrote Go before, to get up and running with it quickly. Oh, did I mention that I personally love the explicit error handling, which gets a lot of hate (never understood why)? I can do if err != nil all day.

A big Go fan.

jeffreyrogers · a year ago
I like Go for this reason as well. In Python I found the Flask framework to be suitably unobtrusive enough to be nice to use (never liked Django), but deploying python is a hassle. Go is much better in that area. The error handling never bothered me either.

I think if Go shipped better support for auth/sessions in the standard library more people would use it. Having to write that code yourself (actually not very hard, but intimidating if you've never done it before) deters people and ironically the ease of creating Go packages makes it unclear which you should use if you're not going to implement it yourself.

leetrout · a year ago
I am a Django apologist because I grew up with Django. So with that being said, I'm not out to convert you but I am genuinely curious what you don't like about it. Promise I won't refute anything I just like to try to understand where it turned off folks.

I don't like flask because it seems just easy enough to be really productive in the beginning but you eventually need most of the things Django gives you. So I would rather pick up Django Rest Framework or Django Ninja than Flask or Fast API. In those cases I jump straight to Go and use it instead because the same library decisions on the Go side give me a lot more lift on the operations end (easy to deploy, predictable performance into thousands of requests per second if built correctly).

atomicnumber3 · a year ago
My main gripe about Go is that it's decent for the middle and late stages and really really bad to start with. You'll spend way too much time rewriting stuff you literally get for free by running "rails new" or "bundle add devise".
kaba0 · a year ago
> (actually not very hard, but intimidating if you've never done it before)

This stuff is never really hard, but you will create countless vulnerabilities that way. The most important job of a good framework is cutting down significantly on the ways to get things wrong, exposing you only to already-safe APIs.

kolja005 · a year ago
I'm curious in what sense you find Python difficult to deploy? My company has tons of Python APIs internally and we never have much trouble with them. They are all pretty lightly used services, so is it something about doing it at a larger scale?
codegeek · a year ago
Some great points about the downsides of Go. Btw I was a Flask junkie in early days.