Our job often involves breaking big problems down into many little problems, so it should be clear that each small step makes progress towards solving the big ones. It can be easy to feel as if that progress isn’t happening, or frustrating that it isn’t happening fast enough. But experience should also tell us that it all seems to come together quickly towards the end. There was never a magic leap; it was all the small steps put together.
“Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.”
I think it’s very promising, if you believe in the potential of Linux on the desktop, that gaming used to be the standard “Linux doesn’t do what I need, so I stay on Windows” argument. Thanks to a lot of investment and hard work, particularly by Valve and others contributing to software like Wine/Proton, that is no longer the case. Many games work fine on Linux today, even among the big names, and some even have native versions. Nowadays it mostly seems to be “anti-cheat” measures, themselves statistically indistinguishable from malware, that still cause trouble.
Another potential sticking point for adoption by home users today is that few, if any, of the big streaming services work well on Linux. This also seems to come down at least partly to DRM. A cynic might suggest that this is because Linux will give a more appropriate response if a copy protection system tries to do invasive things that it has no business doing on someone else’s computer. In any case, it’s another significant barrier, but if we could get to the point where you could at least watch HD content like users of other platforms when you’re paying the same subscription fees, it’s another barrier that could fall.
This latter example is, of course, more than a little ironic given the subject of today’s discussion. But then the behaviour that Signal is subverting the DRM system to protect against probably wouldn’t fly for more than five minutes on Linux in the first place, so I don’t think Linux failing to enable intrusive/abusive DRM is really the problem here…
var favoriteFoodsOfFurryPetsOfFamousAuthorsOfLongChineseBooksAboutHistory = books
    .filter(book =>
        book.pageCount > 100 &&
        book.language == "Chinese" &&
        book.subject == "History" &&
        book.author.mentions > 10_000
    )
    .flatMap(book => book.author.pets)
    .filter(pet => pet.isFurry)
    .map(pet => pet.favoriteFood)
    .distinct()
Or in Scala:

val favoriteFoodsOfFurryPetsOfFamousAuthorsOfLongChineseBooksAboutHistory = (for {
  book <- books
  if book.pageCount > 100 &&
     book.language == "Chinese" &&
     book.subject == "History" &&
     book.author.mentions > 10_000
  pet <- book.author.pets
  if pet.isFurry
} yield pet.favoriteFood).distinct
Though, most Scala programmers would prefer higher-order functions over for-comprehensions for this.

An alternative, which in FP-friendly languages would have almost identical performance, would be to make the shift in objects more explicit:
var favoriteFoodsOfFurryPetsOfFamousAuthorsOfLongChineseBooksAboutHistory =
    books
        .filter(book => isLongChineseBookAboutHistory(book))
        .map(book => book.author)
        .filter(author => isFamous(author))
        .flatMap(author => author.pets)
        .filter(pet => pet.isFurry)
        .map(pet => pet.favoriteFood)
        .distinct()
I slightly prefer this style with such a long pipeline, because to me it’s now built from standard patterns with relatively simple and semantically meaningful descriptions of what fills their holes. Obviously there’s some subjective judgement involved with anything like this; for example, if the concept of an author being famous was a recurring one then I’d probably want it defined in one place like an `isFamous` function, but if this were the only place in the code that needed to make that decision, I might inline the comparison.

http://literateprogramming.com/
For a book-length discussion of this see:
https://www.goodreads.com/book/show/39996759-a-philosophy-of...
previously discussed here at: https://news.ycombinator.com/item?id=27686818 (and mentioned more in passing other times; "Clean Code" adherents should see: https://news.ycombinator.com/item?id=43166362)
That said, I would be _very_ interested in an editor/display tool which would show the "liveness" (lifespan?) of a variable.
I did once write a moderately substantial application as a literate Haskell program. I found that the pros and cons of the style were quite different from those of more popular/conventional coding styles.
More recently, I see an interesting parallel between a literate program written by a developer and the output log of a developer working with one of the code generator AIs. In both cases, the style can work well to convey what the code is doing and why, like good code comments but scaled up to longer explanations of longer parts of the code.
In both cases, I also question how well the approach would continue to scale up to much larger code bases with more developers concurrently working on them. The problems you need to solve writing “good code” at the scale of hundreds or maybe a few thousand lines are often different to the problems you need to solve coordinating multiple development teams working on applications built from many thousands or millions of lines of code. Good solutions to the first set of problems are necessary at any scale but probably insufficient to solve the second set of problems at larger scales on their own.
This only works for versions of your package that do exist in nixpkgs but aren’t currently the default for your chosen channel, so it doesn’t help if your channel is out of date and you want to install a newer version that hasn’t been packaged yet. But then that’s the case in almost any Linux distro if you rely on installing your software from the distro’s native package repo, and much the same solutions exist in NixOS as in other distros. Although if you’re really determined, you can also start writing your own derivations to automate building the latest versions of your favourite applications before they’re available from nixpkgs…
¹ https://lazamar.co.uk/nix-versions/
² https://lazamar.github.io/download-specific-package-version-...
I've been getting by with buildFHSenv and Flakes, which, despite my complaints, really isn't that annoying. My goal at this point is to eventually compile all my flakes and take on Lutris.
As a fellow daily driver of NixOS, you’ve just summed up my biggest problem with it. You can do almost anything, and once you’ve figured out how and why you do it a certain way, that way often makes a lot of sense and the NixOS design can offer significant benefits compared to more traditional distros without much downside. But NixOS is out of the mainstream and its documentation is often less than ideal, so there is a steep learning curve that is relatively difficult to climb compared to a more conventional distro like Ubuntu.
The shared object problem in particular comes up so often, particularly if you use applications or programming languages with their own ecosystem and package management, that I feel like having nix-ld installed and activated by default, with a selection of the most useful SOs available out of the box, would be a significant benefit to new users. Or if including it in a default installation is a step too far, many users would probably benefit from some HOWTO-style documentation so they can learn early on that nix-ld exists: how it helps with software that was built for “typical” Linux distros, why you don’t need or want it for software that was already built for NixOS (such as the contents of nixpkgs), and how to work out which shared objects an application needs and then find and install them for use with nix-ld.
I haven’t yet felt confident enough in my own NixOS knowledge to contribute something like that, but one nice thing about the NixOS community is that there are some genuine experts around who often pop up in these discussions to help the rest of us. I wonder if there’s scope for sponsoring some kind of “Big NixOS Documentation Project™” to fund a few of those experts to close some of those documentation gaps for the benefit of the whole community…
When it works, it’s great. I like that I can install (and uninstall) much of the software I use declaratively, so I always have a “clean” base system that doesn’t accumulate numerous little packages at strange versions over time in the way that most workstations where software is installed more manually tend to do.
This is a trade-off, though. Much is made of the size of the NixOS package repository compared to other distros, but anecdotally I have run into more problems getting a recent version of a popular package installed on my NixOS workstation than I had in probably a decade of running Debian/Ubuntu flavoured distros.
If the version of the package you want isn’t available in the NixOS repo, it can be onerous to install it, because by its nature NixOS doesn’t follow some popular Linux conventions like FHS. Typically, you write and maintain your own Nix package, which often ends up similar to fetching a known version of a package from a trusted source and then following the low-level build-from-source process, but all wrapped up in Nix incantations that may or may not be very well documented, and sometimes with a fair bit of detective work to figure out all the versions and hashes of not just the package you want but also all its dependencies, which may in turn need packaging similarly themselves if you’re unlucky.
It’s also possible to run into this when you’re installing not whole software applications but things like plug-ins for an application or libraries for a programming language, even when the main application is available from the NixOS package repository. You might end up needing a custom package for the main application so that its plug-in architecture or build system can find the required dependencies in the expected places when you try to install the extra things. Again, this is all complexity and hassle that just doesn’t happen on mainstream Linux distros. If I install Python and then `pip install somepackage`, 99.9% of the time that just works everywhere else, but it frequently won’t work out of the box on NixOS.
It’s one of those things that is actually perfectly reasonable given the trade-offs that are explicitly being made, yet still makes NixOS time-consuming and frustrating in a way that other systems simply aren’t when you do run into the limitations.
This comment is already way too long, so I’ll just mention as a footnote that NixOS also tries to reconcile two worlds, and not all Linux software is particularly nicely arranged to be managed declaratively. So in practice, you still end up with some things being done more traditionally/imperatively anyway, and then you have a hybrid system that compromises some of the main benefits of the declarative/immutable pattern. There are tools like Flakes and Home Manager that help to overcome some of this as well, and as others have said, they are promising steps in good directions, but we’re not yet realising the full potential of this declarative style and it’s hard to see how we get from here to there quickly.
I don't think that's a big problem anymore. Between typeshed and typing's overall momentum, most libraries have at least decent typing and those that don't often have typed alternatives.
ORMs have entered the chat…
These sometimes use a lot of dynamic modification, such as adding implicit ID fields or adding properties to navigate a relationship with another type that is defined in code only from the other side.
It can also be awkward to deal with “not null” database fields if the way the ORM model classes are defined means fields are nullable as far as the Python type hints are concerned, yet the results of an actual database query should never have a null value there. Guarding against None every time you refer to one of them is tedious.
I’m not exactly the world’s loudest advocate for ORMs anyway, but on projects that also try to take type safety seriously, they do seem to be a bit of a dark corner within the Python ecosystem.
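For what it’s worth, the newer annotation-driven mapping styles do narrow the gap a little. A minimal sketch, assuming SQLAlchemy’s 2.0-style declarative API (the models themselves are hypothetical): nullability lives in the annotation, so the schema and the type checker finally agree about which fields can be None.

# Hypothetical models, using SQLAlchemy's 2.0-style declarative mapping,
# where nullability is part of the annotation the type checker sees.
from typing import Optional

from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Base(DeclarativeBase):
    pass


class Author(Base):
    __tablename__ = "author"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]                # NOT NULL in the schema, plain str to the checker
    nickname: Mapped[Optional[str]]  # NULLable, so the checker forces None guards

    books: Mapped[list["Book"]] = relationship(back_populates="author")


class Book(Base):
    __tablename__ = "book"

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    author_id: Mapped[int] = mapped_column(ForeignKey("author.id"))

    # Author.books above is only navigable because this side declares the
    # relationship too; the string names are resolved dynamically at runtime.
    author: Mapped["Author"] = relationship(back_populates="books")

It doesn’t remove the dynamic magic (the relationship is still wired up from strings at runtime), but at least a checker can now tell `name` from `nickname`.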
I've read that F# has units, Ada and Pascal have ranges as types (my understanding is these are runtime enforced mostly), Rust will land const generics that might be useful for matrix type stuff some time soon. Does any language support all 3 of these things well together? Do you basically need fully dependent types for this?
Obviously, with discipline you can work to enforce all these things at runtime, but I'd like it if there was a language that made all 3 of these things straightforward.
Matrix dimensions are certainly doable, for example, because templates representing mathematical types like matrices and vectors can be parametrised by integers defining their dimension(s) as well as the type of an individual element.
You can also use template wizardry to write libraries like mp-units¹ or units² that provide explicit representations for numerical values with units. You can even get fancy with user-defined literals so you can write things like 0.5_m and have a suitably-typed value created (though that particular trick does get less useful once you need arbitrary compound units like kg·m·s⁻²).
Both of those are fairly well-defined problems, and the available solutions do provide a good degree of static checking at compile time.
IMHO, the range question is the trickiest one of your three examples, because in real mathematical code there are so many different things you might want to constrain. You could define a parametrised type representing open or closed ranges of integers between X and Y easily enough, but how far down the rabbit hole do you go? Fractional values with attached precision/error metadata? The 572 specific varieties of matrix that get defined in a linear algebra textbook, and which variety you get back when you compute a product of any two of them?
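To make the runtime option from the question concrete: with discipline all three checks are easy to hand-roll, you just find out about mistakes when the code runs rather than when it compiles. A minimal Python sketch, with all names hypothetical:

# All names here are hypothetical; a minimal sketch of the
# runtime-enforcement route, not any real units/linear-algebra library.
from dataclasses import dataclass


@dataclass(frozen=True)
class Metres:
    """Unit wrapper: mixing units becomes a runtime TypeError."""
    value: float

    def __add__(self, other: "Metres") -> "Metres":
        if not isinstance(other, Metres):
            raise TypeError("can only add Metres to Metres")
        return Metres(self.value + other.value)


@dataclass(frozen=True)
class Percentage:
    """Range-restricted value, validated on construction."""
    value: float

    def __post_init__(self) -> None:
        if not 0.0 <= self.value <= 100.0:
            raise ValueError(f"{self.value} is outside [0, 100]")


class Matrix:
    """Dimensions are checked when you multiply, not when you compile."""

    def __init__(self, rows: list[list[float]]) -> None:
        if not rows or len({len(r) for r in rows}) != 1:
            raise ValueError("rows must be non-empty and rectangular")
        self.rows = rows
        self.shape = (len(rows), len(rows[0]))

    def __matmul__(self, other: "Matrix") -> "Matrix":
        m, n = self.shape
        n2, p = other.shape
        if n != n2:
            raise ValueError(f"cannot multiply {self.shape} by {other.shape}")
        return Matrix(
            [[sum(self.rows[i][k] * other.rows[k][j] for k in range(n))
              for j in range(p)]
             for i in range(m)]
        )

The cost is that a shape or unit mismatch only surfaces when that line actually executes; dependent types or const generics would move exactly these checks to compile time.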
1. No network latency: you do not have to send anything across the Atlantic.
2. You get privacy.
3. It's free: you do not need to pay any SaaS business.
An additional benefit is that scaling is built in. Every person has their own setup, so no central service has to take care of everyone.
While the modern world of mobile devices and near-permanent fast and reliable connectivity has brought some real advantages, it has also brought the ability for software developers to ruthlessly exploit their users in ways no-one would have dreamt of 20 or 30 years ago. Often these are pitched as if they are for the user’s benefit — a UI “enhancement” here, an “improved” feature there, a bit of casual spying “to help us improve our software and share only with carefully selected partners”, a subscription model that “avoids the big up-front cost everyone used to pay” (or some questionable logic about “CAPEX vs OPEX” for business software), or a promise that although our startup has been bought by a competitor, all the customers who chose our product specifically to avoid that competitor’s inferior alternative have nothing to worry about, because the new owners have no ulterior motive and will continue developing it just the way we have so far.
The truth we all know but don’t want to talk about is that many of the modern trends in software have been widely adopted because they make things easier and/or more profitable for software developers at the direct expense of the user’s experience and/or bank account.