Readit News
nikeee · 2 years ago
Are there any insights into why they chose the rather unusual stack of Rust and Flutter for the new UI? To an outside observer, it doesn't look like a carefully considered business decision, but like some random engineer who wanted to learn Rust and Flutter.

Firstly, Rust for an installer UI? Does it need to be especially fast or memory-safe? Maybe. Does it have nice bindings for a UI layer? Maybe, or did they create the bindings for Flutter (a UI framework designed for object-oriented Dart) themselves? And why Flutter? I guess it can do some pretty animations. Being maintained by Google and designed for Dart, it may just get abandoned at some point, amplified by the fact that no one uses Dart except for Flutter.

addicted · 2 years ago
Because it’s Canonical and they wouldn’t be Canonical if they didn’t do something randomly and unnecessarily different only to collapse under the weight of maintaining their own unique bespoke technology and then go right back to what everyone else was doing, creating a whole lot of confusion and wasted effort in the interim.
oooyay · 2 years ago
Canonical made an announcement some time ago when Flutter started supporting desktop usecases that they were all in on Flutter: https://snapcraft.io/blog/canonical-enables-linux-desktop-ap...

This is them actually discussing the installer: https://ubuntu.com/blog/flutter-and-ubuntu-so-far

kyrofa · 2 years ago
It's top-down engineering. Mark commanded the desktop team to go all-in on Flutter. This is how Canonical functions.
wmf · 2 years ago
So he's learned nothing from Unity and Mir.
QuercusMax · 2 years ago
Google just laid off some large portion of the Flutter team: https://news.ycombinator.com/item?id=40184763
m463 · 2 years ago
Didn't know what flutter was.

Flutter is an open-source UI software development kit created by Google. It can be used to develop cross platform applications from a single codebase for the web,[4] Fuchsia, Android, iOS, Linux, macOS, and Windows.[5]

https://en.wikipedia.org/wiki/Flutter_(software)

1oooqooq · 2 years ago
It compiles your code into Dart, then adds helpers to produce a desktop/Android/iOS shell app that either embeds a Dart interpreter or compiles the Dart into Java/Obj-C on top of a shell-app project.

It's a freaking mess, but ambitious if you want a single code base (with 10 code bases hidden inside).

voidr · 2 years ago
The COSMIC desktop environment is built with Rust and that seems to work fine, so the issue is likely with Flutter, though I don't really understand why they chose it over Qt or GTK.

Apparently Microsoft writes their installer in HTML, CSS and JavaScript and that also works fine.

It is very embarrassing that Canonical somehow manages to screw up something as simple as an installer.

edwinjm · 2 years ago
Haha. It does seem a bit overkill. I can imagine you don't want your animations stuttering while downloading and installing tens of packages, so Rust seems like a good solution. The advantage of Flutter is that you have full control of the UI no matter which device you use. Why not use GNOME? Maybe GNOME only works well after everything is installed? They might have good reasons, but maybe they just got sick of C++ and GNOME and wanted to try something else.
Quekid5 · 2 years ago
Something something Snap. Something something Mir. Something something Upstart.

... it's just Shuttleworth's latest fancy, I expect.

cycomanic · 2 years ago
I am always amazed at how the Red Hat crowd has managed to spin the narrative so that Ubuntu is the one always starting its own thing instead of using something established.

Reality is that both snap (2014) and upstart (2006) came before flatpak (2015) and systemd (2010).

Now, there might be valid reasons why the later options are superior; however, accusing Ubuntu of just doing something different out of "fancy" seems quite unfair. They developed solutions for things that were widely regarded as problems (as evidenced by others later doing the same thing). Where they failed (and there are multiple reasons, IMO) is in getting their solutions adopted by the wider community.

foul · 2 years ago
NIH and Silicon Valley behaviour aside, I like that Canonical actually ships things coming out of its R&D and M&A. Most of the Linux userspace is probably maintained jointly by IBM and Microsoft nowadays; it's not bad to hear of outsider work.
dyingkneepad · 2 years ago
I'm here just to remind everybody that the Debian Stable release cadence is about the same as Ubuntu LTS. Plus Debian doesn't have snaps, Unity and the other Ubuntu-specific "value-adds".

(on the other hand, Debian's bug report is stuck in the 1920's, still being completely based on e-mail)

Edit: also, if you use new hardware, just install Debian Testing and configure /etc/apt/sources.list to say "trixie" instead of "testing", which will ensure that next year, when Trixie becomes Stable, you'll be running Debian Stable.
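The sources.list tweak described above can be done in one line (a sketch assuming GNU sed and the default Debian mirror layout; back up the file first):

```shell
# Pin apt to the release codename instead of the moving "testing" alias,
# so the system settles on Stable once trixie is released.
sudo sed -i.bak 's/\btesting\b/trixie/g' /etc/apt/sources.list
sudo apt update
```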

dima55 · 2 years ago
And on top of that, Debian's bug tracker is where development happens (as opposed to Ubuntu's, which is a black hole). Packages that cannot be built get a bug report BEFORE the release on Debian, but are silently dropped from the release on Ubuntu. Ubuntu is "Debian with extra crap and extra bugs". Debian strongly recommended.
mianos · 2 years ago
I feel this has changed over the years, too. Many years ago I chose Ubuntu because it was fresher and bugs seemed to be fixed quicker, maybe because it was always more recent.

I moved to Debian and found there weren't really any small bugs and, as a bonus, no major things completely broken, unlike when Ubuntu broke a few packages with some sort of snap/dpkg confusion.

Ubuntu has had a bit of a stinky vibe lately: the whole LXD thing, plus the number of people I know (some I have worked with, and they're great) who went to work there and left quickly.

bayindirh · 2 years ago
> still being completely based on e-mail

Nope, the process can start with the "reportbug" utility, which needs a one-time setup on first run but lets you report bugs from the comfort of your terminal.

On the other hand, once you install Debian, you won't be reinstalling it for the next two decades. It'll just update.

> also, if you use new hardware, just install Debian Testing...

Sure, that's fine for personal systems & office workstations that sit behind a NAT on a peaceful network.

For any server or internet-facing device, install Debian Stable with the NetInstall ISO and enable firmware installation. Testing does not get security updates. I repeat: Testing does not get security updates.
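For reference, a typical Stable sources.list with the security suite enabled looks something like this (bookworm shown; mirror paths are the usual defaults):

```
deb http://deb.debian.org/debian bookworm main non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware
deb http://deb.debian.org/debian bookworm-updates main non-free-firmware
```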

--Sent from my Debian Trixie box.

dyingkneepad · 2 years ago
Yeah, good reminder about Testing's security situation. But packages with serious security issues do sometimes get a fast pass from Unstable to Testing.

Chances are that if you're installing a server with hardware new enough to need Debian Testing (or newer), you probably already know what you need to know. So yeah, Testing is more for desktops.

I tried the reportbug utility once; it made up an email address for me (derived from my machine's hostname) and I never heard about the bug again. Couldn't find it on the web interface. I'm not that stupid, and yet I failed to report a bug. Good web interfaces are a million times better for bugs, and they're also capable of sending e-mail notifications to whoever wants them.

kyrofa · 2 years ago
Even when I worked at Canonical I never installed the latest LTS until at least its first point release in the summer. Maybe I'm part of the problem: if more people followed my lead, the initial release would get buggier and buggier.
sshine · 2 years ago
Yes, that is the problem.

That is why so much software sucks.

Because the people who make it don’t use it, and hardly care to thoroughly test if it works.

We’re lucky to catch things with extensive dogfooding.

(Also, no blame here. I could imagine working for Canonical and not even running Ubuntu.)

kyrofa · 2 years ago
> Also, no blame here.

Oh don't worry, no offense taken. It's an interesting problem: dogfooding is definitely a good way to catch problems, but if your release is unstable enough to break machines, you can end up with a wide swath of the company being unproductive. The stability of the overall release wasn't one of my core responsibilities, so naturally I gave more priority to the things that were; I needed a release that worked so I could get my job done.

There might be some cultural solutions to that. For example, if the company expects employees to dogfood, perhaps testing out a beta in a specific timeframe, it could be culturally accepted that devs might have broken machines during that window. If that were taken into account in both the schedule and the support paths, it would make things a bit easier. Honestly, though, even if that were the case, it would still be a hard sell for me. Quite simply: I don't like failing at my duties. I would need to be convinced that this was one of my duties, and I'm not sure how the company would pull that off. And that's ignoring other very practical concerns, such as the fact that a fair number of Canonical engineers only have one work machine. Having that machine down, depending on the definition of "down," can make it difficult to get support in the first place.

popey · 2 years ago
There certainly used to be a strong push to have internal people use the product a lot more during the development cycle. There was also a real desire to make the devel version actually usable. That fell by the wayside, sadly.

Having your developer workstation break while you have a backlog full of stuff to do would absolutely make you less motivated to run the developer release, especially if you're not on the desktop team.

kyrofa · 2 years ago
I suspect it was a lot easier to feel like that was one's duty when the company was smaller and you were closer to every piece of it. It was also probably easier to get issues resolved when you knew exactly who to talk to. Maintaining that culture as you grow is probably quite the challenge!
SoftTalker · 2 years ago
Same. I never move to the next LTS release until the prior one is getting close to EOL. Let others shake out the bugs.
trm42 · 2 years ago
I haven't used any Linux distribution on the desktop, so I can't comment on that side or the installer, but my home server has been running Ubuntu LTSes for over a decade. Last night I upgraded it to 24.04 LTS and I was really surprised by how easy and smooth the upgrade was. A couple of previous upgrades were a lot hairier, with things breaking in surprising ways afterwards, but this time everything worked perfectly from the first reboot.
logicprog · 2 years ago
That experience actually aligns pretty well with the OP, because my experience, and the general consensus among Linux users that I've seen, is that Canonical is focusing almost all of its effort on making Ubuntu an excellent server experience, since that's where the actual business is, and is letting the desktop sort of bit-rot into oblivion.
hervem · 2 years ago
How did you upgrade it? As far as I know, there is a blocker bug which isn't fixed yet.
wildylion · 2 years ago
As the person responsible for endpoints at my company, I was impatiently awaiting 24.04 because of the promised TPM-based LUKS encryption. Finally, no more telling users why they need TWO passphrases on their machines!

Well, yes. You bet the encryption works. Even an advanced hacker will have a hard time unlocking the hard drive... and so will you, if the TPM refuses to escrow the key after the install, since you can only see your recovery seed AFTER a successful installation!

shrimp_emoji · 2 years ago
Ubuntu is the buggiest distro I've ever used.

It's ironic, because it's supposed to be the most stable, mainstream one. But from installation to day-to-day usage, it was crashtastic in the years I was on it.

By comparison, in an equal number of years split between Manjaro and Arch, I've had almost zero issues. Somehow, the scary, dangerous rolling distros seem the most stable. It's hard to wrap a brain around.

jjcm · 2 years ago
Are you using the GUI or are you using it headless? Just to pile on anecdotal evidence, the oldest ubuntu server I have running has been going for 11 years now with no issues. I have around 4 others also running with no issues. These are all headless though.
horsellama · 2 years ago
What’s the best alternative distro for ML work? I mainly need the nvidia stack + PyTorch
rickspencer3 · 2 years ago
I am so biased this borders on shilling, but may I suggest taking a look at openSUSE Leap Micro? https://get.opensuse.org/leapmicro/5.5/

We (SUSE) have extensive testing and quality control, if you are looking for something very stable. Leap is built from the same bits as the enterprise version. The "Micro" part means it is smaller, transactional, and immutable; it's built for containerized and virtualized applications. You can install the NVIDIA bits in the OS, and then build a container with the libraries.

lnxg33k1 · 2 years ago
I'm using Debian Sid without any issues to report, but I've used Linux for 20 years, so maybe my definition of "issue" is skewed.

I switched to Sid from Arch because Arch's approach to major software releases is dangerous (e.g. adopting Plasma 6 the day of release).

exe34 · 2 years ago
My 3090 is plugged into an old Intel PC running Debian 11. I tried Debian 12, but couldn't quite get the right version of CUDA to work with both TensorFlow and PyTorch.
logicprog · 2 years ago
Personally, I'd just pick whatever Linux distro suits your fancy for other reasons, and then run the official PyTorch Docker image as a distrobox. The combination of NVIDIA drivers, CUDA/cuDNN/etc. libraries, and Python libraries is extremely fragile and sensitive to version bumps, so I'd want it in an isolated, consistent, official box. This is what I do on uBlue.
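A minimal sketch of that setup, assuming distrobox and a container runtime are already installed (the container name is illustrative; the image is the official PyTorch one on Docker Hub):

```shell
# Create a container from the official PyTorch image; --nvidia asks
# distrobox to pass the host's NVIDIA driver through to the container.
distrobox create --name pytorch-box --nvidia \
  --image docker.io/pytorch/pytorch:latest

# Drop into the container and check that the GPU is visible to PyTorch.
distrobox enter pytorch-box
python -c "import torch; print(torch.cuda.is_available())"
```

The point of the design is that driver/CUDA/library version matching happens once, inside the image, instead of against your host distro's package churn.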
skywhopper · 2 years ago
Debian will be the closest in terms of functionality and organization.
FridgeSeal · 2 years ago
PopOS.

Comes with the drivers you need, super nice and streamlined UX.

bayindirh · 2 years ago
Debian.
homarp · 2 years ago
arch ?
all2 · 2 years ago
I managed to get a 4090 hooked up on an Arch machine. It buzzes when the LLM is generating stuff. It isn't a straightforward thing, though: you have to dig through the deps and get your versions right from bottom to top for everything to work. Some versions of CUDA don't play nicely with some versions of the Python modules, so take care if you go this route.
asddubs · 2 years ago
The .0 releases can have their problems, but a Canonical engineer also did literally submit a kernel patch, in response to my bug report, to fix a bug that prevented my machine from booting with any new enough kernel.