Readit News
ionrock commented on FastHTML – Modern web applications in pure Python   fastht.ml/... · Posted by u/bpierre
bbminner · a year ago
I have been reading about these kinds of projects for some time, and even prototyped my own a while back, but one question that keeps popping up is - all these python abstractions over how html and js and dom interact tend to be extremely leaky. So, as soon as you go beyond the simple todo example, you need to manage BOTH the standard browser model and the abstraction model in your head, and all the funny ways in which they interact. Do these kinds of libraries prove useful for people beyond their authors (who have a good mental model of a framework in their head anyway because they are developing it)?
ionrock · a year ago
This kind of framework helps to optimize a bit for returning hypertext (iow HTML snippets) rather than leveraging a frontend system that only interfaces with the backend via an API. From that perspective, you need to be able to send HTML snippets precisely and manage more URLs that provide the snippets. React already has a pretty strong abstraction around HTML with JSX that has been generally morphed into web components. Writing the HTML components on the server using a library that maintains valid HTML is convenient, and it also means you can deploy an application without having to bundle a bunch of template files.

I will say I do think some opinions on how to structure URLs to return snippets might be valuable. Some of these frameworks leverage headers htmx sends to use just part of the page, but I think it is easier to just have individual URLs for many use cases. I've used Go and Templ in a similar fashion and one benefit with Templ is that the snippets are effectively functions, so returning the specific section and passing args is reasonably natural when breaking something out of a page.

Overall though, the goal is to avoid duplicating your data model and abstractions in the UI, in favor of relying on better networks, faster browsers, and HTML improvements to create interesting interfaces with simpler code.

ionrock commented on Show HN: PlayBooks – Jupyter Notebooks style on-call investigation documents   github.com/DrDroidLab/Pla... · Posted by u/TheBengaluruGuy
chasinglogic · 2 years ago
Whenever I see tools like this I always think "that would've been great at my old job, where we didn't do post-mortems."

But nowadays I think: if I can automate a runbook, can't I just make the system heal itself automatically? If you have repeated problems with known solutions, you should invest in toil reduction to stop having those repeated problems.

What am I missing? I think I must be missing something because these kinds of things keep popping up.

ionrock · 2 years ago
Writing post-mortems is generally pretty kludgy. You might have a Slack bot that records the big-picture items, but ideally, a post-mortem would include connections to the nitty-gritty details while maintaining a good high-level overview. The other thing most post-mortems miss is communicating the discovery process. You'll get a description of how an engineer suspected some problem, but you rarely get details on how they validated it such that others can learn new techniques. At a previous job, I worked with a great sysadmin/devops engineer who would go through a concise set of steps when debugging things. We all sat down as a team, and he showed us the commands he ran to confirm transport in different scenarios. It was an enlightening experience. I talked to him and other DevOps folks about Rundeck, and it was clear that the problem isn't whether something can be automated, but rather whether the variables involved are limited enough to be represented in code. When you do the math, the time it would take to write code to solve some issues is not worth the benefit.

Iterating on the manual work to better communicate and formalize the debugging process could fit well into the notebook paradigm. You can show the scripts and commands you're running to debug while still composing a quality post-mortem as the incident is happening, while things are fresh.

The other thing to consider is how often you get incidents and how quickly you need to get people up to speed. In a small org, devs can keep most things in their heads and use docs, but when things get larger, you need to think about how you can offload systems and operational duties. If a team starts by iterating on operational tasks in notebooks, you can hand those off to an operations team over time. A quality, small operations team can take on a lot of work and free up dev time for optimizations or feature development. The key is that devs have a good workflow for handing off operational tasks that are often fuzzier than code.

The one gotcha with a hosted service IMO is that translating local scripts into hosted ones takes a lot of work. On my laptop, I'm on a VPN and can access things directly, whereas a hosted service needs to figure out how to allow a third party to connect to production backend systems. That can be a sticky problem that makes it hard to clarify the value.

ionrock commented on CSS written in pure Go   github.com/AccentDesign/g... · Posted by u/andrewfromx
ionrock · 2 years ago
After dealing with the current state of affairs regarding the front end, I think this is pretty interesting. I've been writing Go web apps with templ and htmx. I've punted on dealing with CSS by using a CDN and Bulma. I did get Tailwind and friends working, but it required a lot of work since I was outside a React/JS framework like Next.js. It also felt really weird using npm to install CSS.

The nice thing about this is that you can use a Tailwind-style Go library to programmatically build your CSS into your binary. There are no extra files: one build step and one binary output. After trying out Fly and Vercel, I went back to running things with Docker on a VM, and a single binary in a container makes things much simpler IMO.

This looks pretty cool and I look forward to seeing folks do interesting things with it.

ionrock commented on Neovide – A simple, no-nonsense, cross-platform GUI for Neovim   neovide.dev... · Posted by u/frankjr
anpep · 2 years ago
Should I take this opportunity to switch to Neovim for once? I've attempted using vim/neovim/emacs, etc. many times but it's just so confusing to me.

Why would I go through the trouble of debugging my editor for simple stuff like LSP completions and semantic highlighting? It's insanely difficult for me to wrap my mind around vim packages, configs, etc., when VSCode/GoLand/et al. do a pretty darn good job of being decent editors that you don't need to hack on and just work out of the box.

Don't get me wrong, I'm not throwing shade on vim/emacs, I'm just wondering what I'm missing, since everyone's been super happy and productive with vim for ages, and whether I'm approaching these tools the wrong way...

ionrock · 2 years ago
While I can't say this is true for vim, in Emacs I found that the customizability helps with a lot of different programming tasks. I run my terminals in Emacs, and they are associated with my projects. Magit (the Emacs git package) helps me do complex rebases with diffs alongside creating branches and everything else you might do in git (even the reflog when things get rough). There is even a handy rest-mode that lets me write and save HTTP sessions. I connect to my database in a buffer as well. What makes all this so handy is that I can move the buffers around to compare things side by side, use a single large buffer, etc. While VS Code has splits and terminals, I found that in Emacs I can access everything from my keyboard, and now that I've gotten used to it, I don't even think about it.

I've heard that a lot of vim folks get similar behavior via tmux and leverage other shell tools.

I'm not going to argue you should switch, because it is an investment. It is like owning a house: you have autonomy, but you're also on the hook to fix the air conditioner. You also can't just drop it and move to the next editor. Your hands and workflows become tied to your editor. Keybindings may be similar, but they are not the full story. Either way, getting good with your tools is a journey, so enjoy it!

ionrock commented on Htmx and Web Components: A Perfect Match   binaryigor.com/htmx-and-w... · Posted by u/alexzeitler
ionrock · 2 years ago
I don't know if I fully grok the web components described by the author. I did find that using templ with Go makes it reasonable to have components you can inject logic and state into when sending HTML back to htmx, where it makes sense. The appeal is that we focus on generating HTML once, and there is no need to redefine models in the frontend. This is important because it means we don't take HTML, convert it to a data structure, validate the data, send the data to a server that has to validate it again, and then render the resulting data as HTML once more. Each data transition is expensive from a complexity perspective and requires caching state in places where caching is hard.

I'll admit that I'm not a frontend guru so my bias for writing apps like I did 20 years ago likely shows here.

Still, when I consider all the different abstractions and details around using something like Next.js + Typescript, and where the division between server and client becomes mixed, reducing complexity feels advantageous.

Tailwind is another good example, where the explicit nature of attaching styles makes sense if you are writing React components: you've encapsulated all that noise behind a simple JSX tag. Using Tailwind with something that returns HTML then requires a similar way to encapsulate the noise. Again, I found using templ with Go made that feasible, but I've still avoided Tailwind because I don't want to introduce the noise too soon if possible.

It all makes me wonder if we lost the idea of a fullstack engineer, not because the problems became so challenging that we needed the extra complexity, but rather because we split our applications, and our orgs, into frontend and backend when we should have been more diligent about maintaining teams that could do everything.

The real tl;dr is that Templ for Go is pretty handy for writing components :)

ionrock commented on I quit my job to work full time on my open source project   ellie.wtf/posts/i-quit-my... · Posted by u/cwaffles
Brian_K_White · 2 years ago
You don't need any of it.

History is useful enough to exist as a feature, I up-arrow routinely, but it doesn't actually matter when it doesn't exist.

I find the idea of going out of your way to preserve and migrate years of shell history and make it searchable in a db about like:

You have a problem that water is flooding your kitchen floor. Normally you deal with a spill with a mop or towels. There is now too much water and so you decide that your normal towels aren't good enough and so you get more & better towels, or even put a sump pump in the corner to keep pumping all this water away.

I've written a lot of complicated pipelines and awk and sed etc, but they were either one-offs that are of hardly any value later, or I made a script, and the few things that are neither of those, are so few they automatically don't matter because they are few.

It's not illegal or immoral, just goofy.

ionrock · 2 years ago
I had a similar thought when I first looked at it, but then I thought about my browser history and URL bar. It is actually a lot of work to open files to write scripts, keep them organized, and make them accessible, just to make some commands simpler to run. I wrote https://github.com/ionrock/we for this very reason. I moved most args to env vars and made loading different env vars easy via files. Maybe the history is a better way to make these things reproducible and useful, by avoiding the indirection that scripts require?

While I agree it may not work with everyone's workflow, maybe it could be a powerful change to some folks' workflows. I'm going to try it out and see for myself!

ionrock commented on Show HN: Chet – Record your commands to speed up local development   chet.monster... · Posted by u/ionrock
ionrock · 2 years ago
Chet is meant to be a helpful big brother that keeps track of how long commands take to run. It stores the timings in a local SQLite database so you can run queries to find opportunities to optimize. There is also a simple service where you can post timings so your whole team can measure things.

I know there are other similar tools out there, but I hope this one might be helpful or interesting to someone!

ionrock commented on Cloud Firewalls   digitalocean.com/products... · Posted by u/AYBABTME
throwasehasdwi · 9 years ago
Isn't this just doing the same exact thing as iptables only worse since it's not transparent to the operating system?

I've created bad firewall rules by mistake many times and enforcing them transparently so the machines can't see them makes the issue almost impossible to debug and fix.

Of course I have the same gripe with AWS VPC setups I guess... I just think it's funny how the cloud keeps reinventing cloud versions of things that perform objectively worse than the original, but then everyone still uses them out of pure convenience or stupidity.

ionrock · 9 years ago
Personally, I think it is more convenient to think about things like this (i.e., firewall rules) as data, which makes an API a convenient way to work with that data. The converse, in my mind, is that I'd have to configure each node and ensure a text representation of my firewall rules is correct. That opens the door to concurrency concerns that I thankfully get to avoid with an API like this.

That said, I can see your point that you are hiding details from the OS that might be helpful, such as what hosts you can talk to.

Fortunately, just because you might configure a firewall with an API rather than some Ansible plays, it doesn't mean that you can't continue to use Ansible to fill in the gaps. For example, if you did use Ansible to previously configure your iptables, you might change the playbook to call the API based on some YAML. You might use the same YAML to write some information on the host that your application can use to understand the firewall rules that are used.
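As a sketch of that migration (the endpoint, token variable, and rule-file layout are hypothetical placeholders, not a real provider's API):

```yaml
# Hypothetical play: push locally defined rules to a cloud firewall API
# instead of templating iptables onto each host.
- name: Apply firewall rules via the provider API
  hosts: localhost
  vars:
    firewall_rules: "{{ lookup('file', 'rules.yml') | from_yaml }}"
  tasks:
    - name: Push each rule to the API
      ansible.builtin.uri:
        url: "https://api.example.com/v2/firewalls/{{ firewall_id }}/rules"
        method: POST
        headers:
          Authorization: "Bearer {{ api_token }}"
        body_format: json
        body: "{{ item }}"
        status_code: 201
      loop: "{{ firewall_rules }}"
```

The same `rules.yml` could also be copied onto the hosts so applications can introspect the rules the API enforces.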

The point is that it's always good to remember these are not either/or decisions.

Lastly, I'll also speak up for those folks who don't know much about firewalls and iptables. I understand the principles, but I'm far from confident managing that system myself. In my case, I'm really glad to have an option that gives me the benefits without forcing me to operate a system I'm not well equipped to operate.

ionrock commented on When Node.js is the wrong tool for the job   medium.com/@jongleberry/w... · Posted by u/vmware505
tps5 · 9 years ago
I think "always async" is the main advantage of node.

My general (perhaps wrong) impression is that other languages commonly used in backends are moving toward async io, usually through maturing libraries.

ionrock · 9 years ago
Most services that might be considered "backends" (i.e., databases, queues, cloud services) end up being written in languages that can safely use async techniques for I/O and still use real threading or some other method for managing CPU-bound problems.

Many of the applications people reach for node.js end up being "glue," much like Python, and can live within this constraint for a very long time, where the I/O optimization is a nice benefit.
