Readit News
pankalog commented on AI will make formal verification go mainstream   martin.kleppmann.com/2025... · Posted by u/evakhoury
pankalog · 2 months ago
My current research covers a subset of this: it essentially examines the quality of LLM outputs when they write tight-fitting DSL code for very context-specific areas of knowledge.

One example could be a low-level programming language for a given PLC manufacturer, where the prompt comes from a context-aware domain expert, and the LLM is able to output proper DSL code for that PLC. Think of "make sure this motor spins at 300rpm while this other task takes place"-type prompts.

The LLM essentially needs to juggle understanding those highly contextual clues with writing DSL code that very tightly fits the DSL definition.
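As a toy illustration of that tight fit, here's a sketch of validating LLM output against a made-up mini-DSL. The grammar, keywords, and rpm limit are entirely invented for this example; real PLC languages (e.g. IEC 61131-3 structured text) are far richer.

```python
import re

# Hypothetical mini-DSL: each line must be "TASK <name> { SET <motor> SPEED <rpm> }"
# (invented for illustration only)
LINE_RULE = re.compile(r"^TASK\s+(\w+)\s*\{\s*SET\s+(\w+)\s+SPEED\s+(\d+)\s*\}$")

def validate_dsl(llm_output: str) -> list[str]:
    """Return a list of error messages; an empty list means the output fits the DSL."""
    errors = []
    for i, line in enumerate(llm_output.strip().splitlines(), start=1):
        m = LINE_RULE.match(line.strip())
        if not m:
            errors.append(f"line {i}: does not match DSL grammar: {line!r}")
            continue
        rpm = int(m.group(3))
        if not 0 < rpm <= 3000:  # assumed hardware limit, purely illustrative
            errors.append(f"line {i}: rpm {rpm} outside motor range")
    return errors

print(validate_dsl("TASK spin { SET motor1 SPEED 300 }"))  # []
print(validate_dsl("make the motor spin at 300rpm"))       # one grammar error
```

The point is that a checker like this gives you a hard pass/fail signal on the LLM's output, which is exactly what makes constrained DSL generation measurable.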

We're still years away from this being thoroughly reliable for all contexts, but it's interesting research nonetheless. Happy to see that someone also agrees with my sentiment ;-)

pankalog commented on Asahi Linux Still Working on Apple M3 Support, M1n1 Bootloader Going Rust   phoronix.com/news/Asahi-L... · Posted by u/LorenDB
jcalvinowens · 4 months ago
I don't understand the obsession with the new Apple hardware. How is it worth this much trouble? My XPS13 works perfectly with Linux straight out of the box for half the price... and never in my entire life have I needed more than the eight hours of battery life it reliably delivers for me.

I do most of my work over SSH on big metal machines, maybe that's the disconnect? But seriously, there are few things in the world that matter less to me than how fast my laptop is. I did some real work a few weeks ago on a ten-year-old Celeron POS and it didn't bother me at all.

pankalog · 4 months ago
> I do most of my work over SSH on big metal machines, maybe that's the disconnect?

Yeah, I believe that's where the disconnect is. I moved from a ThinkPad to the 16-inch MacBook Pro with the M3 Pro chip, and I can reliably write and build code running locally across 5 different Docker containers for at least 10 hours. I once did a 48-hour hackathon with this laptop and only had to charge it, I think, 4 or 5 times. I need to be very mobile, going to different locations to attend meetings or write code, and it handles everything reliably for a (very extended) workday.

I would have to move from wall socket to wall socket with my old ThinkPad, though it's worth noting I was running Windows 10 at the time. The MacBook's best-in-class hardware (in performance per watt and per kg) combined with the software became unbeatable for my workflow.

That being said, my next laptop will be a reliable non-Apple ARM64 laptop with Apple-like performance, and I'll be running some Linux distribution on it.

pankalog commented on Credential Stuffing   ciamweekly.substack.com/p... · Posted by u/mooreds
mooreds · 4 months ago
Is the paper public? Would love to review/reference it for the newsletter.
pankalog · 4 months ago
No unfortunately, and it's pretty old. It was a paper/report for a course during my undergrad, so not polished by any means.
pankalog commented on Credential Stuffing   ciamweekly.substack.com/p... · Posted by u/mooreds
pankalog · 4 months ago
Some years ago I researched the whole credential stuffing ecosystem for a course paper at uni.

Credential stuffing is (or at least was) a gigantic market, and one of the biggest headaches for major paywalled services like Netflix, HBO, and Prime.

The people who made a living out of it were stuffing millions or billions of credentials (sourced from database leaks) into the most popular services, hoping to then sell the accounts for small amounts of money, like a dollar for a Netflix account with a 10-day warranty.

It's a numbers game at heart, with a substantial technical aspect. You need to optimize your checker code to send properly formatted requests that can't be intercepted and don't arouse suspicion, and there was a whole ecosystem of "methods": specific request-response chains that make your login attempt look like it comes from a real person. People developed advanced techniques to avoid triggering a CAPTCHA check at all, since solving CAPTCHAs automatically was cost-prohibitive, though not impossible (AI wasn't a thing back then). You then had to buy millions of proxies to route requests through different IPs, so you weren't sending millions of requests from a single address. Checkers had reached a point where, depending on your proxies, they were performing 10,000 or even 20,000 checks per minute. Multithreading was the cornerstone of these tools; even a simple 2-vCPU VM was bottlenecked by proxy speeds rather than CPU.

Back when I looked into it, it was the wild west, as SSO and other such technologies just weren't a thing yet. A company would become the latest fad of the credential-stuffing scene, and it would take its dev team an entire sprint just to ship a login page that could at least force a CAPTCHA check on every request, and that's IF they had the monitoring in place to notice the gigantic spike in login requests. Considering that a valid account on a service like eBay lets an attacker order whatever they want with the linked credit card, you can understand how big a security issue this is.

I haven't looked at it recently, but I assume this has become vastly more difficult for commonplace services like streaming providers and digital-goods marketplaces. SSO, IAM platforms like Keycloak, and advanced request-scanning techniques have evolved. I'm guessing things have become substantially better, but it's always going to be a big issue for smaller websites without a dedicated dev team, or without at least someone maintaining them.

pankalog commented on Software update bricks some Jeep 4xe hybrids over the weekend   arstechnica.com/cars/2025... · Posted by u/gloxkiqcza
pankalog · 4 months ago
I recently worked at a big home-lighting company, on the OS of the router device that sits between the light bulbs and the internet/user.

Our OTAU architecture used A/B system updates [1]. The core idea is that both the kernel and the (read-only) rootfs partitions have two bootslots in storage, and the OTAU only ever writes to the unused bootslot. If something goes wrong, the system automatically falls back to the previous version by simply switching which bootslot is used. Over the many years that architecture was in use, I couldn't find a single post-mortem that ended with bricked devices. Something to note: the rootfs partition was overlaid with a writable partition for persisting state data etc.
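A minimal toy model of that A/B bootslot behavior looks like this. The slot names, health flag, and retry policy are my own invention for illustration; in practice this logic lives in the bootloader (e.g. Android's boot_control HAL from [1]).

```python
class SlotState:
    """Toy A/B bootslot state machine: update the unused slot, fall back on failure."""

    def __init__(self):
        self.active = "A"                       # slot the bootloader will try next
        self.healthy = {"A": True, "B": False}  # has this slot ever booted successfully?

    def inactive(self) -> str:
        return "B" if self.active == "A" else "A"

    def apply_update(self) -> str:
        """Write the new image to the unused slot and mark it for a trial boot."""
        target = self.inactive()
        self.healthy[target] = False   # unproven until it boots successfully
        self.active = target           # bootloader attempts the new slot next
        return target

    def boot(self, new_slot_boots_ok: bool) -> str:
        """Simulate one boot; revert to the old slot if the new one fails."""
        if new_slot_boots_ok:
            self.healthy[self.active] = True
        else:
            self.active = self.inactive()  # automatic fallback, no brick
        return self.active
```

For example, applying an update switches to slot B; if that boot fails, the device comes back up on slot A, which is exactly why a bad OTA shouldn't brick anything.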

Now, that was a two-figure-USD device, not a five- or six-figure-USD electric SUV. Is this a cost-cutting measure? At those price levels, doubling the NAND size wouldn't be even half a percent of the total cost of the vehicle.

Unless there was a serious issue where the in-use bootslot corrupted the unused one, I don't see how this could have happened.

It's saddening that car manufacturers are so unserious about the code they're deploying.

[1] https://source.android.com/docs/core/ota/ab

pankalog commented on My first contribution to Linux   vkoskiv.com/first-linux-p... · Posted by u/vkoskiv
pankalog · 4 months ago
Contributing to the kernel (especially on a topic as obscure as making the extra buttons work on a 20-year-old laptop) is at the top of my bucket list, if I can devote the time and money, and I'm definitely going to do it in the near future once my calendar clears up a bit.

Exquisite write-up; OP's simple writing has a motivating ring to it, and I'm now browsing the local used marketplace for pieces of tech like this :-)

u/pankalog

Karma: 104 · Cake day: September 27, 2024
About
github: https://github.com/pankalog portfolio: pkal[.]dev email: p[at]pkal[.]dev