Readit News
rslabbert commented on Jujutsu: A Git-compatible DVCS that is both simple and powerful   github.com/martinvonz/jj... · Posted by u/lemper
e12e · 2 years ago
Looks interesting. Unfortunately doesn't support signing commits - apparently it's possible via "jj export" and using classical git:

https://github.com/martinvonz/jj/issues/58#issuecomment-1247...

rslabbert · 2 years ago
The plan for how to add signed commits is there, and the work isn't that hard (especially as gitoxide continues to add functionality); it just has to be pushed over the line, and I've been a bit slack about getting that going.

There's definitely nothing foundational blocking it though and it will happen one day if you'd like to give it a go in the meantime.

rslabbert commented on Jujutsu: A Git-compatible DVCS that is both simple and powerful   github.com/martinvonz/jj... · Posted by u/lemper
ephaeton · 2 years ago
my initial reaction, half OT:

Ooof, random "ASCII" (actually: Unicode) art & dev-chosen colors, my bane of "modern" CLI applications. That drawing you like? It doesn't work for me; give me the raw output, please. Those colors you love? Aside from red-green weakness being the most common issue, what you're really doing is trying to set things apart, loading color with semantics as well. It's nice that this works fine on your white-on-black terminal. Have you tried it on a white-on-firebrick terminal? Yellow-on-green? Or anything other than _your_ "normative" setup? Man ...

I'm also not sure the information presented is adequate. E.g. consider commit 76 (2941318ee1): jj makes it look like it was committed to this repository, while it was actually committed to another. The git presentation looks more spot-on for that particular commit, while the rest of the display is just a mess (ASCII art that adds no semantics, random colors). Also, where is 1e7d displayed in jj's output? Why is jj's order different? I remain unimpressed by both UIs.

" Create a file at ~/.jjconfig.toml" ... $XDG_CONFIG_HOME ?

When is that working copy committed? When I run jj? Why bother, when it's not working asynchronously and automatically? And if you commit working copies, do you sync under the hood with stuff the other folks you collaborate with? If not, why bother?

Oh nice, a command to fix "stale" workspaces... how about not letting workspaces go stale in the first place?

This may all seem to make sense to git-minded people, given the comments here. To me, neither jj nor git makes sense (as a Fossil-minded person who has to work with git), so shrug, enjoy...

..but please fix that ASCII Art and Color Stuff, thank you very much.

rslabbert · 2 years ago
All the colours can be adjusted or turned off entirely in the config. [1] A number of different graph styles are supported [2], and failing that, you can completely customise the output template. [3]

$XDG_CONFIG_HOME/jj/config.toml is supported, that's where I keep mine.

The working copy is updated whenever you run jj by default, but watchman is also supported (recently added). [4]

In my experience, the command to fix the stale workspaces only needs to be run in exceptional cases where a bug got triggered and a command failed to complete or if you're doing manual poking around.

It's a mindset shift, but it's well worth it in my opinion.

[1] https://github.com/martinvonz/jj/blob/main/docs/config.md#ui... [2] https://github.com/martinvonz/jj/blob/main/docs/config.md#gr... [3] https://github.com/martinvonz/jj/blob/main/docs/templates.md [4] https://github.com/martinvonz/jj/blob/main/docs/config.md#fi...
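As a concrete illustration of the options above, a minimal config sketch might look like this (key names follow the linked config docs at the time of writing; treat them as illustrative and double-check against [1] and [2], since they may have changed):

```toml
# ~/.jjconfig.toml (or $XDG_CONFIG_HOME/jj/config.toml)
[ui]
# Disable coloured output entirely; "auto" and "always" are the other options.
color = "never"
# Plain ASCII for the log graph instead of Unicode box-drawing characters.
graph.style = "ascii"
```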

rslabbert commented on Ask HN: Devs with ADHD do you use specialized tooling?    · Posted by u/yuvalsteuer
rslabbert · 3 years ago
On a slightly different track to the other comments: make the computer work for you. With reduced memory retention, good static types, lots of tests, and libraries/APIs that are hard to use incorrectly are an enormous benefit.
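One concrete reading of "hard to use incorrectly" is an API that encodes its invariants in the types, so the compiler remembers the rules instead of you. A hypothetical TypeScript sketch (all names here are invented for illustration):

```typescript
// A connection is either disconnected or connected; the two states carry
// different data, so mixing them up is a compile error, not a runtime bug.
type Connection =
  | { state: "disconnected" }
  | { state: "connected"; socketId: number };

// send() only accepts the connected variant: you cannot forget to check,
// because a plain Connection won't type-check here.
function send(
  conn: { state: "connected"; socketId: number },
  msg: string
): string {
  return `sent "${msg}" via socket ${conn.socketId}`;
}

function connect(): Connection {
  return { state: "connected", socketId: 42 };
}

const conn = connect();
if (conn.state === "connected") {
  // The if-check narrows the union, which is what makes this call legal.
  console.log(send(conn, "hello"));
}
```

The point isn't this particular API; it's that the invariant lives in the type, so there's nothing to remember.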
rslabbert commented on Vetting the Cargo   lwn.net/SubscriberLink/89... · Posted by u/tta
hda2 · 4 years ago
This sounds a lot like cargo-crev but without off-line cryptographic signatures, a significant downgrade in my view.

Edit: Yep: https://mozilla.github.io/cargo-vet/design-choice-faq.html#h...

None of the reasons given by Mozilla seem to justify the downgrade in security, especially since most can be worked around with crev which already employs secure and well-tested authentication schemes.

What also makes this situation peculiar to me is that it's being immediately rushed into Cargo proper instead of the usual way these tools are handled by the Cargo team (i.e. allowing multiple ideas to compete as third-party tools and maybe choosing one once a winner is clear). I understand the recent string of security issues might have played a role here, but I wouldn't expect their reaction to be steamrolling an inferior version of crev as a builtin tool.

I would really like to know what happened here.

Disclosure: I use neither tool, but I'm very interested in the security and health of the Rust ecosystem.

rslabbert · 4 years ago
Since the audits are designed to be used at a per-project level and contributed directly into the VCS repo (allowing you to use git signing, for example), I don't quite understand what additional off-line cryptographic signatures are required here. Cargo's lockfiles already contain a hash of each crate, which prevents the project from accidentally getting an altered version, and SHA validation is being considered as part of vet as well: https://github.com/mozilla/cargo-vet/issues/116
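For context, the hash in question lives in Cargo.lock as a per-crate checksum entry; a schematic example (the version string and hash are placeholders, not real values):

```toml
[[package]]
name = "serde"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
# SHA-256 of the downloaded .crate archive; Cargo refuses to build if the
# archive it fetches doesn't match this hash.
checksum = "<sha256-of-the-crate-archive>"
```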
rslabbert commented on I’m porting the TypeScript type checker tsc to Go   kdy1.dev/posts/2022/1/tsc... · Posted by u/msoad
vosper · 4 years ago
> To this point, many teams are already using subsets of TypeScript to improve compilation times. Often times it’s things like always declaring return types on functions to avoid the compiler having to infer it (especially between module boundaries).

I hadn't heard of this trick - how much improvement does it make? Seems like it might be good for readability, too?

rslabbert · 4 years ago
I haven't found any large scale analyses but here's an example of a simple type annotation halving compilation time: https://stackoverflow.com/questions/36624273/investigating-l...

The TypeScript compiler performance wiki might also be of interest: https://github.com/microsoft/TypeScript/wiki/Performance
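The annotation trick the thread is discussing looks roughly like this (a sketch with invented names; the actual speedup depends on how expensive the inferred type is and how many modules import the function):

```typescript
// Wrapped<T> is a mapped type; computing it via inference from a function
// body is the kind of work an explicit annotation lets tsc skip.
type Wrapped<T> = { [K in keyof T]: { value: T[K] } };

// Annotated: the declared return type is checked once against the body,
// and importing modules just read the signature.
export function wrapAnnotated<T extends object>(obj: T): Wrapped<T> {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [k, { value: v }])
  ) as Wrapped<T>;
}

// Unannotated: structurally identical, but tsc must infer the return type
// from the body before any caller can be checked.
export function wrapInferred<T extends object>(obj: T) {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [k, { value: v }])
  ) as Wrapped<T>;
}
```

Both compile to the same JavaScript; the difference is purely in how much work the checker does at module boundaries.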

rslabbert commented on Failing to Learn Zig via Advent of Code   forrestthewoods.com/blog/... · Posted by u/forrestthewoods
stephc_int13 · 4 years ago
I disagree with this part:

"I also think it's partially wrong. No one in the history of the world has ever been confused or upset by a + b calling a function."

It depends. If it's simple math on vectors, I think it can be OK, but it should probably be a built-in feature of the language, as this is common, solved, and we all implement it the same way (for short vectors at least).

But the + operator has been abused in the past, especially for string concatenation, and I think this is a huge liability. Such an innocent-looking operator, the simplest of all operations, leading to a function call, a memory allocation, and thus a very real potential memory leak, all of it hidden from the eyes...

In my opinion, clarity should have priority over everything else. If an operation is computationally complex (especially if it has side effects), it should at least be hinted to the reader by a function call.

rslabbert · 4 years ago
Interestingly, since you can't pass an allocator to an operator call, there's an extra guarantee about whether a hypothetical overloaded operator can allocate: if you implement it for a vector type that doesn't hold an allocator reference, you can be sure its operators won't allocate.
rslabbert commented on A Review of the Zig Programming Language (Using Advent of Code 2021)   duskborn.com/posts/2021-a... · Posted by u/mkeeter
anatoly · 4 years ago
I also did AoC 2021 in Zig: https://github.com/avorobey/adventofcode-2021

One thing the OP didn't mention that I really liked was runtime checks on array/slice access and integer under/overflow. Because dealing with heap allocation is a bit of a hassle, I was incentivized to use static buffers a lot. I quickly figured out that I didn't have to worry about their sizes much, because if they're overrun by the unexpectedly large input or other behavior in my algorithms, I get a nice runtime error with the right line indicated, rather than corrupt memory or a crash. Same thing about choosing which integer type to use: it's not a problem if I made the wrong choice, I'll get a nice error message and fix easily. This made for a lot of peace of mind during coding. Obviously in a real production system I'd be more careful and use dynamic sizes appropriately, but for one-off programs like these it was excellent.

Overall, I really enjoyed using Zig while starting out at AoC problem 1 with zero knowledge of the language. To my mind, it's "C with as much convenience as could be wrung out of it w/o betraying the low-level core behavior". That is, no code execution hidden behind constructors or overloads, no garbage collection, straight imperative code, but with so much done right (type system, generics, errors, optionals, slices) that it feels much more pleasant and uncomparably safer than C.

(you can still get a segmentation fault, and I did a few times - by erroneously holding on to pointers inside a container while it resized. Still, uncomparably safer)

rslabbert · 4 years ago
For what it's worth, I find a lot of Zig code benefits from switching to u32/u64 indexes into an array instead of using pointers. This is only really doable if your container doesn't delete entries (though you can tombstone them), but the immediate benefit is that there are no pointers, which eliminates the use-after-free errors you mentioned.

The other benefit is that you can start to use your ID across multiple containers to represent an entity that has data stored in multiple places.

See [1] for a semi-popular blog post on this and [2] for a talk by Andrew Kelley (Zig creator) on how he's rebuilding the Zig compiler and it uses this technique.

[1] https://floooh.github.io/2018/06/17/handles-vs-pointers.html [2] https://media.handmade-seattle.com/practical-data-oriented-d...
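The idea can be sketched in TypeScript for illustration (the original discussion is about Zig; all names here are invented, and TypeScript obviously doesn't have use-after-free, so this only shows the handle/tombstone mechanics):

```typescript
// An entity handle is just an index. Deleted slots are tombstoned (set to
// null) rather than removed, so existing handles never shift or dangle.
type EntityId = number;

class EntityStore {
  private names: Array<string | null> = [];   // null = tombstone
  private healths: Array<number | null> = []; // parallel container, same ids

  create(name: string, health: number): EntityId {
    this.names.push(name);
    this.healths.push(health);
    return this.names.length - 1;
  }

  remove(id: EntityId): void {
    // Tombstone instead of splice: every other id stays valid.
    this.names[id] = null;
    this.healths[id] = null;
  }

  // The same id indexes every parallel container that stores entity data,
  // which is the "one id across multiple containers" benefit.
  name(id: EntityId): string | null { return this.names[id] ?? null; }
  health(id: EntityId): number | null { return this.healths[id] ?? null; }
}
```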

rslabbert commented on Web3 Is Bullshit   stephendiehl.com/blog/web... · Posted by u/luisha
streamofdigits · 4 years ago
If Web2 became a dystopian Zuckerbergia and Web3 is a disguised Cryptobrosia, is there maybe a chance for a Web 2.5 that doesn't suck? Like something that isn't fake techno-solutionism on steroids?

What would be its building blocks? IPFS, ActivityPub, XMPP, Matrix? But more importantly, what would be the governance and incentives that would prevent it from rapidly degenerating as with all the other versions?

rslabbert · 4 years ago
The alternate path is the one the free software community has been pushing towards for decades: the ability to voluntarily associate with (and disassociate from) trusted communities, software that works for users and not the other way around, and autonomy coupled with mutual aid/benefit.

Compare the focus of the GNU, Mastodon, Matrix, etc. projects to the blockchain world, and the fundamental difference is that the former aren't trying to create a world in which we trust no one, built on the idea that human nature runs on greed and can be exploited by making us pay (spend tokens) for everything.

rslabbert commented on AWS Cloud Control API, a Uniform API to Access AWS and Third-Party Services   aws.amazon.com/blogs/aws/... · Posted by u/lukehoban
nprateem · 4 years ago
I don't know why hashicorp and pulumi are all smug about this. In one go it's destroyed half their moats.
rslabbert · 4 years ago
In addition to the other replies: Terraform has used ARM on Azure for a while now, which is similar to this (a unified API for all Azure Resource Management, hence the name), and that hasn't caused any issues for them there.
rslabbert commented on Show HN: Logtail – ClickHouse-Based Log Management   logtail.com/... · Posted by u/jurajmasar
rslabbert · 4 years ago
The pain of not being able to do complex queries in an Elastic world makes this a pretty logical conclusion. I'd love to see the ability to also collect metrics and traces in ClickHouse, which would let me easily join across dynamic service boundaries to collate the information I need.

For example, being able to correlate a customer ID (stored in a log) to a trace ID (stored in the request trace) to Snowflake warehouse usage (stored as metrics) to a subset of the pipeline (mixed between logs and traces) to get a full understanding of how much each customer cost us in terms of Snowflake usage would be immensely valuable.
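Sketched as a query, with entirely hypothetical table and column names, the kind of join being described might look like:

```sql
-- Hypothetical unified store: logs, traces, and metrics all live in
-- ClickHouse and share a trace_id, so per-customer cost can be joined up.
SELECT
    l.customer_id,
    sum(m.warehouse_credits) AS snowflake_cost
FROM logs AS l
INNER JOIN metrics AS m ON m.trace_id = l.trace_id
GROUP BY l.customer_id
ORDER BY snowflake_cost DESC
```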

u/rslabbert · Karma: 30 · Cake day: May 8, 2021