Readit News
gaxun commented on Request Node lib used by 48k modules is now deprecated   github.com/request/reques... · Posted by u/lootsauce
svat · 6 years ago
That's interesting, can you go into more detail on those two -- what was the pedagogical improvement, and the typographical item? Just curious...
gaxun · 6 years ago
Sure.

- On page 715 of Volume 4A, he had something like \`a when he meant to have just à.

- In Volume 1, Fascicle 1, there is a convention that the "main" entry point of an MMIX program begins at LOC #100. The convention is established early on and repeated throughout the text. However, at no point is it explained why LOC #100 was chosen (instead of LOC #0, LOC #80, or whatever). It could be gleaned through careful study -- LOC #0-#80 are reserved for trip/trap handling and one more location before #100 is reserved for a special second entry point -- but you basically had to read the entire fascicle to find /all/ of these. A naive user would be likely to try writing a program beginning at LOC #0 and wonder why it didn't seem to behave correctly. My suggestion was to just add a note explaining why LOC #100 was used. He agreed and you can find the added note in the latest errata for Volume 1, Fascicle 1.
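For illustration, a bare-bones MMIXAL program honoring that convention might look like the following (a minimal sketch of my own, not an example from the fascicle):

```
% Locations #00..#FF are spoken for (trip/trap handling lives below
% #100, plus one reserved location for a second entry point), so the
% conventional program entry is:
        LOC   #100

Main    SETL  $255,0     % return code 0 in $255, by convention
        TRAP  0,Halt,0   % halt the simulator
```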

gaxun commented on Request Node lib used by 48k modules is now deprecated   github.com/request/reques... · Posted by u/lootsauce
svat · 6 years ago
> when's the last time Knuth wrote a check for TAOCP?

Got one in the mail yesterday (check was written Feb 10). In fact you can figure out how many checks were written in the last month, roughly: compare https://web.archive.org/web/20200110074014/https://cs.stanfo... with https://web.archive.org/web/20200219035903/https://cs.stanfo... I imagine most of these checks (definitely all $7 of mine) were from finding bugs in pre-fascicles, i.e. bleeding-edge drafts he's been putting up online. (See near the bottom of https://cs.stanford.edu/~knuth/news.html)

On the other hand, while you mention Knuth and "done", TeX and METAFONT are better examples. He declared them "done" in 1990 (https://tug.org/TUGboat/Articles/tb11-4/tb30knut.pdf), but still does bug fixes:

> I still take full responsibility for the master sources of TeX, METAFONT, and Computer Modern. Therefore I periodically take a few days off from my current projects and look at all of the accumulated bug reports. This happened most recently in 1992, 1993, 1995, 1998, 2002, 2007, and 2013; following this pattern, I intend to check on purported bugs again in the years 2020, 2028, 2037, etc. The intervals between such maintenance periods are increasing, because the systems have been converging to an error-free state.

(The latest round, in 2013, did surface one bug: the debug string representation of an "empty" control sequence was missing a space.)

gaxun · 6 years ago
Nice to see that several of us are on here.

> I imagine most of these checks (definitely all $7 of mine)

The 0x$2.00 check I got from 10 Feb 2020 was for one typographical item in Volume 4A and one pedagogical improvement in Volume 1, Fascicle 1. So it is certainly possible to still get checks for material that is already published.

Since it is now 2020, anyone with bug reports from the typography stuff should send them in soon, wouldn't want to miss the deadline.

gaxun commented on Request Node lib used by 48k modules is now deprecated   github.com/request/reques... · Posted by u/lootsauce
neurobashing · 6 years ago
Hardly a hot take, but a lot of people seem to think that if code isn't actively being worked on, you can't use it, as if there could never come a point where fewer and fewer bugs are reported and fewer and fewer features are requested.

when's the last time Knuth wrote a check for TAOCP? Is it "dead"?

gaxun · 6 years ago
> when's the last time Knuth wrote a check for TAOCP?

Pretty recently. I just got one in the mail today.

gaxun commented on Is Email Making Professors Stupid?   chronicle.com/interactive... · Posted by u/Mistletoe
gaxun · 7 years ago
The author should have written to Donald Knuth with an interview request for this piece. It would have added something special beyond just repeating what's already visible on his website.

I wrote to Professor Knuth about a project I did a year or two ago and was pleasantly surprised to receive a two page handwritten note in response. So it seems like the no-email filter is probably still working well for him.

gaxun commented on Suggest HN: Can we please start tagging links with autoplay media?    · Posted by u/malikNF
jasonkostempski · 8 years ago
If your browser doesn't support disabling autoplay, send them a request for change. They created the problem, it's their job to fix it.
gaxun · 8 years ago
Indeed.

Web browsers are generally free to use and there are several serious contenders and many less popular ones.

So the main thing they should be competing on is user experience.

But it seems to me that browsers frequently fail to deliver a user-first experience.

The browser should only take actions specifically requested by the user, acting as that user's agent. Everything about the experience needs to be reframed from that perspective.

Some browsers lately seem to be doing a little better at this, but just adding "advanced flag" features on to an existing product isn't going to help mainstream users at all.

gaxun commented on Keybase launches encrypted Git   keybase.io/blog/encrypted... · Posted by u/aston
malgorithms · 8 years ago
Keybase team member here. Interesting fact: git doesn't check the validity of sha-1 hashes in your commit history. Meaning if someone compromises your hosted origin, they can quietly compromise your history. So even setting the fears about data leaks aside, this is a big win for safety.
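To make the integrity point concrete: git names a blob by the SHA-1 of a small header plus the content, so the hashes in history can in principle be recomputed and audited by anyone. A minimal sketch (mine, not Keybase's) of that computation:

```python
import hashlib

def git_blob_sha1(data: bytes) -> str:
    # Git hashes "blob <size>\0<content>", not the raw file bytes,
    # so recomputing this lets you spot-check stored objects yourself.
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# The empty blob has a well-known object name in every git repository:
print(git_blob_sha1(b""))
# e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```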

From an entrepreneurial perspective, this is my favorite thing we've done at Keybase. It pushes all the buttons: (1) it's relatively simple, (2) it's filling a void, (3) it's powered by all our existing tech, and (4) it doesn't complicate our product. What I mean by point 4 is that it adds very little extra UX and doesn't change any of the rest of the app. If you don't use git, cool. If you do, it's there for you.

What void does this fill? Previously, I managed some solo repositories of private data in a closet in my apartment. Who does that? It required a mess: uptime of a computer, a good link, and dynamic DNS. And even then, I never could break over the hurdle of setting up team repositories with safe credential management...like for any kind of collaboration. With this simple screen, you can grab 5 friends, make a repo in a minute, and all start working on it. With much better data safety than most people can achieve on their own.

gaxun · 8 years ago
> hurdle of setting up team repositories with safe credential management...like for any kind of collaboration

Identity continues to be the key selling point of keybase. I'm excited by this.

I can keep clones of my private repositories here. Things like dotfiles and configurations. That sounds like a good start. And I can also easily share code to people who need to see it.

gaxun commented on JSON-LD and Why I Hate the Semantic Web   manu.sporny.org/2014/json... · Posted by u/DamonHD
Mathnerd314 · 9 years ago
Does anyone on HN use JSON-LD? I hadn't even heard of it before. It looks like it's useful for SEO (https://developers.google.com/search/docs/guides/intro-struc...) but that's about it?
gaxun · 9 years ago
I spent some time attempting to work with the W3C Web Annotation Data Model. That data model is serialized as JSON-LD.

After spending about 50 hours reading the documents and attempting to implement some of it, I have a general idea what JSON-LD is.

I wasn't really trying to achieve anything, so I basically quit once something seemed opaque enough I couldn't figure it out in a short period of time. When I visited the JSON-LD Test Suite page to see what implementations are expected to do [0], I found:

> Tests are defined into compact, expand, flatten, frame, normalize, and rdf sections

I had a hard time figuring out what each of these verbs meant, and they were about all that the various implementations I found actually did. For example, the term "normalize" doesn't even appear in the JSON-LD 1.0 specification [1]. *Shrug.* I'm sure I could have figured out more if I had spent the time to actually read the whole thing and all the related documents.

[0]: https://json-ld.org/test-suite/

[1]: https://www.w3.org/TR/json-ld
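For a rough intuition about one of those verbs: "expand" replaces context-defined terms with their full IRIs. Here's a toy sketch of that idea (my own simplification; a conformant processor also handles nested contexts, typed values, value objects, lists, and more):

```python
# Toy illustration of JSON-LD "expansion": each key defined in the
# @context is replaced by its full IRI, and the context itself is
# consumed rather than emitted.
def expand_simple(doc: dict) -> dict:
    ctx = doc.get("@context", {})
    out = {}
    for key, value in doc.items():
        if key == "@context":
            continue
        out[ctx.get(key, key)] = value
    return out

doc = {
    "@context": {"name": "http://schema.org/name"},
    "name": "Jane Doe",
}
print(expand_simple(doc))
# {'http://schema.org/name': 'Jane Doe'}
```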

gaxun commented on Official Keybase extension for Chrome   keybase.io/docs/extension... · Posted by u/endetti
gaxun · 9 years ago
I've been trying out this extension for a few days.

What I would really like to see from this extension is a 1-click way to sign any message I'm writing, anywhere on the internet. Along with that would be the ability to verify that a keybase signature found in the wild belongs to a particular keybase user. Then I can initiate out-of-band discussions with the author of a comment on someone's blog, not just with a Reddit or Hacker News poster.

Having the keybase chat button appear next to posts on sites like Reddit, HN, etc. seems like a great step toward a "metaweb" platform as well. For example, I could let someone know about the typo in their post via keybase chat, rather than polluting the public comment stream.

Very excited.

gaxun commented on Annotation is now a web standard   hypothes.is/blog/annotati... · Posted by u/kawera
foxhedgehog · 9 years ago
I've been wanting to provide a high-fidelity many-to-many commenting system inside of a text editor or browser since I was in college. My thought was that if you could annotate something as complex as Shakespeare:

http://imgur.com/FgsyAco

then you could annotate legal documents, code, and other high-density texts as well.

I've long felt that existing solutions fall down in a few ways:

1. UX -- this is a HARD UX problem because you are potentially managing a lot of information on screen at once. Anybody staring at a blizzard of comments in Word or Acrobat knows how bad this can get.

2. One-to-one -- Most existing exegesis solutions like genius.com only let you mark off one portion of text for threaded commentary, which is not ideal because a complex text like the above example can have multiple patterns working in it at the same time:

http://imgur.com/x6BKKQW (a crude attempt to map assonance and consonance)

Really, what a robust commentary system needs is to map many comments to many units of text, so that the same portion of text can be annotated multiply (as this solution attempts) but also so that the same comment can be used to describe multiple portions of text as well.

3. Relationships between comments -- It's great that this solution gives threaded comments as a first-class feature, but you also want to be able to group comments together in arbitrary ways and be able to show and hide them. In my examples above, there are two systems at work: the ideational similarities between words, and the patterns of assonance / consonance. You could also add additional systems on top of this: glossing what words or phrases mean (and in Shakespeare, these are often multiple), or providing meta-commentary on textual content relative to other content, or even social commentary on the commentaries. You need a way to manage hierarchies or groups of content to do this effectively. No existing solution that I am aware of attempts this.

I literally just hired somebody yesterday to start work on a text editor that attempts to resolve some of this, but it's an exceedingly hard problem to solve with technology.

gaxun · 9 years ago
> Really, what a robust commentary system needs is to map many comments to many units of text

This is actually built into this specification. From the Web Annotation Data Model [0]:

  - Annotations have 0 or more Bodies.
  - Annotations have 1 or more Targets.

So one "Annotation" object can have multiple bodies (descriptions) attached to multiple targets.
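To make that concrete, here is a minimal sketch of a many-bodies-to-many-targets annotation in this model (the URLs and body text are invented; written as a Python dict for readability, though on the wire it's JSON-LD):

```python
# One Annotation: two bodies that jointly apply to two targets.
# "TextualBody" and "source" are real vocabulary from the model;
# the example.com resources are hypothetical.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": [
        {"type": "TextualBody", "value": "This passage uses assonance."},
        {"type": "TextualBody", "value": "Gloss: 'soft' here means quiet."},
    ],
    "target": [
        {"source": "http://example.com/romeo#line-1"},
        {"source": "http://example.com/romeo#line-2"},
    ],
}
print(len(annotation["body"]), len(annotation["target"]))
# 2 2
```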

> 3. Relationships between comments

This sounds more like an implementation detail of a client than part of the protocol or data model put forth by the W3C group.

However, I believe this can kind of be done server-side with the Web Annotation Protocol [1]'s idea of Annotation Containers. Your server can map a single annotation to multiple containers. So perhaps you have an endpoint like `http://example.com/a/` and you want to arrange a hierarchy of comments. You could provide a filtered set of the annotations at `http://example.com/a/romeo/consonance/`, and similar endpoints.

So basically what I'm saying is that the protocol here doesn't seem likely to get in your way; it's just an incentive to use this particular model for storing and transferring your data.

[0]: https://www.w3.org/TR/annotation-model/#web-annotation-princ...

[1]: https://www.w3.org/TR/annotation-protocol/#container-retriev...

u/gaxun

Karma: 294 · Cake day: September 16, 2016
About
Check out my site:

https://www.gaxun.net

I share ideas and commentary there. Feedback appreciated!

[ my public key: https://keybase.io/gaxun; my proof: https://keybase.io/gaxun/sigs/Vnwyy8GR22CK-Mp0h1-j4JIS9mc5cRmAQ6ogvu_FsHE ]
