rvnx · a year ago
One safety tip: disable SSH Agent Forwarding before you connect, otherwise the remote server can theoretically use the keys loaded in your agent to establish new connections to GitHub.com or prod servers (though this host is unlikely to be malicious).

https://www.clockwork.com/insights/ssh-agent-hijacking/ (SSH Agent Hijacking)

fragmede · a year ago
The full command you want is:

    ssh -a -i /dev/null terminal.shop
to disable agent forwarding and to avoid sharing your ssh public key with them, but that's a little less slick than just saying:

    ssh terminal.shop
to connect.

glennpratt · a year ago
I'm curious why you added `-i /dev/null`. IIUC, this doesn't remove ssh-agent keys.

If you want to make sure no keys are offered, you'd want:

  ssh -a -o IdentitiesOnly=yes terminal.shop
I'm not sure the `-i` actually prevents anything; I believe identities other than /dev/null will still be tried in sequence.

kazinator · a year ago
1. Why is this something that would be enabled by default?

2. Can't you disable agent forwarding in a config file, so as not to have to clutter the command line?

Intralexical · a year ago
I just ran it in a `tmpfs` without any credentials:

    $ bwrap --dev-bind / / --tmpfs ~ ssh terminal.shop

Repulsion9513 · a year ago
Honestly the only thing that you need is -a (and only if you made the bad choice to do agent forwarding by default). Sending your pubkey (and a signature, because the server pretends to accept your pubkey for some reason?) isn't a security risk and you're (in theory) going to be providing much more identifying information in the form of your CC...

(And as the siblings mentioned this won't work to prevent your key from being sent if you're using an agent)

SoftTalker · a year ago
SSH Agent Forwarding does not happen by default. You need to include the -A option in your ssh command, unless maybe you've enabled it globally in your ~/.ssh/config file.

They can't get your private keys, but they could "perform operations on the keys that enable them to authenticate using the identities loaded into the agent" (quoting the man page). This would also only be possible while you are connected.

thih9 · a year ago
This is only a threat if you enable agent forwarding for all hosts.

If you did enable it for all hosts, then yes, your agent is forwarded to this host too and can be misused.

Your link says:

> Don’t enable agent forwarding when connecting to untrustworthy hosts. Fortunately, the ~/.ssh/config syntax makes this fairly simple

binkHN · a year ago
Like you noted, ForwardAgent no is the default in /etc/ssh/ssh_config.
bananskalhalk · a year ago
*disable ssh agent FORWARDING.

Which honestly should always be disabled. There are no trusted hosts.

tichiian · a year ago
That's baby+bathwater.

Just use ssh-add -c to have the ssh-agent confirm every use of a key.
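
For example (the key path is just a placeholder):

    # load a key such that the agent asks for confirmation (via ssh-askpass)
    # every time something tries to sign with it
    ssh-add -c ~/.ssh/id_ed25519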

contingencies · a year ago
Default for the last 24 years according to https://github.com/openssh/openssh-portable/blame/385ecb31e1...
sva_ · a year ago
I've found myself much more comfortable just defining all my private keys in ~/.ssh/config on a host-by-host basis.
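
Something like this, for example (host names and key files are placeholders):

    # ~/.ssh/config
    Host github.com
        IdentityFile ~/.ssh/id_github
        IdentitiesOnly yes

    Host prod-*
        IdentityFile ~/.ssh/id_prod
        IdentitiesOnly yes
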
derefr · a year ago
> There are no trusted hosts.

...your own (headless) server that's in the same room as you, when you're using your laptop as a thin-client for it?

arghwhat · a year ago
Just to be clear, ssh agent forwarding is disabled by default and enabling it is always a hazard when connecting to machines that others also have access to.

Not at all specific to this.

nomel · a year ago
Is it not standard practice to make different keys for different important services?

I have a private key for my prod server, a private key for GitHub, and a private junk key for authenticating to misc stuff. I can discard any without affecting anything else that's important.

If I authenticated with my junk key, would my other keys still be at risk?

n2d4 · a year ago
> If I authenticated with my junk key, would my other keys still be at risk?

Yes, if you authenticate with your junk key (or no key), and SSH agent forwarding is enabled, you are still at risk. It lets the remote machine login to any server with any keys that are on your local SSH agent. Parent's link shows how this can be abused.

Fortunately, it's disabled by default, at least on newer versions.
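
Roughly, the abuse from that link (paths are illustrative): anyone with root on the remote box can point their environment at your forwarded agent socket and authenticate as you, without ever seeing the private key:

    # on the compromised remote host, as root
    ls /tmp/ssh-*/agent.*                 # find a victim's forwarded agent socket
    export SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXX/agent.12345
    ssh-add -l                            # list the victim's loaded keys
    ssh git@github.com                    # authenticate with them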

leni536 · a year ago
It's a good practice, but it's somewhat against the grain of ssh defaults. It's not surprising that many people stick to the defaults.
ShamelessC · a year ago
It’s a practice, but not necessarily a standard one. In any case if even one person sees that, the advice will have served its purpose.
Repulsion9513 · a year ago
The only reason/benefit for using different keys is to prevent someone from correlating your identity across different services... if you're worried about that go ham
hot_gril · a year ago
If anything it's more standard practice to have agent forwarding disabled, since that's the default.
jolmg · a year ago
Default is disabled.
hnarn · a year ago
Exactly, this tip only applies if you reconfigured ssh to automatically forward agent to all hosts, which is absolutely insane.
chuckadams · a year ago
I take it you mean disable ssh agent forwarding — the agent itself is fine. You should never forward your ssh agent to a box you don’t trust as much as your own.
rvnx · a year ago
Message edited, thank you, you are absolutely right.
chrismorgan · a year ago
And for privacy, don’t let it know your identity or username:

  ssh -o PubkeyAuthentication=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -a nobody@terminal.shop
Otherwise, the remote server can probably identify who you are on platforms like GitHub.
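
The correlation works because GitHub publishes every account's public keys, so a server that sees the key you offer during auth can look it up (the username below is just an example):

    # anyone can fetch the public keys for a GitHub account
    curl https://github.com/torvalds.keys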

langcss · a year ago
What I am reading from this: there be dragons, so don't use SSH to buy coffee!
kazinator · a year ago
This feature is not enabled by default; "ForwardAgent = yes" has to be in the config file.

The article you cited makes it clear that you can turn this on for specific hosts in your private SSH config (and probably should do it that way).

So why wouldn't you?

Turning on forwarding globally and then having to remember to disable it for some untrusted hosts with -a looks silly and error-prone to me.
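
A rough sketch of that per-host config (the host name is a placeholder; ssh uses the first value it finds, so the specific host goes before the wildcard):

    # ~/.ssh/config
    Host trusted-jumphost.example.com
        ForwardAgent yes

    Host *
        ForwardAgent no    # the default anyway, but explicit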

LeoPanthera · a year ago
"ForwardAgent no" in ~/.ssh/config will do this automatically.
zaik · a year ago
Not having "ForwardAgent yes" in ~/.ssh/config will do this automatically too.
teruakohatu · a year ago
Is "Host * \n AddKeysToAgent yes" acceptable from a security POV or should that also be per host?
orblivion · a year ago
Is it "yes" by default? If so, that seems insane given what the op said about it. But other comments say it's "no" by default. If it's "no" by default, why are people alarming us by bringing this up? And why for terminal.shop in particular?
heavyset_go · a year ago
Using discoverable and non-discoverable keys via FIDO security keys will require PIN + physical confirmation, or just physical confirmation, by default if anyone tries to use your agent's keys.
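
For reference, a hedged sketch of generating such keys with a recent OpenSSH and a FIDO2 token (file names are placeholders):

    # non-discoverable key; the token must be touched on every use
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

    # discoverable ("resident") key that also requires the PIN each time
    ssh-keygen -t ed25519-sk -O resident -O verify-required -f ~/.ssh/id_ed25519_sk_res
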
lrvick · a year ago
If you want to use SSH forwarding reasonably safely, use a yubikey for ssh so you have to tap once for each hop. Now a MITM can't use your key for more hops without you physically consenting to each one.
gowld · a year ago
That's terrifying. I don't understand why the design requires Forwarding to work without more explicit consent from the client at use time. (That is, when the middle tier wants to make a connection, it should forward an encrypted challenge from the server that can only be decrypted, answered, and re-encrypted by the original ssh keyholder on the client, similar to how, you know, ssh itself works over untrusted routers.)
acchow · a year ago
AFAIK, that’s exactly how agent forwarding works. The explicit part is that you need to explicitly turn it on.
ZiiS · a year ago
It is not the default, you would have to have a silly config for this to matter.
mercora · a year ago
You can configure the agent to confirm each key usage to have your cake and eat it too. :)

It's also good to see if any malicious process tries to make use of the agent locally!

arcanemachiner · a year ago
Thanks for the PSA. It gave me a good opportunity to double check that I hadn't enabled agent forwarding in any of my SSH scripts that don't need it.
raggi · a year ago
You actually want to verify the host key first, or someone will MITM you, e.g. mitm.terminal.shop.rag.pub
dartos · a year ago
With this one comment, you’ve convinced me that ssh apps are a bad idea
vrighter · a year ago
i usually just disable ssh agent forwarding globally by default, and only enable it selectively via my ~/.ssh/config
abc_lisper · a year ago
Dang. Didn't know this was a thing. Thank you!

amne · a year ago
here we go again. domain and path restricted cookies anyone?

miki123211 · a year ago
I can't test this due to the product being out of stock, but I wonder what their approach to PCI compliance is.

Processing credit card data has a high compliance burden if you're unwilling to use a secure widget made by an already-authorized provider like Stripe. That's for good reason: most web and mobile apps are designed such that their backend servers never see your full credit card number and CVV. You can't do this over SSH.

I also wonder whether you could even do this if you had to handle PSD2 2-factor authentication (AKA 3D Secure), which is a requirement for all EU-based companies. This is usually implemented by displaying an embed from your bank inside an iframe. The embed usually asks you to authenticate in your banking app or enter a code that you get via SMS.

You can take the easy way out, of course, and make the payment form a web page, directing the user to it with a URL and/or a Unicode-art rendition of a QR code.
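
For what it's worth, rendering such a QR code in a terminal is a one-liner if the qrencode utility is available (the URL is just an example):

    qrencode -t ANSIUTF8 'https://pay.example.com/order/1234'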

srinathkrishna · a year ago
They mention in the FAQ that they use Stripe - https://www.terminal.shop/faq. Stripe does offer integrations that don't natively use their widgets. Ultimately, the PII data is stored at Stripe.

PS: I work at Stripe but I don't really work on the PCI compliant part of the company.

hn_throwaway_99 · a year ago
The fact that the card number data is stored at Stripe doesn't matter that much. As parent commenter says, the card numbers are still visible on terminal.shop's network because it all goes over their SSH connection.

For most websites that use the Stripe widget, the website owner can never see the full card number, because the credit card number entry fields are iframed in on the page. That means website owners in this scenario are PCI compliant just by filling out PCI SAQ A (self assessment questionnaire A), which is for "Card-not-present Merchants, All Cardholder Data Functions Fully Outsourced": https://listings.pcisecuritystandards.org/documents/SAQ_A_v3...

But that questionnaire is only for merchants where "Your company does not electronically store, process, or transmit any cardholder data on your systems or premises, but relies entirely on a third party(s) to handle all these functions;" For e-commerce merchants who CAN see the card number, they need to use SAQ D, https://listings.pcisecuritystandards.org/documents/SAQ_D_v3.... This includes additional requirements and, I believe, things like a pen test to be PCI compliant.

samwillis · a year ago
Interestingly, Stripe started life as /dev/payments, and I seem to remember the first iteration was an agent on your server that literally processed card payments when you wrote the details to /dev/payments.
Cu3PO42 · a year ago
Not just EU companies. Also EU customers. I cannot use my cards in a Card-Not-Present transaction that does not support 3D Secure. This obviously isn't a concern for them yet since they only ship to the US, but it might become one.

In the past one of my banks required me to put in a One-Time Password on the frame I'm shown. While it's different right now, you do need to show that page in the general case. That would really break the immersion of their process :/

notpushkin · a year ago
I remember seeing a 3D Secure screen in some app that didn't use a webview but rendered the form as native controls. It worked with Estonian LHV at least (I think?). If that can be done with Stripe, they could render the form as a TUI.

And if everything fails, they can just render the 3DS page in the terminal! (e.g. using Browsh [1]) Although I'm not sure if that would be compliant with the regulations.

[1] https://www.brow.sh/

zzo38computer · a year ago
I think that a better way (which is protocol-independent, and does not require a web browser, or even necessarily an internet connection), would be a kind of payment specification which is placed inside of an order file. This payment specification is encrypted and digitally signed and can be processed by the bank or credit card company or whatever is appropriate; it includes the sender and recipient, as well as the amount of money to be transferred (so that they cannot steal additional money), and possibly a hash of the order form. A payment may also be made by payphones or by prepaid phone cards (even if you do not have a bank account nor a credit card), in which case you may be given a temporary single-use key which can be used with this payment specification data; if you do not do this, then you can use the credit card instead.
amne · a year ago
I was asking myself the same thing while watching the live stream where they somewhat explained how it works.

It's still not clear to me if they are compliant.

To make it work like in the browser, it would require some sort of SSH multiplexing where your client is connected to both the shop's and Stripe's SSH servers, and you enter your card data into a terminal region that is rendered by Stripe's server. The triangle is then completed by Stripe notifying the shop that the payment is ok.

konschubert · a year ago
Wouldn’t it be amazing if there were a simpler way to pay money online?
Perz1val · a year ago
I don't know if this is sarcasm or not, but in Poland we have BLIK and it is amazing. Paying online is as simple as entering a 6 digit code from the app and confirming transaction in the app. Afaik every major bank supports it too
das_keyboard · a year ago
The website's FAQ says they are still using Stripe for payment and ordering - however this may work.

fuzzy_biscuit · a year ago
The FAQ says they use Stripe for orders and don't even have their own DB in which to store purchase data, so PCI compliance should be a non-issue
unscaled · a year ago
PCI compliance is never a non-issue.

Even if you're using a third party provider that handles both credit card entry and processing, you need to comply with some subset of the PCI/DSS requirements.

In the case of terminal.shop it's not even true, since they can see the credit card number on their side, even if all they do is forward that number to Stripe and forget about it.

For small and medium-sized merchants, PCI/DSS classifies different types of handling through the concept of which SAQ (Self-Assessment Questionnaire) you have to fill in. Different SAQs have different subsets of requirements that you need to fulfill. For e-commerce use cases, there are generally 3 relevant SAQs, in order of strictness:

- SAQ A: Applicable when the merchant redirects payment requests to the payment processor's page or shows an iframe that is hosted by the processor. This is the level required for Stripe Checkout or Stripe Elements.

- SAQ A-EP: Applicable when the merchant handles input on the browser, but sends the data directly to the processor without letting it pass through the merchant's server. This is equivalent to the classic Stripe.js.

- SAQ D: Applicable when the card data is transmitted, stored or processed on the merchant's own server, even if the merchant just receives the card number and passes that on to the payment provider. Stripe calls this type of usage "Direct API Integration" [1].

The level of compliance required for terminal.shop should be SAQ-D for Merchants, which is quite onerous. It covers almost all of the full set of PCI/DSS requirements.

But even if a merchant just uses Stripe.js, the PCI SSC still cares about the possibility of an attacker siphoning card data from the merchant's site through an XSS vulnerability.

And even if the merchant is using an iframe or a redirect (with something like Stripe Checkout or Stripe Elements) there is still the possibility of hard-to-detect phishing, where an attacker could replace the iframe or redirect target with their own site, made to look exactly like Stripe.

---

[1] https://docs.stripe.com/security/guide

niutech · a year ago
One easy way to solve this is to use a terminal web browser like Carbonyl.
thescriptkiddie · a year ago
The burden of PCI compliance is a lot lighter than you might think. You basically just have to fill out a bunch of forms, there's no inspection or anything.
alt227 · a year ago
You obviously haven't had to manage PCI compliance for a company which takes credit card numbers directly onto their site or over the phone.
PaulDavisThe1st · a year ago
A lot of people don't know that before Amazon started, there was a company out of Portland, OR called Bookstacks selling books via a telnet interface. In the early days, Bezos was quite worried about their potential to get "there" first (wherever "there" was going to be). It was a fairly cool interface, at least for 1994.

[ EDIT: worried to the point that we actually implemented a telnet version of the store in parallel with the http/html one for a few months before abandoning it ]

mleo · a year ago
There were a few using telnet before the web gained wider traction. For example, CDNow started out that way in 1994.
brk · a year ago
I remember ordering a CD via CDNow and a very rudimentary SMS interface on my phone around 1996. It took about 10 minutes to go through the entire process, but I did it while at the movies with my wife, waiting for the previews to start and we both thought it was just SO advanced.
kloch · a year ago
I bought a CD from CDNOW over Telnet in the early 90's!

I also remember telnet BBS's became popular for a few years when I was in college 91-93.

obruchez · a year ago
That's how I ordered my first CDs online: via a Telnet interface. It sounds crazy 30 years later.
ahazred8ta · a year ago
Yes, they were the original books.com, and I used to buy from them via telnet before they had their www site up.
simantel · a year ago
Do you have more info? I found this article[0] about "Book Stacks" which became Books.com, but it looks like they were based in Cleveland?

[0] https://sbnonline.com/article/visionary-in-obscurity-charles...

PaulDavisThe1st · a year ago
More info is: I was wrong, Ohio is right.
B1FF_PSUVM · a year ago
Yes, books.com was based in Ohio. I bought from them via the mentioned telnet interface.
StableAlkyne · a year ago
> selling books via a telnet interface.

Were people just that trusting back then, or had they figured out some kind of pre-SSL way of securing things?

__s · a year ago
In terms of MITM attacks, yes, they were trusting

Even back in 2010 lots of sites were http, like Facebook, & there was FireSheep which would snoop on public wifi for people logging into sites over HTTP

SoftTalker · a year ago
In 1994? Most of the internet was unencrypted, and it wasn't very commercial yet. https had just been invented, and ssh was a year away. There was no wifi, everything was dial-up unless you were at a university or something, and snooping just wasn't all that big a risk.
hultner · a year ago
I can only talk from personal experience: I did not trust most online payments around the turn of the millennium, but I did order quite a few things online. I usually paid either by collect on delivery or by invoice, like regular good old fashioned mail-order. By the early 00s, VISA had something called e-card or similar, where you could generate a temporary one-time-use CC via a Java applet; this card was only valid for a day and could only be charged a pre-determined amount, making the risk very low.
newsclues · a year ago
A large bookstore was using CLI for their internal inventory management system well into the 2000s.
PaulDavisThe1st · a year ago
amzn was likely doing that too. the original tools that we wrote in 94-96 for store ops were all CLI.
thdxr · a year ago
hey! i'm one of the people who worked on this, we actually launched a few days ago and sold out quite quickly - we'll remove the email capture so you can poke around

we'll be back in a few weeks with proper inventory and fulfillment

we'll also be opensourcing the project and i can answer any questions people have about this

halfcat · a year ago
Oh wow. You’re the guy who knows Adam right? His Laravel video was so inspiring.
Mockapapella · a year ago
oh shit, you're open sourcing this as well? I'd love to use a similar workflow for some of my projects. Love the idea!

Also you guys should post over on Threads -- a bunch of people over there are really into the idea as well: https://www.threads.net/@mockapapella/post/C5_vLdDP0J1

qudat · a year ago
We're doing similar things over at https://pico.sh/

We use: https://github.com/charmbracelet/wish

d3m0t3p · a year ago
Hey, nice work. How do I get updates about the open source release?
thdxr · a year ago
probably follow the twitter account @terminaldotshop
dwhly · a year ago
"Strong keys, Strong coffee" There, you're welcome. :)
thisisauserid · a year ago
Is it /usr/locally grown and single .'ed? How quickly can they mv it to my ~?
tiptup300 · a year ago
as per chatgpt

This joke is a clever play on words that merges elements of computer programming and coffee culture. Let's break it down:

    New startup sells coffee through SSH: SSH stands for Secure Shell, which is a network protocol that allows for secure communication between two computers. In this context, the joke suggests that this new startup is selling coffee through a secure connection, presumably online.

    Is it /usr/locally grown and single .'ed?: This part of the joke is a play on the directory structure in Unix-like operating systems, where /usr typically contains user-related programs and data. "Locally grown" suggests that the coffee is sourced locally, and "single .'ed" is a wordplay on "single origin," a term used in coffee culture to denote coffee that comes from a single geographic origin. The /usr/locally grown part humorously combines Unix directory structure with the concept of coffee sourcing.

    How quickly can they mv it to my ~?: Here, "mv" is a command in Unix systems used to move files or directories, and "~" represents the user's home directory. So, "mv it to my ~" is a playful way of asking how quickly they can deliver the coffee to the customer's home. It's also a pun on the idea of moving the coffee to the user's home directory.

phone8675309 · a year ago
Pretty good
Y_Y · a year ago
unzip

Shakahs · a year ago
I'm curious how they built this. It's SSH but the IP address is Cloudflare's edge network. It could be using CF Tunnel to transparently route all the SSH sessions to some serving infrastructure, but I didn't know you could publicly serve arbitrary TCP ports like that. Building it in serverless fashion on CF Workers would be ideal for scalability, but those don't accept incoming TCP connections.
Scaevolus · a year ago
Yup! Cloudflare naturally advertises HTTP most heavily and it has fancier routing controls, but it supports arbitrary TCP protocols.

> Cloudflare Tunnel can connect HTTP web servers, SSH servers, remote desktops, and other protocols safely to Cloudflare.

https://developers.cloudflare.com/cloudflare-one/connections...

> In addition to HTTP, cloudflared supports protocols like SSH, RDP, arbitrary TCP services, and Unix sockets.

https://developers.cloudflare.com/cloudflare-one/connections...

KomoD · a year ago
Cloudflare Tunnels only open HTTP/S to the internet; you'll need their client to reach the other protocols. More likely that this is Cloudflare Spectrum.
londons_explore · a year ago
That requires the client to install custom tunnelling software.

If you want the client to not require special software, they provide a web based terminal emulator for ssh, and a web based VNC client.

thdxr · a year ago
hey - worked on this, it's using Cloudflare Spectrum which can proxy any tcp traffic

will be talking more about this soon

zzo38computer · a year ago
Some protocols do not support virtual hosting; apparently this includes SSH.

It would be possible to support other protocols with a single IP address (either because they are running on the same computer, or for any other reason) if they support virtual hosting.

Of the "small web" protocols: Gopher and Nex do not support virtual hosting; Gemini, Spartan, and Scorpion do support virtual hosting. (Note that Scorpion protocol also has a type I request for interactive use.)

NNTP does not support virtual hosting, although depending on what you are doing it might not be necessary; all of the newsgroups will always be available regardless of what host name you use (which requires that distinct newsgroups do not have the same names). This is also true of IRC and SMTP.

However, if you are connecting with TLS then it is possible to use SNI to specify the host name, even if the underlying protocol does not implement it.

(This will be possible without the client requiring special software, if the protocol is one that supports virtual hosting. There may be others that I have not mentioned above, too.)
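
For example, with openssl's test client you can dial one address but name a different virtual host via SNI; the IP, port (Gemini's 1965 here), and host name are placeholders:

    # connect to a single IP, but request a particular virtual host via SNI
    openssl s_client -connect 203.0.113.10:1965 -servername example.org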

nkcmr · a year ago
Most likely using "Spectrum" which allows Layer 4 TCP+UDP proxying/DDoS protection: https://www.cloudflare.com/application-services/products/clo...
londons_explore · a year ago
Cloudflare workers has support for inbound TCP coming 'soon' [1]. Maybe they have early access?

[1]: https://developers.cloudflare.com/workers/reference/protocol...

9front · a year ago

  ┌──────────┬────────┬─────────┬───────┬────────────────────┐
  │ terminal │ s shop │ a about │ f faq │ c checkout $ 0 [0] │
  └──────────┴────────┴─────────┴───────┴────────────────────┘
 
 
  nil blend coffee
 
  whole bean | medium roast | 12oz
 
  $25
 
  Dive into the rich taste of Nil, our delicious semi-sweet
  coffee with notes of chocolate, peanut butter, and a hint
  of fig. Born in the lush expanses of Fazenda Rainha, a
  280-hectare coffee kingdom nestled in Brazil's Vale da
  Grama. This isn't just any land; it's a legendary
  volcanic valley, perfectly poised on the mystical borders
  between São Paulo State and Minas Gerais. On the edge of
  the Mogiana realm, Fazenda Rainha reigns supreme, a true
  coffee royalty crafting your next unforgettable cup.
 
 
  sold out!
 
 
 
  ────────────────────────────────────────────────────────────
  + add item   - remove item   c checkout   ctrl+c exit

xyst · a year ago
this needs some "charm" to it. it's a bit basic
8organicbits · a year ago
Charm here is: https://charm.sh/
tonymet · a year ago
I long for an alternate dimension where a terminal-based internet like Minitel dominated.

Something like HyperCard implemented with an 80x24 ncurses UI

anthk · a year ago
Elisp and the Emacs UI tools under the TTY version come close.

Also, check gopher and gopher://magical.fish under Lynx or Sacc. The news section is pretty huge for what you can get with very, very little bandwidth.

gopher://midnight.pub and gopher://sdf.org are fun too.

And, OFC, the tilde/pubnix concept. SDF is awesome.

fouc · a year ago
I love TUI (as in text-based user interfaces) so much more than GUI. It always felt like a far more peaceful and productive environment.
allknowingfrog · a year ago
I love the idea of TUIs, but I honestly don't have a lot of experience with them. There's a lovely Go library called Wish that I keep looking for reasons to use. https://github.com/charmbracelet/wish
tonymet · a year ago
Responsive, high-contrast, low bitrate, low complexity
tiptup300 · a year ago
As long as I have ctrl+c/v copy and pasting I'm right there with you.
mdgrech23 · a year ago
The real power of the internet all along in my opinion was networked databases. Everything else is fluff and not a particularly great use of resources.
tonymet · a year ago
networked spreadsheets would have been ideal
Justsignedup · a year ago
The command line dominates in quick flexibility. But it is awful when it comes to discoverability. Most people can't even find the turn-off-ads button in Windows 11. And people hate that. So what hope do they have at a terminal?
thsksbd · a year ago
I think MS-DOS 6-ish TUI integration was very well done, better than Linux today.

WordPerfect had good mouse support, as did the MS-DOS Editor.

efreak · a year ago
To be fair, while the button isn't hidden away too badly, most people have no reason to go into settings for anything. They go through the wizard at the beginning (if that) to do first-time setup, then when they decide they don't like something they just deal with it or complain incessantly until someone fixes it for them.

Someone complained to me a while back about the size of icons on the windows desktop being too small - I told them they can hold Ctrl and scroll the mouse wheel to change the zoom level. They've complained about the same thing a couple times since, and so far as I can tell have made no effort to fix it.

CalRobert · a year ago
"Most people can't even find the turn off ads button in windows 11"

Perhaps the problem there is incentives.

vinay_ys · a year ago
ncurses!
mindcrime · a year ago
TurboVision!