quuxplusone · a year ago
Can someone fill in the missing link in my understanding here? It seems like the post never gets around to explaining why waiting for 14272 years should make the river passable. (Nor why this river in particular, as opposed to any other obstacle.)

The post alludes to a quirk that causes people not to get sicker while waiting; but it says they still get hungry, right? So you can't wait 14272 years there for any purpose unless you have 14272 years' worth of food, right?

IIUC, the blogger goes on to patch the game so that you don't get hungry either. But if patching the game is fair play, then what's the point of mentioning the original no-worsening-sickness quirk?

It kinda feels like asking "Can you win Oregon Trail by having Gandalf's eagles fly you to Willamette?" and then patching the game so the answer is "yes." Like, what's the reason I should care about that particular initial question, let alone care so badly that I'd accept cheating as an interesting answer?

albrot · a year ago
Hi, I'm the guy who discovered the quirk in the first place. You can survive pretty much indefinitely at the river, with or without food. You could cross the river at any point. I just thought it would be a laugh to see if you could get to a five-digit year. Then, upon resumption of the journey, the party very rapidly deteriorates and you can only survive about 5 or 6 days before they're all dead, even if you gather food immediately and wait to restore health. So the unmodded achievement was "I lived for 15,000 years in The Oregon Trail" and then I asked moralrecordings for help in reverse-engineering the game so I could get the satisfaction of a successful arrival.

Just a bit of fun.

edit: And the answer to "Why THAT river?" is simply that it's the last river in the game, and when I was hoping to complete a run without any modding, I thought it might be possible to play normally, get to the final river, wait 15,000 years, and then try to limp my decrepit deathwagon to the finish line before we all expired. This proved impossible, sadly.

Ruthalas · a year ago
Thank you for the context!

I also was a little confused by the goal, but that clears it up.

Hilift · a year ago
Could be the terrain and geology. About 15,000 years ago, after the last glacial maximum subsided, the largest flood in history carved out that part of Oregon. Maybe there is a similar timetable where the Columbia is silted up.

From Wikipedia: "The wagons were stopped at The Dalles, Oregon, by the lack of a road around Mount Hood. The wagons had to be disassembled and floated down the treacherous Columbia River and the animals herded over the rough Lolo trail to get by Mt. Hood."

https://en.wikipedia.org/wiki/Oregon_Trail#Great_Migration_o...

metadat · a year ago
How did the wagons avoid sinking / not take on water through the wood plank edges? Constant bailing while on the water?

jandrese · a year ago
The mental image conjured up by this scenario is amusing. Your impossibly patient party waits almost 15,000 years to cross a river in a state of suspended animation. Then they finally cross the river and instantly wither away to dust because they had not had a good meal in 15 centuries.

Something that was very common with BASIC interpreters but is still baffling: they were running on machines with extremely limited memory and fairly limited CPU time, yet for some reason they decided not to make integer types available to programmers. Every number you stored was a massive floating point thing that ate memory like crazy and took forever for the wimpy 8-bit CPU with no FPU to do any work on. It's like they were going out of their way to make BASIC as slow as possible. It probably would have been faster and more memory efficient if all numbers were BCD strings.

glxxyz · a year ago
BBC BASIC from Acorn in 1982 supported integers and reals. From page 65 of the user guide [https://www.stardot.org.uk/forums/download/file.php?id=91666]

    Three main types of variables are supported in this version of
    basic: they are integer, real and string.

                     integer        real         string
    example          346            9.847        “HELLO”
    typical variable A%             A            A$
    names            SIZE%          SIZE         SIZE$
    maximum size     2,147,483,647  1.7×10^38    255 characters
    accuracy         1 digit        9 sig figs   —
    stored in        32 bits        40 bits      ASCII values
A%, A, and A$ are 3 different variables of different types.
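
For anyone who hasn't used it, a two-line sketch of what that looks like in practice (typed from memory, so treat it as an approximation of BBC BASIC rather than gospel):

    A% = 346 : A = 9.847 : A$ = "HELLO"
    PRINT A%, A, A$

All three names resolve to separate slots, so assigning to one never clobbers the others.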

CalRobert · a year ago
And to add insult to injury, you write "peperony and chease" on their tombstone.

Edit:

Poor Andy :-(

https://tvtropes.org/pmwiki/pmwiki.php/Trivia/TheOregonTrail

mywittyname · a year ago
> but for some reason decided not to make integer types available to programmers.

Can you expand upon this? All of the research I've done suggests that not only was it possible to use integer math in BASIC on the Apple II, but there were also versions of BASIC that only supported integers.

KerrAvon · a year ago
Wozniak's original BASIC for the Apple II only supported integers; when Apple decided they needed floating point and Woz refused to spend time on it, they decided to license it from Microsoft, producing Applesoft BASIC. Applesoft was slower than Woz's BASIC, because it performed all arithmetic in floating point.
Salgat · a year ago
https://en.wikipedia.org/wiki/Dartmouth_BASIC

"All operations were done in floating point. On the GE-225 and GE-235, this produced a precision of about 30 bits (roughly ten digits) with a base-2 exponent range of -256 to +255.[49]"

jandrese · a year ago
BASIC doesn't have type declarations, so most BASIC interpreters just used floating point everywhere to be as beginner friendly as possible.

The last thing they wanted was someone making their very first app and it behaves like:

    Please enter your name: John Doe

    Please enter how much money you make every day: 80.95

    Congratulations John Doe you made $400 this week!
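
In a float-everywhere BASIC the naive version of that program just works. A minimal Applesoft-style sketch (the prompts and variable names are made up for illustration):

    10 INPUT "PLEASE ENTER YOUR NAME: "; N$
    20 INPUT "PLEASE ENTER HOW MUCH MONEY YOU MAKE EVERY DAY: "; D
    30 PRINT "CONGRATULATIONS "; N$; " YOU MADE $"; D * 5; " THIS WEEK!"

With everything stored as floats the answer comes out as 404.75; an integer-only dialect has to throw the .95 away somewhere, which is how you end up with the surprising $400 above.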

Mountain_Skies · a year ago
Most of the 8-bit BASICs of the time share a common ancestor. Perhaps making every number a floating point was a reasonable decision for the hardware that the common ancestor BASIC was written for and it just got carried over through the generations.
jandrese · a year ago
I think it's more likely that the language had no concept of types, so numbers had to "just work". You can do integer math (slowly) using floating point, but you can't do floating point math with integers. Especially since the language is targeted at beginners who don't really understand how their machines work.

It would have been interesting to see a version of BASIC that encoded numbers as 4-bit BCD strings. Compared to the normal 40-bit floating point format you would save memory in almost every case, and I bet the math would be just as fast or faster than the floating point math in most cases as well. The 4-bit BCD alphabet would be the digits 0-9, as well as -, ., E, and a terminator, with a couple of codes left over if you can think of something useful. Maybe an 'o' prefix for octal and a 'b' for binary?
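
Just to make the encoding concrete, a rough sketch of the character-to-nibble mapping in ordinary Applesoft-style BASIC (the alphabet string and the sample value are invented for illustration, and this only shows the codes, not the packing of two codes per byte):

    10 N = -9.75 : S$ = STR$(N)
    20 A$ = "0123456789-.E" : REM the 4-bit alphabet; code 13 could be the terminator
    30 FOR I = 1 TO LEN(S$)
    40 FOR J = 1 TO LEN(A$) : IF MID$(S$,I,1) = MID$(A$,J,1) THEN C = J - 1
    50 NEXT J
    60 PRINT MID$(S$,I,1); " -> "; C
    70 NEXT I
    80 PRINT LEN(S$) + 1; " NIBBLES VS 40 BITS FOR THE FLOAT"

Six nibbles (24 bits) for -9.75 against the fixed 40 bits, so the memory claim holds up for typical values.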

NikkiA · a year ago
Mostly that ancestor would be MS BASIC on the Altair.
em3rgent0rdr · a year ago
> "for some reason decided not to make integer types available to programmers...It's like they were going out of their way to make BASIC as slow as possible."

BASIC was well-intentioned: the idea was to make programming easy enough that ordinary people in non-technical fields, students, people who weren't "programmers", could grasp it. In order to make it easy, you'd better not scare off adopters with concepts like int vs float, maximum number sizes, overflow, etc. The ordinary person's concept of a number fits in what computers call a float. You make a good point, though, that BCD strings might have done the trick better as a one-size-fits-all number format, and might even have been faster.

BASIC also wasn't intended for computationally intense things like serious number crunching, which back in the day usually was done in assembly anyway. The latency to perform arithmetic on a few floats (which is what your typical basic program deals with) is still basically instantaneous from the user's perspective even on a 1 MHz 8-bit CPU.

scarface_74 · a year ago
The 6502, ironically enough, did support BCD arithmetic directly via its decimal mode

http://www.6502.org/tutorials/decimal_mode.html

RiverCrochet · a year ago
> but for some reason decided not to make integer types available to programmers

They were there, you had to append % to the variable name to get it (e.g. A% or B%, similar to $ for strings). But integers were not the "default."

BASIC is all about letting you pull stuff out of thin air. No pre-declaring variables needed, or even arrays (simply using an array automatically DIM's it for 10 elements if you don't earlier DIM it yourself). Integer variables on BASIC were 16-bit signed so you couldn't go higher than 32767 on them. But if you are going to use your $500 home computer in 1980 as a fancy calculator, just learning about this newfangled computer and programming thing, that's too limiting.

I do remember reading some stuff on the C64 that its BASIC converted everything to floating point anyway when evaluating expressions, so using integer variables was actually slower. This also includes literals. It was actually faster to define a float variable as 0--e.g. N0=0--and use N0 in your code instead of the literal 0.
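
A rough way to see that on a C64 (sketch only; TI is the jiffy clock, counting 1/60-second ticks, and the exact numbers will vary):

    10 T = TI : FOR I = 1 TO 5000 : A = 0 : NEXT : PRINT "LITERAL", TI - T
    20 N0 = 0 : T = TI : FOR I = 1 TO 5000 : A = N0 : NEXT : PRINT "N0", TI - T

The second loop should come out noticeably ahead, because the interpreter re-parses the literal 0 into a float on every pass but only has to look N0 up in the variable table.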

Floats were 5 bytes in the early 80's Microsoft BASICs, honestly not "massive" unless you did a large array of them. The later IBM BASICs did have a "double precision" float that was 12 bytes maybe?

> It probably would have been faster and more memory efficient if all numbers were BCD strings.

I wouldn't be surprised if Mr. Gates seriously considered that during the making of Microsoft BASIC in the late 70's as it makes it easy for currency calculations to be accurate.

Brian_K_White · a year ago
MS BASIC on TRS-80 model 100

default, a normal variable like N=10, is a signed float that requires 8 bytes

optional, add ! suffix, N!=10, is a signed float that requires 4 bytes

optional, add % suffix, N%=10, is a signed int that requires 2 bytes

And that's all the numbers. There are strings which use one byte per byte, but you have to call a function to convert a single byte of a string to its numerical value.

An unsigned 8-bit int would be very welcome on that and any similar platform. But the best you can get is a signed 16-bit int, and you have to double the length of your variable name all through the source to even get that. Annoying.
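
One workaround for "unsigned 8-bit" storage is to abuse strings, since each character costs one byte; a quick sketch using the standard ASC/CHR$ functions (so it should behave the same on the Model 100 and most other MS BASICs):

    10 B$ = CHR$(200) : REM one byte of storage, holding a value from 0 to 255
    20 PRINT ASC(B$) : REM read it back as a number
    30 B$ = CHR$(ASC(B$) + 1) : REM "increment" it, at the cost of ugly code

It works, but the string-handling overhead and the noise in the source are exactly why a real byte type would have been welcome.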

nopakos · a year ago
I remember having integer variables in Amstrad CPC (Locomotive) Basic. Something with the % symbol. edit: ChatGPT says that BBC BASIC and TRS-80 Microsoft BASIC also supported integer variables with % declaration.
canucker2016 · a year ago
The Wikipedia page for Microsoft BASIC (of which Applesoft BASIC is a variant), https://en.wikipedia.org/wiki/Microsoft_BASIC, mentions that integer variables were stored as 2 bytes (signed 16-bit) but all calculations were still done in floating point (plus you needed to store the % character to denote an integer var).

So the main benefit was for saving space with an array of integers.
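
A quick Applesoft-style way to see the difference (element sizes from memory; FRE(0) famously reads negative when more than 32K is free, so watch the deltas rather than the absolute values):

    10 PRINT FRE(0) : DIM A%(999) : PRINT FRE(0) : REM drops by roughly 2,000 bytes
    20 DIM B(999) : PRINT FRE(0) : REM drops by roughly 5,000 bytes

About 2 bytes per integer element versus about 5 per real, for the same thousand-element array.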

sedatk · a year ago
Yes, Locomotive BASIC also supported the DEFINT command, so all variables whose names start with a letter in a given range would be treated as integers without the "%" suffix.
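
Roughly like this, if I'm remembering the Locomotive syntax right (a sketch, not tested):

    10 DEFINT i-n : REM anything starting with i..n is now an integer variable
    20 i = 3 : n = i * 1000 : PRINT n

Handy when a routine uses a block of loop counters and you don't want % sprinkled everywhere.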

cjbprime · a year ago
> Something that was very common with BASIC interpreters but still baffling is how they were running on machines with extremely limited memory and fairly limited CPU time, but for some reason decided not to make integer types available to programmers.

To be fair, JavaScript suffers from the same laziness :)

leni536 · a year ago
Maybe it was a consideration of code size. If you already choose to support floats then you might as well only support floats and save a bunch of space by not supporting other arithmetic types.
dcrazy · a year ago
Interestingly enough, the article notes that this program was written for Microsoft’s AppleSoft BASIC, but Woz famously wrote an Integer BASIC that shipped on the Apple II’s ROM.
skissane · a year ago
Woz planned to add floating point support to his Integer BASIC. In fact, he included a library of floating point ROM routines in the Apple II ROMs, but he didn't get around to modifying Integer BASIC to use them. He ended up working on the floppy disk controller instead.

When he finally got around to doing it, he discovered two issues. Integer BASIC was very difficult to modify, because there was never any source code: at the time he wrote it he didn't yet have an assembler, so he hand-assembled it into machine code as he worked on it. Meanwhile, Jobs had talked to Gates (without telling Woz) and signed a deal to license Microsoft BASIC. Microsoft BASIC already had the desired floating point support, and whatever Integer BASIC features it lacked (primarily graphics) were much easier to add given that it had assembly source.

https://en.wikipedia.org/wiki/Integer_BASIC#History

I was thinking about this the other day: I wonder if anyone has ever tried finishing off what Woz never did and adding floating point support to Integer BASIC? The whole "lacking source" thing shouldn't be an issue any more, because you can find disassemblies of it with extensive comments added, and I assume they reassemble back to the same code.

scarface_74 · a year ago
The Apple //e ROMs had Applesoft BASIC. Integer BASIC could be loaded from the original DOS disks.
paulddraper · a year ago
It's like they knew how popular JS would be someday.

davedx · a year ago
Sounds like JavaScript!
em3rgent0rdr · a year ago
Around 1999, when I stumbled upon JavaScript, I was aghast that numbers were always 64-bit floating point. I thought that language would go nowhere.
arccy · a year ago
that would be 15 millennia or 150 centuries

csours · a year ago
You board the Generation Ship Oregon Trail with some trepidation. If the scientists are correct you will be in suspended animation for the next 14272 years. You already feel colder somehow. To the West you see a robotic barkeep.
happyopossum · a year ago
Pedantic note - if you have suspended animation, you don’t need Generation ships.
PepperdineG · a year ago
"Excuse the mess. Most unfortunate. A diode blew in one of the life support computers. When we came to revive our cleaning staff, we discovered they'd been dead for thirty thousand years. Who's going to clear away the bodies? That's what no-one seems to have an answer for."
csours · a year ago
First Class tickets get suspended animation tanks
NikkiA · a year ago
ISTR seeing a sci-fi themed Oregon Trail, actually.

Ah yes, Orion Trail

https://store.steampowered.com/app/381260/Orion_Trail/

LeifCarrotson · a year ago
Sometimes, I hate working with code where the developer was either a BASIC developer or a mathematician: variable names limited to two characters (like "H" for health and "PF" for pounds of food remaining) work when manipulating an equation and are a lot better than 0x005E, but the code isn't nearly self-documenting. On the other hand, the variable name could be "MessageMappingValuePublisherHealthStateConfigurationFactory". Naming things is one of the hard problems in computer science, and I'm glad we're past the point where the number of characters was restricted to 2 for performance reasons.

Unrelated: my monitor and my eyeballs hate the moire patterns produced by the article's background image at 100% zoom; there's a painful flicker effect. Reader mode ruins the syntax highlighting and code formatting. Fortunately, zooming in or out mostly fixes it.

parpfish · a year ago
over the years i've had to translate a lot of code from academics/researchers into prod systems, and variable/function naming is one of their worst habits.

just because the function you're implementing used single-character variables to render an equation in latex doesn't mean you have to do it that way in the code.

a particular peeve was when they made variables for indexed values named `x_i` instead of just having an array `x` and accessing the ith element as `x[i]`

Breza · a year ago
Julia, Python, and other languages will let you use the π symbol for 3.14... instead of just calling it pi, but at least I've never seen UTF-8 math symbols in the wild.
harrison_clarke · a year ago
have you seen arthur whitney's code style?

https://www.jsoftware.com/ioj/iojATW.htm

i tried this style for a minute. there are some benefits, and i'll probably continue going for code density in some ways, but way less extreme

there's a tradeoff between how quickly you can ramp up on a project, and how efficiently you can think/communicate once you're loaded up.

(and, in the case of arthur whitney's style, probably some human diversity of skills/abilities. related: i've thought for a while that if i started getting peripheral blindness, i'd probably shorten my variable names; i've heard some blind people describe reading a book like they're reading through a straw)

bluedino · a year ago
40x25 text screens and line-by-line editors encourage short variable names as well
sumtechguy · a year ago
Also, with some of that older stuff, the compiler only let you have 8 chars for a variable name.
hombre_fatal · a year ago
On the other hand, sometimes less descriptive but globally unique names add clarity because you know what they mean across the program, kinda like inventing your own jargon.

Maybe "PF" is bad in one function but if it's the canonical name across the program, it's not so bad.

cjbprime · a year ago
> variable names limited to two characters

(It sounds like there was a justified reason for that here, though -- the variable names are not minimized during compilation to disk.)

xg15 · a year ago
and then there are the people who name their variables Dennis...
nadermx · a year ago
"The game dicks you at the last possible moment by expecting the year to be sensible"

Great read on how to actually hack. It takes you through the walls he hits, and then how hitting each wall "opens up a new vector of attack".

egypturnash · a year ago
> Several days later, I tried writing a scrappy decompiler for the Applesoft BASIC bytecode. From past experience I was worried this would be real complex, but in the mother of all lucky breaks the "bytecode" is the original program text with certain keywords replaced with 1-byte tokens. After nicking the list of tokens from the Apple II ROM disassembly I had a half-decent decompiler after a few goes.

Applesoft has a BASIC decompiler built in: it's called "break the program and type LIST". Maybe Oregon Trail did something to obscure this? I know there were ways to make that stop working.
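
If you want to see why LIST can do that, you can PEEK at the stored program directly; a rough sketch (Applesoft programs normally start at address 2049, but treat the exact addresses as approximate):

    10 PRINT "HI"
    20 FOR I = 2049 TO 2062 : PRINT PEEK(I); : NEXT

Dump the bytes and the quoted string shows up as plain ASCII with the PRINT keyword squashed to a single token byte, which is exactly the "bytecode" the article describes and exactly what LIST expands back out.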

bongodongobob · a year ago
Depends on the version. The original was BASIC, but the one with graphics and sound (which I think was more popular?) was assembly.
egypturnash · a year ago
Wikipedia implies this version was mostly BASIC, with the hunting minigame in assembly.
sumtechguy · a year ago
If I remember correctly, Applesoft also had a few shorthand forms that tokenized to the same keyword, like ? for PRINT. But I could be remembering badly.
canucker2016 · a year ago
Yes, a few minutes spent reading about Applesoft BASIC or Microsoft BASIC would've reduced the cringe factor in reading a neophyte trying to mentally grapple with old technology.

"bytecode" and "virtual machine", no, no, no. That's not the path to enlightenment...

In this case, print debugging is your best bet.

bluedino · a year ago
> So 1985 Oregon Trail is written in Applesoft BASIC

This surprised me for some reason. I guess it's been 30-some years, but I remember my adventures in Apple II BASIC not running that quickly; maybe Oregon Trail's graphics were simpler than I remember.

I guess I just assumed any "commercial" Apple II games were written in assembly, but perhaps the action scenes had machine code mixed in with the BASIC code.

Suppafly · a year ago
There are so many different versions of Oregon Trail, you might have played the old version first but substituted the graphics and game play you remember with a later version you also played. Not to mention that imagination fills in a lot of the details when you're playing those games, usually as a child.
Scuds · a year ago
There are two versions of Ultima 1: the original is BASIC with some assembly, and there is a remake in pure assembly. You can definitely tell the improvements the asm version brings, with the overworld scrolling faster and the first-person dungeons redrawing very quickly.

So I'm guessing the game logic of MECC Oregon was in BASIC with some assembly routines to redraw the screen. BTW, the original Oregon Trail was also 100% BASIC and a PITA to read. You're really getting to the edges of what Applesoft BASIC is practically capable of with games like Akalabeth and Oregon.

vidarh · a year ago
That reminds me of finding out Sid Meier's Pirates! on the C64 was a mix of BASIC and assembly. You could LIST a lot of it, but the code was full of SYS calls to various assembly helpers, which I remember was incredibly frustrating: I did not yet have any idea how assembly worked, so it felt so close and yet so far from being able to modify it.
egypturnash · a year ago
Wikipedia tells me that the 1985 version's hunting minigame is in assembly; it does not explicitly say that the rest is in Basic but it definitely implies this.
bluGill · a year ago
Oregon Trail was conceptually simple enough that well-crafted BASIC would be plenty fast. Most other games were more complex and probably needed assembly. Though it was common to call inline assembly (as binary code) in that era as well.
classichasclass · a year ago
Not uncommon, at least on the A2 and C64, to have a BASIC scaffold acting like a script that runs various machine language subroutines and/or the main game loop.
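
For anyone who never saw that pattern, a minimal C64-flavoured sketch of the scaffold idea (the bytes are hand-assembled from memory, so double-check before trusting them; the routine just bumps the border colour and returns):

    10 FOR I = 0 TO 3 : READ B : POKE 49152 + I, B : NEXT
    20 DATA 238, 32, 208, 96 : REM INC $D020 / RTS
    30 SYS 49152 : REM BASIC hands control to the machine language routine

On the Apple II the equivalent move is POKEing a routine somewhere safe and using CALL instead of SYS, but the division of labour is the same: BASIC as the script, assembly for anything that has to be fast.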
ianbicking · a year ago
I also thought it was interesting that it was actually several BASIC programs with data passed back and forth by stuffing it in specific memory locations.
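
That hand-off is easy to sketch under DOS 3.3 (the addresses and the PART2 filename are hypothetical; page 3, starting at address 768, is a traditional scratch area):

    100 POKE 768, 42 : REM PART1 leaves a value where the next program can find it
    110 PRINT CHR$(4); "RUN PART2" : REM the Ctrl-D prefix hands the command to DOS

    10 X = PEEK(768) : REM first line of the hypothetical PART2
    20 PRINT "GOT "; X

RUN loads and starts the next program but doesn't touch that memory, so anything POKEd there survives the switch.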
itslennysfault · a year ago
I find it amusing that the bug in the final screen is essentially the Y2K bug.