One of my early "this is neat" programs was a genetic algorithm in Pascal. You entered a sequence of digits and it "evolved" the same sequence. It started out with 10 random numbers. Their fitness (lower was better) was the sum of the per-digit differences, so if the target was "123456" and the test number was "214365", it had a fitness of 6. It took the top 5, then mutated a random digit by a random +/- 1. It printed the full population each generation, so you could watch it scroll as it converged on the target number.
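For anyone curious, here's a rough Python sketch of that scheme (the original was in Pascal, and details like wrapping a mutated digit around mod 10 are my guesses, not necessarily what I did back then):

    import random

    TARGET = "123456"
    POP_SIZE = 10
    KEEP = 5

    def fitness(candidate):
        # Sum of per-digit differences from the target (lower is better).
        return sum(abs(int(a) - int(b)) for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        # Nudge one random digit by +/- 1 (wrapping within 0-9 is an assumption).
        digits = list(candidate)
        i = random.randrange(len(digits))
        digits[i] = str((int(digits[i]) + random.choice((-1, 1))) % 10)
        return "".join(digits)

    population = ["".join(random.choice("0123456789") for _ in TARGET)
                  for _ in range(POP_SIZE)]
    generation = 0
    while min(fitness(c) for c in population) > 0:
        population.sort(key=fitness)
        survivors = population[:KEEP]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - KEEP)]
        generation += 1
        print(generation, population)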
Looking back, I want to say it was probably the July 1992 issue of Scientific American that inspired me to write that (https://www.geos.ed.ac.uk/~mscgis/12-13/s1100074/Holland.pdf). And as that was '92, this might have been on a Mac rather than an Apple ][+... it was certainly in Pascal (my first class in C was in August '92) and I had access to both at the time (I don't think it was Turbo Pascal on a PC, as this was a summer thing and I didn't have an IBM PC at home then). Alas, I remember more about the specifics of the program than I do about what desk I was sitting at.
I wrote a whole project in Pascal around that time, analyzing two datasets. It was running out of memory the night before it was due, so I decided to have it run twice, once for each dataset.
That's when I learned a very important principle: "When something needs doing quickly, don't force artificial constraints on yourself."
I could have spent three days figuring out how to deal with the memory constraints. But instead I just cut the data in half and gave it two runs. The quick solution was the one that was needed. Kind of an important memory for me that I have thought about quite a bit in the last 30+ years.
An aeon ago, in 1984, I wrote a perceptron on the Apple II. It was amazingly slow (20 minutes to complete a recognition pass), but what most impressed me at the time was that it worked at all. Ever since then I have wondered just how far linear optimization techniques could take us. If I could just tell myself then what I know now...
That's funny; I'm pretty sure we used Standard ML on the old oscilloscope Macs in undergrad. Not an Apple II, of course, but still already pretty dated even at that time (late '90s).
That's also what I was thinking. ML predates the Apple II by 4 years, so I think there is definitely a chance of getting it running! If targeting the Apple IIGS I think it would be very achievable; you could fit megabytes of RAM in those.
Likely any early implementation of ML would have been on a mainframe or minicomputer, not a 6502. A mainframe/minicomputer would have had oodles of storage (both durable and RAM), as well as a compiler for a high level language (which fits what I can see in https://smlfamily.github.io/history/ML2015-talk.pdf and other locations).
> Who is using K-means for classification? If you have labels, then a supervised algorithm seems like a more appropriate choice.
The generated data is labeled but we can imagine those labels don't exist when running k-means. There are many applications for unsupervised clustering. I don't, however, think that there are many applications for running much of anything on an Apple ][+.
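To illustrate the idea (a minimal sketch using scikit-learn and made-up synthetic data, not the post's actual setup): generate labeled points, withhold the labels while clustering, and use them only afterwards to score the result:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Two labeled Gaussian blobs; the labels are withheld during clustering.
    a = rng.normal((0.0, 0.0), 1.0, size=(50, 2))
    b = rng.normal((5.0, 5.0), 1.0, size=(50, 2))
    X = np.vstack([a, b])
    true_labels = np.array([0] * 50 + [1] * 50)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # The labels are used only now, to see how well the unsupervised clusters
    # line up with the generating process (cluster ids may be swapped).
    agreement = max(np.mean(km.labels_ == true_labels),
                    np.mean(km.labels_ != true_labels))
    print(f"agreement with generating labels: {agreement:.0%}")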
> K-means clustering is a recursive algorithm
My bad. It's iterative. I'll fix that. Thanks.
> If we know that the distributions are Gaussian, which is very frequently the case in machine learning
Gaussian distributions are very frequent and important in machine learning because of the Central Limit Theorem but, beyond that, you are correct. While many natural phenomena are approximately normal, the reason for the Gaussian's frequent use is often mathematical convenience. I'll correct my post.
> we can employ a more powerful algorithm: Expectation Maximization (EM)
Excellent point. I will fix that, too. "While k-means is simple, it does not take advantage of our knowledge of the Gaussian nature of the data. If we know that the distributions are at least approximately Gaussian, which is frequently the case, we can employ a more powerful application of the Expectation Maximization (EM) framework that takes advantage of this (k-means itself amounts to a 'hard'-assignment, centroid-based special case of EM)." Thank you for pointing out all of this!
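For what it's worth, the distinction can be sketched in a few lines of numpy (a toy 1-D two-component mixture with parameters I made up, not the post's code): the E-step computes soft responsibilities, where k-means would instead snap each point to its nearest mean:

    import numpy as np

    rng = np.random.default_rng(1)
    # A toy 1-D mixture: two Gaussian clusters (parameters are made up).
    x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(4.0, 1.0, 100)])

    # Initial guesses for the means, variances, and mixture weights.
    mu = np.array([x.min(), x.max()])
    var = np.array([1.0, 1.0])
    weights = np.array([0.5, 0.5])

    for _ in range(50):
        # E-step: soft responsibilities. K-means would instead assign each
        # point entirely to its nearest mean (the "hard" version of this step).
        dens = (weights * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step: re-estimate the parameters from the weighted points.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        weights = nk / len(x)

    print("means:", mu, "variances:", var, "weights:", weights)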
Applesoft BASIC is just so darn readable. Youngsters have nothing comparable these days to learn the basics of expressing an algorithm without having to know a lot more.
And if it ever became too slow, you could reimplement the slow part in 6502 assembler, which has its own elegance. Great way to learn, glad I came up that way.
https://codeberg.org/DATurner/miranda
> The final accuracy is 90% because 1 of the 10 observations is on the incorrect side of the decision boundary.
Who is using K-means for classification? If you have labels, then a supervised algorithm seems like a more appropriate choice.
> K-means clustering is a recursive algorithm
It is?
> If we know that the distributions are Gaussian, which is very frequently the case in machine learning
It is?
> we can employ a more powerful algorithm: Expectation Maximization (EM)
K-means is already an instance of the EM algorithm.