Readit News
vardhanw commented on Mathematics for Computer Science (2024)   ocw.mit.edu/courses/6-120... · Posted by u/vismit2000
fn-mote · 8 months ago
The page listing topics (just like the playlist):

https://ocw.mit.edu/courses/6-1200j-mathematics-for-computer...

Lecture notes:

https://ocw.mit.edu/courses/6-1200j-mathematics-for-computer...

There are a few unusual parts, like the last lecture ("Large Deviations"). I'm not familiar with the entire course, but IMO the lecture on state machines is very good; it discusses invariants and uses an approachable example (the 15-puzzle).
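The invariant argument for the 15-puzzle can be sketched in a few lines. This is my reconstruction of the classic parity proof, not necessarily the course's exact treatment: the parity of (tile inversions + blank's row) is preserved by every legal move, so a position is solvable only if it matches the solved state.

```python
# A sketch of the classic solvability invariant for the 15-puzzle
# (parity of tile inversions plus the blank's row); illustrative,
# not taken verbatim from the lecture notes.

def inversions(tiles):
    """Count out-of-order pairs among the numbered tiles, ignoring the blank (0)."""
    seq = [t for t in tiles if t != 0]
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq))
                 if seq[i] > seq[j])

def is_solvable(tiles):
    """tiles: 16 ints (0 = blank), listed row by row on a 4x4 board.
    Sliding a tile horizontally changes nothing; sliding it vertically
    flips both the inversion parity and the blank's row parity, so the
    parity of their sum is invariant. It is odd for the solved state."""
    blank_row = tiles.index(0) // 4   # 0-indexed from the top
    return (inversions(tiles) + blank_row) % 2 == 1

solved = list(range(1, 16)) + [0]                    # the goal position
swapped = solved[:]
swapped[13], swapped[14] = swapped[14], swapped[13]  # 14 and 15 exchanged
```

The `swapped` board is Sam Loyd's famous unsolvable variant: one transposition flips the invariant's parity, so no sequence of moves can reach the solved state.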

Text (last revised 2018): https://courses.csail.mit.edu/6.042/spring18/mcs.pdf

If you have never looked at it, the problems there are very nice. For example, instead of some dry boolean logic problem about A and Not(B), you have Problem 3.17 on page 81, which begins:

    This problem examines whether the following specifications are satisfiable:
    1. If the file system is not locked, then. . .
    (a) new messages will be queued.
    (b) new messages will be sent to the messages buffer.
    (c) the system is functioning normally, and conversely, if the system is
    functioning normally, then the file system is not locked.

    [...]

    (a) Begin by translating the five specifications into propositional 
    formulas using the four propositional variables [...]
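Exercises like this reduce to checking whether some truth assignment makes all the formulas true at once. A minimal brute-force sketch, with illustrative stand-in variable names (L = file system locked, Q = messages queued, B = sent to buffer, N = functioning normally) and encoding only the quoted fragment, not all five of the book's specifications:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def satisfiable(formulas, names):
    """Return a satisfying assignment dict, or None if none exists."""
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if all(f(env) for f in formulas):
            return env
    return None

# Hypothetical encoding of the quoted specs 1(a)-(c):
specs = [
    lambda e: implies(not e["L"], e["Q"]),   # 1(a): not locked -> queued
    lambda e: implies(not e["L"], e["B"]),   # 1(b): not locked -> buffered
    lambda e: implies(not e["L"], e["N"])
              and implies(e["N"], not e["L"]),  # 1(c): normal <-> not locked
]

model = satisfiable(specs, ["L", "Q", "B", "N"])
```

Checking all 2^n assignments is fine at this scale; the exercise's real point is the translation from English into formulas, which the encoding above only approximates.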

vardhanw · 8 months ago
The units seem to be independent, i.e., they could be followed in any order. Can someone knowledgeable confirm this? I ask because set theory, etc., is usually the basis of many things in a formal mathematical setting.
vardhanw commented on Agentic patters from scratch using Groq   github.com/neural-maze/ag... · Posted by u/mtrofficus
bbor · a year ago
Wow, thanks for sharing, that’s hilarious. Even in the one about “multi agent” systems, there’s no reference older than 2023.

I know I shouldn’t be shocked by how arrogant the connectionists got with their (arguably unexpected) success, but I can’t help it! They legit act like “AI” is a new phenomenon, which is especially funny for someone like Ng, who’s been an AI celebrity for at least a decade. No hate: his course was my first intro to real ML & AI, as I’m sure it was for many of us. Just a teeny bit of righteous condescension, I guess.

For anyone interested in this kind of stuff, this would be the super-popular first stop: Marvin Minsky’s Society of Mind

https://en.wikipedia.org/wiki/Society_of_Mind

https://courses.media.mit.edu/2016spring/mass63/wp-content/u...

vardhanw · a year ago
The Society of Mind is an interesting reference. I remember browsing through it around the late '90s. It seemed to provide a theoretical basis for some of our cognitive functions in terms of a collection of cooperating agents. But then, I guess, what the agents themselves are made of was not clear or understood? Are today's LLMs capable of taking the form of those agents, and can we take inspiration from SoM to see how they could evolve together toward a more powerful (real/AGI?) intelligence?
vardhanw commented on A buried ancient Egyptian port reveals connections between distant civilizations   smithsonianmag.com/histor... · Posted by u/NoRagrets
vardhanw · 2 years ago
It seems the fact of a large India-Egypt trade link via the Red Sea was known at least a year ago, and specifically this evidence from Berenike. This link [0] describes the author William Dalrymple talking about it and about his book [1], which is already out and presumably covers this in more detail. Many Indian scholars are (re)discovering Indic history, and given how ancient the Indian civilization is, we can expect much more India-specific history to come out that was unknown or has been forgotten over the ages.

[0] https://www.businesstoday.in/latest/economy/story/indias-anc...
[1] https://www.amazon.com/Golden-Road-Ancient-India-Transformed...

vardhanw commented on The Dance of Śiva   asymptotejournal.com/blog... · Posted by u/Caiero
nsenifty · 3 years ago
> Pāṇini, during his twelve-year-long tapas (fervour, ardour) to Śiva, hears the ḍamaru beat fourteen times. Fourteen classes of syllables drop, resonant, into his fervent ears.

    a i u ṇ
    ṛ ḷ k
    e o ṅ
    ai au c
    ha ya va ra ṭ
    la ṇ
    ña ma ṅa ṇa na m
    jha bha ñ
    gha ḍha dha ṣ
    ja ba ga ḍa da ś
    kha pha cha ṭha tha ca ṭa ta v
    ka pa y
    śa ṣa sa r
    ha l
47 syllables in the Shiva Sutras by Pāṇini (https://en.wikipedia.org/wiki/Shiva_Sutras), so similar to the Japanese Iroha (https://en.wikipedia.org/wiki/Iroha). Coincidence?

> It is famous because it is a perfect pangram, containing each character of the Japanese syllabary exactly once. Because of this, it is also used as an ordering for the syllabary, in the same way as the A, B, C, D... sequence of the Latin alphabet.
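The "each character exactly once" property both orderings share is easy to state programmatically. A small sketch, using the Latin alphabet as an illustrative stand-in for either syllabary:

```python
from collections import Counter

def is_perfect_pangram(text, alphabet):
    """True iff every symbol of `alphabet` occurs in `text` exactly once
    (characters outside the alphabet, e.g. spaces, are ignored)."""
    counts = Counter(ch for ch in text if ch in alphabet)
    return all(counts[ch] == 1 for ch in alphabet)

latin = set("abcdefghijklmnopqrstuvwxyz")
# A well-known English perfect pangram: "Mr Jock, TV quiz PhD, bags few lynx."
```

For a real syllabary the `alphabet` set would hold syllable symbols rather than single Latin letters, but the exactly-once check is the same.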

vardhanw · 3 years ago
This article [0] proves that the Pāṇinian Shiva Sutras are optimal in a certain mathematical sense. I'm just getting into this, learning Sanskrit grammar, etc., to get a sense of the Aṣṭādhyāyī [1] so that I can approach it later in more detail. There are various ways to approach it, and some online sites make an attempt, but according to an acquaintance who studied under a teacher, it [1] is almost impossible to understand without one. Still, this document [2] seems to give a good, understandable overview with pointers for further study.

[0] https://user.phil.hhu.de/~petersen/paper/petersen_jolli_proo...
[1] https://en.wikipedia.org/wiki/A%E1%B9%A3%E1%B9%AD%C4%81dhy%C...
[2] https://learnsanskrit.org/vyakarana/

vardhanw commented on MS Teams Outage   piunikaweb.com/2023/06/28... · Posted by u/vardhanw
vardhanw · 3 years ago
Looks like a service outage in some regions (again).
vardhanw commented on Show HN: Open sourcing Harmonic, my Android Hacker News client   github.com/SimonHalvdanss... · Posted by u/swesnow
vardhanw · 3 years ago
Thanks for developing, and now open sourcing, Harmonic. It has been my favorite HN client for a few years now; there's something to it that makes you not want to leave!
vardhanw commented on Unlimiformer: Long-Range Transformers with Unlimited Length Input   arxiv.org/abs/2305.01625... · Posted by u/shishy
intalentive · 3 years ago
The ML community keeps rediscovering the work of Steve Grossberg. This is very similar to his decades-old ART model.
vardhanw · 3 years ago
Could you explain in simple terms (if possible) what the similarity is? For context, I worked briefly with ART before Y2K at BU CNS and took a few courses there, but had to leave it for 'reasons'.

u/vardhanw

Karma: 175 · Cake day: May 20, 2012
About
Engineer with a passion for learning, a sense of responsibility for the greater good and sensitivity to problems faced by humanity and its impact (on itself and on nature).