Readit News
YeGoblynQueenne commented on The great medieval water myth (2013)   leslefts.blogspot.com/201... · Posted by u/apsec112
YeGoblynQueenne · 8 hours ago
>> Why are people who have little or no firsthand knowledge of the Middle Ages absolutely convinced they know the facts on this issue?

I used to have these arguments with an older relative, some acute, some chronic, about a) whether the edible part of an urchin is eggs or genitals [1], b) whether urchins with little pebbles and bits of seaweed on them are males, ornamented thusly to attract females [2], c) whether the sex of a cypress tree can be determined by how wide open its branches are [3], d) whether the ruins discovered by Heinrich Schliemann on the coast of Asia Minor are really the ruins of the mythical Troy [4], and e) whether ascent blackout during free-diving is a thing or not [5].

I've given up. People know what they know, either because their mother told them so when they were young, or because everyone knows, or because they know better than you. If someone's made up their mind that they're right and you're wrong, then they're right, you're wrong and you can't change their mind.

_______________

[1] Genitals.

[2] No, sea urchins do not have eyes.

[3] No, cypress trees have both male and female parts.

[4] Undetermined.

[5] It is.

YeGoblynQueenne commented on We put a coding agent in a while loop   github.com/repomirrorhq/r... · Posted by u/sfarshid
jmathai · 12 hours ago
Software takes longer to develop than other parts of the org want to wait.

AI is emerging as a possible solution to this decades old problem.

YeGoblynQueenne · 8 hours ago
Or as a new problem that will persist for decades to come.


YeGoblynQueenne commented on The warning signs the AI bubble is about to burst   telegraph.co.uk/business/... · Posted by u/taimurkazmi
tim333 · 3 days ago
What do I do after I have my blinkers on?
YeGoblynQueenne · 3 days ago
Stopper your ears and scream "NAHNAHNAH".
YeGoblynQueenne commented on Mark Zuckerberg freezes AI hiring amid bubble fears   telegraph.co.uk/business/... · Posted by u/pera
YeGoblynQueenne · 5 days ago
>> Mr Zuckerberg has said he wants to develop a “personal superintelligence” that acts as a permanent superhuman assistant and lives in smart glasses.

Yann LeCun has spoken about this so much that I thought it was his idea.

In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?

YeGoblynQueenne commented on Understanding Moravec's Paradox   hexhowells.com/posts/mora... · Posted by u/hexhowells
YeGoblynQueenne · 5 days ago
>> At its core, Moravec's paradox is the observation that reasoning takes much less computation compared to sensorimotor and perception tasks. It's often (incorrectly) described as tasks that are easy for humans are difficult for machines and vice versa.

From Wikipedia, quoting Hans Moravec:

Moravec's paradox is the observation that, as Hans Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[1]

https://en.wikipedia.org/wiki/Moravec's_paradox

Note that Moravec says nothing about "much less computation", and he is also not talking about "reasoning", particularly since he was writing in the 1980s, when AI systems excelled at reasoning (because they were still predominantly logic-based, not LLMs; then again, that was just a couple of years before the AI winter of the '90s hit and took all that away).

In my opinion the author should have started by quoting Moravec directly instead of paraphrasing, so that we know he is really discussing Moravec's claim and not his own idiosyncratic interpretation of it.


YeGoblynQueenne commented on Dyna – Logic Programming for Machine Learning   dyna.org/... · Posted by u/matteodelabre
tannhaeuser · 8 days ago
It's not pedantic at all. Interpreting terms as "themselves" and term ordering is core to Herbrand interpretation and unification, as you know very well.
YeGoblynQueenne · 7 days ago
Yes but not everyone is up for a logic programming lecture on a Sunday :)

u/YeGoblynQueenne

Karma: 23015 · Cake day: September 26, 2015
About
This is a common question on this board:

What is reasoning?

In computer science and AI when we say "reasoning" we mean that we have a theory and we can derive the consequences of the theory by application of some inference procedure.

A theory is a set of facts and rules about some environment of interest: the real world, mathematics, language, etc. Facts are things we know (or assume) to be true: they can be direct observations, implied facts, or guesses. Rules are conditionally true and so are most easily understood as implications: if we know some facts are true, we can conclude that some other facts must also be true. An inference procedure is a system of rules, separate from the theory, that tells us how we can combine the rules and facts of the theory to squeeze out new facts, or new rules.

There are three types of reasoning, what we may call modes of inference: deduction, induction and abduction. Informally, deduction means that we start with a set of rules and derive new unobserved facts, implied by the rules; induction means that we start with a set of rules and some observations and derive new rules that imply the observations; and abduction means that we start with some rules and some observations and derive new unobserved facts that imply the observations.

It's easier to understand all this with examples.

One example of deductive reasoning is planning, or automated planning and scheduling, a field of classical AI research. Planning is the "model-based approach to autonomous behaviour", according to the textbook on planning by Geffner and Bonet. An autonomous agent starts with a "model" that describes the environment in which the agent is to operate as a set of entities with discrete states, and a set of actions that the agent can take to change those states. The agent is given a goal, an instance of its model, and it must find a sequence of actions, which we call a "plan", to take the entities in the model from their current state to the state in the goal. This is usually achieved by casting the planning problem as pathfinding over a graph with a search algorithm like A*. Here, the agent's model is the theory, the search algorithm is the inference procedure, and the plan is a consequence of the theory. Deductive reasoning can be sound, as long as the facts and rules in the theory are correct: from correct premises we can deduce correct conclusions. We know of sound deductive inference procedures: A* is one, as is Resolution, used in automated theorem proving and SAT solving.
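The "planning as pathfinding over a graph" idea above can be sketched in a few lines of Python. The box-and-table domain, the action names, and the goal-counting heuristic are all invented for illustration; only the shape (states as sets of facts, actions with preconditions and effects, A* over the state graph) follows the description:

```python
import heapq
import itertools

# STRIPS-style actions: (name, preconditions, facts added, facts deleted).
# This toy "put an object in a box" domain is an illustrative assumption.
ACTIONS = [
    ("open_box",   {"hand_empty", "box_closed"},  {"box_open"},             {"box_closed"}),
    ("pick_up",    {"hand_empty", "on_table"},    {"holding"},              {"hand_empty", "on_table"}),
    ("put_in_box", {"holding", "box_open"},       {"in_box", "hand_empty"}, {"holding"}),
]

def plan(initial, goal):
    """A* over the state graph; heuristic = number of unsatisfied goal facts."""
    tie = itertools.count()          # tiebreaker so the heap never compares sets
    start = frozenset(initial)
    frontier = [(len(goal - start), 0, next(tie), start, [])]
    seen = {start}
    while frontier:
        _, cost, _, state, steps = heapq.heappop(frontier)
        if goal <= state:
            return steps             # the plan: a path through the state graph
        for name, pre, add, delete in ACTIONS:
            if pre <= state:         # action applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    f = cost + 1 + len(goal - nxt)
                    heapq.heappush(frontier, (f, cost + 1, next(tie), nxt, steps + [name]))
    return None                      # goal unreachable from the initial state

print(plan({"hand_empty", "on_table", "box_closed"}, {"in_box"}))
# → ['open_box', 'pick_up', 'put_in_box']
```

The model (ACTIONS plus the initial state) plays the role of the theory, A* is the inference procedure, and the returned plan is a consequence deduced from the theory.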

The classic example of inductive reasoning is inferring the colour of swans. Most swans are white (apparently) so if we have only seen white swans we have no reason to believe there are any other colours: we are forced to infer that all swans are white. We may only be disabused of our fallacy if we happen to observe a swan that is not white, e.g. a black swan. But who is to say when such a magnificent creature will grace us with its presence, outside of Tchaikovsky's ballets? Induction is thus revealed to be unsound: even given true premises we can still arrive at the wrong conclusions. Another example is the scientific method: imagine an idealised scientist, perfectly spherical, in a frictionless vacuum. She starts with a scientific theory, then goes out into the world and makes new observations about a phenomenon not described by her theory. She constructs a hypothesis to extend her theory so as to explain the new observations. The hypothesis is a set of rules, where the premises are the consequences of the rules in her initial theory. Then, being an idealised scientist, she goes looking for new observations to refute her hypothesis. Science only gives us the tools to know when we're wrong.

Abductive reasoning is the mode of inference exemplified by Sherlock Holmes. We can imagine Sherlock and Watson standing outside a tavern in London, watching as a gentleman of interest steps out of the tavern with egg on his lapel. "Ah, my dear Watson, what can we conclude from this observation?". "Why my dear Holmes, we can conclude that the man had eggs for breakfast". Holmes and Watson can arrive at this conclusion, about a fact that they have not directly observed, because they have a theory with a rule that says "if one eats eggs, one may get some on one's lapels". Working backwards from this rule, and their observation of egg on the man's lapels, they can guess that he had eggs even if they didn't directly observe him doing so. Abduction is also unsound: the man may have swapped coats with an accomplice, who was the one who had eggs for breakfast instead.
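The contrast between deduction (forward from causes to effects) and abduction (backwards from effects to candidate causes) in the Holmes example can also be sketched in Python. The rule base and fact names are invented for illustration, with a second rule added to show why abduction is unsound:

```python
# Toy rule base: (cause, effect) pairs for the egg-on-lapel example.
RULES = [
    ("had_eggs_for_breakfast",        "egg_on_lapel"),
    ("swapped_coats_with_egg_eater",  "egg_on_lapel"),
]

def deduce(facts):
    """Deduction: apply each rule forward once, from known facts to consequences."""
    derived = set(facts)
    for cause, effect in RULES:
        if cause in derived:
            derived.add(effect)
    return derived

def abduce(observation):
    """Abduction: work backwards from an observation to every candidate cause."""
    return {cause for cause, effect in RULES if effect == observation}

print(deduce({"had_eggs_for_breakfast"}))
# → {'had_eggs_for_breakfast', 'egg_on_lapel'}

print(abduce("egg_on_lapel"))
# → both explanations fit the observation: abduction is unsound
```

Deduction here yields only facts guaranteed by the rules, while abduction returns every premise that would explain the observation, including the accomplice with the swapped coat.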

And now you know what "reasoning" means. So the next time someone asks: "what is reasoning?", you can let them know and turn the discussion to more interesting, more productive directions.
