Readit News
YeGoblynQueenne commented on CRISPR fungus: Protein-packed, sustainable, and tastes like meat   isaaa.org/kc/cropbiotechu... · Posted by u/rguiscard
Brendinooo · 3 days ago
It looks like supermarket chicken. I tried something more like a heritage breed once but I have young children who want massive white meat chicken breasts, so that’s what I’m doing for now.

But I will say, when you buy chicken at the grocery store, the quality can vary. Mine has always been good.

YeGoblynQueenne · 3 days ago
>> I tried something more like a heritage breed once but I have young children who want massive white meat chicken breasts, so that’s what I’m doing for now.

Heh. Over here (UK and the rest of Europe I reckon) the kids love chicken thighs. Acquired tastes eh?

YeGoblynQueenne commented on CRISPR fungus: Protein-packed, sustainable, and tastes like meat   isaaa.org/kc/cropbiotechu... · Posted by u/rguiscard
swiftcoder · 3 days ago
I suggest reading/listening a little bit outside of the PETA propaganda bubble. For example, here's a good short discussion on the topic with a cattle farmer: https://www.youtube.com/watch?v=n4cHn6NX4wQ
YeGoblynQueenne · 3 days ago
Just for some context, is the guy on the left in the white shirt a vegan who nonetheless supports ethical farming practices, or did I get totally the wrong impression?
YeGoblynQueenne commented on CRISPR fungus: Protein-packed, sustainable, and tastes like meat   isaaa.org/kc/cropbiotechu... · Posted by u/rguiscard
vintermann · 3 days ago
But of course there is! That's not the point. You could also probably produce reasonable data indicating that food starting with the letter F results in worse health outcomes. But if you then avoid fenugreek, fava beans and fiddlehead ferns, you're not making up for the fried potatoes, fried cheese and fudge sundaes which really carried the correlation!

We want causal relationships, not mere correlations. Someone decided that instead they wanted to divide food into categories in this specific way, and then rank the categories. And I don't think all of them were naive about what they were doing. I've read Merchants of Doubt; I don't give harmful industries the benefit of the doubt when it comes to things like this.

YeGoblynQueenne · 3 days ago
It's certainly not the food industry that decided to brand some of its own foods as Ultra-Processed and harmful to health. That kind of categorisation is the work of nutrition researchers of various kinds. The way I understand it, the food industry's interests trend the opposite way: trying to convince you that everything they sell you is good for you.
YeGoblynQueenne commented on CRISPR fungus: Protein-packed, sustainable, and tastes like meat   isaaa.org/kc/cropbiotechu... · Posted by u/rguiscard
Brendinooo · 3 days ago
I did ~100 chickens last year, and more like 85 this year.

12 weeks is incorrect; you can buy the same Cornish crosses that the big farms use, so they can be ready in as little as 6-7 weeks. I usually stretch it to 8 or 9: my time to process them is fixed, so I might as well get a little bit more meat for my efforts.

I use a chicken tractor that is big enough to let me hold about 33 at a time.

So it’s an operation that needs to run for about half the year. If you time it right, you can work around vacations and stuff. Daily operations are actually pretty minimal in terms of time spent, but you do lose three weekends a year to processing them if you don’t outsource that.

All of that to say: I’m not sure if I want to agree with your characterization. It’s less of a time commitment than you think. But there is a substantial cost to it all: capital costs are notable and the cost of feed and birds is such that you basically break even against high-end organic products for sale. You’re always going to look at the Costco chicken and wonder why you are doing it. I treat it as a “touch grass” hobby that kinda breaks even.

No real point, just excited to have something to say about this haha

YeGoblynQueenne · 3 days ago
>> You’re always going to look at the Costco chicken and wonder why you are doing it.

It depends. My friend's dad has chickens and the meat is tough and dark-grey, very much not like the soft white supermarket meat. Also, the meat tastes of... chicken, I guess. And even the bones are noticeably harder (I can't snap them with my fingers like I can a supermarket chicken's). I always assumed this is because of the way they're raised: allowed to roam freely (within an enclosure, but a big one) and to feed on scraps and everything they can forage, in addition to grain.

What does your chickens' meat look and taste like? If it's the same as supermarket chicken then, I don't know, but if it's the other kind then it's definitely worth it. Although it takes a couple of hours of cooking to soften it :)

YeGoblynQueenne commented on Reselling tickets for profit to be outlawed in UK government crackdown   theguardian.com/money/202... · Posted by u/helsinkiandrew
CyberDildonics · a month ago
Do you mean nobody else made it? Because you did put 'scalpers' and 'slavers' in the same sentence.
YeGoblynQueenne · 24 days ago
You put Pokémon and slavery in the same sentence.
YeGoblynQueenne commented on Learn Prolog Now (2006)   lpn.swi-prolog.org/lpnpag... · Posted by u/rramadass
jodrellblank · a month ago
> "how should I feel if you just "agree" with me as a way to get me to stop arguing?"

Triumphant? Victorious? Magnificent, successful, proud, powerful: insert any adjective which applies to a situation where someone wanted something, and then got it.

> "And it is very hard to see how carrying out a proof automatically is "not reasoning. The same clearly does not apply to Python, because its interpreter is not an automated theorem prover; it doesn't apply to javascript because its interpreter is not an automated theorem prover"

And that does not stop Python or Javascript from being used to find solutions to e.g. an Einstein Puzzle, something a human might call "a reasoning problem". This means Prolog 'doing reasoning' must not be the thing which solves the 'reasoning problem'; something else must be doing that, because non-reasoning systems can do it too.

If Prolog 'doing reasoning' meant it could solve 'reasoning problems' that no other programming language could, that would be a strong reason to use Prolog, but that is not something you or the other 'reasoning' commenters have claimed or offered examples of. Clearly the word 'reasoning' has different definitions in the different sentences, and that is important here because I am responding to one and you all to the other.

If 'doing reasoning' is not the thing which makes it useful for 'solving reasoning problems' - if that neither compels one to use Prolog when working to 'solve a reasoning problem', nor convinces one to avoid other languages - if the definition does not influence one's decision in any way - it's very hard to see how it is the relevant version of 'reasoning' to focus on, or what point is being made by this insistence on focusing on it, except academic one-upmanship.

YeGoblynQueenne · a month ago
>> And that does not stop Python or Javascript from being used to find solutions to e.g. an Einstein Puzzle, something a human might call "a reasoning problem". This means Prolog 'doing reasoning' must not be the thing which solves the 'reasoning problem'; something else must be doing that, because non-reasoning systems can do it too.

To solve an Einstein puzzle in Python et al. you have to code 1) a definition of the problem and 2) a solution that you come up with. In Prolog you only have to code a definition of the problem; executing the definition then gets you to the solution.

Other languages can indeed solve the problems that Prolog can, but a human programmer must code the solution, while Prolog comes with a built-in universal problem solver, SLD-Resolution, that can solve any problem a human programmer can pose to it.
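
For a concrete illustration of what I mean, here's a minimal sketch of my own (a toy example, not code from the thread): in Prolog the definition of a relation is also the program that computes it.

    % Two facts and a recursive rule: this is the entire program.
    parent(tom, bob).
    parent(bob, ann).

    ancestor(X, Y) :- parent(X, Y).                 % base case
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y). % recursive case

    % ?- ancestor(tom, Who).
    % Who = bob ;
    % Who = ann.

There is no search procedure anywhere in that code: SLD-Resolution, built into the interpreter, does the searching.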

I looked around for an example of this with real code and found this SO thread on programmatically solving a Zebra puzzle (same as the Einstein puzzle):

https://stackoverflow.com/questions/318888/solving-who-owns-...

There are a few proposed solutions in Python, and in Prolog. The Python solutions pull in constraint-solving libraries, encode the problem constraints and then use for-loops to iterate over the set of solutions that respect the constraints.

The Prolog solutions do not pull in any libraries and do not iterate. They declare the constraints of the problem and then execute the constraints, letting the Prolog interpreter find a solution that satisfies them.

So the difference is that Prolog can solve the problem on its own, while Python can solve it only if you hand-code the solution, which includes importing a constraint solver. Constraint solving is of course a form of reasoning, and that's how you can get Python to do reasoning: by implementing a reasoning algorithm. In Prolog you don't need to do that, because SLD-Resolution is a universal problem solver that can be applied to constraint problems like any other problem. This is not an academic matter, as you insist it is; it is a practical matter of knowing how to code a universal problem solver and getting it to run on real-world hardware.

I say that solving constraints is a form of reasoning. You won't find anyone to disagree with this in the CS and symbolic AI community. While you also won't find an agreed-upon, formal definition of "reasoning", we don't need one because we've been studying reasoning since the time of Aristotle and his "Syllogisms" (literally, "Reasonings" in Greek). In the same way you won't really find an agreed-upon definition of "mathematics", but we don't need one because we've been studying maths since the time of the ancient Babylonians (at least; my memory is hazy).

You argue that what Prolog does isn't reasoning, but that's a very niche view. Not that this means you're wrong, but one reason I persist with this discussion is that your view is so unorthodox. If you're right, I'd like to know, so I can understand where I was wrong. But so far I still see only a misunderstanding of Prolog and a continued unwillingness to engage with the argument that Prolog does reasoning because it has an automated theorem prover as an interpreter.

Note that the Prolog solutions in the SO thread are a bit over-engineered for my tastes. The one in the link below is much more straightforward, although it's for a simplified version of the problem. Still, it shows what I mean: you only need to define the problem and then the interpreter figures out how to solve it.

https://www.101computing.net/solving-a-zebra-puzzle-using-pr...
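
Here's an even more cut-down sketch in the same spirit, with three houses and clues I made up for illustration (so they won't match the ones in the links). Notice that the program only states the clues:

    % Houses is a list of house(Colour, Nationality, Pet) terms.
    puzzle(Houses) :-
        Houses = [house(_,_,_), house(_,_,_), house(_,_,_)],
        Houses = [house(blue,_,_)|_],           % the first house is blue
        member(house(red, english, _), Houses), % the Englishman lives in the red house
        member(house(_, spanish, dog), Houses), % the Spaniard owns the dog
        member(house(green, _, zebra), Houses), % the zebra lives in the green house
        next_to(house(_,_,cat), house(_,norwegian,_), Houses). % the cat lives next to the Norwegian

    next_to(A, B, List) :- append(_, [A,B|_], List).
    next_to(A, B, List) :- append(_, [B,A|_], List).

    % ?- puzzle(Houses).
    % Houses = [house(blue, spanish, dog), house(red, english, cat), house(green, norwegian, zebra)]

No loops, no solver library: the clauses are the constraint set and the interpreter's built-in search does the rest.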

u/YeGoblynQueenne

Karma: 23524 · Cake day: September 26, 2015
About
This is a common question on this board:

What is reasoning?

In computer science and AI when we say "reasoning" we mean that we have a theory and we can derive the consequences of the theory by application of some inference procedure.

A theory is a set of facts and rules about some environment of interest: the real world, mathematics, language, etc. Facts are things we know (or assume) to be true: they can be direct observations, implications, or guesses. Rules are conditionally true and so are most easily understood as implications: if we know some facts are true, we can conclude that some other facts must also be true. An inference procedure is a system of rules, separate from the theory, that tells us how we can combine the rules and facts of the theory to squeeze out new facts, or new rules.
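
Here is a deliberately tiny example of a theory, sketched in Prolog (my go-to language for this; the example itself is made up):

    raining.                % a fact: something we take to be true
    wet(grass) :- raining.  % a rule: if it is raining, the grass is wet

    % ?- wet(grass).
    % true.

The two clauses are the theory, the interpreter's resolution mechanism is the inference procedure, and wet(grass) is a new fact squeezed out of the theory.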

There are three types of reasoning, what we may call modes of inference: deduction, induction and abduction. Informally, deduction means that we start with a set of rules and facts and derive new, unobserved facts implied by them; induction means that we start with a set of rules and some observations and derive new rules that imply the observations; and abduction means that we start with some rules and some observations and derive new unobserved facts that imply the observations.

It's easier to understand all this with examples.

One example of deductive reasoning is planning, or automated planning and scheduling, a field of classical AI research. Planning is the "model-based approach to autonomous behaviour", according to the textbook on planning by Geffner and Bonet. An autonomous agent starts with a "model" that describes the environment in which the agent is to operate as a set of entities with discrete states, and a set of actions that the agent can take to change those states. The agent is given a goal, an instance of its model, and it must find a sequence of actions, which we call a "plan", that takes the entities in the model from their current state to the state in the goal. This is usually achieved by casting the planning problem as pathfinding over a graph, with a search algorithm like A*. Here, the agent's model is a theory, the search algorithm is the inference procedure, and the plan is a consequence of the theory. Deductive reasoning can be sound, as long as the facts and rules in the theory are correct: from correct premises we can deduce correct conclusions. We know of sound deductive inference rules: A*, for example, and Resolution, used in automated theorem proving and SAT-solving, are sound.
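
To make that less abstract, here is a toy planning sketch in Prolog (my own made-up micro-world, and plain depth-first search rather than A*, for brevity):

    % action(State, Action, NextState): the agent's model of its world.
    action(at(door),     walk_to_table, at(table)).
    action(at(table),    pick_up_key,   holding(key)).
    action(holding(key), unlock_door,   door(open)).

    % plan(State, Goal, Plan): find a sequence of actions from State to Goal.
    plan(Goal, Goal, []).
    plan(State, Goal, [Action|Rest]) :-
        action(State, Action, Next),
        plan(Next, Goal, Rest).

    % ?- plan(at(door), door(open), Plan).
    % Plan = [walk_to_table, pick_up_key, unlock_door].

The model (the action/3 clauses) is the theory, the search is the inference procedure, and the plan is the deduced consequence.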

The classic example of inductive reasoning is inferring the colour of swans. Most swans are white (apparently) so if we have only seen white swans we have no reason to believe there are any other colours: we are forced to infer that all swans are white. We may only be disabused of our fallacy if we happen to observe a swan that is not white, e.g. a black swan. But who is to say when such a magnificent creature will grace us with its presence, outside of Tchaikovsky's ballets? Induction is thus revealed to be unsound: even given true premises we can still arrive at the wrong conclusions. Another example is the scientific method: imagine an idealised scientist, perfectly spherical, in a frictionless vacuum. She starts with a scientific theory, then goes out into the world and makes new observations about a phenomenon not described by her theory. She constructs a hypothesis to extend her theory so as to explain the new observations. The hypothesis is a set of rules, where the premises are the consequences of the rules in her initial theory. Then, being an idealised scientist, she goes looking for new observations to refute her hypothesis. Science only gives us the tools to know when we're wrong.

Abductive reasoning is the mode of inference exemplified by Sherlock Holmes. We can imagine Sherlock and Watson standing outside a tavern in London, watching as a gentleman of interest steps out of the tavern with egg on his lapel. "Ah, my dear Watson, what can we conclude from this observation?". "Why my dear Holmes, we can conclude that the man had eggs for breakfast". Holmes and Watson can arrive at this conclusion, about a fact that they have not directly observed, because they have a theory with a rule that says "if one eats eggs, one may get some on one's lapels". Working backwards from this rule, and their observation of egg on the man's lapels, they can guess that he had eggs even if they didn't directly observe him doing so. Abduction is also unsound: the man may have swapped coats with an accomplice, who was the one who had eggs for breakfast instead.
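
Sketched as a toy program (my own example; plain Prolog deduces, so the "working backwards" is encoded by hand as a rule over causes):

    % The rule: if one eats eggs, one may get some on one's lapel.
    may_cause(eats_eggs(X), egg_on_lapel(X)).

    % Abduce a hypothesis that would explain an observation.
    explains(Hypothesis, Observation) :- may_cause(Hypothesis, Observation).

    % ?- explains(H, egg_on_lapel(gentleman)).
    % H = eats_eggs(gentleman).

The conclusion is a guess, not a proof: nothing in the theory rules out the accomplice with the swapped coat.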

And now you know what "reasoning" means. So the next time someone asks: "what is reasoning?", you can let them know and turn the discussion to more interesting, more productive directions.
