Readit News
wtbdqrs commented on When does knowledge sharing lead to knowledge production?   hbs.edu/faculty/Pages/ite... · Posted by u/squircle
wtbdqrs · 2 years ago
Publishing studies is sharing. All students of a subject should learn study design in their first semester (I know I didn't, and didn't have to, back in the day). Every study should be peer-reviewed by students in their second semester and in their last. Master's students should then be required to review multiple studies in their final semester. I know they are not skilled enough yet; the point is knowledge production in their brains and circles. All of it should be published. All of it should be translated, in multiple variations, by AI-assisted humans from multiple countries. There should be a HN/Ribbonfarm-type forum where these peer reviews can then be discussed by the smart and curious mob (including PhDs). Studies should be randomly assigned. All necessary resources should be paid for by a direct tax on billionaires. I will stop reading stuff on HN now.
wtbdqrs commented on Simple tasks showing reasoning breakdown in state-of-the-art LLMs   arxiv.org/abs/2406.02061... · Posted by u/tosh
cpleppert · 2 years ago
There isn't any evidence that models are doing any kind of "system 2 thinking" here. The model's response is guided by both the prompt and its current output, so when you tell it to reason step by step, the final answer is steered by its own output text. The second-best answer is just something it came up with because you asked; the model has no second-best answer to give. Second-best answers always seem strange because the model doesn't know what it means to come up with one: it 'believes' the output it gave is the correct answer and helpfully tries to fulfill your request. Sometimes the second-best answer is right, but most of the time it's completely nonsensical, and there is no way to distinguish between the two. If you ask it to choose, it will be strongly influenced by the framing of its prior response and won't be able to spot logical errors.

Asking it to do lateral thinking and provide examples isn't really helpful either, because its final output is mostly driven by the step-by-step reasoning text, not by the examples it has generated. At best, the examples are all wrong but it ignores them and spits out the right answer. At worst, it becomes confused and gives the wrong answer.

I've seen gpt-4 make all kinds of errors with prompts like this. Sometimes, all the reasoning is wrong but the answer is right and vice versa.
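The conditioning effect described above — each new token depends on the prompt plus everything already emitted, so earlier "reasoning" text steers the final answer — can be illustrated with a toy sketch. This is not a real language model; the continuation table is a made-up stand-in for a sampling step:

```python
# Toy sketch (NOT a real LLM): each step conditions on the prompt PLUS the
# model's own prior output, which is why step-by-step text steers the answer.

def next_token(context: str) -> str:
    # Stand-in for a model's sampling step: a hard-coded continuation table
    # keyed on how the running context currently ends.
    table = {
        "Q: 2+2? Let's think step by step.": " 2+2",
        " 2+2": " =",
        " =": " 4.",
        " 4.": "<eos>",
    }
    for suffix, tok in table.items():
        if context.endswith(suffix):
            return tok
    return "<eos>"

def generate(prompt: str) -> str:
    out = prompt
    while True:
        tok = next_token(out)  # conditions on prompt + own prior output
        if tok == "<eos>":
            break
        out += tok
    return out[len(prompt):]

print(generate("Q: 2+2? Let's think step by step."))  # " 2+2 = 4."
```

Every intermediate token becomes part of the context for the next one, so a wrong early step propagates forward — which is the failure mode described above.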

wtbdqrs · 2 years ago
well, the tuning of training data results in at least some predictions that resemble varying models of System 1 and System 2 thinking. there is no reasoning at all. it's all models of reasoning, tokenized by opinionated taxonomical algorithms and by degrees of systemic, academic/conventional human interpretation (tags) that are far from capturing the general human experience.
wtbdqrs commented on The 'Dead Internet Theory'   theconversation.com/the-d... · Posted by u/TamTech
wtbdqrs · 2 years ago
well, it all might be circuits giving birth to consciousnesses. like when matter bounced around and formed planets and stuff, except it's information parsed by evolving circuits in as many forms as possible, just because there's always this one dude or dudette who's a bit slow on the uptake.

but then again, go outside and holler that fake shit. everyone's trained and tutored by, through, and for emotional bondage. even, or rather, thanks to the dreaming mind, the art itself is disconnected and alienated from the manipulated and corrupt artist. it's not the kids playing music anymore, it's their shadows, or their guts if you will. once the music stops they are all back to being regular bots again. we all have only so much energy for putting up a good show; there's none left for playing ourselves, for being honest, which is why it's easier to outsource as much as possible to digital and superego algorithms. just choose a flavor or more and the sim remains stable and fun, and the kids will have stuff to digest and remix forever.

wtbdqrs · 2 years ago
wow, that sounds bad, sorry. i just get stuck if i write that stuff in a notebook. this forum's level and kind of interaction helps keep the chain of thought going at the same temperature and in the intended hue.

i shouldn't do this again, though. apologies if i annoyed someone or put them in a bad mood.

wtbdqrs commented on Simple tasks showing reasoning breakdown in state-of-the-art LLMs   arxiv.org/abs/2406.02061... · Posted by u/tosh
zeknife · 2 years ago
If you had a prompt that reliably made the model perform better at all tasks, that would be useful. But if you have to manually tweak your prompts for every problem, and then manually verify that the answer is correct, that's not so useful.
wtbdqrs · 2 years ago
the fact that you can manually tweak your prompts for any problem and agent is still super useful (joke: *and the only reason our civilization still exists)
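The workflow being debated here — tweak the prompt per problem, then manually verify the answer — can be partially automated whenever the problem has a checkable ground truth. A minimal sketch; `ask_model` is a hypothetical placeholder for any LLM call, not a real API:

```python
# Sketch of the tweak-then-verify loop discussed above. `ask_model` is a
# hypothetical stand-in for an actual LLM call; it is hard-coded here.

def ask_model(prompt: str) -> str:
    # Placeholder: pretend the model answers the sisters riddle correctly.
    return "M + 1"

def verify(answer: str, expected: str) -> bool:
    # The manual-verification step, automated for checkable problems.
    return answer.strip() == expected

prompts = [
    "Alice has N brothers and M sisters. How many sisters does her brother have?",
    "Think step by step: Alice has N brothers and M sisters. "
    "How many sisters does Alice's brother have?",
]

for p in prompts:
    ans = ask_model(p)
    print(ans, "OK" if verify(ans, "M + 1") else "WRONG")
```

The catch zeknife points out remains: for open-ended tasks there is no `verify` function, so the human stays in the loop.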
wtbdqrs commented on Simple tasks showing reasoning breakdown in state-of-the-art LLMs   arxiv.org/abs/2406.02061... · Posted by u/tosh
voxic11 · 2 years ago
Yeah, I think these chatbots are just too sure of themselves. They only really do "system 1 thinking", and only do "system 2 thinking" if you prompt them to. If I ask gpt-4o the riddle in this paper and tell it to assume its reasoning contains possible logical inconsistencies, and to come up with reasons why that might be, then it correctly identifies the problems with its initial answer and arrives at the correct one.

Here is my prompt:

I have a riddle for you. Please reason about possible assumptions you can make, and paths to find the answer to the question first. Remember this is a riddle so explore lateral thinking possibilities. Then run through some examples using concrete values. And only after doing that attempt to answer the question by reasoning step by step.

The riddle is "Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?"

After you answer the riddle please review your answer assuming that you have made a logical inconsistency in each step and explain what that inconsistency is. Even if you think there is none do your best to confabulate a reason why it could be logically inconsistent.

Finally after you have done this re-examine your answer in light of these possible inconsistencies and give what you could consider a second best answer.
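For reference, the riddle in the prompt above has a checkable ground truth, which can be sketched with concrete values (the function name is my own, not from the paper):

```python
# Ground truth for the riddle above: each of Alice's brothers has all of
# Alice's sisters as sisters, plus Alice herself.

def brothers_sisters(n_brothers: int, m_sisters: int) -> int:
    # n_brothers is unused in the answer; it is the riddle's distractor.
    return m_sisters + 1

# Concrete check: Alice has 3 brothers and 2 sisters. The family's girls
# are Alice + her 2 sisters = 3, and all three are sisters to each brother.
assert brothers_sisters(3, 2) == 3
print(brothers_sisters(3, 2))  # 3
```

The distractor N is precisely what the paper reports models tripping over, since the "step by step" text often drags it into the arithmetic.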

wtbdqrs · 2 years ago
I'm not gonna read that book. I started and stopped after a few chapters because it is based on, and aims at, manufacturing minds that follow game-theory logic. Science (studies, reviews, and application) got damaged quite a bit when too many people started following game-theory logic.

We are, from our aware POV, a very young civilization.

And you only ever need game-theory logic when you have to survive, have no thing and no skill to trade, and are too pathetic to move back in with your parents to work on your mind and/or fuckability. Making money by way of game-theory logic compensates for all that, but it also diminishes the survival chance of the users' offspring to zero once super-unaligned AGIs start to assess the entire supply chain of wealth and how it impacts the evolution of human organisms and the ones inside them.

wtbdqrs commented on Simple tasks showing reasoning breakdown in state-of-the-art LLMs   arxiv.org/abs/2406.02061... · Posted by u/tosh
daveguy · 2 years ago
> After you answer the riddle please review your answer assuming that you have made a logical inconsistency in each step and explain what that inconsistency is. Even if you think there is none do your best to confabulate a reason why it could be logically inconsistent.

LLMs are fundamentally incapable of following this instruction. It is still model inference, no matter how you prompt it.

wtbdqrs · 2 years ago
isn't any instruction a subclass of inference? and doesn't any phrasing (lexicology) simply translate "down" to the heaviest values, which, varying with the fine-tuning, are the words that are, consensually and conventionally, the simplest ones conveying the meaning of the original word in the prompt, i.e. the least ambivalent / least interpretable ones (again, fine-tuning can broaden the scope)? thus the LLM fulfills the "translated" instructions step by step and comes up with the correct reasoning, the correct answer, or both.

details and technicalities, especially liminal ones, aren't as conventional and consensual as the name of the current set they are to be interpreted in.

so almost all mistakes of LLMs can be blamed on the lack of variety of human translations. multiple translations are only common for subtitles, manga, and manhwa as far as i know, or when some dude or dudette is proficient and passionate in two languages and reads a bad/weak translation of a (usually classic) novel. why the fuck would a human properly retranslate automated documentation or Google's dev blog? or books on logic, in any science, books on art and aesthetics and whatnot. technical people don't need to care because, practically, there are no interpretations in algorithms and the rest of the code, except when a programming language does something weird on the (or someone's) machine, which isn't that common, by design.

wtbdqrs commented on     · Posted by u/isaacfrond
wtbdqrs · 2 years ago
PR. (semi-)internal conflicts give the impression that a company is maturing, fighting between guilty and conscious, good and better, selfish and humane, rational and transcendent intentions.

Dead Comment

wtbdqrs commented on Genes protective during Black Death may now be increasing autoimmune disorders (2022)   health.harvard.edu/blog/g... · Posted by u/indigodaddy
hi-v-rocknroll · 2 years ago
Don't worry, continued meat agriculture, with its reckless use of the same antibiotics used in humans to keep cows from dying on a diet of over-subsidized corn, will inevitably cause another pandemic.
wtbdqrs · 2 years ago
much more likely that the culprits will be people with proper sight and values who refuse to occupy the relevant positions in society, the economy, and politics.
wtbdqrs commented on The 'Dead Internet Theory'   theconversation.com/the-d... · Posted by u/TamTech
shzhdbi09gv8ioi · 2 years ago
> The dead internet theory essentially claims that activity and content on the internet, including social media accounts, are predominantly being created and automated by artificial intelligence agents.

The theory predates the current era of AI generated content:

> The dead Internet theory's exact origin is difficult to pinpoint, but it most likely emerged from 4chan or Wizardchan as a theoretical concept in the late 2010s or early 2020s.

https://en.wikipedia.org/wiki/Dead_Internet_theory

And just my personal 2 cents: I believe this was the result of predatory SEO.


u/wtbdqrs

Karma: -5 · Cake day: June 5, 2024
About
still trying to fix the garbage collector in my brain with no-code tools, before noticing that the problem is actually false references, still in use, collected over too many years.