One big area of psychology not mentioned in the article that has been seeing a good amount of success is applied psychology with respect to Human-Computer Interaction.
For example, there's a lot of basic perceptual psychology regarding response times and color built into many GUI toolkits in the form of GUI widgets (buttons, scrollbars, checkboxes, etc.). Change blindness (https://en.wikipedia.org/wiki/Change_blindness) is also a known problem for error messages and can be easily avoided with good design. There's also a lot of perceptual psychology research in AR and VR.
With respect to cognitive psychology, there's extensive work in information foraging (https://en.wikipedia.org/wiki/Information_foraging) which has been distilled down as heuristics for information scent.
With respect to social psychology, there are hundreds of scientific papers about collective intelligence, how to make teams online more effective, how to socialize newcomers to online sites, how to motivate people to contribute more content and higher quality content, how and why people collaborate on Wikipedia and tools for making them more effective, and many, many more.
In past work, my colleagues and I also looked at understanding why people fall for phishing scams, and applying influence tactics to improve people's willingness to adopt better cybersecurity practices.
Basically, the author is right about his argument if you have a very narrow view of psychology, but there's a lot of really good work on applied (and practical!) psychology that's going on outside of traditional psychology journals.
As a counter-argument, HCI was investigated pretty thoroughly in the 80s and 90s, and operating systems of the time actually implemented those findings well. I feel that modern OS developers seem determined to throw away all those lessons.
Don't get me wrong, I think the modern HCI on mobile phones is remarkably good. But I haven't seen any improvement (except maybe the mouse scroll wheel and having a higher resolution screen) on real computer interfaces since the 90s.
And then you have some really useful psychological theories on attention and user guidance that are used for evil, to create antipatterns. I don't think we're making progress.
I think we should be careful to distinguish the question of whether we are growing knowledge from the question of whether we are using it (and using it positively). If we aren't using it, there is an interesting question of why, but there should be a clear difference between not finding knowledge and not utilizing the knowledge we find.
I've a theory that most UX/UI developers started in their youth as gamers, especially in "twitch" genres, because many interactions for me are now closer to playing Descent than typing a paper into WordPerfect.
> Don't get me wrong, I think the modern HCI on mobile phones is remarkably good.
One of the challenges of psychology is individual variation. Humans have more in common with one another than we have differences, but individuality is a major factor that forces psychologists to look at things statistically unless they are specifically trying to understand or control for individual variance.
I bring this up because my personal subjective opinion is that HCI on modern mobile phones is absolutely atrocious and I don't use a smart phone as much as most people as a result.
I think that when it comes to interacting with a tool, what you are accustomed to makes a world of difference. I grew up with desktop computers and laptops. With keyboards, in other words. As a coder and a *nix "power user", I like command-line interfaces. I like being able to tweak and customize and configure things to my liking. Having to use MacBooks at work has been soul-crushing for me, while others absolutely love the UI of macOS.
I also remember the shift of the mobile revolution. A lot of us at the time were starting to get very annoyed by the creep of mobile design conventions making their way into non-mobile contexts. At the time it was understood that those mobile design decisions were "forced" as a result of the limitations of a mobile device, and it was clear that applying them to non-mobile contexts was a cost-cutting measure (mobile first, in other words).
Although well-designed iconography can transcend language barriers and facilitate communication, I find that the limited resolution of a smartphone screen, which forces designers to use glyphs instead of written text, makes things very confusing for me. I mean, don't get me wrong, I would love to learn ancient Egyptian, but it is often far from intuitive or obvious what these hieroglyphs on the screen are meant to communicate. In other words, the iconography is not well designed IMO. At least not in a way that creates an intuitive experience FOR ME.
But a kid who grew up in a world of smart phones is going to be able to navigate them intuitively because they have years of learning what those esoteric glyphs on the touch screen are. They've had years of "typing" out text messages on tiny touch screens.
On a good mechanical keyboard I can type upwards of 117wpm before I start making mistakes. When trying to text my wife one sentence I need to put aside an afternoon out of my day to get it written correctly. I could get started on how awful auto-correct is but everyone knows this to the point where it's become a cultural meme. Sorry, auto-correct turned "Can you grab me some milk while you're there?" into "fyi the police are here with a search warrant."
So yeah, big tangent off of "HCI on mobile phones is remarkably good." Maybe it is in a relative sense and is as good as it can get... I mean we've had years to iterate and make improvements. But I suspect that a lot of it has to do with people just learning and getting used to haphazard design decisions that just became the defacto for mobile because the tech industry (and business at large if we're being honest) loves to copy.
This is especially important in industrial settings. If a machine operator makes a mistake it's not just expensive, it can cost lives. There were instances where operators actively fed fuel into fires because they misunderstood the situation displayed on the HMI.
Some time ago I found a really nice presentation about the ISA 101 standard covering this topic. The basic idea is: the HMI looks boring when everything is okay; if something goes in a dangerous direction, colors and other elements are used to draw your attention.
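The "boring until abnormal" idea can be sketched in a few lines. This is a toy illustration of the principle, not code from the standard; the threshold names and values are invented:

```python
# Toy sketch of the ISA-101-style "gray until abnormal" display idea:
# normal readings render in muted gray, deviations get attention colors.

def hmi_color(value, low_alarm, high_alarm, low_warn, high_warn):
    """Pick a display color for a process value (thresholds are made up)."""
    if value < low_alarm or value > high_alarm:
        return "red"      # alarm: demands immediate operator attention
    if value < low_warn or value > high_warn:
        return "yellow"   # warning: trending toward a dangerous state
    return "gray"         # normal: deliberately unremarkable

# Example: a hypothetical boiler temperature reading of 180 degrees
print(hmi_color(180, low_alarm=50, high_alarm=200, low_warn=80, high_warn=170))
```

The point is that color carries information only by exception, so an operator scanning the screen is drawn straight to whatever is actually going wrong.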
> there are hundreds of scientific papers about collective intelligence, how to make teams online more effective, how to socialize newcomers to online sites,
I'm curious what research there is about how to create better-socialized groups of people in general; obviously some cultures are more successful in certain areas than others, despite starting with basically the same human genetics--is there any evidence that a culture can learn/adapt in intentional pro-social ways? How does a society learn to be less corrupt over time? How do people decide to stop littering/speeding/parking illegally? How does a society develop a respect for their environment, for their neighbors, for future generations, etc.?
This is a really great question, and well beyond my areas of expertise. What I can point you to is this excellent book by my colleague Bob Kraut and several of his colleagues, entitled Building Successful Online Communities: Evidence-Based Social Design. It summarizes a lot of empirical research into design claims about how to socialize newcomers, increase contributions, improve the quality of contributions, and more.
One of my favorite books that I learned about from my colleagues is Influence by Robert Cialdini. It looks at how to use known social influence tactics to change people's behaviors. Ideally, these would be used for things that society widely regards as positive (e.g. less littering), though these have also been used for phishing attacks and other dark patterns.
It's really telling how it's much easier to progress when things that you are working on are directly measurable rather than self-reported or estimated through proxies.
Also, progress in any science is contingent on progress in technology. There's only so much you can figure out before you'll need new, more precise ways of measuring things to go any further.
That sounds interesting. Would you mind sharing where you would point me if I wanted to follow up with the latest research?
Something like arXiv but for psychology? Unless it's only in the magazines ("Psychology Today")? I'd be happy to hear the magazine names too, if you'd be so keen to share.
Roberta Klatzky is a perceptual psychologist who has done a lot of work on haptics. One of her ongoing projects is augmented cognition through wearables, e.g. giving people instructions in heads-up displays based on the current state of things (e.g. it looks like you successfully removed the lug nuts, here's your next step in changing the car tire).
https://scholar.google.com/scholar?hl=en&as_sdt=0%2C39&q=rob...
In the broader category of cognition, I think we understand a bit better how people rationalize their decisions. How many things we do almost entirely on pure reflex and then manufacture a story that explains it without sounding crazy or just saying “I don’t know.”
One HUGE thing it's missing, though, is the deliberate hacking of results to reach statistical significance. I'm willing to bet that the results of a majority of psychology studies are not reproducible.
In another lifetime, I worked as a research assistant at a very large, well-funded, Ivy League psychology lab. Talk about p-hacking. Our PI would go so far as to deny potential candidates entry into our study, as well as the therapy, simply because the PI thought these candidates wouldn't help the therapy our PI developed look good in our study. Note, these candidates did meet all our OFFICIAL study criteria for entry into the study.
"I'm willing to bet that the results of a majority of psychology studies are not reproducible"
Indeed
> Study replication rates were 23% for the Journal of Personality and Social Psychology, 48% for Journal of Experimental Psychology: Learning, Memory, and Cognition, and 38% for Psychological Science. Studies in the field of cognitive psychology had a higher replication rate (50%) than studies in the field of social psychology (25%).
That is appalling. IMHO you might as well call an area of study that has less than 50% reproducibility for studies published in "credible" journals a pseudoscience.
There probably should be room in some of the social sciences for flexibility like this as long as it's called out right at the top as part of the experiment design so that the reader knows this is exploratory initial research being done for directional purposes - and that's it.
Unfortunately, as History, Philosophy, and the other liberal arts disciplines became 'sciencified', the ability to deliberate on something rigorously, but still with enough room to explore, has been sacrificed in favor of trying to be more like the physical sciences.
Honestly after reading that it seems impossible to really conclude anything...as it's just full of conflicting results...is that innately fraud? No but certainly careers/$ have been made from biased/agenda-driven interpretations which seems fraudulent.
If someone collects data and the study outcome is not preregistered, you can assume p-hacking. It would be implausible not to. And in most fields, preregistration is not common. (And even if there's preregistration, regularly people just switch their outcomes, and nobody cares.)
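A quick simulation makes it concrete why unregistered analyses invite p-hacking: if a study measures, say, 20 outcomes and reports whichever one clears p < 0.05, the chance of at least one false positive is large even when no real effect exists. The numbers here are illustrative, not from any particular study:

```python
# Simulate null studies (no true effect anywhere). Each outcome's p-value
# is uniform on [0, 1] under the null, so testing many outcomes and
# reporting any p < alpha inflates the false-positive rate dramatically.
import random

random.seed(0)

def fraction_with_false_positive(n_studies=2000, n_outcomes=20, alpha=0.05):
    """Fraction of null studies that 'find' at least one significant effect."""
    hits = 0
    for _ in range(n_studies):
        if any(random.random() < alpha for _ in range(n_outcomes)):
            hits += 1
    return hits / n_studies

rate = fraction_with_false_positive()
print(round(rate, 2))  # close to 1 - 0.95**20, i.e. roughly 0.64
```

Preregistration closes exactly this loophole: the single outcome (and analysis) is fixed before the data can suggest which of the 20 comparisons to report.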
And to play the devil's advocate: psychology is probably doing better these days than most other fields, because it's been the posterchild example of the replication crisis.
i think that before a science can be "a science" with powerful theories and universal laws, there needs to be a long period of existing as a proto-science where people aren't doing experiments and are just observing and describing.
before darwin, you had to have linnaeus just describing and cataloging animals; before {astronomy theory guy}, you had to have {people just tracking and observing stars}.
psychology may have tried to jump the gun a bit by attempting to become theoretical before there were a few generations of folks sitting around quantifying and classifying human behavior.
this was definitely true in cognitive neuroscience. once folks got their hands on fMRI, this entire genre of research popped up that was "replicate an existing psychology study in the scanner to confirm that they used their brain". imo, a lot more was learned by groups that stepped back from theory and just started collecting data and discovering "resting state networks" in the brain.
I suspect that after 400 years of the scientific method, that we may be reaching the limits of single variable experiments in a number of fields. Statistical methods can find those patterns, and as we advance in those areas I expect us to advance in messy sciences like psychology. We’ll be able to more reliably look at people or other chaotic systems and see how three inputs work together to create a single effect.
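A deliberately tiny, invented example of why one-variable-at-a-time experiments can miss such effects: if an outcome only appears when three inputs co-occur, each single-variable experiment looks like a null result, and only a joint manipulation (or a statistical model with interaction terms) reveals it.

```python
# Hypothetical outcome driven purely by a three-way interaction:
# the effect exists only when all three inputs are present together.

def outcome(a, b, c):
    """Invented effect function: nonzero only if a, b, and c all co-occur."""
    return a * b * c

# One-variable-at-a-time experiments (other inputs held at baseline 0):
single_runs = [outcome(1, 0, 0), outcome(0, 1, 0), outcome(0, 0, 1)]

# Joint manipulation of all three inputs:
joint_run = outcome(1, 1, 1)

print(single_runs, joint_run)  # the single-variable runs all show zero effect
```

With enough data, regression models with interaction terms can recover effects like this, which is the sense in which statistical methods extend past single-variable experimentation.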
There's a feedback loop between technology and science. Without progress in science there can't be progress in technology. But also, without progress in technology there can't be progress in science.
partially disagree with this, every proto-science historically had a bunch of wrong but highly sophisticated theories. medicine, alchemy (as mentioned in the article), physics, biology (Aristotle), astronomy. for some reason it seems you need the wrong theories to organize the empirical data.
I actually think Freud’s elaborate mental structures have some of this feeling to them.
> for some reason it seems you need the wrong theories to organize the empirical data
There's a somewhat well known article on this by Isaac Asimov: the Relativity of Wrong
The scientific process is really misunderstood. People think you use it to find truth, but actually you use it to reject falsehoods. The consequence is that you narrow in on the truth, so the goals look identical, but the distinction does matter (at least if you want to understand why that happens and why it's okay that science is wrong many times). In fact, it's always wrong, but it gets less wrong (I'm certain there's a connection between that website and this well-known saying).
He's well known for his sci-fi, but he got a PhD in chemistry, taught himself astrophysics, and even published in the area. He even wrote physics texts. I found Understanding Physics quite enjoyable when I was younger; it isn't at the level of complexity I saw while getting my degree, but then it isn't aimed at university students.
Anyways, I'm just saying, he's speaking as an insider and I do think this is something a lot more people should read.
> there needs to be a long period of existing as a proto-science where people aren't doing experiments and are just observing and describing.
I think you misunderstand science.
> before darwin
And this strengthens my confidence.
There was an understanding of natural selection even back to antiquity. How could there not be? Did people not tame the animals and plants? These are experiments, and they saw the results.
There were great contributions to astronomy long before Kepler. There were many experiments that influenced the whole field. There was a lot of important chemistry that happened long before Lavoisier (conservation of mass) and Dalton (atomic model).
The proto-sciences are nothing to scoff at. They aren't useless and they weren't ill-founded. They were just... noisy (and science is naturally a noisy process, so I mean *NOISY*). There's nothing inherently wrong with that. The only thing wrong is not recognizing the noise and placing unfounded confidence in results. That famous conversation between Dyson and Fermi discussing von Neumann's elephant wasn't saying that Dyson didn't do hard work or that the work he did had no utility; it was that you can't place confidence in a model derived from empirical results without a strong underlying theory. You'd never get to that if you only observed, because you'd only end up making the same error Dyson did.
Science, in its nature, is not about answers, it is about confidence in a model that approximates answers. These two things look identical but truth is unobtainable, there is always an epsilon bound. So it is about that epsilon! Your confidence! So experiments that don't yield high confidence results aren't useless, but they are rather just the beginning. They give direction to explore. Because hey, if I'm looking for elephants I'd rather start looking where someone says they saw a big crazy monster than randomly pick a point on the globe. But I'm also not going to claim elephants exist just because I heard someone talking about something vaguely matching the description. And this is naturally how it works. We're exploring into the unknown. You gotta follow hunches and rumors, because it is better than nothing. But you won't get anywhere from observation alone. Not to mention that it is incredibly easy to be deceived by your observations. You will find this story ring true countless times in the history of science. But better models always prevail because we challenge the status quo and we take risks. But the nature of it is that it is risky (noisy). There's nothing wrong with that. You just have to admit it.
> There was an understanding of natural selection even back to antiquity. How could there not be? Did people not tame the animals and plants? These are experiments, and they saw the results.
Isn't this OP's point, though? People saw results, and even worked with what they saw, but underlying theories were all over the place and it wasn't until the time of Mendel that we started to have even the most rudimentary sense of rigor or scientific method when it came to the field that we now know as genetics. And the contention is that what came before Darwin and Mendel wouldn't stand up as rigorous science in our eyes, but was nevertheless the crucial foundation for what became the field of genetics.
> There was an understanding of natural selection even back to antiquity. How could there not be? Did people not tame the animals and plants? These are experiments, and they saw the results.
No, people did not know natural selection before Darwin. He spent decades collecting and then analyzing data gathered in the Galapagos Islands before he made his breakthrough.
It's pure hindsight bias to think that you can go from "I bred the fattest chickens together, who made a fatter chicken" to "Humans evolved from apes who evolved from single-cellular organisms". For millennia, people from all cultures believed that God created humans from the void. In the absence of data, that's as good a guess as you can have. If Darwin concocted his theory of natural selection before he had his data, no one would have believed him. By dismissing the theory of natural selection as something that was "obvious" pre-Darwin you are dismissing his life's work.
By forcing formal study of the mind into the constrained methods used for studying the physical world, it allows the government and profit/power seekers to be the only actors free to use the methods that work best.
We've learned that it hasn't produced much research that holds up to replication, that the vast majority of research never gets properly replicated at all anyway, and that despite the endless meta-analysis of glorified internet surveys people's mental health hasn't been improving.
We're certainly learning how to use psychology to manipulate people, though. Advertising, dark patterns, propaganda, and behavioral conditioning just wouldn't be the same without psychology research. We're performing research on children to learn the youngest age at which they can recognize a brand name (age 3 last I checked), or how best to keep them hooked playing a video game/child casino, and that research is making companies money hand over fist.
> I recently read The Secrets of Alchemy by Lawrence Principe, which I loved, especially because he tries to replicate ancient alchemical recipes in his own lab. And sometimes he succeeds! For instance, he attempts to make the “sulfur of antimony” by following the instructions in The Triumphal Chariot of Antimony (Der Triumph-Wagen Antimonii), written by an alchemist named Basil Valentine sometime around the year 1600. At first, all Principe gets is a “dirty gray lump”. Then he realizes the recipe calls for “Hungarian antimony,” so instead of using pure lab-grade antimony, he literally orders some raw Eastern European ore, and suddenly the reaction works! It turns out the Hungarian dirt is special because it contains a bit of silicon dioxide, something Basil Valentine couldn’t have known.
> No wonder alchemists thought they were dealing with mysterious forces beyond the realm of human understanding. To them, that’s exactly what they were doing! If you don’t realize that your ore is lacking silicon dioxide—because you don’t even have the concept of silicon dioxide—then a reaction that worked one time might not work a second time, you’ll have no idea why that happened, and you’ll go nuts looking for explanations. Maybe Venus was in the wrong position? Maybe I didn’t approach my work with a pure enough heart? Or maybe my antimony was poisoned by a demon!
> An alchemist working in the year 1600 would have been justified in thinking that the physical world was too hopelessly complex to ever be understood—random, even. One day you get the sulfur of antimony, the next day you get a dirty gray lump, nobody knows why, and nobody will ever know why. And yet everything they did turned out to be governed by laws—laws that were discovered by humans, laws that are now taught in high school chemistry. Things seem random until you understand ‘em.
Well, this example doesn't just fail to support the argument, but undercuts it. Basil successfully identified the kind of antimony that would work, -despite- having no concept of silicon dioxide. He did not write down something like "not all kinds of antimony work for this recipe, so get a bunch of different kinds and try them all" -- that, or a stronger version ("sometimes the recipe fails, we don't know why"), would support the author's point.
So we're left with the author trying to argue that this alchemist thought the world was "too hopelessly complex to ever be understood" on the basis of ... the alchemist correctly identifying the ingredient that would make the recipe work.
I’m floored by the suggestion that professional training as a therapist does not produce a statistically significant improvement in ability to treat mental health conditions.
It’s interesting that one comparison they offered was between advice from a random professor versus a session with a therapist. I can remember several helpful conversations with kind, older professors during difficult times. Maybe we should identify people whose life experiences naturally make them good counselors and encourage them to do more of it, instead of making young adults pay $200k for ineffective education and a stamp saying they can charge for therapy.
Speaking as a tenured professor of clinical psychology, this part kind of irked me a bit. It's not exactly false but it's a little misleading (like some other parts of the essay).
Lots to say about it but this is a finding that has been reported intermittently for decades. However, it's being spun a little misleadingly.
Note that the author says the untrained professors were selected for their ability to be warm and empathetic. Not everyone is (we all know that), and even trainees learn the basics of therapy very early, like immediately in their first term. People going into clinical psychology are also somewhat self-selected for empathy to start with.
This research is kind of being taken out of context too. Wampold, one of the authors cited (who I have the greatest respect for) is very big on "nonspecific factors", meaning things like empathy, good social skills, and so forth. His studies in general tend to be focused not on "does training matter?" but "do specific therapy protocols matter, or is it about the clinician's social/relationship skills?"
If you want some kind of medical standards, you can't just say "oh it's ok, everyone can just be warm and empathetic". You have to train on it, grade it, hold it to some standard. Otherwise you get manipulative, self-serving therapists who do harm in the long run (the length of a study versus real settings is another issue).
Another issue is that many of these issues are not unique to psychology. In lots of medical scenarios it's been shown that the amount of training needed to competently do a wide variety of procedures is lower than current standards in the US require. Experienced clinicians in many fields have acquired biases that interfere with practice, young trainees are much more worried about performance and are more open-minded and so forth (on average a little; not trying to stereotype).
A huge, enormous volume of studies over many years have shown that therapy works compared to all sorts of placebos and controls; that some therapists are reliably better than others; but that what makes therapy "work" overall is not what protocol-driven therapies (CBT etc.) assert. It's not so much that training isn't necessary, it's that the field has been obsessed with scientific details that, although well-intended, don't matter, and healthcare in general is full of phenomena that we'd rather not admit.
Thanks for writing this, appreciate the point of view of someone who knows what they’re talking about. I guess my gripe is that the time-consuming and expensive training process isn’t able to reliably elevate a random young practitioner to the helpfulness level of “wise and patient professor who is offering their time to mentor and counsel even though it’s not in their job description”… but that is in fact a very high bar to hit.
It’s not surprising that some people are naturally good therapists just from a lifetime of observing people, and also not surprising that some of those people end up in teaching-focused academic jobs.
I guess you can train people to be empathetic if they’re motivated in the right ways but just lacking the skill. It makes sense that it’s a big part of counselor training.
There was other research indicating that the therapeutic framework the therapist uses has no influence on the probability of a positive outcome. What mattered more is whether a therapist was able to form a meaningful connection with a patient.
> professional training as a therapist does not produce a statistically significant improvement in ability to treat mental health conditions.
It produces a statistically significant improvement, just not for people who are already gifted at it. You can take people who aren't gifted and train them to be no worse than the gifted ones. It is not much, but it is not nothing either.
It seems that gifted people with zero training are, on average, just as good at this activity as people with years of formal training who self-selected to undergo that training. I'm not sure if that's true of any other activity.
Where did it say the education was ineffective? There are reasons to believe it is not the only path to being effective at helping others, but that does not invalidate that if you spend a few years learning tools and techniques and pattern matching to behaviors, you have a valid toolkit in front of you for being a therapist.
Now, it is a valid argument whether or not it should be required (and there is no requirement to label yourself as a "coach"), and the price tag on it is of course always a consideration. But being dismissive of higher education is just as silly as being overly dependent on it.
Part of the problem is the therapists (and medical practitioners in general) are often forbidden from doing the thing they were trained to do for a variety of reasons: risk and liability, patient turnaround, standardization. These things can get in the way of doing the right thing in the times where that is known. That’s before considering the ambiguous cases.
I think a lot of people just never find the right therapist and then assume all therapists are terrible.
It’s interesting because even the most staunch opponents of mental health talk therapy have people in their life they talk to, they just don’t consider them therapists.
Well, sure, but "people in their life that they talk to" aren't really therapists. They're functioning quite differently - they can have a personal involvement that a therapist, ethically, isn't permitted to have. The sorts of things someone talks to with their friends overlaps with but is also often quite distinct from the sort of thing a therapist is probing for. There's no direct financial incentive to keep the "patient" coming. And they're making no claim to, broadly, help someone improve their overall mental health - people vent to their friends because it feels nice, not because it's necessarily constructive.
Although I agree it's a matter of finding the right therapist, I think that undersells the problem a fair bit.
There are large barriers to trialing a lot of therapists, and finding the right one can be like finding a needle in a haystack. Therapy is quite expensive, and many therapists already have a full caseload. And the pool of therapists is very homogeneous: essentially, a ton of well-off white women who might not have the tools or shared experiences to facilitate a helpful therapeutic alliance with individuals coming from a broader background than they're comfortable with.
But this begs the same question: if mental illness really is what psychologists say it is, and if treatment is a learnable skill, then the practitioner shouldn't matter that much assuming his training was good.
But most evidence suggests that some "je ne sais quoi" has to exist in the therapeutic relationship.
In other words, Freud was right about Transference as a necessary ingredient to psychotherapy (and probably about a lot else that is still too controversial to talk about or pass IRB muster).
In my experience, most staunch opponents of mental health talk therapy are people who have serious issues and really do not want them to be talked about and fixed. Issues like bad anger management when they want to hold onto their anger, or an eating disorder which makes you not want to heal, because you might get cured and fat.
There is such a thing as being unhappy about actual therapy that did nothing or harmed you. But the staunch opponents you see, with tons of strong opinions or fears, have never been to therapy and have only a movie understanding of it.
"Just one more therapist bro" is what defenders of modern psychology use. It is always your fault the therapist didn't work out. Always your fault you aren't trying hard enough. There can never be systemic issues.
I don't mind this idea at all! I'm the abyss staring into itself.
That said I don't think digging into skulls until we identify the neurons that cause the big sad or teaching people ways to cope with their awful lives is worth much. I want psychology to help me understand (a maybe terrible) existence, not to solve it. Something like overturning our intuitions is perfect. If tomorrow they make a flawless anti-depressant that will let me endure misery I argue we'll be worse off.
> There’s a thought that’s haunted me for years: we’re doing all this research in psychology, but are we learning anything?
Advancements in PTSD, dissociation, treatment-resistant depression, and attachment disorders are astounding. We know a lot more about how people work.
Psychology has always been a person centered field - humans are complex, and what it does is more akin to QA than coding. It’s individualized. It doesn’t love studies because the underlying mechanism or traumas can be different even for people who went through the same things.
Unfortunately advancements are not evenly distributed. There is an army of CBT therapists who work in one method that works for some but not the majority. Finding a practitioner is a crapshoot even when looking for specialists.
The DSM is functionally treated as a billing manual, and to be paid practitioners need to jump through a long series of hoops. The medical billing side can’t deal with the complexity.
All these aside, there are people who are really, truly healing in ways they wouldn't without the field. There are ideas that propagate through human culture and make human behavior more understandable.
Given that “humans are complex” and “it’s individualized”, would advancements be greater and faster if we just allowed clinicians and scientists to talk things out, instead of coming up with “studies” which pretend to be “science” with a low reproducibility rate (and no publishing of null results)?
You may be looking for qualitative data and reporting, which is on the rise!
The term "evidence based" is bandied about all the time because insurance companies don't want to cover treatments that aren't considered standard. The problem is everyone is different on some level, and we often don't have the resources to get to the root of any problems. So treatments that may work extremely effectively for some may be thrown out because they don't work effectively for everyone, and can be contraindicated. Somatic therapists especially have to deal with this. Effective treatments are often outside of the "evidence based" tests, which can be based entirely around showing symptom improvement. This creates a catch-22 where if you lessen the restrictions you get a lot of crackpot providers, where if you keep them tight you keep people from being able to access treatments that may work well for them.
There are also competing models for mental problems and approaches - the psychiatric model is similar to a doctor giving treatment for an illness. Its adherents tend to believe in biological determinism, i.e. if a parent had an illness then it's likely you will have one too. The biopsychosocial model is a little more holistic about people's experiences, physical environment, and upbringing. The trauma model is the one I personally subscribe to more; it conceptualizes mental health problems as understandable reactions to traumatic events that are conditioned within us.
There are a lot of people who get real relief from outside the mainstream providers, and there are a lot of people for whom the standard providers have not been able to help. I think that is part of why there's so much activity around finding better models right now.
And then you have some real useful psychological theories on attention and user-guiding that are used for evil to create antipatterns. I don't think we're making progress.
One of the challenges of psychology is individual variation. Humans have more in common with one another than we have differences, but individuality is a major factor that forces psychologists to look at things statistically unless they are specifically trying to understand or control for individual variance.
I bring this up because my personal subjective opinion is that HCI on modern mobile phones is absolutely atrocious and I don't use a smart phone as much as most people as a result.
I think that when it comes to interacting with a tool, what you are accustomed to makes a world of difference. I grew up with desktop computers and laptops - with keyboards, in other words. As a coder and a *nix "power user", I like command-line interfaces. I like being able to tweak and customize and configure things to my liking. Having to use MacBooks at work has been soul-crushing for me, while others absolutely love the UI of macOS.
I also remember the shift of the mobile revolution. A lot of us at the time were starting to get very annoyed by the creep of mobile design conventions making their way into non-mobile contexts. At the time it was understood that those mobile design decisions were "forced" as a result of the limitations of a mobile device, and it was clear that applying them to non-mobile contexts was a cost-cutting measure (mobile first, in other words).
Although well designed iconography can transcend language barriers and facilitate communication, I find that the limited resolution of a smart phone screen forcing designers to use glyphs instead of written text is very confusing to me. I mean, don't get me wrong, I would love to learn ancient Egyptian, but it is often far from intuitive or obvious what these hieroglyphs on the screen are meant to communicate to me. In other words, the iconography is not well designed IMO. At least not in a way that creates an intuitive experience FOR ME.
But a kid who grew up in a world of smartphones is going to be able to navigate them intuitively because they have years of learning what those esoteric glyphs on the touch screen mean. They've had years of "typing" out text messages on tiny touch screens.
On a good mechanical keyboard I can type upwards of 117wpm before I start making mistakes. When trying to text my wife one sentence I need to put aside an afternoon out of my day to get it written correctly. I could get started on how awful auto-correct is but everyone knows this to the point where it's become a cultural meme. Sorry, auto-correct turned "Can you grab me some milk while you're there?" into "fyi the police are here with a search warrant."
So yeah, big tangent off of "HCI on mobile phones is remarkably good." Maybe it is in a relative sense and is as good as it can get... I mean, we've had years to iterate and make improvements. But I suspect a lot of it has to do with people just learning and getting used to haphazard design decisions that became the de facto standard for mobile, because the tech industry (and business at large, if we're being honest) loves to copy.
Some time ago I found a really nice presentation about the ISA 101 standard covering this topic. The basic idea: the HMI looks boring when everything is okay; if something heads in a dangerous direction, colors and other elements are used to draw your attention.
I'm curious what research there is about how to create better-socialized groups of people in general; obviously some cultures are more successful in certain areas than others, despite starting with basically the same human genetics--is there any evidence that a culture can learn/adapt in intentional pro-social ways? How does a society learn to be less corrupt over time? How do people decide to stop littering/speeding/parking illegally? How does a society develop a respect for their environment, for their neighbors, for future generations, etc.?
https://direct.mit.edu/books/monograph/2912/Building-Success...
You might also look into research on pro-social behaviors. https://en.wikipedia.org/wiki/Prosocial_behavior
One of my favorite books that I learned about from my colleagues is Influence by Robert Cialdini. It looks at how to use known social influence tactics to change people's behaviors. Ideally, these would be used for things that society widely regards as positive (e.g. less littering), though these have also been used for phishing attacks and other dark patterns.
https://en.wikipedia.org/wiki/Robert_Cialdini
Also, progress in any science is contingent on progress in technology. There's only so much you can figure out before you need new, more precise ways of measuring things to go any further.
That sounds interesting - would you mind sharing where you would point me if I wanted to follow up on the latest research?
Something like arXiv but for psychology? Or is it only in magazines ("Psychology Today")? I'd be happy to hear the magazine names too, if you'd be so kind as to share.
Thank you very much!
David Lindlbauer is a faculty at CMU who applies a lot of perceptual psych to his research on VR. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C39&q=dav...
Roberta Klatzky is a perceptual psychologist that has done a lot of work on haptics. One of her ongoing projects is augmented cognition through wearables, e.g. giving people instructions in heads up displays based on the current state of things (e.g. it looks like you successfully removed the lug nuts, here's your next step in changing the car tire). https://scholar.google.com/scholar?hl=en&as_sdt=0%2C39&q=rob...
But I feel that anyone who thinks psychology will be fully predictable, or even up to the standards of today's medicine, is in for a disappointment
(but oh well, they can still run their experiments on grad students or Amazon Mechanical Turk workers and get another grant)
One HUGE thing it's missing, though, is the deliberate hacking of results to reach statistical significance. I'm willing to bet that the results of a majority of psychology studies are not reproducible.
In another lifetime, I worked as a research assistant at a very large, well-funded, Ivy League psychology lab. Talk about p-hacking. Our PI would go so far as to deny potential candidates entry into our study, as well as the therapy, simply because the PI thought these candidates wouldn't help the therapy our PI developed look good in our study. Note, these candidates did meet all our OFFICIAL study criteria for entry into the study.
Indeed
> Study replication rates were 23% for the Journal of Personality and Social Psychology, 48% for Journal of Experimental Psychology: Learning, Memory, and Cognition, and 38% for Psychological Science. Studies in the field of cognitive psychology had a higher replication rate (50%) than studies in the field of social psychology (25%).
https://en.wikipedia.org/wiki/Replication_crisis
Unfortunately, as History, Philosophy, and the other liberal arts disciplines became 'sciencified', the ability to deliberate on something rigorously but still with enough room to explore has been sacrificed in favor of trying to be more like the physical sciences.
Here's an infamous example: https://en.wikipedia.org/wiki/Milgram_experiment#Validity
Honestly, after reading that it seems impossible to really conclude anything, as it's just full of conflicting results. Is that innately fraud? No, but certainly careers/$ have been made from biased/agenda-driven interpretations, which seems fraudulent.
If someone collects data and the study outcome is not preregistered, you can assume p-hacking. It would be implausible not to. And in most fields, preregistration is not common. (And even if there's preregistration, regularly people just switch their outcomes, and nobody cares.)
And to play the devil's advocate: psychology is probably doing better these days than most other fields, because it's been the posterchild example of the replication crisis.
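The outcome-switching effect described above is easy to quantify. Under the null hypothesis a well-calibrated test's p-value is uniform on (0, 1), so a toy simulation (my own sketch, not from any study in this thread) shows how reporting the best of several candidate outcomes inflates the false-positive rate well past the nominal 5%:

```python
import random

random.seed(0)

ALPHA = 0.05
N_STUDIES = 100_000
K_OUTCOMES = 5  # candidate outcomes measured per study

# Under the null, each test's p-value is Uniform(0, 1).
honest_hits = 0    # preregistered: only the first outcome counts
switched_hits = 0  # outcome switching: report whichever looks best
for _ in range(N_STUDIES):
    pvals = [random.random() for _ in range(K_OUTCOMES)]
    if pvals[0] < ALPHA:
        honest_hits += 1
    if min(pvals) < ALPHA:
        switched_hits += 1

print(f"preregistered false-positive rate: {honest_hits / N_STUDIES:.3f}")
print(f"outcome-switched rate:             {switched_hits / N_STUDIES:.3f}")
```

With k candidate outcomes the expected rate under switching is 1 - 0.95^k, so even five outcomes takes you from about 5% to about 23% false positives, with no fraud required beyond picking which result to write up.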
before Darwin, you had Linnaeus just describing and cataloging animals; before Kepler, you had Tycho Brahe just tracking and observing stars.
psychology may have tried to jump the gun a bit by attempting to become theoretical before there were a few generations of folks sitting around quantifying and classifying human behavior.
this was definitely true in cognitive neuroscience. once folks got their hands on fMRI, this entire genre of research popped up that was "replicate an existing psychology study in the scanner to confirm that they used their brain". imo, a lot more was learned by groups that stepped back from theory and just started collecting data and discovering "resting state networks" in the brain.
Modern industry would not exist without it.
The problem with psychology experiments is that the mind has many hidden variables which cannot be easily accounted for.
I actually think Freud’s elaborate mental structures have some of this feeling to them.
The scientific process is really misunderstood. People think you use it to find truth, but actually you use it to reject falsehoods. The consequence is that you narrow in on the truth, so the goals look identical, but the distinction does matter, at least if you want to understand why science is wrong many times and why that's okay. In fact, it's always wrong, but it gets less wrong (I'm certain there's a connection between that website and this well-known saying).
He's well known for his sci-fi, but he got a PhD in chemistry, taught himself astrophysics, and even published in the area. He also wrote physics texts. I found Understanding Physics quite enjoyable when I was younger; it isn't the level of complexity I saw while getting my degree, but then it's not aimed at university students.
Anyways, I'm just saying, he's speaking as an insider and I do think this is something a lot more people should read.
https://hermiene.net/essays-trans/relativity_of_wrong.html
I believe there's a copy of Understanding Physics here but currently offline: https://archive.org/details/asimov-understanding-physics
There was an understanding of natural selection even back in antiquity. How could there not be? Did people not domesticate animals and plants? These were experiments, and people saw the results.
There were great contributions to astronomy long before Kepler. There were many experiments that influenced the whole field. There was a lot of important chemistry that happened long before Lavoisier (conservation of mass) and Dalton (atomic model).
The proto-sciences are nothing to scoff at. They aren't useless and they weren't ill-founded. They were just... noisy (and science is naturally a noisy process, so I mean *NOISY*). There's nothing inherently wrong with that. The only thing wrong is not recognizing the noise and placing unfounded confidence in results. That famous conversation between Dyson and Fermi discussing von Neumann's elephant wasn't saying that Dyson didn't do hard work or that the work he did had no utility; it was that you can't place confidence in a model derived from empirical results without a strong underlying theory. You'd never get to that if you only observed, because you'd only end up making the same error Dyson did.
Science, in its nature, is not about answers; it is about confidence in a model that approximates answers. These two things look identical, but truth is unobtainable - there is always an epsilon bound. So it is about that epsilon! Your confidence! Experiments that don't yield high-confidence results aren't useless; they are just the beginning. They give direction to explore. Because hey, if I'm looking for elephants, I'd rather start looking where someone says they saw a big crazy monster than randomly pick a point on the globe. But I'm also not going to claim elephants exist just because I heard someone talking about something vaguely matching the description.

And this is naturally how it works. We're exploring into the unknown. You have to follow hunches and rumors, because it is better than nothing. But you won't get anywhere from observation alone. Not to mention that it is incredibly easy to be deceived by your observations. You will find this story ring true countless times in the history of science. But better models always prevail, because we challenge the status quo and take risks. The nature of it is that it is risky (noisy). There's nothing wrong with that. You just have to admit it.
No, people did not know about natural selection before Darwin. He spent decades collecting and then analyzing data from the Galápagos Islands before he made his breakthrough.
It's pure hindsight bias to think that you can go from "I bred the fattest chickens together, who made a fatter chicken" to "Humans evolved from apes who evolved from single-cellular organisms". For millennia, people from all cultures believed that God created humans from the void. In the absence of data, that's as good a guess as you can have. If Darwin concocted his theory of natural selection before he had his data, no one would have believed him. By dismissing the theory of natural selection as something that was "obvious" pre-Darwin you are dismissing his life's work.
I wonder if this is purely a coincidence.
We're certainly learning how to use psychology to manipulate people, though. Advertising, dark patterns, propaganda, and behavioral conditioning just wouldn't be the same without psychology research. We're performing research on children to learn the youngest age at which they can recognize a brand name (age 3, last I checked) and how best to keep them hooked on a video game/child casino, and that research is making companies money hand over fist.
> No wonder alchemists thought they were dealing with mysterious forces beyond the realm of human understanding. To them, that’s exactly what they were doing! If you don’t realize that your ore is lacking silicon dioxide—because you don’t even have the concept of silicon dioxide—then a reaction that worked one time might not work a second time, you’ll have no idea why that happened, and you’ll go nuts looking for explanations. Maybe Venus was in the wrong position? Maybe I didn’t approach my work with a pure enough heart? Or maybe my antimony was poisoned by a demon!
> An alchemist working in the year 1600 would have been justified in thinking that the physical world was too hopelessly complex to ever be understood—random, even. One day you get the sulfur of antimony, the next day you get a dirty gray lump, nobody knows why, and nobody will ever know why. And yet everything they did turned out to be governed by laws—laws that were discovered by humans, laws that are now taught in high school chemistry. Things seem random until you understand ‘em.
Well, this example doesn't just fail to support the argument; it undercuts it. Basil successfully identified the kind of antimony that would work, -despite- having no concept of silicon dioxide. He did not write down something like "not all kinds of antimony work for this recipe, so get a bunch of different kinds and try them all" -- that, or a stronger version ("sometimes the recipe fails, we don't know why"), would support the author's point.
So we're left with the author trying to argue that this alchemist thought the world was "too hopelessly complex to ever be understood" on the basis of ... the alchemist correctly identifying the ingredient that would make the recipe work.
It’s interesting that one comparison they offered was between advice from a random professor versus a session with a therapist. I can remember several helpful conversations with kind, older professors during difficult times. Maybe we should identify people whose life experiences naturally make them good counselors and encourage them to do more of it, instead of making young adults pay $200k for ineffective education and a stamp saying they can charge for therapy.
Lots to say about it but this is a finding that has been reported intermittently for decades. However, it's being spun a little misleadingly.
Note that the author says the untrained professors were selected for their ability to be warm and empathetic. That's not everyone (we all know not everyone is warm and empathetic), and even trainees learn the basics of therapy very early, like immediately in their first term. People going into clinical psychology are sort of self-selected for empathy to start with.
This research is kind of being taken out of context too. Wampold, one of the authors cited (who I have the greatest respect for) is very big on "nonspecific factors", meaning things like empathy, good social skills, and so forth. His studies in general tend to be focused not on "does training matter?" but "do specific therapy protocols matter, or is it about the clinician's social/relationship skills?"
If you want some kind of medical standards, you can't just say "oh it's ok, everyone can just be warm and empathetic". You have to train on it, grade it, hold it to some standard. Otherwise you get manipulative, self-serving therapists who do harm in the long run (the length of a study versus real settings is another issue).
Another issue is that many of these issues are not unique to psychology. In lots of medical scenarios it's been shown that the amount of training needed to competently do a wide variety of procedures is lower than current standards in the US require. Experienced clinicians in many fields have acquired biases that interfere with practice, young trainees are much more worried about performance and are more open-minded and so forth (on average a little; not trying to stereotype).
A huge, enormous volume of studies over many years has shown that therapy works compared to all sorts of placebos and controls; that some therapists are reliably better than others; but that what makes therapy "work" overall is not what protocol-driven therapies (CBT etc.) assert. It's not so much that training isn't necessary; it's that the field has been obsessed with scientific details that, although well-intended, don't matter, and healthcare in general is full of phenomena that we'd rather not admit.
It’s not surprising that some people are naturally good therapists just from a lifetime of observing people, and also not surprising that some of those people end up in teaching-focused academic jobs.
I guess you can train people to be empathetic if they’re motivated in the right ways but just lacking the skill. It makes sense that it’s a big part of counselor training.
What kind of things are you talking about?
It produces a statistically significant improvement, just not in people who are already gifted at it. You can take people who aren't gifted and train them to be no worse than those who are. It is not much, but it is not nothing either.
Now, it is a valid argument whether or not it should be required (and there is no requirement to label yourself as a "coach"), and the price tag on it is of course always a consideration. But being dismissive of higher education is just as silly as being overly dependent on it.
you forgot to add `insurance company rules` to your list.
It’s interesting because even the most staunch opponents of mental health talk therapy have people in their life they talk to, they just don’t consider them therapists.
There are large barriers to trialing a lot of therapists, and finding the right one can be like finding a needle in a haystack. Therapy is quite expensive, and many therapists already have a full caseload. And the pool of therapists is very homogeneous: essentially, a ton of well-off white women who might not have the tools or shared experiences to facilitate a helpful therapeutic alliance with individuals coming from a broader background than they're comfortable with.