“The Last Word on Nothing”

I love the tagline of this blog, a quote I’ve never heard before:

“Science says the first word on everything, and the last word on nothing” – Victor Hugo

Looking forward to following.


“Neuroscience will become something that is apprehensible to ordinary people”

So says the recently deceased Sherwin Nuland in an episode of On Being.

He says that, initially, the general public turned away from genetics.  But “bit by bit people began to understand DNA.”

I’m not sure how well most people understand DNA.  I presume most people have some vague understanding that DNA is somehow linked to all that stuff we inherit from our parents.  But how many people understand that DNA codes for proteins?  Or that it consists of a paired code?  Or even that it resides in the nucleus of our cells?

Neuroscience seems even more difficult to understand on an intuitive level.  Will we ever understand how our brains work?  Even if we believe – rationally – that our everyday experiences have a basis in the biology of our brains, won’t the connection between the two remain inscrutable?  Even for those of us who can draw realistic pictures of the human brain and rattle off the names of multiple neurotransmitters, won’t it always seem magical to be alive and conscious?

“I am always disappointed by the media coverage on my research area.”

Faye Flam gets upset that some obviously conceptual charts are not labelled as such in a post at Poynter.

[Image: Sabine Hossenfelder’s conceptual chart of science-news incentives]

I have no problem with conceptual graphs like this, though it’s probably a best practice to label them as such.

This one is useful because it tries to articulate the varied incentives of scientists, journalists, and readers.  What’s most amusing is that the chart reveals several key assumptions of the physicist Sabine Hossenfelder, who created it:

1.  The optimal amount of “accuracy” wanted by readers is 0!

2.  Both scientists and journalists are willing to sacrifice accuracy in order to get press attention.  Scientists – of course – have the purest motives of anyone depicted on the graph, showing a greater willingness to sacrifice readers for greater accuracy.

3.  There’s an inverse correlation between accuracy and readership.  As Flam notes, there are probably lots of cases in which there isn’t such a straightforward trade-off between these two variables.  This relationship could vary by scientific field, by type of study, by type of media outlet…or even by the skill level of the science writer.  (A toy sketch of this assumed trade-off follows the list.)

4.  If we’re letting the scientist define accuracy, then it’s likely going to reflect whatever concerns they have about getting funding or scoring points against the folks who disagree with them.
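
Since the chart is conceptual anyway, here’s a minimal sketch of the kind of incentive curves that assumptions 1–3 seem to imply.  Everything in it is invented for illustration: the curve shapes, the peaks, and the “payoff” units are my guesses, not a reproduction of Hossenfelder’s chart, and not based on any data.

```python
# Toy re-creation of the kind of conceptual chart discussed above.
# All curve shapes are invented for illustration only; none of this
# reflects real data or the original chart's actual curves.
import numpy as np
import matplotlib.pyplot as plt

accuracy = np.linspace(0.0, 1.0, 200)

# Assumption 1: readers' interest peaks at zero accuracy and decays from there.
readers = np.exp(-4.0 * accuracy)

# Assumption 2: journalists accept some loss of accuracy in exchange for
# attention, so their payoff peaks at fairly low accuracy.
journalists = 4.0 * accuracy * np.exp(-3.0 * accuracy)

# Scientists tolerate losing readers for the sake of accuracy,
# so their payoff keeps rising toward full accuracy.
scientists = accuracy ** 3

plt.plot(accuracy, readers, label="readers (assumed)")
plt.plot(accuracy, journalists, label="journalists (assumed)")
plt.plot(accuracy, scientists, label="scientists (assumed)")
plt.xlabel("accuracy")
plt.ylabel("payoff (arbitrary units)")
plt.title("Toy accuracy-vs-incentives chart (illustrative only)")
plt.legend()
plt.show()
```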

 

“When to shoot down… bad work and when to ignore it.”

Andrew Gelman recounts a call he got from a journalist about a pretty shoddy looking study, examines the dilemma that journalists face in deciding what to write about, and then nicely ties this to the dilemma that editors face in deciding what studies to publish:

“The problem, as I see it, is when a claim presented with (essentially) no evidence is taken as truth and then treated as a stylized fact. And the norms of scientific publication, as well as the norms of science journalism, push toward this. If you act too uncertain in your scientific report, I think it becomes harder to get it published in a top journal (after all, they want to present ‘discoveries,’ not ‘speculations’). And science journalism often seems to follow the researcher-as-Galileo mold.”

How many journalists call up a respected skeptical scientist to get a take on a potential story before writing the story?  How many ignore the skeptic because it takes the fun or the fear out of the story?

“Why do men want to be smarter than their women.”

Althouse and Instapundit go back and forth on this, and Althouse takes down Reynolds’ “just-so” story:

“There’s no end to the stories one can generate to explain whatever science report happens to pop up in the press and inspire us to think of reasons why it’s true (if it’s true).”

I consider this as the son of a mother who is demonstrably smarter than my dad (on several different dimensions).  It – um – caused some difficult issues to pop up.

But intelligence is not a single, easy-to-define characteristic.  I wonder if men and women might prefer different kinds of smarts.  And I wonder why Althouse and Reynolds are focusing on what the men want.  Perhaps men want to be “smarter” than their mates because they know that a smarter woman will always be looking for the smarter male.  Maybe it’s the females’ preferences that matter most in the mate market.

Peer review “does not mean the science is good”

In Psychology Today, George Mason professor Todd Kashdan offers a brutally honest critique of some of his own published work:

“Just because research is published in a peer-reviewed journal by a reputable publisher does not mean the science is good.”

“I had to read through 40 articles to find one that suggested hands-free phones are not that hazardous to driving…but I did it, and now you too can tout scientific evidence that the hazards are overblown!”

“Often our research program starts off slow and I am not confident about each finding that comes out of my laboratory (or from other laboratories). I stay attuned to the main objective of why I am a psychologist…understand some of the mysteries of human behavior and in some small way, reduce the amount of suffering and increase the amount of well-being in the world. This cannot be done with a premature commitment to being right. This cannot be done by blindly accepting theories, research, and treatments that other people promote. But the key is to be skeptical, not cynical. Be curious, keep experimenting, keep learning, and most importantly, keep asking questions. And part of this storyline is to be naked, exposed, and vulnerable every once in awhile.”