Tuesday, March 29, 2011

- My thoughts on the 2011 APS March meeting

This was my first APS March meeting and I have to say: Wow! I had never seen so many physicists roaming around and enjoying life! It's a great opportunity to catch up with old friends (from undergrad, grad school, postdoc, etc.) — I saw people I hadn't seen in many years! I don't think I have complaints about the social part of the APS meeting, except maybe that there isn't (or at least I couldn't find it in the program) a big party with music and drinks for all attendees. Maybe the women-to-men ratio is just too small to have a good party, but I don't think physicists would notice the difference :)

My comments on the scientific side are different. Not everything was good. For one, I'm not sure that approving a talk for pretty much everyone who submits an abstract is a good idea. It leads to too many bad talks, and while each one is only 10 minutes long, the chances of sitting through back-to-back bad talks are high. On the other hand, letting a student present in front of an audience is a great experience for us. I just wish the feedback time were longer than 2 minutes. That's not enough to really learn what you did right and what you did wrong as a presenter.

In addition, 10-minute talks leave essentially no time for an introduction aimed at the non-expert. Most talks are only valuable to people working in that field, which is fine for them, but not so useful as an educational tool for people in other areas. A little background, or at least a well-stated question or motivation for the research, could go a long way toward helping me, the non-expert, understand where your field is going.

As far as location goes, Dallas is a great city, but downtown Dallas kinda sucks. There aren't many places to eat around the convention center, and at night it can be somewhat scary if you're walking alone and wander a few blocks off in the "wrong" direction. There wasn't any tourist attraction (at least I couldn't find one) in the neighborhood either. Cities that host conferences typically have something "nice" by the convention center, but not this time.

Next year it looks like it'll be Boston. I hope I can go — that sounds like a cool place to spend some time!

Monday, March 28, 2011

- When does it become lying?

One of the things that bothers me about scientific progress these days is that survival depends so heavily on research grants, and competition for those grants is so fierce, that some scientists are willing to twist the interpretation of their data to the point of stating their view as true even when no data supports it.

In my particular field, there is a scientific question that has remained unanswered for more than 10 years, and it's the subject of my dissertation. The idea is this: when we look at the system of interest, we observe certain features (let's call them, collectively, X), and the general consensus is that some mechanism controls the system and gives rise to those features. After many years of research, the field has narrowed it down to two possible mechanisms (let's say A and B), but no one has conclusive data for either one. The problem is simple to state: both remaining candidate mechanisms can explain the observed features, so looking at the features alone cannot tell you which one is at work. It's like having different engineering designs for, say, clocks — all of which look the same from the outside but are very different inside. To figure out which one you have, you kinda have to open the clock and look at the components.

Most people in the field believe that the controlling mechanism is A, mainly because it makes more sense given other things we know about the system. But because we're scientists, we remain open to the other possibility, since there's no experimental data supporting either one. Recently, though, a small group of strong believers in mechanism B published a series of papers "supporting" their mechanism. I write "supporting" in quotes because the first paper in the series was a computer simulation showing that mechanism B could indeed work. Notice the difference: showing that a mechanism could explain what we see does not, in any way, imply that it is what's actually there. From then on, their subsequent papers cite that first one and usually write something like:

It is thought that features X arise from having mechanism A at work. However, mechanism A has not been identified to be true in this system. On the other hand, features X can arise from mechanism B (refer to our simulation in the first of our papers) and since it's our proposal we will assume it is true.

I don't have a problem with them pointing out that there's no direct data in support of A — but the same is true of B, so why not be equally honest about that? There's gotta be a point where misleading becomes lying. Where's that line? Does having no experimental data for or against your idea give you the right to state it as the real deal? Who is supposed to be watching out for this sort of thing? I thought journal referees would catch it, but it looks to me like the system has failed. I'm open to being proven wrong, though! :)