Monday, April 29, 2013

What Lamar Smith doesn't understand about research

There's currently a bill circulating in Congress that would change the funding requirements for the NSF. Lamar Smith (whose name sounds so familiar, but I just can't remember when I've mentioned him before) wants to ensure that taxpayers are getting their money's worth when it comes to science. Derek Lowe, over at In The Pipeline, has already written about this today...twice. This means, of course, that a much more competent and experienced blogger has already weighed in on this subject, but I'm going to add my thoughts anyway. The bill requires that each funded project be:
1) "…in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;
2) "… the finest quality, is groundbreaking, and answers questions or solves problems that are of utmost importance to society at large; and
3) "…not duplicative of other research projects being funded by the Foundation or other Federal science agencies."
Which may seem pretty innocent. After all, it sounds reasonable that if taxpayers are footing the bill for a project they should expect to directly benefit from their investment. Unfortunately, these new requirements probably won't lead to higher quality research being funded. The NSF already requires that proposals demonstrate intellectual merit and broader impacts. New requirements will most likely just lead to scientists fluffing their proposals to ensure they meet them.

I can only assume that this legislation is a reaction to what I lovingly call "Duck-penis-gate", the controversy that began about a month ago when conservatives criticized the NSF for funding research into duck penises. Giving someone $400,000 to study duck penises sounds wasteful, but that kind of basic research is a necessary part of science.

Applied Research
Applied research asks the question: What practical uses are there for this? Applied research is important, and in most cases it's what the private sector does. Developing the newest iPhone advances our knowledge, but it's applied research. It seems to me that Lamar Smith sees all research as applied research. Either that, or he thinks that only applied research should be funded (I strongly disagree).

Basic Research
Basic research asks a much broader question than applied research. Instead of developing a single application researchers are simply extending the base of our knowledge. Things like duck-penis-gate are basic research. There are fundamental questions about the nature of the universe that can only be answered by basic research.

The point is, we don't know what research is going to lead to major advances. To quote Derek Lowe, we can't just "work on the good stuff" because, frankly, we don't know what will end up being the good stuff. If we already knew where to look to find interesting science we wouldn't be doing research at all. There's a quote by Isaac Asimov that sums up the importance of basic research nicely:
"The most exciting phrase to hear in science, the one that heralds new discoveries, is not Eureka! (I found it!) but rather, 'hmm... that's funny...'"
Basic research can't promise to "[solve] problems that are of utmost importance to society at large" because we don't know whether any one line of research will (or won't) bring us the next major discovery.



Edit:
After writing this I found this statement by President Obama:
"In order for us to maintain our edge, we’ve got to protect our rigorous peer review system"
It's good to see that the President understands that the question of what research is "important" should be decided by a process of scientific review, not a political review.

Edit #2:
On Reddit I came across the following question:
"Someone explain to me why getting rid of expensive government research grants in favor of privatization is a bad thing."
Which I think is a legitimate question (though a very unpopular one, based on the number of downvotes received). Here's the answer I gave:
"Private companies are great at applied research, but lousy at basic research.  
Applied research seeks to develop technology and other applications based around known science.  
Basic research seeks to push the limits of our understanding, but results are often more difficult or slower to get. That's because at the edge of our understanding we don't know what we don't know so we can't say what research will yield the "good" results. Private companies are not likely to invest in something without a direct, immediate, monetary gain available. However, humanity benefits when basic research is done. Therefore it should be done."

Sunday, April 28, 2013

Reflections on #RealTimeChem week

This past week has been #RealTimeChem week on Twitter. I've really been blown away by how many amazing science bloggers are out there. I suppose I should be intimidated by them. After all, aren't they taking up part of my "market share"? If I really want people to read, share, and talk about my blog aren't they just competition?

The truth is, I'm not intimidated. Just the opposite. This past week I've felt like part of the community (here's a visual representation of that community). And it's been a great week with lots of interesting chemistry to read about. For example:
  • Andrew at Behind NMR Lines brought us a different "classic" journal article every day, which I thought was a very cool contribution.
  • Joaquin had some really cool input from the computational side of chemistry.
  • While I gave a basic tour of my lab, Kat gave us a play by play of her entire day (which sounded exhausting to me. It made me realize how much I actually sit at a desk).
  • told us some of his favorite chemicals.
  • @azmanam challenged us to a game of .
  • gave us a test tube version of the Mario Bros theme song.


  • There was a bit of a compound drawing war on Twitter (see Jess the chemist's post about it here). See Ar Oh decided to show up at the end to win it all (though looking the structure up may have disqualified him...)
  • At The Interface told us about his favorite result from the lab.
  • Chemically Cultured brought us a great poem about ITC.
  • Marc Leger, from Atoms and Numbers, gave a good review/introduction to HPLC.
  • Chemicals are your friends did "chromatography flowers" - a great community outreach idea!
  • And of course who didn't love See Ar Oh's #ChemMovieCarnival (Part 1, Part 2 and Part 3). I always love reading/writing about bad science in the movies and on TV, so 23 posts about good/bad chemistry in the movies was awesome. Thank you for organizing it, See Ar Oh!
Thank you to everyone that participated. You made it a great week for me. If I didn't mention your post it's not because I didn't read it and it's not because I didn't like it. There was just too much to keep up with!

Wednesday, April 24, 2013

The sinc(x) function (as seen on IFLS)

Last Thursday, this image was posted on the Facebook page "I F***ing Love Science".

Photo: This is how mathematicians do arts and crafts. Via: andiejulie on Imgur


I immediately recognized it as the sinc function. It's a function that is extremely important in digital signal processing, and one that I see just about every day I'm in the lab. The sinc function is defined as:

\mathrm{sinc}(r) = \frac{\sin(r)}{r}

In the case of the picture above, r is equal to:

r = \sqrt{x^2 + y^2 + z^2}

and therefore, sinc(r) is equal to:

\mathrm{sinc}(r) = \frac{\sin\left(\sqrt{x^2 + y^2 + z^2}\right)}{\sqrt{x^2 + y^2 + z^2}}

This is the partially obscured equation that you see in the picture. I noticed there was a bit of confusion about this equation in the comment section of IFLS, so I thought I'd clear a few things up.

Many people assumed that the formula written on the front is incorrect. They noticed that plotting a function of the variables x, y, and z would require four dimensions. While that is true, it doesn't mean that the function is incorrect as written. The function is still correct as long as z is defined as a variable that is not connected to the Cartesian (x,y,z) coordinate system. In other words, z could be a time-dependent variable or even a constant. When z = 0, the plot will look like this:


Which looks like the plot from the picture on IFLS. When we let z vary from 1 to 100 we get a time-dependent plot that looks like this:


This is a good example of why it's important to define your variables. If you don't define your variables they could mean anything. A variable can look familiar and mean something completely different. In this case, z is not the Cartesian coordinate z that you're used to seeing. Remember, a variable means nothing until you've defined it. c does not always mean the speed of light, r does not always mean radius, and so on. 

Now to the fun part! 
I used MATLAB to make a printable version of this craft. I suggest you print these pictures out on cardstock or the paper will droop. Print each of the pictures (in order) and have fun putting them together!
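If you'd rather generate your own slices, here's a rough Python/matplotlib sketch of the same idea (not the MATLAB script I actually used; the slice count, spacing, and file names are just placeholders you can tweak). It saves one cross section of sinc(r) per page so you can print and cut them out:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-20, 20, 1000)
slice_positions = np.linspace(-18, 18, 9)   # where to cut the surface; adjust to taste

for i, y0 in enumerate(slice_positions):
    r = np.sqrt(x**2 + y0**2)
    z = np.sinc(r / np.pi)                  # np.sinc is normalized, so this gives sin(r)/r
    fig, ax = plt.subplots(figsize=(8.5, 11))
    ax.fill_between(x, -0.3, z)             # solid profile so it's easy to cut out
    ax.set_ylim(-0.3, 1.1)
    ax.set_title(f"Cross section at y = {y0:.1f}")
    fig.savefig(f"sinc_slice_{i}.png", dpi=300)
    plt.close(fig)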

Send me a picture of your finished product and I'll post it for everyone to see.

(If I get at least 5 submitted pictures I'll post pictures of the sad attempt that my 6 year old and I made)
Aaaaaannnnnnd GO!

This final picture is a possible "base" for your plot. It's only half done (there should be blue lines along the entire axis). Cut two or three of these out and space them equally apart. Each tick mark along the x-axis is where you should cut to place the cross sections you already cut out. This way the sinc(r) wave will be equally spaced in both directions (if you don't use this you could end up with an oval instead of a circle).


Tuesday, April 23, 2013

Lab Tour (#RealTimeChemCarnival)

Since this week is #RealTimeChem week on Twitter, I thought I'd give a peak (get it, since this is a mass spec lab?) into my lab.

This is how I started my day off:

Dilution: the only wet lab technique I have used in the last three years...


Again, not a whole lot of chemistry going on here. Just dilutions...



This is our electrospray source. Ions are formed here before being guided into the cell (pictured from above).



The belly of our instrument. In total we have 2 turbo pumps, 2 turbo/drag pumps, and 6 mechanical pumps.



 This is my magnet (yes, I am that possessive). This is a 4.7 T Fourier transform ion cyclotron resonance mass spectrometer (My mass spec > your mass spec).
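(A quick aside on how FT-ICR works, with a made-up example mass: in the magnetic field each ion circles at a cyclotron frequency set only by its mass-to-charge ratio, and the mass spectrum comes from Fourier transforming the detected signal. For a singly charged ion at m/z 500 in a 4.7 T field that frequency is roughly

f_c = \frac{qB}{2\pi m} = \frac{(1.602\times10^{-19}\ \mathrm{C})(4.7\ \mathrm{T})}{2\pi\,(500)(1.661\times10^{-27}\ \mathrm{kg})} \approx 1.4\times10^{5}\ \mathrm{Hz}

or about 140 kHz. More on this in the follow-up post.)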



First signal of the day. Hello little ions, it's good to see you again.




Command center: Mid-experiment


So that's what my lab looks like (at least the functional parts of it). I'll be doing another post later this week to explain how my mass spec works and why it's awesome. 

Monday, April 22, 2013

Living with chemicals: Cucurbituril

It's #RealTimeChem Week, so I thought I'd start it off with another "Living with chemicals" post. Usually with these posts I try to highlight a chemical that is very common; something that is a major part of our day to day life. This time, however, I'm going to be talking about a chemical that's part of my everyday life. It's the compound I spend most of my time in the lab studying.

The Chemical: Cucurbit[n]uril
Systematic name: Dodecahydro-1H, 4H, 14H, 17H-2, 16:3, 15-dimethano-5H, 6H, 7H, 8H, 9H, 10H, 11H, 12H, 13H, 18H, 19H,20H, 21H, 22H, 23H, 24H, 25H, 26H-2, 3, 4a, 5a, 6a, 7a, 8a, 9a, 10a, 11a, 12a, 13a, 15, 16, 17a, 18a, 19a, 20a, 21a, 22a, 23a, 24a, 25a, 26a-tetracosaazabispentaleno[1’’’, 6’’’:5’’, 6’’, 7’’]cycloocty[1’’, 2’’, 3’’:3’,4’]pentaleno (1’, 6’:5, 6, 7) -cycloocta (1, 2, 3-gh:1’, 2’, 3’-g’h’) cycloocta (1, 2, 3-cd:5, 6, 7-c’d’) dipentalene-1, 4,6, 8, 10, 12, 14, 17, 19, 21, 23, 25-dodecone
You probably know it as: Well...you probably don't know it.
The structure:


Cucurbit[n]urils, or CB[n] for short, are shaped sort of like a pumpkin, which is actually how they get their name (pumpkins belong to the family Cucurbitaceae). They are, for lack of a better word, molecular cages - they can trap other molecules (we call the trapped molecules "guests" and the CB[n] a "host"). They don't just bind to anything and everything, though. They're very selective about their binding (and it's not always easy to predict what will bind). For example, CB[7] will trap fluorescent dyes in its cavity. The dye won't fluoresce as long as it's in the cavity, but as soon as it's displaced it lights up. Phenylalanine also binds to CB[7] and displaces the fluorescent dye. Some clever scientists realized that phenylalanine is found at the end of the insulin protein. They exploited this to create (or at least show a proof of concept for) a rapid analysis for insulin using CB[7].

Over and over, cucurbiturils amaze me. They affect how molecules interact with light, they trick liquid-phase molecules into thinking they're in the gas phase, and they have been proposed for drug delivery, catalysis, waste management, and molecular architecture, to name only a few applications. What's most amazing to me is how unpredictable each new derivative can be. With each new derivative that we study we find that the chemistry is vastly different, even when the compound itself is nearly the same (I'll insert some new results from my lab here once they're published). It's not a chemical you'll run into every day, but studying it is what keeps me awake at night and gets me out of bed in the morning.

Thursday, April 18, 2013

Bad Science in the Movies: Iron Man 2

I've done several articles on "Bad Science in the Movies". This one was inspired by See Ar Oh, from the blog Just Like Cooking. It's my entry to his #chemmoviecarnival. I hope he's ok with a bit of physics getting mixed in (I'm a physical chemist, so it's a pretty gray area for me...)

The scene we're talking about is this one, from Iron Man 2:



Honestly, this may be one of the most baffling examples of bad science in the movies. The setup is simple: the palladium inside Tony Stark's miniature magic energy device is leaking into his body and slowly killing him. He needs a new energy source and, frankly, the rest of the periodic table has failed him. Stupid universe. It never creates the exact element you need, right? Of course this is Tony Stark, the engineering prodigy who built a flying suit that talks to you like a snarky butler, so he won't let the laws of physics get in the way of a good energy source - he's going to build one.

It turns out that his father already secretly designed the new element and hid it in plans for the 1974 Stark Expo. About 39 seconds into the video above, Tony realizes that the nucleus is at the center of an atom - a novel discovery. Tony's "ah-ha!" moment happens at about 1:20.
"The structure of the protons and neutrons is in the pavilions...as a framework."
And there you have it. Making a new element is simple. All you need to know is how many protons, how many neutrons, and how they are arranged in the nucleus. Ignore the circular logic that an element is in fact defined by the number of protons it has. Forget that this new element will likely decay the moment it is made. This new element is just what Tony needs, and he's going to build it. Jarvis (the snarky butler) even verifies that this new element "should serve as a viable replacement for palladium". This is particularly amazing, since this element has never been experimentally studied. I suppose Jarvis could have done extensive computational studies on the element (in which case I would like to request a few minutes of wall time on that server, please).

So then Tony has to build his new element. This requires a particle accelerator and that's what he builds, right?  It looks like one at least. It's a giant metal ring. Tony turns it on and grabs a wrench to steer the beam into some other clear material and BAM! - you've got a new element! Here are the problems I see with his process:

  1. Tony didn't build a particle accelerator. He dropped in a prism to steer the beam, so apparently these are photons he's accelerating. Read that last sentence again if you didn't notice the problem. Photons. Tony is accelerating light. Light that is already traveling as fast as it can (or ever will) go. Whatever his light source is, he could have just aimed it directly at the target and saved himself the remodeling expenses.
  2. Tony didn't need the metal ring at all. The purpose of the metal ring would be to create a low-pressure environment (necessary in particle accelerators), but the first thing he does is steer the beam out into the lab. Not only is this a serious safety violation, but now his beam is at atmospheric pressure, so why did he need the vacuum to begin with?
  3. But let's forget these two problems and examine what he was actually doing. The light he was steering is visible to our eyes as a nice crisp blue, but it was aimed at a clear target. I'll give you another moment to think about the contradiction in that sentence. A visible beam was absorbed by a clear target. The target is clear precisely because it doesn't absorb visible light. Even ignoring that, if visible light were energetic enough to rearrange nuclear structure I think life would be just a little bit different (a quick back-of-the-envelope comparison follows this list).
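To put a rough number on that last point: a blue photon near 450 nm carries only a few electron volts, while nuclear binding energies run to millions of electron volts per nucleon - a gap of roughly six orders of magnitude.

E_{\mathrm{photon}} = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\,nm}}{450\ \mathrm{nm}} \approx 2.8\ \mathrm{eV}

E_{\mathrm{binding}} \approx 8\ \mathrm{MeV\ per\ nucleon} \quad\Rightarrow\quad \frac{E_{\mathrm{binding}}}{E_{\mathrm{photon}}} \sim 3\times10^{6}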
But, we'll give him the benefit of the doubt. After all, he does have a pretty cool suit.

Wednesday, April 17, 2013

What is the difference between science and pseudoscience?

Tonight at 6:30 MDT I'm going to be hosting a discussion of the question: "What is the difference between science and pseudoscience?"

We'll be doing a Google "On Air" hangout, and you can tune in right here to watch. If you want to add a thought to our discussion you can do it using the Twitter hashtag #PsiDiscuss or leave a comment on this page.


 

Tuesday, April 16, 2013

Traditional chinese medicine sneaks into peer-reviewed journal

In the current issue of the Journal of Chemical Information and Modeling you'll find an article entitled "Chemogenomics Approaches to Rationalizing the Mode-of-Action of Traditional Chinese and Ayurvedic Medicines". The paper seeks to "[reduce] the gap between Western medicine and traditional medicine" by describing the mode-of-action for a list of compounds found in common Traditional Chinese Medicine (TCM). In other words, this paper seeks to describe how TCM works (though there are only a few cases where it does). In the introduction, the authors state that:
"Traditional medicines have been connected to efficacy in man for thousands of years (though admittedly often not in controlled clinical trial settings)"
Which to me doesn't really mean anything. Of course there has been a correlation between using these medicines and feeling better; why would anyone keep using a medicine they didn't think was working? But it's not helpful to ask whether people thought a medicine worked, it's useful to ask whether it actually does work. For that we need to do a controlled clinical trial. Although the authors admit that many TCM compounds fail phase II and III clinical trials because they don't work, they insist that these failures show that chemical derivatives are needed to improve clinical efficacy. In other words, we know that compound X doesn't really work, so we'll change it a little bit and see if compound X' works any better (this is a common technique in drug discovery, and if they want to pursue this path that's fine). Two sentences later, almost in the same breath that they admit a lack of clinical efficacy, they claim that nature has evolved a multitude of chemical compounds with desirable properties. They later claim that:
"...natural products as well as traditional medicines have been an undervalued resource of lead structures in the current practice of drug discovery."
A statement that I don't feel is supported by the literature. If anything, natural product synthesis has been overvalued by many researchers; it has been a pretty huge field of research. Even so, many of our most effective drugs on the market today are derived from natural products - Aspirin, Digoxin, and Premarin, to name a few. Obviously nature has something to offer us. The study even mentions Artesunate, a derivative of a compound originally used in TCM that is now the gold standard for treating malaria.

I'm not saying that this paper should have been rejected outright. Quite the opposite, the technique itself seems fine to me. The computational modeling of protein/compound interactions could allow researchers to quickly determine which compounds are worth studying further. Yes, TCM compounds were used in the study, but that's actually pretty unimportant to the study itself; the method could have been applied to any arbitrary list of compounds. 

And I'm not saying that we should discount all alternative medicine. Artesunate (the malaria drug) is one example of why that would be foolish. We can't refuse to study TCM compounds on principle; we don't know a priori where we'll find relevant compounds. However, I find it completely unnecessary to discuss alternative medicine to the degree found in this article. Entire paragraphs are dedicated to the idea of balance as defined by TCM, and throughout the article you'll find mention of "synergistic medicines" and other alternative medicine ideas. The authors seem to take every opportunity they can to connect this study to alternative medicine, a connection that I don't think is warranted given that the journal's focus is computational modeling.

Furthermore, proof that some compounds used in TCM are effective does not imply that TCM as a whole is effective, yet that seems to be the general theme of the paper. It seems to me that the authors are trying to sneak alternative medicine in as legitimate science. The paper even contained a diagram like the one above, explaining the principles of balance in TCM - an addition I thought was completely unnecessary and unrelated. Sure, you could say it belongs in the introduction as historical background, but to me it detracts from the real purpose of the paper - the description of a novel computational method to screen for new target compounds. Leave out the nonsense and get back to the chemistry already.

Saturday, April 13, 2013

Being wrong isn't just okay, it's required!

At the end of my second year of graduate school I was standing in front of my PhD committee, the five people who will one day decide whether or not I get my PhD, presenting my research. When I had finished presenting I didn't breathe a sigh of relief - the hardest part was yet to come: Questions. Difficult questions. Questions whose sole purpose is to find something you don't know and ask you more questions on that thing. Just to watch you squirm.1 I was doing pretty well until I was given a problem that sent me to the blackboard with some chalk to do a quick derivation and calculation. My mind went blank. I hate doing math in front of people.2 I made all sorts of mistakes over the next 15 minutes. Big mistakes. Giant fundamental flaws in my thinking that a freshman chemistry student could easily have corrected.

So there I was, in a meeting to decide whether or not I deserved another year in the program for my PhD, and I was failing. Why, then, wasn't I kicked out of the program? That may be a question that only my committee can answer, but I'll take a stab at it. I wasn't kicked out because it's okay to be wrong. It's okay to have misunderstandings, to not know something even if almost everyone else does. What's not okay is refusing to admit that you're wrong.

Scientists are wrong. A lot. It's really part of our process. An unanswered question makes me feel stupid, which leads to searching for the answer, learning, knowledge, and moving on to the next question. After doing this for several years I've actually begun to crave feeling stupid - it means I'm about to learn something awesome. A few years ago there was a great essay in The Journal of Cell Science called "The importance of stupidity in scientific research" in which the author postulates that feeling stupid is an integral part of science. In my first years as a graduate student I would sometimes ask my advisor something like: "Ok, so when I do x what will happen?" His response was always "I don't know, that's why it's called research." Stupidity will always exist at the edge of human knowledge.

Opponents of many scientific disciplines will often point to errors scientists have made in the past as a reason not to believe what they say. After all, aren't scientists just going to change their minds again? Why should I believe what they are saying right now? The fact that science is ever changing should never be seen as a weakness. In fact, it's one of the greatest strengths that science has. Science helps us see where our understanding of the universe is wrong and correct it accordingly.3

More than just being okay, being wrong is expected. Let's face it, you don't know everything. Admitting you don't know something is probably one of the most important aspects of science. It's only when you admit you don't know that you can actually learn. Just as there is an edge to current human knowledge, there is an edge to your own personal knowledge. If you're afraid of being stupid, then what you're actually afraid of is expanding your own knowledge. Likewise, if you mock someone else's stupidity, you're really just mocking their attempts to learn. The universe is awesome, why not take the time to tell someone about it?

And, as usual, Randall Munroe (XKCD) can say it better than I can, and in fewer words:



Notes
[1] There are other reasons to ask these questions, of course, but sometimes you can see a little smile on a committee member's face when they know they've found your educational weak spot.
[2] Since some of my committee members may actually read this, it's probably unwise to admit that. They'll probably use it to their advantage at our next meeting.
[3] There seem to be two different definitions of "truth" used in science and religion. Religion usually begins with a statement of absolute truth and builds a world view around that truth. Science, on the other hand, makes observations and builds a truth to match those observations. This truth is never purported to be absolute truth. The big difference is that "truth" in the scientific sense is allowed (and required) to change, while "truth" from a religious viewpoint is used as a logical premise and, therefore, cannot be changed.

Thursday, April 11, 2013

Come join me for AskScience Live, happening RIGHT NOW!

Add your questions to the discussion using Twitter (#AskSciLive)





Wednesday, April 10, 2013

Hello visitors from SMBC Comics!

If you're on my site today there is a strong chance that you were sent here from SMBC Comics. Zach is a great guy, isn't he?

Anyway, you're no doubt looking for the post that he wrote, right? Well, calm down. I'll point you there soon enough. Let me give you a tour of my site first. Here are a few of the things I write about:

Bad Science in the Movies

Science Myths and Misconceptions

Philosophy of Science

Quackery (like homeopathy)

Evolution

"Living with Chemicals" - a series on why you shouldn't be scared of chemicals because chemicals are everywhere and they're awesome.

If you like what you see you can follow me on Twitter or like my Facebook page.

Also, come listen to AskScience Live with me and some of the panelists from reddit's /r/askscience tomorrow at 6 pm EDT.


And here's that post by Zach that you want to read...

Thursday, April 4, 2013

Science goes BOOM!

Over the last 24 hours on Twitter there has been some interesting discussion on chemistry teaching methods. Specifically, explosions.




Most of the comments that I saw were of the opinion that demos should be more positive - which, to me, means that they think a balloon exploding is negative (which I just don't see). Sure, you can overdo it. You don't need an explosion every day and there are certainly plenty of other really interesting demos to help teach. But I just don't see what's wrong with a little bang every now and then. For example, filling a balloon with a mix of hydrogen and oxygen will produce a loud bang. A balloon with just hydrogen will not be as loud. Why? Both balloons explode by the same reaction:

2\,\mathrm{H}_2 + \mathrm{O}_2 \to 2\,\mathrm{H}_2\mathrm{O}

However, the balloon filled with both hydrogen and oxygen reacts much faster, which is why its explosion is louder (it reacts faster because the hydrogen and oxygen are already mixed, while the pure hydrogen balloon has to mix with the surrounding air as it burns). There are plenty of other examples (butane balloons compared to propane balloons, for one), but the idea is the same - sometimes an explosion can help you teach.
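For a rough sense of scale (assuming, just for illustration, a 5 L balloon at room temperature filled with a stoichiometric 2:1 mix):

n_{\mathrm{total}} = \frac{PV}{RT} \approx \frac{(1\ \mathrm{atm})(5\ \mathrm{L})}{(0.0821\ \mathrm{L\,atm\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})} \approx 0.2\ \mathrm{mol}

E \approx \left(\tfrac{2}{3}\times 0.2\ \mathrm{mol\ H_2}\right)\times\left(242\ \mathrm{kJ\,mol^{-1}}\right) \approx 33\ \mathrm{kJ}

That's tens of kilojoules released almost all at once, which is why the premixed balloon gives such a satisfying bang.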

I can see both sides to this story. My undergrad education was very slim on explosions. In fact, I can't remember a single explosion during a lecture. I don't feel like I missed out. Chemistry was interesting to me for more reasons than the explosions. On the other hand, in graduate school I've seen more than my fair share of explosions (seriously, every professor here is like some deranged pyromaniac). I've seen students become interested because of the demos (though I don't think I can say with any certainty it was because of the explosions).

Of course I can see the point that Dr. Smith is making. If we're just blowing something up, or using an explosion to supplement poor teaching ability we're obviously doing something wrong. An explosion shouldn't be a crutch to keep students interested - Chemistry can be awesome without the explosions. A demonstration should have a specific purpose and should be tied to a specific concept you're trying to teach. Sure, there are times when luminol, a howling gummy bear, or elephant toothpaste are the demos you should be using, but every now and then you just need a good explosion. 

Tuesday, April 2, 2013

Women in STEM

About three weeks ago I read this tweet by Kari Byron, of MythBusters fame, about women with STEM (Science, Technology, Engineering, Mathematics) careers:

Which I promptly retweeted to my close friend, Angela. Not only has she written for this blog in the past, but she works in the lab across the hall from me. It seemed obvious to me that she would have something interesting to add to the conversation, what with her being a woman and all...

Well, the next day when I saw her I asked about the tweet. I don't think I can adequately describe the evil look she gave me, but our conversation went a little bit like this:
"I don't know, Chad" she said, "why did you choose this career?" 
"Ummm...because...I don't know. I guess because science is awesome and it was something that interested me. It was something I could imagine myself happily doing as a career." 
"So why do you think my response would be any different, just because I'm a woman?"
And for some reason I was shocked. It seems that Angela wasn't the only one to think this way, either. Later that day Kari followed up her first tweet with this one:
For some reason I had never realized (or at least never put much thought into) this glaringly obvious point: women choose a STEM career for the same reasons that men do, so why do we market it so differently? At my university there is an annual conference for "Women in Science". I'm sure you can guess what Angela's feelings are - why even have a conference for women in science? After all, there isn't a "men in science" conference, right?

But the fact remains - there are fewer women than men who choose a STEM career path. The US Department of Commerce, in a 2012 report, stated that the STEM workforce is a shocking 76% men, while the total workforce is split pretty evenly between men and women.


And the problem doesn't end there. Not only are women underrepresented in STEM fields, but there seems to be a double standard within our own community. As scicurious points out in this great article, it's common to see an accomplished male professor sporting an unkempt beard, a Hawaiian shirt, and holey pants at a scientific conference, but you won't often see women dress as casually. I have never felt pressured to dress formally at any conference, nor have I ever paid enough attention to how the women around me were dressed to notice. I suppose I'll have to believe scicurious that women feel this pressure.

But the double standard isn't just about dress. Look at this obituary, published just a few days ago:

Screen capture from NYT website provided by  BuzzFeed
The obituary has been revised, but the original began: "She made a mean beef stroganoff...and oh yeah, she also invented the propulsion system that became the gold standard for keeping satellites in orbit." (I may have paraphrased a bit). Now, I don't doubt her ability to make beef stroganoff. In fact, if this article had been written by her children or grandchildren her stroganoff may have been placed appropriately at the beginning. But I can't think of a scenario in which it would seem suitable for the obituary of a male scientist to open with something so absolutely trivial. To compare, let's look at what the same author wrote about a male scientist just fourteen days earlier.

So what do we do now?
After researching and writing this article two things are obvious to me:
  1. We need more women to choose STEM careers.
  2. Women don't need special treatment or special reasons to make that choice. 
But number 2 seems to conflict with number 1, doesn't it? If we want more women in STEM careers don't we need to appeal directly to them? How can we get more women involved in science while at the same time not making a fuss about women in science? Do women need/deserve special treatment to choose a career in science?

Probably not. Instead, they just need to know from a young age that the option is available. PBS has a 30-minute program that I think does a pretty good job of attracting young girls to STEM careers without propagating female stereotypes. SciGirls stars a cartoon teenager, Izzie, who leads a group of real-life girls on adventures in biology, engineering, and other STEM-related fields. Throughout the show the girls are mentored by real-life female scientists and engineers. I watched a couple of episodes, and I think it's a great example of how to get more young girls excited about science. They don't need a special reason to pursue science; science is awesome just the way it is.