The difference between basic, translational and clinical research

When I started as a researcher, I had no idea that there were different types of research.  I don’t mean that some scientists study cancer and some scientists study Alzheimer’s disease.  I mean entirely different kinds of research that have fundamentally different methods, sources of funding, and purposes. Today’s post is going to outline three main types of research in the biological sciences: basic, translational and clinical research.

Basic Research:


By en:User:AllyUnion, User:Stannered (en:Image:Science-symbol2.png) [CC BY 3.0 or GFDL], via Wikimedia Commons

Right off the bat, I need to be super clear that basic research is NOT research that’s easier to do or simpler than other types of research. It is just as complex and just as hypothesis-driven as any other type of research. The goal of basic research, however, is to understand some aspect of biology at a fundamental level. Also called fundamental research, basic research doesn’t require that its outcome cure a disease or fix a problem. That being said, basic research often creates the foundation that other researchers then apply to solving a problem. I like how Wikipedia describes it: “Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields.” This research can be in biology, physics, math, environmental sciences or any other scientific field. So what are some examples of basic research in biology?

  • Understanding the proteins and pathways that result in cells dying by apoptosis
  • Developing technology to better determine the 3D structure of proteins.
  • Creating mathematical models representing population growth in cities over time
  • Studying how leaf litter affects the ecosystem (an actual active funded grant at TGen here in Arizona)

This research is often funded by the government, specifically the National Institutes of Health, which funds 50,000 grants to more than 300,000 researchers at more than 2,500 institutions around the world, and the National Science Foundation, which funds 24% of all federally-funded basic science research in the United States.

Translational Research:


By Maggie Bartlett, NHGRI. [Public domain], via Wikimedia Commons

Translational research is how basic research and biological knowledge are translated into the clinic.  Often called “bench-to-bedside” research (referring to the research bench and the patient’s bedside) or “applied” research (because it applies basic research to solving a real-world problem), this research is needed to show that a drug or device works in some living system before it is used on humans. This is the research that happens after the results from basic research are obtained and before clinical research.

For example, if a drug is found in the lab that targets a protein thought to cause a disease like cancer, the drug will first be tested in animal models.  The animal model may be a mouse that has been genetically altered so that it develops that specific kind of cancer, or a mouse that has human cancer cells injected into it (like the patient-derived xenografts I described in a previous post). The drug will then be given to the animal to see whether it is safe, or whether even low doses are so toxic that the animal dies. Whether or not the drug hits the targeted protein or cell type can also be tested in mice.  For example, if the drug is supposed to kill brain tumor cells, researchers would want to make sure the drug was able to pass the blood-brain barrier of the mouse.  Finally, if the drug is supposed to kill tumor cells, researchers would want to check that the tumor shrinks, that the cells die, and/or that the survival of the mouse is extended by the treatment. Often, drugs are “weeded out” at the translational research stage, saving millions of dollars and years’ worth of time and effort in clinical trials.

Translational research isn’t just for drug development.  It is also useful for devices – for example, developing a device that can diagnose diseases in third world countries, where access to electricity and high-tech labs is more difficult.

Clinical Research:


By Tannim101 [CC BY 3.0, GFDL or CC BY 3.0], via Wikimedia Commons

Clinical research is what is performed in a healthcare environment to test the safety and effectiveness of drugs, diagnostic tests, and devices that could be used in the detection, treatment, prevention or tracking of a disease.  The cornerstone of clinical research is the clinical trial.  There are 4 basic phases to a clinical trial.  Each phase is performed sequentially to systematically study the drug or device.

  • Phase I: This is the first time the drug or device has been in humans and it is used on a small number of patients in low doses to see whether or not it is safe and what the side-effects may be. At this point, the clinicians are not trying to determine if the treatment works or not.
  • Phase II: In this phase, more patients are treated with the device or drug to test safety (because more side effects may be identified in a larger, more diverse population) and whether the drug or device is effective (in other words, does it work?).
  • Phase III: This is the phase that focuses on whether the drug or device is effective compared to what is typically already used to treat patients.  It’s used on a large group of people and “end points” like increase in survival or decrease in tumor size are used to evaluate its effectiveness.
  • Phase IV: These trials are done after the drug has gone to market to see if it works in various populations.

There are several different types of clinical trials depending on who is funding them. Some clinical trials can be initiated by a doctor or group of doctors.  These are called “physician-initiated” or “investigator-initiated” studies and are often used to determine which type of treatment works better in patient care.  For example, there may be two treatments that are commonly used to treat a disease. Investigators may initiate a study to figure out which treatment works better in which patient population.

The kind of clinical research you may be more familiar with is the kind run by drug companies working to develop a drug or device.  These companies will “sponsor” (aka “pay for”) a clinical trial.  They work with clinicians at one or more medical institutes to use their drug or device in a particular way (depending on the phase of the trial), and the clinicians report back the results, including whether there were any side effects of the treatment. At the end of the clinical trial, if the treatment or device was a success, the drug company can apply to the Food and Drug Administration (FDA) for approval to use the drug in the general population.  Bringing a drug to market is a lengthy and extremely expensive process, estimated at over 10 years and $1.3 billion per drug. Much of this time and cost is due to the high cost of conducting the clinical trials.

If you are interested in what clinical trials are currently available in the United States, all clinical trials are registered on ClinicalTrials.gov.  Anyone can search this database to see if trials are available for them to participate in.
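If you’d rather search programmatically, ClinicalTrials.gov also has a public API. Here’s a minimal Python sketch, assuming the v2 REST endpoint (https://clinicaltrials.gov/api/v2/studies) and its query.term and pageSize parameters; check the current API documentation before relying on the exact parameter names or response fields.

```python
# Minimal sketch: search ClinicalTrials.gov for glioblastoma trials.
# Assumes the public v2 REST API and its parameter names; check the
# current API docs before relying on this.
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.term": "glioblastoma", "pageSize": 5},
    timeout=30,
)
resp.raise_for_status()

for study in resp.json().get("studies", []):
    ident = study.get("protocolSection", {}).get("identificationModule", {})
    print(ident.get("nctId"), "-", ident.get("briefTitle"))
```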

Overall, each type of research builds on the others, and researchers in all three need to work together to successfully understand our world and to come up with solutions to prevent, diagnose and cure disease.

What’s it like getting a science PhD?


Cold Spring Harbor Laboratory by AdmOxalate (Own work) CC BY 3.0, via Wikimedia Commons. This is where I went to grad school.

In my last post, I talked about how to get into graduate school.  This post will be about how PhD programs in the sciences are structured and how they work, because I’ve realized from lots of conversations with my non-scientist friends and family – no one really knows much about this!

There are fundamental differences between getting a PhD in the sciences and getting one in anything else. The first main difference is that you don’t have to pay for a PhD in the sciences, and in fact, they pay you.  Don’t get excited – they don’t pay much. The current NIH stipend rate is $22,920 per year (only about $2900 more in 2015 than what I received in 2001).  Tuition and this stipend are paid for in different ways depending on the school.  Some schools have endowments that support graduate positions. For example, I was supported by an institutional endowment made by the Beckman Foundation for my first two years of graduate school. Some schools rely on the students working as Teaching Assistants (TAs), helping to teach undergraduate courses, to support some or all of their tuition or stipend.  In many cases, the research laboratory that the student works in pays for the tuition and stipend using its grants. Graduate students themselves can also apply for funding, which, along with helping fund their position, is a prestigious resume entry.  I applied for and was awarded a National Science Foundation (NSF) Graduate Research Fellowship that supported my last few years of graduate school.

The second main difference between a science and non-science PhD is that there is NO WAY that you can work and get your PhD at the same time. Don’t get me wrong, you work. You work your butt off every day all day, but not while making money at another job. With the nature of scientific research, there isn’t time to have another job, and in most cases, it isn’t allowed by the institution anyway.

What is a graduate student so busy doing?  The graduate program at the Watson School of Biological Sciences (WSBS), where I went to school, was designed to be very different from the traditional American graduate school model.  I’ll start by describing traditional programs generally (since all grad schools are different) and then describe my program. Most PhD programs are expected to last between 4 and 7 years and are typically structured like this:

  • First two years: Traditional classes at the graduate level that cover scientific topics more deeply than an undergraduate program
  • First year: Rotations. These are short (usually 3-month) stints in a laboratory to figure out if you like the research that lab is doing and whether or not you’d want to do your PhD thesis research there. This is also the chance for the head of that lab (also called the Principal Investigator or PI) to figure out if they want to have you in the lab for the next 4-6 years.
  • End of second year: Qualifying Exam. This exam, also called the comprehensive exam at some schools, is an enormous exam that the institution uses to decide whether or not you go forward in the PhD program. Usually held at the end of the second year, if you pass, you move on to nearly exclusively doing research in the lab to complete your thesis.  If not… well, I don’t think I know anyone who didn’t pass after at least a few tries.
  • Third year until you graduate: After the first few years, most of the time is spent in the lab. There may be required Teaching Assistant responsibilities or other required seminar classes (like Journal Club), but this varies by school. Then there are the thesis committee meetings.  Pretty early on in each student’s research project, a committee of 3-5 faculty at the university is invited to participate on the student’s thesis committee.  Their job is to provide a set of eyes (other than the PI of your lab) to make sure you’re moving in the right direction. They approve the thesis proposal and meet with you regularly (in a traditional program, this might be yearly) to keep you on track. They are also the committee that reads and evaluates your dissertation and holds your defense (more on that shortly).

As I mentioned, this traditional system is a bit different from what I went through at CSHL.  The philosophy of WSBS is to shorten the time frame from matriculation to graduation to 4 years while also maintaining academic excellence.

  • First semester (4 months): This is the only time I took core courses – what my mom called “Science Boot Camp”.  These classes were unique because instead of learning facts out of textbooks, we learned how to critically think about, write about, and present science. The classes focused on reading journal articles, scientific exposition and ethics, and particular scientific topics in depth, like neuroscience and cancer.
  • Second semester (4 months): After the first semester, we had three one-month rotations that allowed us to explore our scientific interests to help decide on a thesis laboratory, or just to try something new. I did rotations in a lab that used computers to understand lots of scientific data, a lab that used microscopy to figure out how a cell worked, and a lab that studied apoptosis (where I ended up doing my thesis research). Also during this time, we did our one required teaching experience at the DNA Learning Center, where we taught middle and high school students about biology and DNA.  The idea was that if we could explain science to kids, we could explain it to anyone.
  • End of year one:  After the first year, we took the Qualifying Exam.  For my QE, I had two topics assigned to me (Cancer and Cell-Cell Communication) and I had to learn everything about these two topics in one month. A panel then grilled me for nearly 2 hours on these topics, and fortunately, I passed.
  • Years 2-4: Classes were only held in the first semester and rotations only in the second semester so that we could focus on what we were doing at all times. No excuses. So after the qualifying exam, we were expected to focus on all research, all the time. The one exception was the Topics in Biology course held each year.  These courses ran for an entire week (7am-11pm) and gave you the chance to interact with experts in various fields, both to extend your scientific knowledge and to critically think about new problems.

My thesis. It’s about 1.5 inches thick. Or as my hubby said “That’s your thesis? Impressive, baby”

Doing research was intense lab work punctuated by intense meetings.  FYI – intense lab work means 8am-7pm (or later) Monday through Friday and usually the weekend too (and by weekend, I do mean both Saturday and Sunday).  And let’s not forget the 4am time points when you have to go into the lab just to check on your experiments every 4-6 hours for 24 hours straight. But back to the intense meetings… The first intense meeting was the thesis proposal defense, which was held in the second year. This was where you told a committee of 4-5 researchers what you were going to research for the rest of grad school; they quizzed you for 1-2 hours and then gave you the go-ahead (or not) to do that work. The next set of intense meetings was the thesis committee meetings, held every 6 months to keep each student on track. Again, 1-2 hours of presenting and critical evaluation of your work by the committee.  At some point, the committee gives you the “green light” to start writing your thesis, and you take all of the work from the past 3-4 years and put it in a massive document called a dissertation. The thesis committee reads it, you present the work in front of them and all of your family and friends, and then, again, you spend 2 hours in a room with your committee answering every question they can think of – aka “defending” your thesis.


My PhD graduation day with two of my classmates. I’m in the center

As I write this, I realize that my thesis defense was 9 years ago next week. How time flies. After the defense, you have your PhD and officially graduate whenever the ceremony is held – in my case in May of 2007. I graduated 5 years after I started – just slightly longer than the expected 4 years for the Watson School. Was it easy? Nope, not even a little bit (ask my mom). Would I do it again? In a heartbeat.

This post is dedicated to my classmates and my friends in graduate school – you know who you are.  Without you, I wouldn’t have made it. And to my mom, who convinced me at least twice, not to quit.

How do you get into a PhD program in science?

When I was very young, my uncle died from lung cancer. I wasn’t allowed to see him before he died (his wishes). There was a part of me that thought it was my fault that he died because he didn’t listen to my pleas that he should stop smoking. That’s when I decided that I should cure cancer. At the time, I had no idea how to do that, but by the time I was in high school, I realized it would involve getting a PhD.  Other than a great uncle (on the other side of the family) that I barely knew, no one else in my family had a PhD, so I was the trailblazer in figuring out how it all works. In this post and my post on Thursday, I’ll write about how to get into graduate school and then what the program is like once you get there. More accurately, I’ll write about how I got into grad school and what grad school was like for me, since I know that everyone’s experience is different.

So how do you get into a PhD program? Let’s skip the fact that you’ll need an interest in science, good grades in college, and, likely, undergraduate research experience. Also, one difference between science PhDs and other PhDs is that you aren’t expected to get your Master’s degree first. You can apply straight from undergrad, and the idea is that you get your Master’s degree on your way towards the PhD.  If you leave the PhD program at a certain point (usually after you take a qualifying exam), you’ll leave with a Master’s degree. In fact, other than maybe having more research or other experience, there isn’t much of an advantage to getting a Master’s before starting your PhD.

The first step needed before applying for grad school is to take the general GRE exam along with a subject-based GRE exam.  These are standardized tests like the SAT or ACT, but for graduate school.  The subject-based exam feels like the biggest and longest test you’ve ever taken for a particular subject.  I took the Biology subject test (I could have taken the Biochemistry subject test, but I heard it was a lot harder, so I just studied my butt off for the Biology one instead). For most grad schools, these exam scores are critical.  Just like a good score on the SAT can get you into high-ranking colleges, high GRE scores help you get into grad programs at the Harvards and Yales of the world.

Just like undergrad, you have to send in your applications with the ever-important personal statement.  This statement has to talk about why you want to go to grad school, but also why that school and the researchers at that institution are of interest to you.  When I advise current undergrads about choosing a PhD program, the most critical part is to apply to schools that have research labs that do the research that you are interested in.  Once you get into the graduate program, as I’ll talk about in detail in my post on Thursday, you spend years of your life in this research lab so if there isn’t a research lab you like, don’t even bother applying to that school.

After applying, the graduate schools interested in you invite you for an interview.  This isn’t a one-hour, chat-with-a-guidance-counselor type of interview.  This is a weekend of interviews with distinguished faculty grilling you about your undergraduate research (assuming you had some) and asking critical questions to determine how clever you are and whether you’d be a good fit for the school. I went on three interview weekends: Harvard Medical School, Johns Hopkins, and the Watson School of Biological Sciences (WSBS) at Cold Spring Harbor Laboratory (CSHL), where I eventually attended. The CSHL interview was by far the most intense, with over a dozen interviews in one day, including one with Nobel Laureate Jim Watson, who was the chancellor of the lab at the time. My favorite “words of wisdom” from Dr. Watson at that interview were to always select research projects with a 30% chance of success. Less than that and you’d be wasting your time; more than that and the project is too obvious and wouldn’t make a big impact on the field. This may sound a bit masochistic – setting yourself up for likely failure – but this is the life of a scientist!

Usually there are dozens of candidates invited for the interview weekends, so the schools also plan bonding time among the candidates and the current grad students. This could be a dinner out, a party thrown by one of the current grad students, or a trip to NYC to see a Broadway show.  To this day I’m still friends with people I interviewed with, even though we ended up choosing different grad schools.

After the interview, the waiting game begins. I remember the evening that I received the call saying that I was accepted into the CSHL program (the one I really wanted to attend). I was in my dorm room at Boston University when I got a phone call – keep in mind this was before everyone had cell phones, so they called the landline in my room. I thought it was a prank call from my friend Greg, and I told him (more than once) that this wasn’t a funny joke. No joke – the Dean of the school was calling to let me know about my acceptance. I received the official acceptance letter in an email minutes later.


My WSBS Class entering in 2001. I’m the one sitting on the double helix

I actually got into all of the graduate programs that I applied to, which caused a bit of a problem because my dream had always been to attend Harvard. My decision, then, to attend the Watson School was confusing to my parents, who had heard of Harvard but never Cold Spring Harbor Laboratory.  Why was this my choice? The research at CSHL was incredible – every scientist was engaged with their work in a way I had never experienced in my undergraduate career. It was inspirational to think about being a part of that. CSHL had also just started its graduate program – I would be in the third entering class – and the program focused on learning how to learn and how to think in a way that was different from any other graduate program out there (more on that in the next post). I wanted to be a pioneer in this program. And finally, the culture suited me. I went to a large undergraduate institution with classes of 300 people and anonymity amongst thousands of classmates. In graduate school, I wanted to be part of a small class where I could really be challenged and learn from a close-knit group of peers. My WSBS class had six students, including myself, who constantly challenged me to think faster and smarter and become the best scientist that I could be.

 

The Cancer Genome Atlas Project (TCGA): Understanding Glioblastoma

In 2003, Cold Spring Harbor Laboratory (CSHL) and researchers around the world celebrated the 50th anniversary of the discovery of the structure of DNA by Jim Watson and Francis Crick.  I was a graduate student in the Watson School of Biological Sciences at CSHL, named after James Watson, who was the chancellor of CSHL, and in 2003, I participated in (and planned!) some of the 50th anniversary events. Coinciding with this celebration was a meeting about DNA that brought world-renowned scientists and Nobel Prize winners from around the world to CSHL to celebrate how much had been accomplished in 50 years (including sequencing the human genome) and to look to the future for what could be done next. That meeting was the first time I had heard about the Cancer Genome Atlas Project. At this point, the TCGA (as the project was affectionately called) was just a pipe dream – a proposal by the National Cancer Institute and the National Human Genome Research Institute (two institutes in the National Institutes of Health – the NIH).  The idea was to use DNA sequencing and other techniques to understand different types of cancer at the genome level. The goal was to see what changes are happening in these cancer cells that might be exploited to detect or treat these cancers.  I remember that there was a heated debate about whether or not this idea would work. I was actually firmly against it, but now, with the luxury of hindsight, the scientific advances of the TCGA seem to be clearly worth the time and cost.

The first part of the TCGA started in 2006 as a pilot project to study glioblastoma multiforme, lung, and ovarian cancer. In 2009, the project was expanded, and in the end, the TCGA consortium studied 33 cancer types (including 10 rare cancers).  All of the data was made publicly available so that the results could be used by any scientist to better understand these diseases. To accomplish this goal, the TCGA created a network of institutions to provide the tissue for over 11,000 tumor and normal samples (from biobanks, including the one that I currently manage).  These samples were analyzed using techniques like Next Generation Sequencing, and researchers used heavy-duty computing power to put all of the data together. So what did they find? This data has contributed to hundreds of publications, but the one I’m going to talk about today is the results from the analysis of the glioblastoma multiforme tumors.

Title: “Comprehensive genomic characterization defines human glioblastoma genes and core pathways,” published in Nature in October 2008.

Authors: The Cancer Genome Atlas Network

Background: Glioblastoma is a fast-growing, high-grade, malignant brain tumor that is the most common brain tumor found in adults.  The most common treatments are surgery, radiation therapy, and/or chemotherapy (temozolomide). Researchers are also testing new treatments such as NovoTTF, but these have not yet been approved for regular use. However, even with these treatments, the median survival for someone diagnosed with glioblastoma is only ~15 months.  At the time this study was published, little was known about the genetic cause of glioblastoma – a small handful of mutations were known, but nothing comprehensive. Because of the poor prognosis and lack of understanding of this disease, the TCGA targeted it for a full molecular analysis.

Methods: The TCGA requested tissue samples from glioblastoma patients from biobanks around the country. They received 206 samples that were of good enough quality to use for these experiments.  143 of these also had matching blood samples.  Because the DNA changes in the tumor only happen in the tumor, the blood is a good source of normal, unchanged DNA to compare the tumor DNA against. On these samples, the study sites performed a number of different analyses:

  • They looked at the number of copies of each piece of DNA. This is called DNA copy number, and copy number is often changed in tumor cells (see more about what changes in the number of chromosomes can do here, and see the toy sketch after this list for the basic idea).
  • They looked at gene expression.  Genes are what make proteins, which do all of the stuff in your body.  If you have a mutation in a gene, it could change the protein so that it contributes to the development of cancer.
  • They also looked at DNA methylation.  Methylation is a mark that can be added to DNA telling the cell to turn off that part of the DNA.  If there is methylation on a gene that normally stops a cell from growing like crazy, that methylation would turn the gene off and the cell could grow out of control.
  • In a subset of samples, they performed next generation sequencing to know the full sequence of the tumor genomes.
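To make the copy-number idea concrete, here is a toy sketch in Python – not the TCGA pipeline, which is far more sophisticated – comparing how many sequencing reads cover a gene in the tumor versus the matched blood sample and calling a gain or loss from the ratio. The gene list and read counts are invented for illustration.

```python
# Toy copy-number call: compare sequencing coverage of a gene in tumor DNA
# vs. matched normal (blood) DNA. Real pipelines like TCGA's are far more
# sophisticated; the read counts below are invented for illustration.
import math

coverage = {
    # gene: (reads covering it in tumor, reads covering it in normal)
    "EGFR": (900, 300),   # more copies in the tumor -> amplified
    "PTEN": (80, 310),    # fewer copies in the tumor -> deleted
    "ACTB": (295, 305),   # roughly unchanged
}

for gene, (tumor, normal) in coverage.items():
    log2_ratio = math.log2(tumor / normal)  # 0 means same copy number
    if log2_ratio > 0.5:
        call = "amplified"
    elif log2_ratio < -0.5:
        call = "deleted"
    else:
        call = "normal"
    print(f"{gene}: log2(tumor/normal) = {log2_ratio:+.2f} -> {call}")
```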

Results and Discussion: From all of this data, the researchers found quite a bit.

  • Copy number results: There were many differences in copy number, including deletions of genes important for slowing growth and duplications of genes that told the cell to grow more.
  • Gene expression results: Genes that are responsible for cell growth, like the gene EGFR, were expressed more in glioblastoma tumor cells.  This has proven to be an interesting result because there are drugs that inhibit EGFR.  These drugs are currently being tested in the clinic to see if an EGFR inhibitor is a good treatment for patients with a glioblastoma that expresses a lot of EGFR.
  • Methylation results: They found that a gene called MGMT, which is responsible for fixing damaged DNA, was highly methylated.  This methylation was actually beneficial to patients because it made them more sensitive to the most common chemotherapy, temozolomide.  However, this result also suggests that losing MGMT methylation may cause treatment resistance.
  • Sequencing results: From all of the sequencing, they created over 97 million base pairs of data! They found mutations in over 200 human genes. From statistical analysis, seven genes had significant mutations, including a gene called p53, which usually prevents damaged cells from growing but, when mutated, lets cells more easily grow out of control.

This is the summary figure from this paper that shows the three main pathways changed in glioblastoma and the evidence they found to support these genes’ involvement. Each colored circle or rectangle represents a different gene. Blue means that the gene is deleted and red means that there is more of that gene in glioblastoma tumors.

Bringing all of this data together, scientists found three main pathways that lead to cancer in glioblastoma (see the image above for these pathways).  These pathways provide targets for treatment by aiming drugs at specific genes in these pathways. Scientists also identified a new glioblastoma subtype that has improved survival. This is great for patients who find out that they have this subtype!  Changes in the methylation also show how patients could acquire resistance to chemotherapy. Although chemotherapy resistance is bad for the patient, understanding how it happens allows scientists to develop drugs to overcome the resistance based on these specific pathways.

Although this is where the story ended for this article, the TCGA data has been used for many more studies about glioblastoma.  For example, in 2010, TCGA data was used to identify four different subtypes of glioblastoma – Proneural, Neural, Classical, and Mesenchymal – that have helped to tailor the type of treatment used for each group. For example, proneural glioblastoma does not benefit from aggressive treatment, whereas other subtypes do. Other researchers are using the information about glioblastoma mutations to develop new treatments for the disease.

To learn more about the Cancer Genome Atlas Project, check out the article “The Cancer Genome Atlas: an immeasurable source of knowledge,” or watch this video about the clinical implications of the TCGA findings on glioblastoma.

How do we know the genome sequence?

Imagine someone asked you to explain how a car works. Even if you knew nothing about cars, you could take the car apart piece by piece, inspect each piece in your hand and probably draw a pretty good diagram of how a car is put together.  You wouldn’t understand how it works, but you’d have a good start in trying to figure it out.

Now what if someone asked you to figure out how the genome works? You know it’s made of DNA, but it’s the ORDER of the nucleotides that helps to explain how the genome works (remember genes and proteins?). All the time in the news, you hear about a scientist or a doctor who looked at the sequence of a human genome and, from that information, could identify possible causes of a disease or a way to target its treatment. DNA sequencing forms a cornerstone of personalized medicine, but how does this sequencing actually work? How do you take apart the genome like a car so you can start to understand how it works?

As a quick reminder – DNA is made out of four different nucleotides, A, T, G, and C, that are lined up in a specific order to make up the 3 billion nucleotides in the human genome.  DNA looks like a ladder where the rungs are made up of bases that stick to one another: A always sticking to T and G always sticking to C.  Because the pairing is so strict, if you know the sequence that makes up one side of the ladder, you also know the sequence of the other side.
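The pairing rules are so rigid that “knowing the other side of the ladder” is something you can write in a few lines of code. A minimal Python sketch:

```python
# A always pairs with T, and G always pairs with C, so one strand of the
# DNA ladder fully determines the other.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the sequence of the opposite side of the ladder."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATGC"))  # prints TACG
```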


The first commonly used sequencing method is called Sanger sequencing, named after Frederick Sanger, who invented the method in 1977. Sanger sequencing takes advantage of this DNA ladder – the method breaks the ladder in half and, using glowing (fluorescent) nucleotides of different colors, rebuilds the other side one nucleotide at a time. A detector that can distinguish the different fluorescent colors creates an image of these colors that a program then “reads” to give the researcher the sequence of the nucleotides.  These sequences are just long strings of As, Ts, Gs, and Cs that the researcher can analyze to better understand the sequence for their experiments.


This was a revolutionary technique, and when the Human Genome Project started in 1990, Sanger sequencing was the only technique available to scientists. However, this method can only sequence about 700 nucleotides at one time, and even the most advanced machine in 2015 only runs 96 sequencing reactions at one time.  In 1990, using Sanger sequencing, scientists planned on running lots and lots of sequencing reactions at one time, and they expected this effort would take 15 years and cost $3 billion. The first draft of the human genome was published in 2000 through the public effort and a parallel private effort by Celera Genomics, which cost only $300 million and took only 3 years once Celera jumped into the ring in 1998 (why was it cheaper and faster, you ask? They developed a fast “shotgun” method and analysis techniques that sped up the process).

As you may imagine, for personalized medicine, where sequencing a huge part of the genome may be necessary for every man, woman, and child, 3-15 years and $300M-$3B per sequence is not feasible. Fortunately, genome sequencing technology advanced in the 2000s to what’s called Next Generation Sequencing. There are a lot of different versions of Next Gen Sequencing (often abbreviated as NGS), but basically all of them run thousands and thousands of sequencing reactions at the same time. Instead of reading 700 nucleotides at one time as in Sanger sequencing, NGS methods can read up to 3 billion bases in one experiment.
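To get a feel for the difference in scale, here’s the back-of-the-envelope arithmetic in Python, using the numbers from this post (700 bases per Sanger reaction, 96 reactions per run, ~3 billion bases in the genome):

```python
# Back-of-the-envelope comparison using the numbers in this post.
genome_size = 3_000_000_000    # ~3 billion bases in the human genome
bases_per_reaction = 700       # one Sanger sequencing reaction
bases_per_run = 96 * bases_per_reaction  # a full 96-reaction Sanger run

runs_needed = genome_size / bases_per_run
print(f"Sanger runs to cover the genome once: {runs_needed:,.0f}")
# -> roughly 45,000 runs, versus a single NGS experiment that can read
#    up to ~3 billion bases at once.
```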

How does this work? Short DNA sequences are stuck to a slide and replicated over and over. This makes dots of the exact same sequence, and thousands and thousands of these dots are created on one slide. Then, like Sanger sequencing, glowing nucleotides build the other side of the DNA ladder one nucleotide at a time. In this case, though, the surface looks like a confetti of dots that have to be read by a sophisticated computer program to determine the millions of sequences.


So what has this new technology allowed scientists to do? It has decreased the cost of sequencing a genome to around $1000. It has also allowed researchers to sequence large numbers of genomes to better understand the genetic differences between people, to better understand other species’ genomes (including the bacteria that colonize us and the viruses that infect us), and to help determine the genetic changes in tumors to better detect and treat these diseases. Next Generation Sequencing allows doctors to actually use genome sequencing in the clinic. A version of genome sequencing called “exome sequencing” has been developed that only sequences the genes.  Since genes only make up about 1-2% of the genome (roughly 30-60 million bases instead of 3 billion), NGS of the exome takes less time and money but provides lots of information about what some argue is the most important part of the genome – the part that encodes proteins.  Much of the promise of personalized medicine can be found through this revolutionary DNA sequencing technique – and with the cost getting lower and lower, there may be a day soon when you too will have your genome sequenced as part of your medical record.


For more information about the history of sequencing, check out the article “DNA Sequencing: From Bench to Bedside and Beyond” in the journal Nucleic Acids Research.

Here is an amusing short video about how Next Generation Sequencing works described by the most interesting pathologist in the world.

Personalized Medicine: A Cure for HIV

Personalized medicine – finding the right treatment for the right patient at the right time – is quickly becoming a buzzword, both in the medical field and with the public. But is it just hype? No!  I discussed a number of examples of how personalized medicine is currently being used in breast cancer in a previous post. In this and future posts, I’ll talk about a few fascinating emerging examples of the promise of personalized medicine.  These are NOT currently being used for patient treatment as part of the standard of care, but could be someday.


HIV lentivirus

The Human Immunodeficiency Virus (HIV), the cause of AIDS, is a virus that attacks the immune system.  This attack prevents immune cells from fighting other infections.  The result is that the patient is more likely to acquire other infections and cancers that ultimately kill them.  When first discovered in the early 1980s, HIV infection was a death sentence. Untreated, survival is 9 to 11 years.  In the past 30 years, antiviral treatments have been developed that, when taken as prescribed, essentially make HIV infection a chronic disease, extending life to 25-50 years. But there is no cure for HIV, and as of 2012, about 35.3 million people were infected with the virus.

The lack of a vaccine to prevent the disease or of a cure to treat those infected isn’t because no one is trying. Since the virus was identified as the cause of the disease, scientists have been working to find a prevention or cure (along with developing all of the antiretroviral drugs that delay/treat the disease). I’m not going to discuss all of this interesting research (though it is worthy of discussion); instead, I’m going to talk about one patient, Timothy Ray Brown, who was cured of HIV/AIDS through a stroke of genetic understanding and luck!

Brown was HIV positive and had been on antiretroviral therapy for over 10 years when he was diagnosed with leukemia in 2007. His leukemia – Acute Myeloid Leukemia (AML) – is a cancer in which too many abnormal white blood cells accumulate in the bone marrow, which interferes with the creation of red blood cells, platelets, and normal white blood cells. Chemotherapy and radiation are used to treat AML by wiping out all of the cells in the bone marrow – both the cancer cells and the normal cells. Brown’s doctors then replaced the cells in the bone marrow with non-cancerous bone marrow cells from a donor.  This is called a stem cell transplant, and it is commonly used to treat leukemia – often resulting in long-term remission or a cure of the disease.

But the really cool part of this story isn’t the treatment itself.  Rather, it’s that Brown’s doctor selected bone marrow from a donor that had a mutation in the gene CCR5. So what? The CCR5 protein is found on the outside of the cells that the HIV virus infects, and CCR5 is REQUIRED for the most common strains of the virus to get inside the cell, replicate, and kill the cell. Without CCR5, HIV is essentially harmless. There is a deletion mutation in CCR5 called delta32 that prevents HIV from binding to the cell and infecting it.  Blocking HIV from getting into the cell prevents HIV infection.  In fact, it’s been found that some people are naturally resistant to HIV infection because they have this deletion. Two copies of the mutation are found in about 1% of the Caucasian population, and it’s thought that the mutation was selected for because it also prevents smallpox infection.
So Brown’s doctors repopulated his bone marrow with cells that had the CCR5-delta32 mutation.  This didn’t just cure his leukemia; it also prevented HIV from infecting his new blood cells, curing his HIV. He is still cured of HIV today!

What does this mean for others who are infected with HIV? Is a stem cell transplant going to work for everyone?  Unfortunately, no. This mutation is very rare, so finding donors with this mutation isn’t feasible.  Plus, this is a very expensive therapy that comes with risks such as graft-versus-host disease from the mismatch between the person receiving the transplant and the transplanted cells themselves. However, there are possible options for overcoming these challenges, including “gene editing.” In this method, T cells from HIV-positive patients would be removed from the body, and then gene editing would be used to make the CCR5-delta32 mutation in these cells.  These cells could then be re-introduced into the patient.  With the mutation, HIV won’t be able to infect these T cells, which would hopefully cure the disease while avoiding some of the major graft-versus-host side effects. A small clinical trial tested this idea in 2014 (the full article can be found in the New England Journal of Medicine), and HIV couldn’t be detected in one out of the four patients who could be evaluated. Although this is a preliminary study using an older gene-editing technique, it shows promise for “personalized gene therapy” to potentially cure HIV.

What are eye boogers?


Indy doing the “thinking man”

My dog Indy is a boxer mastiff mix.  We named him after the scene in Indiana Jones and the Last Crusade when Indiana’s dad, played by Sean Connery, points out that Jones’ actual name was “Henry Jones Junior. We named the dog Indiana.”  So we named the dog Indiana too! He’s a huge hunk of a dog at 65 pounds, but it’s balanced out by him being sweet-tempered and a total snuggle-bug. We rescued him nearly a year ago from Boxer Luv Rescue (support them, they are awesome!). He was found on the border of Arizona and Mexico with a skin condition (likely mange) and entropion in both eyes.  Entropion causes the eyelid to roll inward and irritate the eye, so he had surgeries on both eyes before we got him to try to fix this problem.  The surgery didn’t fix it 100%, so we use some lubricating gel to help soothe his eyes when needed. The only way we know there’s something wrong with his eyes is that he wakes up every morning with enormous eye boogers!  I’m not talking about the normal crusties that you have in your eye and quickly pick away each morning (my family used to call them “sleepy seeds”).  I’m talking about globs and globs of gunk that I carefully wipe out every morning, and sometimes again in the afternoon, with a wet washcloth.

Because of my now intimate and frequent involvement with eye boogers, I started thinking about what these actually are, what they are made of, and why they are sometimes crusty, sometimes gunky and sometimes just plain disgusting. To answer these most important questions, I went and did some research!

Thanks to http://www.refreshbrand.com/dryeye/dry-item/tear-film for the image


Let’s start by talking about what is in your eyes besides your eyeballs.  Your eyes are protected by the “tear film,” which is made up of an outer oily layer, a middle water layer, and an internal mucus layer.  This is actually really cool if you think about it. Imagine you have a cup of water with a thin layer of oil over it.  The oil will slow down evaporation of the water, just like the oily layer of the tear film.  It will also prevent stuff from getting into the water. These are two of the tear film’s major jobs: keeping the eye moist and removing debris, which happens while blinking.  The oil also acts as a lubricant to make blinking easier.  The mucus layer, closest to the eye, also helps prevent debris from reaching the eye because it’s super sticky (think of it as the flypaper of the eye): it traps foreign particles so that they can be removed from the eye by tears. The tear film also helps make your vision clear by making the surface of the eye smooth so that it refracts light properly, and it protects against infection (because it contains antibacterial substances).

The official name for eye boogers is “rheum.”  More specifically, eye boogers are one type of rheum, which is the discharge that comes from the eyes, nose, and mouth during sleep. This discharge is made up of mucus, oil, dead skin cells, and other debris (like dust) – in other words, the discharge is made up of that tear film that protects your eyes throughout the day. So why does it gunk up at night?  Because at night you aren’t blinking, so the rheum isn’t being washed away by your tears. Instead, the rheum collects in the eye and crusts or gunks up in the corner of the eye.

Why are eye boogers sometimes crusty and other times (like in Indy’s case) completely goopy?  The wetness or dryness of the eye boogers can be different depending on how much of the moisture has evaporated from the tear film.  So with Indy, his eyes have more discharge because of the eye irritation and this accumulates into a goopy mess because the water isn’t evaporating from the rheum when he sleeps.

Even though eye boogers change from night to night, there are medical reasons why you may have more or less eye boogers.  These changes in color or consistency could be an indication of a problem – such as dry eyes, an eye infection, a clogged tear duct, allergies or other eye irritation. So if your eyes are crusted shut or if your eye boogers are green, you can first tell your friends how eye boogers are made, and then get yourself to a doctor.

When I talked to my Mom about this post (she gets sneak previews because she’s my mom), her main question was whether dog eye boogers and people eye boogers are the same.  From some limited research, I found that dogs also have a tear film composed of the same layers that humans have, serving the same purpose.  Different parts of the dog’s eye anatomy create these layers compared to humans, but otherwise, it seems similar in concept. There are lots of articles about dog eye boogers (usually officially referred to as “eye discharge”), and many of the same problems, plus a few others, affect both dogs and people to cause abnormal eye boogers.  The same advice applies to people and dogs – if there are more eye boogers than usual or they start changing color, bring the dog to the vet to check it out.

Some of the other articles that have covered this extremely important topic:
Are yours crusty or wet? The truth behind eye boogers (ew)
Why do we get sleep in our eyes?
Where do eye boogers come from?
What are ‘eye crusties’ made of?

Book Club: The Immortal Life of Henrietta Lacks


Thanks to Wikipedia for the image

In 2002, one of my first sets of experiments in graduate school was treating a prostate cancer cell line (named DU145) with a chemotherapeutic drug and comparing how these cells responded to how HeLa cells responded to the same chemotherapy. Little did I realize at the time that 51 years earlier, these cells had been removed from a poor black woman named Henrietta Lacks without her even knowing. She subsequently died, but her cells have lived on for over 60 years, being used by researchers around the world to better understand cancer. It’s estimated that over 60,000 research papers have used HeLa cells (I just searched the literature for “HeLa” and found over 83,000 results). HeLa cells helped to develop the polio vaccine (HeLa cells were easily infected by polio, and therefore ideal for testing the vaccine).  In 2013, HeLa cells became the first cell line to have their genome fully sequenced (the genome of HeLa cells is a hot mess, with more than 5 copies of some chromosomes – likely caused by the number of times the cells have divided over the past 60 years).  In fact, HeLa cells are so popular and so widespread that they have been found to be contaminating a large percentage of the OTHER cell lines that researchers are using (for example, the bladder cancer cell line KU7 was found to be exclusively HeLa cells in one research lab).

With all of this activity surrounding HeLa cells, you may think that she is famous and her family has received recognition for her contribution.  However, as so artfully described in Rebecca Skloot’s “The Immortal Life of Henrietta Lacks,” these cells were taken and grown without her consent, and her family had no idea that Henrietta was “immortal” through her cells growing in labs around the world. Skloot describes the moral and ethical issues surrounding how these cells were obtained while weaving a story about Henrietta Lacks, her family’s life, and their discovery of the HeLa cells’ fascinating rise to prominence.  Although the story is interesting to a scientist and a biobanker, the book is definitely written in such a way that the public will completely understand the scientific significance.

Growing tumors outside the body to kill the tumor still inside

To understand how to kill a tumor, you have to study the tumor. Historically, much of how scientists understand tumors comes from removing a tumor from a patient’s body, putting it in a plastic dish (called a petri dish), and studying whatever cells grow in this dish. You may be familiar with the book “The Immortal Life of Henrietta Lacks” by Rebecca Skloot. This book talks about HeLa cells, which are cells that were taken from Henrietta’s cervical cancer, grown in a dish, and propagated for the past 60+ years as what is called a “cell line.”  These cells grow and divide indefinitely, and have been propagated and transferred from lab to lab to be studied.  HeLa cells are one of the most famous and most-researched cells and have helped scientists better understand cancer. HeLa cells are not the only cell line that exists or has been used to study cancer.  There are cell lines from lung cancer tumors, prostate cancer, brain cancer, and most other major cancers. However, there are a few problems with using cell lines to understand and treat cancer.

  1. Cell lines are EXTREMELY hard to create.  As you may imagine, a plastic dish is nothing like the environment inside the body that the tumor was removed from.  In the petri dish the cells are put into “media,” the liquid that is used to feed the cells, and this media is also nothing like the nutrients and other growth factors feeding the tumor inside the body. Because of this unnatural environment, some of the tumor cells die – and in many cases most or all of the tumor cells die.
  2. The cells that are left in the petri dish do not accurately represent the tumor anymore. A tumor isn’t a whole bunch of identical cells; rather, a tumor contains a lot of genetically different cells.  Scientists call this tumor heterogeneity. This is one of the reasons why drug-resistant cells emerge after treating a tumor with drugs (like in the case of melanoma described in a previous post).  There are already drug-resistant cells inside the tumor that don’t die when treated with the drug.  Unfortunately, not all of these different cells in the tumor will live in a petri dish, so only a selected type or types of cells will survive and can be studied.
  3. Even though cell lines have been the most useful tool in the past for understanding cancer biology, they are not at all useful for understanding the EXACT tumor from a particular person. What does this mean? For example, drugs that kill HeLa cells in a petri dish might not work to kill another person’s cervical cancer because the genetic cause of that cervical cancer is different. In personalized medicine, the goal is to identify the drugs that will work to kill a particular patient’s tumor. Because of this, cell lines just aren’t good enough.

Scientists have been working on a number of solutions, and I’ll talk about four:

  1. Biobanking. A biobank collects excess tumor tissue from patients who are having a tumor removed as part of a surgery.  This tissue is immediately preserved by freezing (it sits in liquid nitrogen freezers until researchers use it) and can then be used to study that particular tumor or many tumors of a particular type (e.g., lung cancer).  The disadvantage is that the tumor sample isn’t an unlimited resource. Once the tissue has been used up – it’s gone. The remaining examples all focus on growing the tumor tissue so that it can be propagated and used for many experiments.

  2. Modified cell line growth. HeLa cells were not grown in any special way, but researchers at Georgetown University have found ways to grow tumor cells in a petri dish so that they stay identical to the tumor, and nearly all tumors can grow under these conditions. So what are these conditions?  The researchers grow the cells on top of a layer of mouse cells called feeder cells, because they provide the cell-based nutrients to “feed” the tumor and allow it to grow.  They also use a particular inhibitor that allows the cells to grow indefinitely. They have created these modified cell lines from different types of tumors, from frozen biobanked tumors, and from as few as 4 live cells.  Even though this system is better, it still doesn’t replicate the 3D architecture of a tumor…
  3. Organoids. As you would expect the word to mean, an organoid is a mini 3D organ bud grown in a dish. Don’t imagine a teeny tiny beating heart.  These organoids are just clumps of cells, but organized clumps of cells that can help better understand cells and organs (picture 3D clumps of cells after 217 days of growth, as in the Kuo lab’s images). The discovery of how to create organoids was so interesting that it was named a 2013 Big Advance of the Year by The Scientist magazine. Scientists have also found a way to grow cancer cells into these 3D organoid structures. With tumor organoids, researchers can both study the genetics of the tumor (like you can with cell lines) as well as how the tumor behaves in a 3D environment that is more similar to what the tumor encounters in the body.  But what if we could do even better?

  4. Patient-derived xenografts. These are created when tumor tissue is taken directly from a patient’s tumor and put directly into a mouse.  Why would this be so awesome? The environment inside a mouse is more similar to the environment the tumor is used to inside a person’s body.  The cells are less likely to die because they aren’t living on unnatural plastic. Also, a whole piece of tumor can be implanted into the mouse, maintaining the tumor cells’ connections to neighboring cells, which are critical for the tumor cells to communicate with one another for survival.

With all of these systems available to study tumors from a specific patient, what are scientists actually doing with these cells? In some cases, they are being used to sequence the genomes of the tumors to identify mutations that may be causing the tumor. If a tumor can be grown so that there is a lot of it, the tumor cells themselves can also be used to test treatments, either in a dish or inside a mouse. Imagine a cancer patient getting their tumor removed, and part of the tumor being grown in one of the ways described above. The tumor is then exposed to the top 10, or 50, or 100 anti-tumor drugs or combinations of drugs to see what kills it. The drug or combo of drugs that works best can then be used to treat the patient. There are companies currently working on doing exactly this (check out Champions Oncology), so this “big dream” may soon become a cancer patient’s more promising reality.
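As a thought experiment, the screening logic is simple enough to sketch in a few lines of Python. This is a toy illustration only – the drug names and kill fractions below are invented, and real ex vivo screens involve far more replicates, controls, and statistics.

```python
# Toy sketch of an ex vivo drug screen: expose samples of a patient's grown
# tumor to a panel of drugs and rank treatments by the fraction of tumor
# cells killed. All names and numbers are invented for illustration.
screen_results = {
    "drug_A": 0.92,           # fraction of tumor cells killed
    "drug_B": 0.15,
    "drug_C": 0.78,
    "drug_A + drug_C": 0.97,  # combinations can be screened too
}

ranked = sorted(screen_results.items(), key=lambda kv: kv[1], reverse=True)
for treatment, killed in ranked:
    print(f"{treatment}: {killed:.0%} of tumor cells killed")

best_treatment, _ = ranked[0]
print(f"Top candidate to discuss with the oncologist: {best_treatment}")
```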

 

Why do nerds wear glasses?


Me in the lab 8 years ago

I remember my junior year of high school, leaning over the boy who sat in front of me in class to copy his notes because I couldn’t see the board. Yes, I did think he was cute, but in this case, I just really wanted to know what the teacher was writing.  That’s when I first got glasses.  My vision dive-bombed from there, and by the time I got LASIK two years ago, I had a -4 prescription with astigmatism in both eyes. My Dad has always had glasses, and my sister had glasses, but her eyes corrected themselves over time (lucky duck). My mom just started wearing glasses full time last Friday.  She’s had readers for a while, but now she needs them to see both far and close.  Because glasses were so new to my mom, we spent a lot of time on our daily phone call Friday talking about them, and she ended our discussion by exclaiming, “Cathy, you should find out why so many smart people wear glasses. Is there a reason for it or is it just a stereotype that nerds always wear glasses? You should write about this on your blog.” I didn’t have high hopes when I started looking into this, but actually, there are a lot of scientific papers on this topic.  Is there anything to the stereotype of nerds wearing glasses, and if so, why?

One of the more recent studies exploring this topic comes from the Gutenberg Health Study out of Germany.  Started in 2007, it is studying cardiovascular diseases, cancer, eye diseases, metabolic diseases, diseases of the immune system, and mental diseases in over 15,000 German subjects.  The researchers want to understand how genetics and the environment contribute to these diseases. Looking at 4,800 of these subjects between the ages of 35 and 74, they found that nearsightedness correlated with the amount of time spent in school: 53% of college graduates were nearsighted versus only 24% of people who’d dropped out of high school. This result was mirrored in a study in the United Kingdom of over 100,000 subjects: 27% of the people studied had nearsightedness, and it was more common amongst those with higher education. A study looking at people throughout Europe found the same. So there is actual scientific evidence that people with more education are more likely to wear glasses.


Thanks Wikipedia for the image

Does this mean that nearsightedness makes you smarter? Or does a person develop nearsightedness because they are studying? Or is it genetic?  Let’s start with the last option first.  The Gutenberg study looked at 45 genetic markers and found that they were only weakly associated with nearsightedness. So there likely is a genetic component, but it’s not well understood.  What has been shown more conclusively is that lack of light is highly correlated with nearsightedness.  In other words, the more time spent indoors, the more likely you’ll need glasses.  Studies have looked at whether adding more light to classrooms decreases nearsightedness, and in fact it does! (See this study in China and this one from Australia.) Lack of light may actually cause a person to have to wear glasses!

But why would the lack of outdoor light cause nearsightedness? One option may be that it’s because kids are inside looking at screens or reading.  This could then stress the eyes or affect proper eye development. One study did find that time spent doing “close work” like reading or writing correlates with the need for glasses.  In this same study, they didn’t find any correlation between being indoors staring at the TV or iPad and nearsightedness – so it’s not the screen time, it’s the studying!

Another study looked at Vitamin D levels (Vitamin D is created by the interaction of ultraviolet B light with other chemicals in the skin) and found that people who were nearsighted were also more likely to have lower Vitamin D levels.  This is something you might expect if these folks also don’t go outside as often (because they’re too busy inside reading or doing homework???). But whether Vitamin D deficiency causes nearsightedness, or whether taking more Vitamin D could “cure” nearsightedness, is another matter – and totally unknown.

So why do nerds wear glasses? Scientists can’t say exactly yet, but it’s likely a combination of genetic and environmental factors that are just beginning to be understood. Until then, whether you are nearsighted or not, or smart or not, you may still want to wear glasses: in a British study, over 40% of people perceived that wearing glasses makes a person look smarter and more professional.  I wonder, now that I’ve had LASIK, if I should get myself a pair of fake glasses?  Just in case I need to “look smart.”