The Cancer Genome Atlas Project (TCGA): Understanding Glioblastoma

In 2003, Cold Spring Harbor Laboratory (CSHL) and researchers around the world celebrated the 50th anniversary of the discovery of the structure of DNA by Jim Watson and Francis Crick.  I was a graduate student in the Watson School of Biological Sciences at CSHL, named after James Watson, who was the chancellor of CSHL, and in 2003, I participated in (and planned!) some of the 50th anniversary events. Coinciding with this celebration was a meeting about DNA that brought world-renowned scientists and Nobel Prize winners from around the world to CSHL to celebrate how much had been accomplished in 50 years (including sequencing the human genome) and to look to the future for what could be done next. That meeting was the first time I heard about the Cancer Genome Atlas Project. At this point, the TCGA (as the project was affectionately called) was just a pipe dream – a proposal by the National Cancer Institute and the National Human Genome Research Institute (two institutes in the National Institutes of Health – the NIH).  The idea was to use DNA sequencing and other techniques to understand different types of cancer at the genome level. The goal was to see what changes were happening in these cancer cells that might be exploited to detect or treat these cancers.  I remember that there was a heated debate about whether or not this idea would work. I was actually firmly against it, but now with the luxury of hindsight, the scientific advances of the TCGA seem to be clearly worth the time and cost.

The first part of the TCGA started in 2006 as a pilot project to study glioblastoma multiforme, lung, and ovarian cancer. In 2009, the project was expanded, and in the end, the TCGA consortium studied 33 cancer types (including 10 rare cancers).  All of the data was made publicly available so that the results could be used by any scientist to better understand these diseases. To accomplish this goal, the TCGA created a network of institutions to provide the tissue for over 11,000 tumor and normal samples (from biobanks including the one that I currently manage).  These samples were analyzed using techniques like Next Generation Sequencing, and researchers used heavy-duty computing power to put all of the data together. So what did they find? This data has contributed to hundreds of publications, but the one I’m going to talk about today reports the results from the analysis of the glioblastoma multiforme tumors.

Title: “Comprehensive genomic characterization defines human glioblastoma genes and core pathways,” published in Nature in October 2008.

Authors: The Cancer Genome Atlas Network

Background: Glioblastoma is a fast-growing, high-grade, malignant brain tumor that is the most common malignant brain tumor found in adults.  The most common treatments are surgery, radiation therapy, and/or chemotherapy (temozolomide). Researchers are also testing new treatments such as NovoTTF, but these have not yet been approved for regular use. However, even with these treatments, the median survival for someone diagnosed with glioblastoma is only ~15 months.  At the time that this study was published, little was known about the genetic cause of glioblastoma – a small handful of mutations were known, but nothing comprehensive. Because of the poor prognosis and lack of understanding of this disease, the TCGA targeted it for a full molecular analysis.

Methods: The TCGA requested tissue samples from glioblastoma patients from biobanks around the country. They received 206 samples that were of good enough quality to use for these experiments.  143 of these also had matching blood samples.  Because the DNA changes driving the tumor happen only in the tumor cells, blood is a good source of normal, unchanged DNA to compare the tumor DNA to. On these samples, the study sites performed a number of different analyses:

  • They looked at the number of copies of each piece of DNA. This is called DNA copy number, and copy number is often changed in tumor cells (see more about what changes in the number of chromosomes can do here)
  • They looked at gene expression.  Genes are what make proteins, which do all of the work in your body.  If you have a mutation in a gene, it could change the protein so that it contributes to the development of cancer.
  • They also looked at DNA methylation.  Methylation is a mark that can be added to the DNA telling the cell to turn off that part of the DNA.  If there is methylation on a gene that normally stops a cell from growing like crazy, that methylation would turn that gene off and the cell could grow out of control.
  • In a subset of samples, they performed next generation sequencing to know the full sequence of the tumor genomes.

Results and Discussion: From all of this data, the researchers found  quite a bit.

  • Copy number results: There were many differences in copy number, including deletions of genes important for slowing growth and duplications of genes that told the cell to grow more.
  • Gene expression results: Genes that are responsible for cell growth, like the gene EGFR, were expressed more in glioblastoma tumor cells.  This has proven to be an interesting result because there are drugs that inhibit EGFR.  These drugs are currently being tested in the clinic to see if inhibiting EGFR is a good treatment for patients with a glioblastoma that expresses a lot of EGFR.
  • Methylation results: They found that a gene called MGMT, which is responsible for repairing damaged DNA, was highly methylated.  This methylation was actually beneficial to patients because it made their tumors more sensitive to the most common chemotherapy, temozolomide.  However, this result also suggests that losing MGMT methylation may cause treatment resistance.
  • Sequencing results: From all of the sequencing, they created over 97 million base pairs of data! They found mutations in over 200 human genes. From statistical analysis, seven genes had significant mutations, including a gene called p53, which usually prevents damaged cells from growing but, when mutated, lets the cell more easily grow out of control.
[Figure: glioblastoma pathways]

This is the summary figure from this paper that shows the three main pathways changed in glioblastoma and the evidence they found to support these genes’ involvement. Each colored circle or rectangle represents a different gene. Blue means that the gene is deleted and red means that there is more of that gene in glioblastoma tumors.

Bringing all of this data together, scientists found three main pathways that lead to cancer in glioblastoma (see the image above for these pathways).  These pathways provide opportunities for treatment by directing drugs at specific genes within them. Scientists also identified a new glioblastoma subtype that has improved survival. This is great for patients who find out that they have this subtype!  Changes in the methylation also show how patients could acquire resistance to chemotherapy. Although chemotherapy resistance is bad for the patient, understanding how it happens allows scientists to develop drugs to overcome the resistance based on these specific pathways.

Although this is where the story ended for this article, the TCGA data has been used for many more studies about glioblastoma.  For example, in 2010, TCGA data was used to identify four different subtypes of glioblastoma – Proneural, Neural, Classical, and Mesenchymal – that have helped to tailor the type of treatment used for each group. For example, proneural glioblastoma does not benefit from aggressive treatment, whereas other subtypes do. Other researchers are using the information about glioblastoma mutations to develop new treatments for the disease.

To learn more about the Cancer Genome Atlas Project, check out this article “The Cancer Genome Atlas: an immeasurable source of knowledge” in the journal, or watch this video about the clinical implications of the TCGA findings about glioblastoma.

How do we know the genome sequence?

Imagine someone asked you to explain how a car works. Even if you knew nothing about cars, you could take the car apart piece by piece, inspect each piece in your hand and probably draw a pretty good diagram of how a car is put together.  You wouldn’t understand how it works, but you’d have a good start in trying to figure it out.

Now what if someone asked you to figure out how the genome works? You know it’s made of DNA, but it’s the ORDER of the nucleotides that helps to understand how the genome works (remember genes and proteins?). All the time in the news, you hear about a scientist or a doctor who looked at the sequence of a human genome and from that information could identify possible causes of a disease or a way to target its treatment. DNA sequencing forms a cornerstone of personalized medicine, but how does this sequencing actually work? How do you take apart the genome like a car so you can start to understand how it works?

As a quick reminder – DNA is made out of four different nucleotides, A, T, G, and C, that are lined up in a specific order to make up the 3 billion nucleotides in the human genome.  DNA looks like a ladder where the rungs are made up of bases that stick to one another: A always sticking to T and G always sticking to C.  Since A always sticks to T and G always sticks to C, if you know the sequence that makes up one side of the ladder, you also know the sequence of the other side.

[Image: the DNA ladder]
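To make the base-pairing rule concrete, here is a minimal sketch in Python (the example sequence is made up) showing how knowing one side of the ladder gives you the other:

```python
# The base-pairing rule: A pairs with T, and G pairs with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(sequence: str) -> str:
    """Return the complementary strand for one side of the DNA ladder."""
    return "".join(PAIRS[base] for base in sequence.upper())

print(complement_strand("ATTGC"))  # prints TAACG
```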

The first commonly used sequencing method is called Sanger sequencing, named after Frederick Sanger, who invented the method in 1977. Sanger sequencing takes advantage of this DNA ladder – the method splits the ladder in half and, using glowing (fluorescent) nucleotides of different colors, rebuilds the other side of the ladder one nucleotide at a time. A detector reads the different fluorescent colors and creates an image of these colors that a program then “reads” to give the researcher the sequence of the nucleotides (see the image below to see what this looks like).  These sequences are just long strings of As, Ts, Gs, and Cs that the researcher can analyze to better understand the sequence for their experiments.

[Image: Sanger sequencing readout]

This was a revolutionary technique, and when the Human Genome Project started in 1990, Sanger sequencing was the only technique available to scientists. However, this method can only sequence about 700 nucleotides at one time, and even the most advanced machine in 2015 only runs 96 sequencing reactions at one time.  In 1990, using Sanger sequencing, scientists planned on running lots and lots of sequencing reactions at one time, and they expected this effort would take 15 years and cost $3 billion. The first draft of the human genome was published in 2000 through a public effort and a parallel private effort by Celera Genomics that cost only $300 million and took only 3 years once they jumped into the ring in 1998 (why was it cheaper and faster, you ask? They developed a fast “shotgun” method and analysis techniques that sped up the process).

As you may imagine, for personalized medicine, where sequencing a huge part of the genome may be necessary for every man, woman, and child, 3-15 years and $300M-$3B per sequence is not feasible. Fortunately, genome sequencing technology advanced in the mid-2000s to what’s called Next Generation Sequencing. There are a lot of different versions of Next Gen Sequencing (often abbreviated as NGS), but basically all of them run thousands and thousands of sequencing reactions all at the same time. Instead of reading 700 nucleotides at a time as in Sanger sequencing, NGS methods can read up to 3 billion bases in one experiment.

How does this work? Short DNA sequences are stuck to a slide and replicated over and over. This makes dots of the exact same sequence, and thousands and thousands of these dots are created on one slide. Then, like Sanger sequencing, glowing nucleotides build the other side of the DNA ladder one nucleotide at a time. In this case, though, the surface looks like a confetti of dots that have to be read by a sophisticated computer program to determine the millions of sequences.

[Image: Next Generation Sequencing]

So what has this new technology allowed scientists to do? It has decreased the cost of sequencing a genome to around $1000. It has also allowed researchers to sequence large numbers of genomes to better understand the genetic differences between people, to better understand other species’ genomes (including the bacteria that colonize us or the viruses that infect us), and to help determine the genetic changes in tumors to better detect and treat these diseases. Next Generation Sequencing allows doctors to actually use genome sequencing in the clinic. A version of genome sequencing has been developed called “exome sequencing” that only sequences the genes.  Since genes only make up about 1-2% of the genome, NGS of the exome takes less time and money but provides lots of information about what some argue is the most important part of the genome – the part that encodes proteins.  Much of the promise of personalized medicine can be found through this revolutionary DNA sequencing technique – and with the cost getting lower and lower, there may be a day soon when you too will have your genome sequenced as part of your medical record.


For more information about the history of sequencing, check out this article “DNA Sequencing: From Bench to Bedside and Beyond” in the journal Nucleic Acids Research.

Here is an amusing short video about how Next Generation Sequencing works described by the most interesting pathologist in the world.

Personalized Medicine: A Cure for HIV

Personalized Medicine – finding the right treatment for the right patient at the right time – is quickly becoming a buzzword both in the medical field and with the public. But is it just hype? No!  I discussed a number of examples of how personalized medicine is currently being used in breast cancer in a previous post. In this and future posts, I’ll talk about a few fascinating emerging examples of the promise of personalized medicine.  These are NOT currently being used for patient treatment as part of standard of care, but could be someday.

[Image: the HIV lentivirus]

The Human Immunodeficiency Virus (HIV), the cause of AIDS, is a virus that attacks the immune system.  This attack prevents immune cells from fighting other infections.  The result of this is that the patient is more likely to acquire other infections and cancers that ultimately kill them.  When first discovered in the early 1980s, HIV infection was a death sentence. Untreated, survival is 9 to 11 years.  In the past 30 years, antiviral treatments have been developed that, when taken as prescribed, essentially make HIV infection a chronic disease, extending life to 25-50 years. But there is no cure for HIV, and as of 2012, over 35.3 million people were infected with the virus.

The lack of a vaccine to prevent the disease or of a cure to treat those infected isn’t because no one is trying. Since the virus was identified as the cause of the disease, scientists have been working to find a prevention or cure (along with developing all of the antiretroviral drugs that delay/treat the disease). I’m not going to discuss all of this interesting research (though it is worthy of discussion), instead I’m going to talk about one patient, Timothy Ray Brown, who was cured of HIV/AIDS through a stroke of genetic understanding and luck!

Brown was HIV positive and had been on antiretroviral therapy for over 10 years when he was diagnosed with leukemia in 2007. His leukemia – Acute Myeloid Leukemia (AML) – is a cancer in which abnormal white blood cells build up in the bone marrow, which interferes with the creation of red blood cells, platelets, and normal white blood cells. Chemotherapy and radiation are used to treat AML by wiping out all of the cells in the bone marrow – both the cancer cells and the normal cells. Brown’s doctors then replaced the cells in the bone marrow with non-cancerous bone marrow cells from a donor.  This is called a stem cell transplant, and it is commonly used to treat leukemia – often resulting in long-term remission or a cure of the disease.

But the really cool part of this story isn’t the treatment itself.  Rather, it’s that Brown’s doctor selected bone marrow from a donor that had a mutation in the gene CCR5. So what? The CCR5 protein is found on the outside of the cells that the HIV virus infects. CCR5 is REQUIRED for the virus to get inside the cell, replicate, and kill the cell. Without CCR5, HIV is harmless. There is a deletion mutation in CCR5 called delta32 that prevents HIV from binding to the cell and infecting it.  Blocking HIV from getting into the cell prevents HIV infection.  In fact, it’s been found that some people are naturally resistant to HIV infection because they have this deletion. Two copies of the deletion are found in about 1% of the Caucasian population, and it’s thought that this mutation was selected for because it also prevents smallpox infection.
So Brown’s doctors repopulated his bone marrow with cells that had the CCR5-delta32 mutation.  This didn’t just cure his leukemia, but it also prevented the HIV from infecting his new blood cells, curing his HIV. He is still cured of HIV today!

What does this mean for others who are infected with HIV? Is a stem cell transplant going to work for everyone?  Unfortunately, no. This mutation is very rare, so finding donors with this mutation isn’t feasible.  Plus, this is a very expensive therapy that comes with risks such as graft-versus-host disease from the mismatch between the person receiving the transplant and the transplanted cells themselves. However, there are possible options for overcoming these challenges, including “gene editing.” In this method, T cells from HIV-positive patients would be removed from the body and then gene editing would be used to make the CCR5-delta32 mutation in these cells.  These cells could then be re-introduced into the patient.  With the mutation, HIV won’t be able to infect these T cells, which would hopefully cure the disease, while avoiding some of the major graft-versus-host side effects. A small clinical trial tested this idea in 2014 (the full article can be found in the New England Journal of Medicine), and HIV couldn’t be detected in one out of four patients who could be evaluated. Although this is a preliminary study using an older gene-editing technique, it shows promise for “personalized gene therapy” to potentially cure HIV.

Growing tumors outside the body to kill the tumor still inside

To understand how to kill a tumor, you have to study the tumor. Historically, much of how scientists understand tumors comes from removing a tumor from a patient’s body, putting it in a plastic dish (called a petri dish), and studying whatever cells grow in this dish. You may be familiar with the book “The Immortal Life of Henrietta Lacks” by Rebecca Skloot. This book talks about HeLa cells, which are cells that were taken from Henrietta’s cervical cancer, grown in a dish, and propagated for the past 60+ years as what is called a “cell line“.  These cells grow and divide indefinitely, and have been propagated and transferred from lab to lab to be studied.  HeLa cells are one of the most famous and most-researched cell lines and have helped scientists better understand cancer. HeLa cells are not the only cell line that exists or has been used to study cancer.  There are cell lines from lung cancer tumors, prostate cancer, brain cancer, and most other major cancers. However, there are a few problems with using cell lines to understand and treat cancer.

  1. Cell lines are EXTREMELY hard to create.  As you may imagine, a plastic dish is nothing like the environment inside the body that the tumor was removed from.  In the petri dish the cells are put into “media,” the liquid that is used to feed the cells in the petri dish, and this media is also nothing like the nutrients and other growth factors feeding the tumor inside the body. Because of this unnatural environment, some of the tumor cells die – and in many cases most or all of the tumor cells die.
  2. The cells that are left in the petri dish do not accurately represent the tumor anymore. A tumor isn’t a whole bunch of identical cells; rather, a tumor contains a lot of genetically different cells.  Scientists call this tumor heterogeneity. This is one of the reasons why drug-resistant cells emerge after treating a tumor with drugs (like in the case of melanoma described in a previous post).  There are already drug-resistant cells inside the tumor that don’t die when treated with drug.  Unfortunately, not all of these different cells in the tumor will live in a petri dish, so only a selected type or types of cells will live and can be studied.
  3. Even though cell lines have been the most useful tool in the past to understand cancer biology, they are not at all useful in understanding the EXACT tumor from a particular person. What does this mean? For example, drugs that kill HeLa cells in a petri dish might not work to kill another person’s cervical cancer because the genetic cause of that cervical cancer is different. In personalized medicine, the goal is to identify the drugs that will work to kill a particular patient’s tumor. Because of this, cell lines just aren’t good enough.

Scientists have been working on a number of solutions, and I’ll talk about four:

  1. Biobanking. A biobank collects excess tumor tissue from patients who are having a tumor removed as part of a surgery.  This tissue is immediately preserved by freezing and can then be used by researchers to study that particular tumor or many tumors of a particular type (e.g., lung cancer).  The disadvantage to this is that the tumor sample isn’t an unlimited resource. Once the tissue has been used up – it’s gone. The remaining examples all focus on growing the tumor tissue so that it can be propagated and used for many experiments.

    [Image: liquid nitrogen freezers – where tumor tissue is stored in a biobank before researchers use it]

  2. Modified cell line growth. HeLa cells were not grown in any special way, but researchers at Georgetown University have found ways to grow tumor cells in a petri dish that are identical to the tumor, and nearly all tumors can grow under these conditions. So what are these conditions?  The researchers grow cells on top of a layer of mouse cells called feeder cells because they provide the cell-based nutrients to “feed” the tumor and allow it to grow.  They also use a particular inhibitor that allows the cells to grow indefinitely. They have created these modified cell lines from different types of tumors, from frozen biobanked tumors, and from as few as 4 live cells.  Even though this system is better, it still doesn’t replicate the 3D architecture of a tumor…
  3. Organoids. As you would expect the word to mean, an organoid is a mini 3D organ bud grown in a dish. Don’t imagine a teeny tiny beating heart.  These organoids are just clumps of cells, but an organized clump of cells that can help better understand cells and organs. The discovery of how to create organoids was so interesting that it was named a 2013 Big Advance of the Year by The Scientist magazine. Scientists have also found a way to grow cancer cells into these 3D organoid structures. With tumor organoids, researchers can both study the genetics of the tumor (like you can with cell lines) as well as how the tumor behaves in a 3D environment that is more similar to what the tumor encounters in the body.  But what if we could do even better?

    [Image: cancer organoids – notice the 3D clumps of cells after 217 days of growth. Thanks to the Kuo lab for the image]

  4. Patient-derived xenografts are when tumor tissue is taken directly from a patient’s tumor and put directly into a mouse.  Why would this be so awesome? The environment inside a mouse is more similar to the environment that the tumor is used to inside a person’s body.  The cells are less likely to die because they aren’t living on unnatural plastic. Also, a whole piece of tumor can be implanted into the mouse, maintaining the tumor cells’ connections to neighboring cells, which are critical for the tumor cells to communicate with one another for survival.

With all of these systems available to study tumors from a specific patient, what are scientists actually doing with these cells? In some cases, they are being used to sequence the genomes of the tumors to identify mutations that may be causing the tumor. If a tumor can be grown so that there is a lot of it, the tumor cells themselves can also be used to test treatments, either in a dish or inside of a mouse. Imagine a cancer patient getting their tumor removed, and part of the tumor is grown in one of the ways described above. Then the tumor is exposed to the top 10, or 50, or 100 anti-tumor drugs or combinations of drugs to see what kills the tumor. This drug or combo of drugs can then be used to treat the patient. There are companies that are currently working on doing exactly this (check out Champions Oncology), so this “big dream” may soon become a cancer patient’s more promising reality.


How scientists “cured” melanoma

When talking about Personalized Medicine, one of the recent shining examples of this concept in practice is in the treatment of melanoma. Melanoma is a cancer of the pigment cells called melanocytes and is most commonly diagnosed as a skin cancer. The prognosis for melanoma is dismal when it is caught at later stages, where the cancer cells have spread into the lower layers of the skin or throughout the body (see the stats in the image below). Treatment typically involves surgery to remove the cancer cells, followed by chemotherapy and/or radiation therapy, but the response to these treatments is low.

[Image: melanoma stages and survival statistics]

There are two interesting personalized medicine examples for melanoma.  The first is in determining whether a low-stage (I or II) melanoma has a likelihood of spreading.  Once a low-stage melanoma has been removed by surgery, there is still a 14% chance that these patients will develop metastatic disease (melanoma that spreads). To determine which patients are more at risk, a biotech company developed DecisionDx-Melanoma. This test looks at the expression of 31 genes and separates patients into two groups based on their gene expression profiles.  One group has only a 3% risk of developing invasive melanoma within 5 years, whereas the other group has a 69% chance.

However, whether the cancer progresses or not, treatment is still an issue. That is, it was until a few years ago, when scientists found that 50-60% of all melanoma patients have a mutation in a gene called “BRAF.” This mutation tells the cancer cells to grow faster, so you can imagine that if you stop this signal telling the cancer cells to GROW, then they might stop growing and die. This is exactly what the drug PLX4032 (vemurafenib) does – it inhibits this mutated BRAF and stops the cancer cells from growing in 81% of the patients with this mutation (see the photo at the bottom of the post to see how dramatic this effect is).  On the other hand, in patients without this mutation, the drug has severe adverse effects and shouldn’t be used.  Because of this, doctors don’t want to prescribe this treatment to patients without the mutation.  Therefore, scientists created a companion diagnostic.  These are tests that are used to identify specific mutations before treatment to help decide what treatment to give (see image below). In the case of melanoma, this companion diagnostic tests whether the patient has the BRAF mutation, and the patient is only treated with vemurafenib if they have this mutation.

This treatment was revolutionary, with an incredible ability to cure melanoma. It was like melanoma was previously being treated with the destruction of a nuclear bomb, and now it is being treated with the precision of a sniper rifle – targeting the exact source of the cancer. So why is the word “cure” so obviously in quotes? Unfortunately, after continued therapy, the cancer relapses (see the image below). Imagine that treating cancer cells is like closing a road – it’ll block up traffic (kill the cancer cells), but then drivers will be able to find back roads that get them to the same place.  In the case of cancer, the drug is targeting mutated BRAF, and the cancer finds ways to evade the drug by mutating BRAF again (effectively removing the roadblock).  Or the cancer cells themselves may have other routes besides mutated BRAF making the cancer grow. So although this drug is a life extender, scientists have been working to combine it with other targeted drugs (blocking off alternative routes) to make it a long-term life saver.

[Image: melanoma relapse after treatment, from the Journal of Clinical Oncology]

What is Personalized Medicine?

A few years ago I was asked to teach a course to adults at the ASU Osher School of Lifelong Learning about the Emerging Era of Personalized Medicine. This was exciting because it would give me the opportunity to help empower these adults to better understand their health, the science behind what makes them sick, and what scientists and doctors are doing to cure them.  This was also a challenging course to develop because only a few years ago personalized medicine wasn’t the common buzzword it is today. In fact, in early 2014, the Personalized Medicine Coalition commissioned a research survey that found that 6 in 10 people surveyed hadn’t heard of the term “personalized medicine” (see all results of the survey here). Despite the public being unaware of this huge advance, in the past few years, scientists and doctors have continued to evolve this concept, and medicine isn’t just “personalized” – now it can also be described as “precision,” “predictive,” “individualized,” “stratified,” “evidence-based,” “genomic,” and much, much more.

So what is new about this type of medicine?  Of course, since the days of Hippocrates, doctors have provided care to patients with their “personalized” needs in mind. Based on the patient’s symptoms and their experiences, the doctor provides treatment. But what if two patients have the same symptoms but different underlying diseases?  A fever and a headache could be the flu or malaria. Or two people could have the same disease, like breast cancer, but the underlying genetic changes are different, so the cancer should be treated differently for each patient.

The current concept of personalized/precision medicine uses each person’s individual traits (genetic, proteomic, metabolomic, all the -omics) and harnesses our molecular understanding of disease for the prevention, diagnosis, and treatment of disease.

[Image: how personalized medicine can improve health across the phases of disease progression]

The ultimate goal of personalized medicine is to improve patient health and disease outcomes. The graph above shows how better understanding the genetic and molecular causes of disease can improve health at all phases of disease progression.

  1. Knowing the risk factors that cause disease (either environmental, like smoking, or genetic, like a BRCA gene mutation) can help to prevent disease before it starts by eliminating the risk factors or providing additional screening to catch the disease early.
  2. Biomarkers that detect disease before major symptoms can be used to treat the disease early, which usually has a better outcome than treating a disease that has progressed further (think stage 1 versus stage 4 metastatic cancer).
  3. Once a disease has been diagnosed, the molecular understanding of the disease can help determine what treatment the patient should receive (see below for an example).
  4. Biomarkers can also be used to predict whether the disease will progress slowly or quickly or whether or not a selected treatment is working.

For all aspects of personalized medicine, there lies the promise of making an enormous impact both on public health and on decreasing the cost of healthcare.

Let’s use breast cancer as an example of how personalized medicine plays out in real life, right now. For breast cancer detection, breast self-exams and mammograms are typically used.  With personalized medicine, we now have an understanding of one of the genetic risk factors of breast cancer – mutations in the BRCA genes.  Patients at higher risk for developing breast cancer because of these mutations can be monitored more closely, or preventative action can be taken. In the past, breast cancer treatment focused on treatment with non-specific chemotherapy and surgery. Although both of these treatments are still of value, now doctors also test for the presence of certain breast cancer genes like Her2.  If Her2 is present in a breast cancer cell, the drug Herceptin, which specifically targets Her2, can be used to specifically kill those cancer cells. If Her2 isn’t present, this drug isn’t effective, causes negative side effects, and wastes time and money when a more effective treatment could be used.  Once breast cancer is diagnosed, a patient would be interested in knowing how quickly their cancer will progress. This used to be based primarily on the stage of the cancer, where stage 4 cancers have spread to other locations in the body so the prognosis isn’t great. Based on molecular markers, scientists have now created panels of biomarkers (Oncotype DX and MammaPrint) that predict breast cancer recurrence after treatment.

These personalized medicine-based tests and drugs are incredible. However, this is a field that both holds considerable promise and requires lots of work to be done.  For every incredible targeted therapy developed, there are patients who are still waiting for a treatment for their disease or for the genetic variant of their disease.  In future posts, I’ll talk a lot about both the promise and the pitfalls of personalized medicine.

If you want to learn more about personalized medicine, check out this YouTube video with a cartoon comparing treatment with and without the concept of personalized medicine.

Five Ways for You to Participate in Science – Citizen Science

[Image: the Bunsen burner I didn’t have. Thanks Wikipedia for the image]

I had a chemistry set growing up.  It was small with tiny white bottles holding dry chemicals that sat perfectly on the four tiny shelves of an orange plastic rack.  My dad would let me use the workbench in the basement to do experiments – entirely unsupervised!! You might expect that I did really interesting chemical reactions, and this formative experience helped me to develop into the curious scientist that I am today. Completely wrong.  I remember following the instructions, mixing the chemicals, and then getting stuck because I didn’t have a Bunsen burner.  So many chemical reactions rely on heat, and the green candle stuck to the white plastic top of an aerosol hairspray can wasn’t going to cut it.

My main options for doing science as a kid revolved around my failed chemistry experiments, my tiny microscope and slides, and a butterfly net that never netted a single butterfly (not for lack of trying).  However, today with computers (that’s right – no computer growing up – that’s how old I am!) there are hundreds if not thousands of ways for people to get involved in science, without having to invest in a Bunsen burner. This citizen science movement relies on amateur or nonprofessional scientists crowd-sourcing scientific experiments. I’m talking large-scale experiments run by grant-funded, university-based scientists that have the possibility of really affecting how we understand the world around us. One example you may have heard about is the now defunct Search for Extraterrestrial Intelligence (SETI), which used people sitting at their computers to analyze radio waves, looking for patterns that may be signs of extraterrestrial intelligence. They didn’t find anything, but it doesn’t mean that they wouldn’t have if the program had continued!

Here are five ways that you can become involved in science from where you’re sitting right now!

1. American Gut: Learn about your (or your dog’s) microbiome

For $99 and a sample of your poop, you will become a participant in the American Gut project. After providing a sample, the scientists will sequence the bacterial DNA to identify all of the bacterial genomes that are present in your gut.  This study already has over 4,000 participants and aims to better understand all of the bacteria that cover and live inside your body – called your microbiome – and to see how the microbiome differs or is similar between different people or between healthy people and those who may be sick. The famous food writer Michael Pollan wrote about his experience participating in the American Gut project in the New York Times.  They are also looking at dogs and how microbiota are shared with family members, including our pets!

2. Foldit: solve puzzles for science

Puzzles can be infuriating, but at least they have a point to them when you get involved in the Foldit project.  Proteins are the building blocks of life.  Made out of long strings of amino acids, these strings are intricately folded in your cells to make specific 3D shapes that allow them to do their job (like break down glucose to make energy for the cell).  Foldit has you fold structures of selected proteins using tools provided in the game or ones that you create yourself.  These solutions help scientists to better predict how proteins may fold and work in nature.  Over 240,000 people have registered and 57,000 participants were credited in a 2010 publication in Nature for their help in understanding protein structure.  Read more about some of the results here.

3. EyeWire: Mapping the Brain

The FAQs on the EyeWire website are fascinating because as they tell you that there are an estimated 84 billion neurons in the brain, they also insist that we can help map them and their connections. After a brief, easy training, you’re off to the races, working with other people to map the 3D images of neurons in the rat retina.  You win points, there are competitions, and there’s a “happy hour” every Friday night. The goal is to help neuroscientists better understand how neurons connect to one another (the connectome).

4. Personal Genome Project: Understanding your DNA

The goal of the Personal Genome Project is to create a public database of health, genome and trait data that researchers can then use to better understand how your DNA affects your traits and your health. This project recruits subjects through their website and asks detailed medical and health questions.  Although they aren’t currently collecting samples for DNA sequencing because of lack of funding, they have already sequenced the genomes of over 3,500 participants. The ultimate goal is having public information on over 100,000 people for scientists to use.

5. MindCrowd: Studying memory to understand Alzheimer’s Disease

Alzheimer’s Disease is a disease of the brain, and one of the first and most apparent symptoms is memory loss.  MindCrowd wants to start understanding Alzheimer’s disease by first understanding the differences in memory in the normal human brain.  It’s a quick 10-minute test – I took it and it was fun!  They are recruiting an ambitious 1 million people to take this test so that they have a huge set of data to understand normal memory.

This is a randomly selected list based on what I’m interested in and things that I’ve participated in, but you can find a much longer list of projects you can participate in on the Scientific American website or through Wikipedia.  Also, if you’re interested in learning more about the kind of science that people are doing in their own homes, the NY Times wrote an interesting article: Home Labs on the Rise for the Fun of Science.  If you decide to try one out, share which one in the comments and what you think!

No s**t?!?! Interesting facts about poop

I was talking to my sister and four-year-old nephew the other day, and my sister prompted him to tell me what he wanted to study when he grew up. He looked right at me and answered “poop”. Totally funny coming from the boy who really is obsessed with his own poop, but as a scientist, I responded that I could tell him lots about poop and asked, “What about poop are you interested in studying?”  His response: “All of it.” Well, I agree. Poop is far more interesting than we give it credit for.  In the next two posts, I will share with you all the interesting stuff I know about poop.  This post will be facts about poop, and the second post will be about using poop as a cure for diseases.  Let’s get down and dirty...

I’m not one of those people fascinated by poop.  I have never read any of the most popular books on the topic, “Everyone Poops” or “What’s Your Poo Telling You“.  In fact, I won’t even admit that I poop myself (as my husband will attest, I insist that it’s all butterflies and rainbows down there).  But (butt!) being in a lab makes you think about things you never expected.  A common laboratory activity is something called a journal club. Held weekly, undergrads, graduate students, and post-docs take turns discussing a scientific topic or journal article.  I like talking about the newest technology and controversial topics, so when it was my turn, I decided to look into the ancient, but recently rediscovered, therapeutic uses of poop to help cure diseases. As I started my research on the topic, I realized that I knew very little about poop in general.  Being the scientist that I am, I went to learn more.  And lucky you, I’m going to share!

First and foremost, what is poop made of? The majority (75%) is water! The remaining 25% is a mix.  About a third of this 25% (doing the math, that’s 7.5% of your poop) is dead bacteria (back to that later), and a third is fiber and undigested food (like those corn kernels you didn’t chew before swallowing).  The final third contains living bacteria, protein, cell linings, fats, salts, and substances released from the intestines and liver. In fact, the brown color of poop comes from some of these secreted substances, such as bile, and also from bilirubin, which comes from dead red blood cells.

There are seven different types of poop that have been categorized in the Bristol Stool Form Scale (or BSF for short) developed by Dr. Ken Heaton from the University of Bristol.  I was going to spend the next 5 minutes wondering exactly what sort of methodology brought him to discover this seven-type system, but then I just looked at the original article. “Sixty-six volunteers had their whole-gut transit time (WGTT) measured with radiopaque marker pellets and their stools weighed, and they kept a diary of their stool form on a 7-point scale and of their defecatory frequency.” I’m glad I was not a volunteer in that study – keeping a daily diary of my stool form and having the length of time from mouth to poop tracked – ick!  However, Dr. Heaton was able to conclude that the form the stool takes depends on the time it spends in the colon, with types 3 and 4 being ideal stools. Now one more thing for siblings, partners, and spouses to argue about – whose poo is better?

But(t) let’s get serious.  Besides being an indication of intestinal health, poop is also filled with bacteria.  These bacteria are representative of the bacteria that can be found in your gut and are part of your “microbiome“. Your microbiome (all of the bacteria and other bugs in and around your body) outnumbers your human cells 10 to 1, and scientists think that 300-1000 bacterial species inhabit the GI tract alone!  We’re not entirely sure exactly how many species because most of these bacteria don’t grow outside the gut (in the presence of oxygen), and when we look for gut bacteria by sequencing the DNA of poop samples, we’re not sure if the bacteria in poop represent all the bacteria that are found in the gut.

Either way, what do all those bacteria do? They help with digesting food and producing vitamins.  They regulate fat storage and do some crazy things like influence the immune system and the brain (more on that in a future post).  These bacteria are also protective against pathogens, like bad infectious bacteria or viruses. How the gut microbiome protects against pathogens is still being studied, but we know that some gut microbiome bacteria create antimicrobials that kill bad bacteria.  In other cases, it’s all about the balance of the good bacteria versus the bad.  When this balance changes, it can be a cause or a consequence of disease. And one of the cures for these diseases might just be poop itself, which is what I’ll discuss in my next post.

Want to learn more about poop?  Check out some of these resources:

Why is the specificity of a biomarker important? PSA for prostate cancer as an example.

I’ve described what biomarkers are here and how they are discovered here. I’ve spent so much time discussing biomarkers because this is one of the aspects of personalized medicine that you may have already encountered in your doctor’s office or will encounter soon.  Similar to how we understand risk, it’s important to understand biomarkers because many healthcare decisions will be based on the results of tests that look at the presence, absence, or quantity of biomarkers.

So how do you know what the results of a biomarker test mean, or whether or not a biomarker is good?  Scientists have created two measurements that can quickly tell you how good a biomarker test is: sensitivity and specificity.  Before we talk about what those two measurements measure, let’s first talk about the different scenarios for a patient after getting the result of a test using a biomarker.

  1. The test is positive and the patient has the disease. This is a good scenario because then the patient can move forward with the appropriate treatment.
  2. The test is positive but the patient doesn’t have the disease.  This is what we call a “false positive” because the test is incorrectly showing up as positive.  This can cause huge issues because a patient will receive a diagnosis, follow-up tests or treatment even though they don’t have a disease.
  3. The test is negative and the patient doesn’t have the disease.  Again, this is a good scenario because the patient is a-okay.
  4. The test is negative but the patient has the disease.  This is a “false negative” because the test is falsely showing that the patient doesn’t have a disease when they actually do.  This can also cause issues because then a patient won’t be treated even though they should be.

Sensitivity and specificity measure the two good scenarios above – sensitivity measures how often the test is positive in patients who have the disease, and specificity measures how often the test is negative in people who don’t have the disease.  The ideal test has 100% sensitivity (all sick people test positive) and 100% specificity (all healthy people test negative).  But this ideal situation is difficult to achieve.  Let’s use Prostate Specific Antigen (PSA) as a biomarker test for prostate cancer as an example.
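To make those definitions concrete, here is a minimal sketch in Python; the counts in the example are hypothetical, not from any real study:

```python
# A minimal sketch of sensitivity and specificity, matching the four
# scenarios listed above. All counts here are hypothetical.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of people WITH the disease whose test comes back positive."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """Fraction of people WITHOUT the disease whose test comes back negative."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical screening results: of 100 people with the disease, 80 test
# positive (20 false negatives); of 900 healthy people, 850 test negative
# (50 false positives).
print(sensitivity(80, 20))   # 0.8  -> 80% sensitivity
print(specificity(850, 50))  # ~0.94 -> about 94% specificity
```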

There are 241,740 new cases of prostate cancer each year, and it is the most common malignancy in men (29% of all male cancers).  PSA levels are screened in men over 50, and over $3 billion per year is spent on this screening.   What is PSA? It’s a protein produced by the prostate gland that can be elevated in men with prostate cancer, which is why it has been used as a biomarker for prostate cancer.  However, PSA may also be elevated in men with other conditions such as prostatitis (inflammation of the prostate), benign prostatic hyperplasia (enlargement of the prostate), or urinary tract infections.  Because of this, the PSA test is highly susceptible to false positives and false negatives.  Typically, PSA greater than 4 ng/mL (this means that there are 4 nanograms of PSA protein in 1 milliliter of blood) is considered a positive test result for prostate cancer.  The sensitivity at this level is 21%, which means that only 21% of patients who have prostate cancer have PSA levels greater than 4 ng/mL.  The specificity of the test is 91%, which means that 91% of patients who do not have prostate cancer test negative. This is good – there aren’t many patients who don’t have prostate cancer having a follow-up test or biopsy because of a false positive (because of the high specificity).

Let’s see what happens if a lower concentration of PSA is used as a cutoff to try to detect more patients with cancer.  If you call anything greater than 1.1 ng/mL a positive test, the sensitivity increases significantly to 83%, which means that more people with cancer are being detected (great news!).  The trade-off is a specificity of 39%, which means that a huge number of men without cancer will be incorrectly flagged as possibly having cancer (a high false-positive rate).  This will result in follow-up tests and biopsies.  The effects of these tests and biopsies are both psychological (thinking you have cancer when you don’t) and physical (an increased risk of complications and side effects caused by the biopsy).
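Here is a back-of-the-envelope sketch of that trade-off, applying the sensitivity and specificity values quoted above to a hypothetical group of 1,000 screened men in which 100 actually have cancer (the cohort size and prevalence are made up for illustration):

```python
# Apply the quoted sensitivity/specificity at the two PSA cutoffs to a
# hypothetical cohort: 100 men with cancer, 900 without.
def screening_outcomes(n_with_cancer, n_without_cancer, sens, spec):
    cancers_detected = sens * n_with_cancer            # true positives
    false_positives = (1 - spec) * n_without_cancer    # healthy men flagged
    return round(cancers_detected), round(false_positives)

# Cutoff of 4 ng/mL: sensitivity 21%, specificity 91%
print(screening_outcomes(100, 900, 0.21, 0.91))  # (21, 81)
# Cutoff of 1.1 ng/mL: sensitivity 83%, specificity 39%
print(screening_outcomes(100, 900, 0.83, 0.39))  # (83, 549)
```

Lowering the cutoff catches far more cancers (83 instead of 21 in this made-up cohort) but at the cost of hundreds of false positives, which is exactly the tension described above.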

[Image: PSA test]

So is PSA a good test or a bad test?  For the patients who have received a positive PSA test and have aggressive prostate cancer, this test saves lives.  However, because this is a common screening test, it does cost a lot of money.  Because the sensitivity of the test at higher PSA concentrations is so low, some cancers get missed.  And if the cutoff is decreased to catch more of these cancers, there’s a much higher number of false positives in men who do not have prostate cancer, resulting in costly, stressful tests that may have added complications.

What’s the solution?  In the case of PSA, doctors have started measuring PSA levels over time.  Increases in PSA levels over time are found more often in men with prostate cancer.  PSA also exists in a few different forms. PSA can either be attached to other molecules or not.  The form that isn’t bound to other molecules is called “free-PSA” and scientists have found that the amount of free-PSA compared to the total amount of PSA is reduced in men with prostate cancer.  These improvements have decreased false negatives and false positives, making it a much better test.

Overall, biomarkers have the potential to revolutionize medicine, and in so many cases they already have.  But for you as a patient, understanding the challenges and pitfalls of these tests will help you be a more empowered patient with the knowledge to ask key questions when you receive the results from one of these new tests.


How do you find a biomarker? A needle in the haystack.

Biomarkers are biological substances that can be measured to indicate some state of disease.  They can be used to detect a disease early, diagnose a disease, track the progression of the disease, predict how quickly a disease will progress, determine what the best treatment is for the disease, or monitor whether or not a treatment is working. Biomarkers have the potential to do so much, and identifying biomarkers for different steps in the health/disease continuum would help doctors to provide each individual with targeted, precision healthcare.  Biomarkers have the potential to save billions of healthcare dollars by helping prevent disease, by treating disease early (when it’s usually less expensive to treat), or by targeting treatments and avoiding treatments that won’t be effective.

With all this potential, you would expect doctors to be using data from biomarkers to guide every single healthcare decision – but this isn’t the case quite yet.  First, scientists have to find these biomarkers – a process often referred to as biomarker discovery.  I like to compare finding a biomarker to those “spot the differences” games where you have to look at two images and circle what is different in one picture compared to the other.  This is exactly what scientists do when finding a biomarker, except instead of comparing pictures, they are comparing patients.  And it’s not an easy game of “spot the differences” – it’s complicated: the pictures are small and there are tons of details.

Let’s imagine a scenario that a scientist might face when wanting to find a biomarker for the early detection of pancreatic cancer.  Cancer is caused by mutations in the DNA, so you decide to look for DNA mutations as your biomarker for pancreatic cancer. So how do you “spot the differences” to find DNA biomarkers for pancreatic cancer?  First, you will need patient samples – maybe tissue or blood samples from a biobank that already has samples from patients with pancreatic cancer.  If samples aren’t already available, you will have to initiate a study, partnering with doctors to collect samples from pancreatic cancer patients for you.  You will also need the second “picture” to compare the pancreatic cancer “picture” to.  This second picture will be samples from people who don’t have pancreatic cancer (scientists usually call this group the “control” group).  Then you have to “look” at the two groups’ DNA so you can find those differences.  This “looking” is often done by some genomics method like sequencing the DNA. This is where a lot of the complication comes in, because if you look at all of the DNA, you will be comparing 3 billion individual nucleotides (the A, T, G, and Cs we’ve discussed in earlier posts) from each patient to each of the controls.  Even if you just look at the DNA that makes proteins, you’re still comparing 30 million nucleotides per patient.  And you can’t just compare one patient to one control!  Each of us is genetically different by ~1%, so you need to compare many patients to many controls to make sure that you find DNA that is involved in the disease and not just the ~1% that is already different between individuals.  But wait, we’re not done yet!  The biomarkers that you identify have to be validated – or double checked – to make sure that these differences just weren’t found by mistake.  And before biomarkers can be used in the clinic, they need to be approved by the Food and Drug Administration (FDA).
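If you’re curious what that “spot the differences” comparison might look like in practice, here is a hedged sketch for a single DNA position; the counts are invented, and a real study would repeat this across millions of positions (and correct for multiple testing) before calling anything a candidate biomarker:

```python
# Compare how often a hypothetical DNA variant appears in pancreatic cancer
# patients versus healthy controls at one genomic position.
from scipy.stats import fisher_exact

# Rows: variant present / variant absent; columns: patients / controls.
# These counts are made up for illustration.
table = [[30, 5],    # 30 of 100 patients carry the variant, 5 of 100 controls
         [70, 95]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")
```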

[Image: the biomarker discovery pipeline, from http://www.pfizer.ie/personalized_med.cfm]


Whew… that was a lot of work! And so many people were involved: lead scientists who directed the project and got the money to fund it, researchers who do most of the work, computer people who are experts at crunching all of the data, and maybe even engineers to help run the equipment. Finding the biomarker needle in the biological haystack is difficult and takes time, money, and lots of people.  This is one of the reasons why there are only 20 FDA approved biomarkers for cancer (data from 2014).  But just because it’s difficult, doesn’t mean it’s impossible.  Furthermore, this effort is necessary to improve healthcare and decrease healthcare costs in the future.  It just might take a bit more time than we’d all like.

If you want to read more about the challenges and some of the solutions to biomarker discovery in cancer, take a look at this scientific article.  Or read about some successes from right in our backyard at Arizona State University on identifying biomarkers for the early detection of ovarian cancer and breast cancer.