Monday, December 16, 2019

Science for Kids: Population Genetics with PTC paper


I love going out and teaching kids about science. Since moving from North Carolina to Indiana, I have really missed engaging in STEM outreach at local schools. However, a group of us at work decided we would be on the lookout for these opportunities and get into some schools. Our company, Inari, has been supportive of our efforts, and we recently found a local middle school, East Tipp, that welcomed us to take over an 8th grade science class for a day.

The group at work has been amazing as we build out our activities and lessons. The credit for this science module on population genetics goes to Cole Davis. He put together a great lesson!

Objective:  

The objective of this lesson is to learn more about population genetics, which is the study of genetic variation in populations. Some of the concepts to tackle include gene frequency, dominant, recessive, homozygous, heterozygous, etc. The core exercise involves tasting PTC (phenylthiocarbamide) paper and mapping out the class population genetics based on whether or not each student can taste the bitter compound.

Supplies:  


The strips can be found very cheaply on Amazon: PTC Test strip link




Background:


DNA is how all of our genetic identity is stored and thus determines who we are and what we look like. This DNA encodes GENES, and our genes are ultimately translated into many different proteins. Proteins carry out all the functions in our bodies, or in any living organism.

This figure shows how DNA is transcribed to RNA and RNA is translated into protein.
https://biologywise.com/protein-synthesis-process


The exercise today looks at one particular protein, a PTC taste receptor, that is found on the tongue. We have lots of taste receptors that allow us to taste salty, sweet, bitter or sour. Each different receptor is coded by our DNA. One of the genes coded in our DNA is for a PTC taste receptor that allows some of us to taste PTC. If you can taste it, it tastes very bitter!


https://learn.genetics.utah.edu/content/basics/ptc/


For around 75% of the population, PTC tastes very bitter! The other 25% of the population does not taste any bitterness (or anything at all) when trying PTC.

The PTC taste receptor protein found on our tongues looks like this:



Whether you taste PTC or not is determined by just one gene called TAS2R38. The genes we get are determined by our parents: for every gene, we get one copy from mom and one copy from dad. Our parents' DNA ultimately determines the makeup of our DNA, so our parents can give us different versions of the PTC gene depending on what they have. The version of the gene that forms a protein able to taste PTC is DOMINANT: if we get just one dominant copy from our parents, we can taste PTC. Over time, however, the DNA code changed, and that change means the DNA is translated into a slightly different protein. This changed version of the gene is RECESSIVE. The protein made from the recessive version of the gene is not functional, meaning the receptor does not detect PTC. Since you get one copy of the gene from mom and one from dad, you need both copies to be recessive (non-functional) in order to not taste PTC.

PTC gene:

A = Dominant (just one copy can taste PTC)
a = Recessive (takes two copies to NOT taste PTC)

Individuals have 2 copies of the genes:  One from each parent

AA = "Taster" since both copies are Dominant from each parent = Homozygous Dominant
Aa or aA = "Taster" since one parent still gave a dominant version of the gene - Heterozygous
aa = "Non Taster" since both parents gave a recessive version of the gene = Homozygous Recessive


EXERCISES

Exercise 1:   Taste the PTC paper

With the background out of the way, now the fun can begin. The students take the PTC paper and taste it. At this point you see all kinds of faces and get all kinds of reactions!

This just tastes like paper.
YUCK!
I don't taste anything
That was disgusting!
I didn't like that!

As long as you have a decent size population, you will have some tasters and non-tasters. Statistically, 1 in every 4 people do NOT taste it. However, smaller populations can have very different frequencies. Once everyone has tasted, you can dig into the genetics of the class.

Exercise 2:   Class Frequencies

The first thing to do is get the frequencies of the class population. How many tasters are in the class? How many non-tasters? After determining the class size you can calculate the percentage of tasters (AA, Aa, or aA) vs. non-tasters (aa). Did the class follow the average 75% taster / 25% non-taster split of the human population? If not, why might the class frequency look different?
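If students want to check their arithmetic, the tally can be sketched in a few lines of Python. The class counts below are made-up example numbers, not real data:

```python
# Hypothetical class counts -- swap in your own tally
tasters = 22       # students who tasted the bitterness
non_tasters = 8    # students who tasted nothing

class_size = tasters + non_tasters
pct_tasters = 100 * tasters / class_size
pct_non_tasters = 100 * non_tasters / class_size

print(f"Class size: {class_size}")
print(f"Tasters: {pct_tasters:.1f}%  Non-tasters: {pct_non_tasters:.1f}%")
```

With these example numbers the class comes out at about 73% tasters, close to the 75% population average.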

Exercise 3:   Hardy-Weinberg Equation

When we look at the class population data, we know with certainty that the non-tasters are aa, or homozygous recessive. However, everyone who tasted the PTC could be either AA (homozygous dominant) or Aa / aA (heterozygous). There is a mathematical formula we can use to predict how many people in the class are AA vs. heterozygous. It is called the Hardy-Weinberg equation, and as a class we worked through solving it. Following steps 1-5 below will solve the equation.
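For anyone who wants to check the math by machine, here is a small sketch of the same steps in Python, using the textbook 25% non-taster frequency as the example input:

```python
import math

# Hardy-Weinberg: p^2 + 2pq + q^2 = 1, with p + q = 1.
# Only q^2 (the aa non-tasters) is directly observable from the class tally.
freq_aa = 0.25                # example: 25% of the class are non-tasters

q = math.sqrt(freq_aa)        # recessive allele frequency
p = 1 - q                     # dominant allele frequency
freq_AA = p ** 2              # predicted homozygous dominant tasters
freq_Aa = 2 * p * q           # predicted heterozygous tasters

print(f"p = {p:.2f}, q = {q:.2f}")
print(f"AA = {freq_AA:.0%}, Aa/aA = {freq_Aa:.0%}, aa = {freq_aa:.0%}")
```

With a 25% non-taster class, the equation predicts that 25% of the class is AA and 50% is heterozygous.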



Here is a resource where the equation is shown with a real example:  http://www.germanna.edu/wp-content/uploads/tutoring/handouts/Hardy-Weinberg-Equilibrium.pdf

Exercise 4:   Random Distribution vs Hardy-Weinberg Equation 

We just predicted the AA, heterozygous, and aa frequencies in our class using the Hardy-Weinberg equation. Now we tried to simulate natural variation in the population by rolling a die so that each taster could assign themselves as AA, Aa or aA. The non-tasters are still aa.

Each taster rolls a die and, based on their roll, is assigned the following:

1 or 2: aA
3 or 4: Aa
5 or 6: AA

Since each taster now has a genetic identity for PTC based on the dice roll, we re-calculated the frequencies and compared them to the Hardy-Weinberg result. If there were differences, we discussed the reasons. The results were usually much closer when we had a really big class. In the smaller classes the two results were usually a little different.
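The dice exercise is easy to simulate if you want to see how much the result bounces around at different class sizes. This sketch uses the same 1-6 roll rules from the lesson, with made-up class counts:

```python
import random

random.seed(42)                      # fixed seed so the example run is repeatable

tasters, non_tasters = 22, 8         # hypothetical class counts

counts = {"AA": 0, "Aa/aA": 0, "aa": non_tasters}
for _ in range(tasters):
    roll = random.randint(1, 6)
    if roll >= 5:                    # 5 or 6 -> homozygous dominant
        counts["AA"] += 1
    else:                            # 1-4 -> heterozygous (Aa or aA)
        counts["Aa/aA"] += 1

total = tasters + non_tasters
for genotype, n in counts.items():
    print(f"{genotype}: {n}/{total} = {n / total:.0%}")
```

Re-running this with a bigger `tasters` count shows the same effect we saw in class: large groups land close to the dice probabilities (1/3 AA, 2/3 heterozygous among tasters), while small groups can drift quite far from them.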


Exercise 5:  Bottlenecks

After working through the population genetics we talked some about bottlenecking. This is when a population is reduced to a smaller group, and the limited diversity in that population becomes the new norm.
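A bottleneck is also easy to demonstrate with a quick toy simulation. In this sketch (the population sizes are invented for illustration), a small group of survivors is drawn at random from a large 75%-A population, and the allele frequency can shift just by chance:

```python
import random

random.seed(7)                       # repeatable example run

# Large starting population: 75% "A" alleles, 25% "a" alleles (made-up numbers)
population = ["A"] * 750 + ["a"] * 250

# Bottleneck: only 20 random alleles make it into the founding population
survivors = random.sample(population, 20)

freq_A = survivors.count("A") / len(survivors)
print("Before bottleneck: A = 75%")
print(f"After bottleneck of 20: A = {freq_A:.0%}")
```

Each different seed gives a different post-bottleneck frequency; the smaller the surviving group, the wilder the swings.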



I work in agriculture, and there has been intentional bottlenecking through artificial selection in some crops. We talked a little through this concept. This bottlenecking is one of the reasons we have been able to increase yields.




There are also examples where bottlenecking in agriculture has had consequences.  The potato famine is one example we talked through as a class.  The lack of genetic diversity led to a loss of the potato crop which in turn led to starvation and death.

From:  https://www.britannica.com/event/Great-Famine-Irish-history
The Irish relied on one or two types of potatoes, which meant that there wasn't much genetic variety in the plants (diversity is a factor that usually prevents an entire crop from being destroyed). In 1845 a strain of water mold accidentally arrived from North America and thrived in the unusually cool moist weather that year. It continued to destroy potato crops from 1846 to 1849. 

The PTC paper really helps to bring some fun into the concept of genetics.   There are some worksheets that Cole put together that allow the students to work through these exercises.   The worksheets contain even more details and scenarios depending on how deep you would like to go with this science lesson.

Worksheet:
Population Genetics Worksheet using PTC paper


Thanks to some awesome folks at Inari, with huge props to Cole, Katie, Grant, Gretchen, and Jess. And thanks to East Tipp Middle School for inviting us!





Monday, November 18, 2019

Reasons to Embrace Vulnerability


I recently took a soft skills training course at work where we talked about a myriad of topics. There was one group exercise I remember where we tried to define the traits of a great leader. We each wrote traits on a Post-it note and then summarized the findings. Many of the things you would expect showed up, such as good communication, honesty, delegation and accountability. However, there was one interesting trait that rose to the top in our group: vulnerability.

A couple of months passed and I joined another class that focused on helping you grow as a person. One of the main themes on how to grow and develop was understanding the importance of vulnerability.

In two separate classes that discussed personal growth, the word vulnerable came up. What makes vulnerability such a good thing? I remember my wife and I were on a subway ride in Boston not that long ago. I was standing in the aisle while my wife sat beside a stranger, struck up a conversation, and started telling him all about our life, including some of the challenges we were going through as a family. I felt so uncomfortable that my wife was telling this person all this info about us. Even though I would never see this person again, I did not like the feeling of being vulnerable. This was personal information! Looking back, I admire the courage my wife had, and there we have it: vulnerability takes courage.

The definition of vulnerability is:

the quality or state of being exposed to the possibility of being attacked or harmed, either physically or emotionally.

When we talk about vulnerability in the context of personal development, we are mostly talking about situations in which we feel we could be harmed emotionally. That definition is pretty scary, but when you re-read it a couple of times, it says "the possibility of being harmed." Many times we don't want to be vulnerable just because of the possibility of emotional harm, which manifests as shame. If I go back to the conversation my wife had with a stranger, I really did think about how exposed and open to judgment she was. Then the unexpected happened. The stranger started opening up about a similar experience he had. I watched a pretty deep and open conversation happen for the 20 minute ride among complete strangers. One person took the risk to be vulnerable and an amazing conversation ensued. I reflected on how I would not have taken that risk, and I would have missed out on someone else's empathy and insight.

As our learning group talked about vulnerability, I gathered a few resources to share. These videos are not new; in fact, they have been around for quite a while. However, if you get a chance to dig into the subject of vulnerability, it will provide some food for thought. It has certainly made me think a lot more about where I can be more vulnerable in my life.

Here are a few key takeaways.

1.  Embracing vulnerability creates connection which leads to openness, innovation and trust
2.  We cannot have vulnerability without empathy
3.  Shame is the biggest antagonist of vulnerability

I have watched all these videos a few times, and applying this stuff is not always easy for me. However, I can immediately identify people in my life who have a firm grasp on these principles. In every case they are people I feel connected with, and they are the people who have had the biggest impact on my life. Look for these people in your life, as they will be a blessing to you.

All about vulnerability by Brene Brown:

 1.  Embracing vulnerability creates connection which leads to openness, innovation and trust



I enjoyed reading this commentary based on the TED talk.  Link: Commentary piece on vulnerability




2.  We cannot have vulnerability without empathy



3.  Shame is the biggest antagonist of vulnerability




A beautiful quote to sum all this up:

There can be no vulnerability without risk; there can be no community without vulnerability; there can be no peace, and ultimately no life, without community. 
-M. Scott Peck



Tuesday, June 4, 2019

M81 and M82 Astrophotography - First Light with ASI294MC Pro Camera and ASIair wireless controller


Since moving to Indiana I was just waiting to get my scope out and take a few pictures. My backyard here is flat and open, which is a vast improvement over the backyard I had in North Carolina. However, it has been raining a lot this spring, so when I finally saw some clear skies the other night I decided to get out under the stars. The other thing I have learned since moving to Indiana is that the sun does not set until late!! It is about 10 pm before the skies are dark enough to start setting up.

I did not have a long session, but the point of my night under the stars was to test some new equipment and software. If any of you are debating whether to upgrade to any of this equipment, I hope this can answer a few questions you may have.

A new main camera

All of my astrophotography in the past was done with a DSLR. I started with a D90 and then moved on to a D5300. Neither camera was modded, and yet they both served me well. I was using Backyard Nikon as the software to take all of my pictures with my Nikon cameras. Then I started reading and researching CMOS cameras and decided to take the plunge. I moved to an ASI294MC Pro camera with cooling, which is made by ZWO.

ZWO website:  https://astronomy-imaging-camera.com/

I finally had a dedicated camera for my Stellarvue 90 refractor. In addition, this camera has a built-in cooler, which is not present in a DSLR. The more heat present when you take pictures, the more noise you get in your pictures, so cooling can really help decrease the noise you will see in an image.

Connecting the camera to the scope was a breeze. Everything just screws together. Just make sure you have the needed distance to achieve focus. The ASI294 ships with an extension tube, which is all I needed.

The ASI294 is the red camera on the very right of the picture:



Focus and Polar Alignment

Polar alignment is the one step that does not always seem to go well for me. My go-to method was to have my mount facing north, focus my scope, and then use the Celestron polar alignment setting in the mount's hand controller. The mount I am using is the Celestron AVX. This worked OK, but then I read about the software SharpCap. I am now a big fan of this software for polar alignment alone, although it has many other features, like focusing. The software is free for most uses, although there is a small fee for features like polar alignment.

When I was using a DSLR, I hooked up the camera to Backyard Nikon to see on my computer screen what the scope was seeing. Then I could use what I saw to focus the scope and polar align with the Celestron polar align utility on the mount's hand controller. Now that I have switched to the ASI294, I needed new software compatible with the camera for focusing and polar alignment. Backyard Nikon only works with DSLRs, not this new camera. SharpCap fills the void left by not having BYN, since it recognizes the ASI cameras made by ZWO.

Sharpcap:  https://www.sharpcap.co.uk/

So before we polar align, we need to be sure the guide camera and the main camera are in focus. In my setup I am using a second, smaller ZWO camera called the ASI120 mini that is attached to a smaller finder scope. This small scope will be my guide scope and camera. You can just plug the USB cord running from each camera into your computer. SharpCap lets you choose the camera you want to see and brings up the image on the screen based on the exposure you choose. You basically see a live view and can use that view, together with the focusing knobs, to achieve focus on both your guide camera and your main imaging camera.

Now that both cameras are in focus, it is time to polar align. The smaller guide camera, the ASI120 mini, is the camera I also used for polar alignment. Technically you could use either camera to achieve polar alignment. I just make sure the ASI120 mini is still plugged into my computer and is the camera chosen in SharpCap. Once SharpCap is open and your camera is selected, you can view what the camera sees and go into the polar align function. Here the program tells you exactly what to do, and I was able to align my scope in about 10 minutes on the very first try with this software.

This is a screenshot taken right from the SharpCap website. I forgot to take any screenshots when I was going through the procedure. The basic principle is that you move your mount's adjustment knobs to follow the yellow arrow and align on Polaris. As you move the appropriate adjustment knobs, the arrow gets closer and closer to polar alignment. Polar alignment is important to make sure you can take long exposures without star trailing.




Imaging and Guiding

Normally with a DSLR I would use Backyard Nikon to capture my exposures and PHD2 to guide my scope. Since I switched to the ASI294, I needed to change programs. I could have continued to use SharpCap to take my photos, but instead I was testing out some new hardware and software from ZWO, the company that makes the ASI294 and the ASI120. Since both my guide camera and my main imaging camera are made by ZWO, I wanted to try their hardware and software to easily control the cameras and take images throughout the night.

The ZWO hardware made specifically for ZWO cameras is called the ASIair. The beauty of the hardware is that the ASIair allows you to take all of your photos wirelessly. In theory you could sit on your couch in your living room and tell your scope what pictures to take while also viewing the pictures as they load. It seems a little lazy, but on those really cold winter nights this sounds like a great idea to me! It also means you do not need to keep your computer hooked up outside all night.

The ASIair is a small little box that uses a Raspberry Pi as its brains. My guide camera (ASI120 mini) and my imaging camera (ASI294MC Pro) both connect to the ASIair by USB cables. This little box is basically plug and play: one USB cable runs from each camera to the ASIair, and the ASIair itself needs a power source to run. The hand controller from my Celestron mount also attaches to the ASIair so that the software can wirelessly move the scope to the object you want to capture, instead of you manually entering the target in the mount's hand controller. Even better, there is also a guiding option built into the ASIair software. It looks like a mini version of PHD, which would normally run on a computer hooked up to your mount. The ASIair software is only available in app form, for Android or Apple devices. I first loaded it onto my Android phone, but it was a little too much info on a small phone screen for me, so I moved on to an old Apple iPad mini, which was a much better solution in my opinion.

Here is what the ASIair looks like and the footprint it takes up. It comes with some Velcro stickers so you can attach it wherever you want on your mount or scope.




So how did it all work? Much better than I thought for my first night with it. I decided my test object would be M81 and M82, two galaxies that are very close to Polaris in the night sky. After I had polar aligned with SharpCap, I was ready to test the ASIair. I did need to hook up a laptop to the guide camera to use SharpCap to achieve focus and then polar align, but after the polar align I was done with all computers for my imaging session. The rest was done on the iPad mini in my house on the couch (well, almost, as I did run into one initial problem with plate solving).

Using the ASIair program was mostly intuitive.

1.  First you connect the Android or Apple device hosting the software to the wifi channel that the ASIair is broadcasting. I just opened up wifi networks on my iPad mini and connected to the 5G signal coming from the ASIair.

2.  Once connected, there are a few settings you need to put in, like your mount name, the ZWO camera you are using, and the focal lengths of your guide scope and your imaging scope. The software supports many ZWO cameras and many different mounts.

3.  Next you can tell your scope where to "GOTO". Supposedly you can go into SkySafari (another planetarium app), pick any object you want, and your mount will go there. I could not get this function to work, as the SkySafari app would not recognize my mount. This is something I will need to troubleshoot. However, there is a "Choose Object" database located in the ASIair app that has many objects, including M81 and M82 (the screenshot below shows the Choose Object button). When I first slewed to this object the scope did not go to the right place. This was the only other big problem I ran into during the session. I did not do any star calibration since the ASIair has a plate solving feature. Since I was polar aligned, I clicked on plate solving and the software quickly said "plate solved." I assumed this was all I needed to do to calibrate the scope and mount to the night sky (so the scope knew where it was looking and where to go when told to GOTO an object), but my first slew to M81 proved this to be untrue. I tried plate solving a few more times but could not get this function to work. I had to go back outside and use the mount's hand controller to calibrate on 3 stars. Once calibrated, I went back to Choose Object in the ASIair and everything worked perfectly.



4.  Once at the proper location to see M81 and M82, I clicked on the "Guide" option to choose a star to lock onto. I set the exposure of the ASI120 mini (attached to my guide scope) to 3 seconds and 48 gain and hit the "loop" button.



The stars appeared on the screen, I clicked on a star with my finger, and hit the "guide" button. The mini PHD graph showed some nice guiding. My scope was polar aligned, focused, pointed at M81/M82, and guiding locked on a star. Now we were ready to actually start imaging.

5.  Now that you are guiding, you are ready to tell your main scope to start imaging. I used a gain of 120 and an exposure of 60 seconds. I also set the camera temperature to -20 C. This camera can theoretically go down to -40 C, but how low it can actually go depends on the outside temperature. It also looks like you do not want the ASI294 to work too hard at cooling, or the cooler itself generates extra heat while trying to bring the temperature down. I know that does not make too much sense, but there are lots of internet posts on the subject. Therefore I used -20 C as my target temperature on this night.



6.  Next you just set how many lights, darks, etc. you want and hit the Go button. The camera will do what you tell it to do. I started by telling the camera to take 30 pictures at a 60 second exposure, and it did just that. After each exposure the picture is sent to the device for you to view. The ASIair box has a microSD card where all the images are stored.
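A quick bit of arithmetic on the capture plan (using this session's numbers) shows how the sub-exposures add up to total integration time:

```python
# This session's plan: 30 light frames at 60 seconds each
lights = 30
exposure_s = 60

total_s = lights * exposure_s
print(f"Total integration: {total_s} s = {total_s / 60:.0f} minutes")
```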

Here was a single 60 second exposure picture of M81 and M82 beaming back to my iPad Mini.


All in all, the ASIair worked wonderfully. I was able to image wirelessly for the most part. I still needed my laptop for a bit to polar align and focus, and I hit a few snags with the sky alignment where I still needed to manually use the hand controller. Technically I could have used the ASIair software to focus as well, and ZWO has mentioned a polar align feature coming. Note the ASIair was made for cameras made by ZWO, so if you have ZWO cameras like the ASI120 mini or the ASI294, I highly recommend the ASIair. Watching my imaging session from afar with a live feed of the incoming pictures was great, instead of sitting outside all night or constantly running outside to check on the computer.

Stacking the Pictures

At the end of the session, I had 70 lights, 20 darks, and 30 flats to stack. I stacked in Deep Sky Stacker (DSS) and also tried stacking in a new software called Astro Pixel Processor (APP). Both are fairly straightforward, although both have many little settings you can play with to optimize the final stack. Ultimately I feel like I had a little better stacking experience with APP, even though this was the first time I had ever used it.

APP:  https://www.astropixelprocessor.com/

If you try out APP, you will see steps 0-6 on the opening screen. Just follow each step chronologically. If you are using a one-shot color camera like the ASI294, then in step 0 (RAW/FITS) be sure to check "force Bayer CFA" and choose the Bayer pattern. The ASI294 Bayer pattern is RGGB.


Processing the Stack

Now that we have stacked our lights, flats, and darks into one file, we can process that image. This is where we remove background noise, light pollution, etc. I used a software called StarTools. I actually struggled a little with getting the color processed correctly for the final image. I want to give the StarTools creator, Ivo Jager, a huge shout-out for helping me get the most out of this data.

I shared the raw data and the final stacked image before processing, and this is the workflow that Ivo shared to get a nice result. I was able to replicate the workflow and get a similar final image.

If you would like to play with some raw data for both stacking and processing here is the link:

Stacks and individual frames hosted on Google drive

If you want to try out StarTools, here is one workflow you can start with (thanks, Ivo, for sharing this!!).

Super simple workflow below:

--- AutoDev
To see what we got, we can see noise, green/yellow bias, gradients, oversampling, stacking artifacts.
--- Crop
Removal of stacking artifacts.
Parameter [X1] set to [61 pixels]
Parameter [Y1] set to [38 pixels]
Parameter [X2] set to [4094 pixels (-50)]
Parameter [Y2] set to [2788 pixels (-34)]
Image size is 4033 x 2750
--- Bin
Parameter [Scale] set to [(scale/noise reduction 50.00%)/(400.00%)/(+2.00 bits)]
Image size is 2016 x 1375
--- Wipe
Vignetting preset.
Parameter [Temporary AutoDev] set to [Yes]
Parameter [Dark Anomaly Filter] set to [8 pixels]
Parameter [Drop Off Point] set to [0 %]
Parameter [Corner Aggressiveness] set to [100 %]
Parameter [Aggressiveness] set to [93 %]
--- Auto Develop
Final stretch.
Parameter [Ignore Fine Detail <] set to [2.0 pixels]
Parameter [RoI X1] set to [1126 pixels]
Parameter [RoI Y1] set to [686 pixels]
Parameter [RoI X2] set to [1389 pixels (-627)]
Parameter [RoI Y2] set to [819 pixels (-556)]
--- Deconvolution
Worth a try.
(automatically generated mask)
Parameter [Radius] set to [1.3 pixels]
--- Color
Spiral galaxies like M81 tend to show a yellow core (less star formation) and a bluer outer rim (more star formation)
Something about the stars is off, as they are showing fringing (it looks like DSS had trouble aligning them correctly?)
Parameter [Dark Saturation] set to [2.70]
Parameter [Blue Bias Reduce] set to [1.00]
Parameter [Green Bias Reduce] set to [1.72]
Parameter [Red Bias Reduce] set to [1.76]
--- Wavelet De-Noise
Switching Tracking off
Parameter [Scale 5] set to [50 %]
Parameter [Grain Size] set to [23.3 pixels]
Parameter [Smoothness] set to [85 %]

Final Image

When I replicated a similar workflow, this is the final image that came out at the end of the entire process:



Hello there M81 and M82! 

Here is just a little more info on each galaxy from Wikipedia.




M81 and M82 taught me a lot. I really like the ZWO products, both the new ASI294 camera that has replaced my DSLR and the ASIair wireless controller. I have only had one quick session with both of these pieces of hardware, and I hit a few challenges. I need to play around to get things a little more streamlined, but I still managed to set up and get enough data for an image. Taking flats with the ASI294 is definitely something I need to optimize. Also, I could not get SkySafari linked with the ASIair to use for GOTO. The plate solving feature in the ASIair app also did not do anything for me; I still had to manually calibrate my scope with a few stars so the mount knew the layout of the sky.

I also really like APP and suggest you give it a try as a stacking option. When it comes to polar alignment, I wish I had found SharpCap a long time ago.

Clear Skies!



Wednesday, February 20, 2019

Inari: Joining a Start-up of Gene Editing and Innovation

(Disclaimer:  I want to state that what I write below are my personal opinions and views.)

I like to use this forum as a way not only to share some of my thoughts, views and hobbies with others, but to help document my own life experiences for myself. I may be wrong, but I am guessing that one day, when I am old, I will enjoy going back through the story of my life to remember some of the details I will probably have forgotten. So here goes the story of where life is taking my family in 2019.

I have not been as active blogging lately because my family and I have been preparing to move from the place we have called home for 17 years.  We will be moving from Cary, North Carolina to West Lafayette, Indiana where I will take on a new job.

One of my biggest passions beyond my family (and other than space!) has always been agricultural biotechnology. I have had the privilege of working for two very large agriculture companies since graduating from college. These are companies that employ thousands and thousands of people, companies where I have made friends and learned everything I know in the ag biotech field. I can say that I have loved every minute of it. So the biggest question I get from everyone on my decision is "why did you leave?" Some of you reading this may be going through the same decision of whether to leave a job where you are comfortable and try something with many unknowns. I also questioned myself for weeks on why I should try something different if I am comfortable and happy. Why take a risk when staying put could be the better decision?

Abraham Maslow said, "In any given moment we have two options, step forward into growth or step back into safety."

I think ultimately I wanted to continue growing by learning new things completely out of my comfort zone. Some days I am still petrified of leaving the job I knew so well and the friends I made in NC. However, after much thought and reflection, as well as a lot of family discussion, we took the leap and I joined a company called Inari. Inari is a small start-up based in Cambridge, Massachusetts. As they continue to grow, they have opened a site in West Lafayette, Indiana (right beside Purdue University), and this is where we will be moving.


Here is our new building in West Lafayette, Indiana.



If you want to learn more about Inari you can visit their website:  www.inari.com.  I went into this job with the task of building a team in these new and empty labs in Indiana. I have only been there for close to 3 months, and in just this short amount of time I have learned a lot about this small start-up. Here are just a couple of things that really excite me about the future of this company.

1.  We are working on gene-editing

When I say I wanted to continue to grow, I really mean I wanted to keep learning new ways to make an impact in agricultural biotechnology. Inari is a company focused on gene-editing technology. They want to use gene editing to improve crops faster than traditional breeding, by targeting precise changes that unlock a plant's inherent potential for improvement. I fully believe that gene editing is the technology of the future, and I was starting to worry that the technology was leaving me behind. Many companies are working on gene editing, including the ones I have left. In my past jobs I was also trying to make plants better with a biotech approach, but we were applying a technology (transgenesis) that is slightly different. Inari as an entire company is focused on using gene editing to create new products, and this was a major reason for my decision to join.

I know the term "gene editing" scares some people, but I can at least try to give a very simplified primer on the subject.  "Gene editing" and "CRISPR/Cas" are just some of the terms that get thrown around in papers, the news, and social media.  There are some great resources online that go into much more detail than I will here, but gene editing begins with one thing: if you are gene editing, then you are able to cut DNA.  Scientists use "molecular scissors" to make this cut.

https://geneticliteracyproject.org/2018/02/23/universal-genetic-scissors-crispr-cas-9-sister-protein-can-cut-dna-rna/

DNA is the code that defines who you are, and it is likewise the code that defines what a plant is, or for that matter any living thing.  I liken DNA to computer code: it exists unseen, yet gives rise to everything you see.  A DNA sequence (like a computer program) may say a flower color should be purple.

https://www.khanacademy.org/science/high-school-biology/hs-molecular-genetics/hs-rna-and-protein-synthesis/a/intro-to-gene-expression-central-dogma


Now what if someone thought the flower would be much prettier if it were white?  Well, plant breeders have been crossing plants for hundreds of years to do just this.  They figured out that by combining DNA from different plants you get different combinations of DNA until you eventually get what you want.  However, it takes time to figure out the right crosses, and when you cross plants, you may get the color you want along with other bad stuff that also came in with the DNA.  For instance, maybe you finally get that exact shade of red rose you want, but your roses completely lost that great smell that makes a rose a rose.  The human analogy would be that you got the perfect hair texture from your mom and the tallness from your dad, but you also got a gene for sickle cell disease.

This problem of mixing DNA by crossing and getting so many changes at once (some bad and some good) is where gene editing can shine.  Gene editing at its simplest can solve that problem by changing only the exact DNA sequence you need to get the result you want, while leaving everything else alone.  In the plant example, we could target one gene to get the color we want while all the other DNA stays the same.  Don't let the term scare you: when a plant breeder crosses plants they are in a sense editing too, but they rely on much more random DNA recombination to get the desired trait.  Gene editing, on the other hand, targets exactly the DNA you need to change to get the right color.

I really think of gene editing as surgery at a super small scale.  Just like in surgery, gene editing is made possible by precisely cutting the DNA where you want it cut.  A doctor makes exact cuts during surgery, and gene editing uses the same precision: we can target the exact location of a gene and cut the DNA.  Living organisms are extraordinary, so when that DNA is cut the plant says "uh-oh" and tries to fix it.  The catch is that the cell's machinery often fixes the DNA incorrectly, and the gene no longer works after that cut.

So how do you even make the cut to begin with?  There are multiple "scissors" that can cut DNA; you may hear them called "TALENs" or "meganucleases".  The most common "scissor" used now is called Cas, a protein found in bacteria that is relatively easy to use.  Here is a look at the "scissors" in real life actually cutting DNA!  Amazing!  The big orange blob is the Cas protein cutting DNA.

https://www.nature.com/articles/s41467-017-01466-8



If we go back to our flower color example, a real-life case of gene editing comes from a Japanese group who targeted a single gene and cut its DNA.  The plant repaired the DNA incorrectly and that gene no longer worked.  Without that one gene, the flower color went from purple to white.

https://www.asianscientist.com/2017/09/in-the-lab/morning-glory-color-violet/

Using these tools to cut DNA and disrupt a gene is called a knock-out (loss of function) and is the simplest form of gene editing.  Here is a graphic I picked up at a conference that summarizes it well.
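For the programming-minded, the knock-out idea can even be sketched in a few lines of code.  This is just a toy simulation, not real biology: the DNA sequence, the cut position, and the one-base "repair" below are all made up for illustration.  The point it shows is that losing a single base at the cut site shifts the reading frame, so every codon downstream changes and the resulting protein comes out wrong.

```python
# Toy sketch of a gene knock-out (loss of function).
# Not a model of real Cas biochemistry: the sequence, the cut site,
# and the one-base deletion "repair" are invented for illustration.

def translate(dna):
    """Read DNA three letters (one codon) at a time into amino acids."""
    # Tiny made-up subset of the genetic code; '*' marks a stop codon.
    table = {"ATG": "M", "GCT": "A", "TTT": "F", "GGT": "G",
             "TTG": "L", "GTT": "V", "TAA": "*"}
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = table.get(dna[i:i + 3], "X")  # 'X' = codon not in our toy table
        if aa == "*":                      # stop codon ends the protein
            break
        protein.append(aa)
    return "".join(protein)

gene = "ATGGCTTTTGGTTAA"   # a made-up 5-codon gene
cut_site = 6               # in real CRISPR, a guide RNA picks this spot

# Error-prone repair (NHEJ) often deletes or inserts a few bases;
# here we delete a single base at the cut.
edited = gene[:cut_site] + gene[cut_site + 1:]

print(translate(gene))     # prints "MAFG" -- the original protein
print(translate(edited))   # prints "MALV" -- frameshifted, broken protein
```

Notice that every amino acid after the cut site is different in the edited version: that frameshift is why a small deletion is usually enough to knock the gene out.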


Gene editing can become much more complex, as you may target multiple genes at once, but the concept is the same: you cut DNA at a precise location that you designate in order to get the exact change you are trying to make.  DNA is amazing stuff.  It defines every living thing, but it can also encode detrimental effects like human genetic diseases or disease susceptibility in plants.  By modifying the existing DNA in a plant (and some companies are trying to cure human diseases the same way), the possibilities become very promising for humanity.

Inari is applying gene-editing technology to commercial crops, and I am enjoying seeing this story unfold in front of me.  I can't wait to see the future!  Here is the Inari model taken right from the website.





2.  We are innovating:  Strong interpersonal relationships lead to strong innovation

I have always been fascinated with innovation and how companies innovate.  Some companies are better than others at coming up with new ideas and then implementing them.  I am sure there are many strategies for becoming better at innovation, but I really think innovation is directly linked to strong interpersonal relationships.  Any size company can innovate, but it has to establish a culture of strong relationships within and across teams.

I once read a paper on this that presents a model I personally think holds up very well.  It looks like common sense and looks easy, but as an organization grows it becomes much harder to maintain.  This may be one reason small companies seem to innovate so well, although it does not mean a large company cannot.


For people to innovate, there must be communication from multiple directions in which all parties share a sense of psychological safety.  Psychological safety is the shared belief that a team is safe for interpersonal risk taking.

Psychological safety starts with positive interpersonal relationships.  These relationships lead to strong communication within the organization, on both the good and the bad, and enable people to meet and talk.  These pieces come together to let employees innovate because they feel supported by those around them.  The biggest barrier to innovation is a lack of psychological safety; people don't innovate well when they are worried about the consequences (or lack of support) of taking risks.

I think Inari excels at innovation on a couple of fronts and has been a great place to observe and work.  First, they understand that communication is key.  I had never been in an environment where the entire company sits together for lunch every single day.  That interaction, over time, builds positive interpersonal relationships, and those are a catalyst for innovation.  Second, Inari celebrates failure.  Celebrating failure is critical to achieving psychological safety since it supports boldness across the community.  It also encourages fast decision making, since the community accepts failure in order to learn and do better on the next try.  Two of Inari's core values sum up this discussion:  "Open" and "Boldness."

While packing and moving is a job in and of itself, I am genuinely enthusiastic about tackling the upcoming challenges head on and seeing the possibilities that open up.