It can be tricky to evaluate and hire technical people from a different discipline, especially so for a biologist trying to build a software team. For that reason, biotech startups will often hire senior leaders first for their digital teams, with an expectation for that person to build the team.
This strategy can backfire, however, if this first hire is too senior to build software. A leader with only people management skills will be quick to hire contractors and third-party vendors to meet short-term needs during the months-long team building process. This can lead to lock-in or over-dependence on external systems and disincentivize the "build" option for build/buy decisions. The best first hire is an experienced engineer with strong coding skills as well as some leadership and team building capability.
If you can hire two or more experienced engineers with a history of building together, you're likely to get great results on rapid timescales. At Generate our initial Informatics team was a hands-on leader and senior engineer duo, with years of experience working together at another digital biotech.
If you can't find the rare software engineer with biotech experience, you can do well hiring from the tech industry and teaching them the ropes. The high salary demands will stretch the budget, but a bold investment will give you a strong early start on the platform. Hiring early for software will give engineers time to learn the business so they can focus on the right problems. You can get early wins to justify this investment, as engineers can write a simple script to save biologists hours of data munging. Encourage the team to build one-off solutions and set up useful infrastructure while allowing time to build more sophisticated systems.
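For a flavor of what such an early win can look like, here is a hypothetical sketch - the directory layout and column names are invented for illustration - of the kind of one-off script that replaces hours of manual copy-pasting:

```python
# Hypothetical one-off script: consolidate per-plate CSV exports from a
# plate reader into a single tidy table for downstream analysis.
from pathlib import Path

import pandas as pd

frames = []
for csv_path in Path("plate_reader_exports").glob("*.csv"):
    df = pd.read_csv(csv_path)
    df["plate_id"] = csv_path.stem  # keep provenance of each measurement
    frames.append(df)

tidy = pd.concat(frames, ignore_index=True)
tidy.to_csv("all_plates_tidy.csv", index=False)
print(f"Combined {len(frames)} plates into {len(tidy)} rows")
```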
In contrast, it is not necessarily a good idea to hire machine learning talent early. If you don't know in very precise terms what ML/AI you need, or it's not a concrete part of your value proposition, then wait to hire your first ML scientist. Instead, find generalists in software and bioinformatics who know enough applied ML to get early wins with simple models that justify custom model development. Only after you have enough data, and know the subspecialty of ML you need, is it time to invest in your ML team.
The reason for this logic is that great ML scientists are very expensive but are not going to provide a return on the investment if there is not a clear problem and sufficient data to solve it. This is not to discourage hiring an ML team, but to encourage leaders to understand and specify the problem ML is solving for their biotech, and what types of modeling approaches and architectures are appropriate for their data. Too often AI is held up as a magic wand that can solve arbitrary problems, but in reality the data infrastructure is more important and is, in any case, a prerequisite for ML.
Lab automation
The typical attitude toward lab automation is that it is reserved for mature companies with high-volume assays and bio-production processes. Recent trends in robotic automation are upending this view, however, due to cheaper hardware.
For a small lab, it is now affordable to buy a benchtop robot and begin incorporating automation in day-to-day operations. Plenty of experimental biologists now have basic scripting skills and can program an OT-2 to run their assays, but this skill is not a given and needs to be screened for during interviews.
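To make this concrete, here is a minimal sketch using the Opentrons Python Protocol API - the labware, deck slots, and volumes are illustrative placeholders, not a real assay:

```python
# Minimal OT-2 protocol sketch (Opentrons Python Protocol API v2).
# Labware, deck slots, and volumes are illustrative placeholders.
from opentrons import protocol_api

metadata = {"apiLevel": "2.13"}

def run(protocol: protocol_api.ProtocolContext):
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", 1)
    reservoir = protocol.load_labware("nest_12_reservoir_15ml", 2)
    plate = protocol.load_labware("corning_96_wellplate_360ul_flat", 3)
    p300 = protocol.load_instrument("p300_single_gen2", "right", tip_racks=[tips])

    # Dispense 100 uL of reagent from the reservoir into every well.
    for well in plate.wells():
        p300.transfer(100, reservoir["A1"], well)
```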
Establishing a culture of automation early on will not only deliver gains in precision, reproducibility and productivity, but also prepare you well when it is time to scale your robotic operations. Invest in automation engineers early, and require that experimentalists automate their protocols, in order to accelerate your data generation engine.
Scaling the platform
If a startup is successful, it will likely need to scale fast to capitalize on the opportunity. This is a massive challenge - even the most forward-thinking startups struggle to maintain their culture and keep up with technology as they dilute their tech-heavy founding teams with domain-specific talent from traditional biopharma.
Growing pains are inevitable, but can be overcome if they are seen as problems to be solved. Search for these problems and surface them before they become ingrained, and then empower those at the front lines to solve them. Otherwise, you risk a toxic culture of employees giving up, blaming leadership, and letting issues become someone else's problem. The most important thing leaders can do is consistently and clearly communicate the vision to energize the true believers, deflect toxic negativity, and sidestep politics. Remind the team that scaling is a very good problem to have.
Great digital infrastructure can ease these problems. If you've built software to empower your employees with permissionless access to data, ML models, software services, and requests for physical workflows, the scaling phase is where the investment pays off in productivity gains. While new managers need time to establish and navigate personal relationships, digital tools enable your people to get their jobs done without the need for human communication channels.
Onboarding
Onboarding is crucial for maintaining culture. Make sure you have laid out simple rules and processes that make your culture special. Have a full program of videos, reading material, process flowcharts, major capabilities, and anything else you want every single employee to know. If you've developed your org as cross-functional product teams, clearly lay out the goals of each group, its product or menu of services, its leader, and how an employee might interact with the group.
A digital biotech needs to pay special attention to cross-disciplinary training during onboarding. Biologists need to be walked carefully through the digital toolbox, since the entire mindset is likely very new. You may consider setting every employee up with a coding environment and the ability to programmatically access the company's digital services. A few training sessions may be needed to prevent a lab data culture dominated by Excel and Prism, and to encourage one of database queries, Python scripts, and Jupyter/R notebooks. Keep in mind that the reason many biologists joined your company is the promise of ML and computation. A small investment in training and onboarding to get them using digital tools will pay off in the form of an enabled, energized lab.
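For instance, onboarding might walk a new biologist through a notebook cell like the sketch below - the connection string, table, and column names are placeholders for whatever your internal warehouse actually looks like:

```python
# Illustrative notebook cell: query assay results straight from an
# internal database instead of exporting spreadsheets by hand.
# The connection string, table, and columns are placeholders.
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("postgresql://user:pass@warehouse/lab")

results = pd.read_sql(
    """
    SELECT sample_id, assay, measurement, measured_at
    FROM assay_results
    WHERE assay = 'binding_affinity'
      AND measured_at >= '2022-01-01'
    """,
    engine,
)

# Summarize per-sample means - the kind of step often done by hand in Excel.
print(results.groupby("sample_id")["measurement"].mean().head())
```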
Likewise, you'll want to walk your computational teams through all of the relevant biology. This will prevent them from solving the wrong problems or making simplistic assumptions about the data. Encourage computational people to shadow the wet lab and deeply understand each assay and protocol. Foster applications of software and machine learning in experimental data processing, starting with a cross-disciplinary onboarding process.
The onboarding program needs to be in place well before the company begins scaling, preferably starting with employee #1. Very soon in the company's lifetime it will need to be someone's full-time job to manage the process.
Conclusions
A digital biotech is a new and different breed of organization, blending the best of tech and biotech industries. The advantages of a software-heavy, ML-powered biology platform will be so strong that these companies will likely dominate or become major players in the biopharma industry.
The unique multidisciplinary nature of these companies requires a new approach to starting and scaling, which I've tried to outline here. Many of the ideas were drawn from modern tech lore or business cases, and could be applied generically to any tech startup. However, there is enough of a disconnect between biotech and tech cultures that these lessons are worth repeating here for a cross-disciplinary audience.
Most importantly, there is no actual playbook, just anecdotes, lessons and mental models to help make decisions. Just as with science, keep an open mind, experiment often, and share what you've learned with the world. Good luck!
Keystone events for life science tools companies are like 18th Century naval battles, full of carefully crafted competitive language. By the end of this year, following ASHG and other conferences, I'll probably know more and think differently.
I believe the work Illumina team members have accomplished, especially during a pandemic, is nothing short of amazing. Coming from the current unequivocal leader in sequencing, these technology improvements will be a tailwind for a life science industry in desperate need of a win. And for that, I'm very happy. Regardless of how market share shifts, competition is terrific for researchers, clinicians, and patients alike. We've not had competition in sequencing like this in over a decade.
Similar to some others, I'm mixed on the announcements overall. I've structured this blog around the four big unveilings - NovaSeq 6000 Dx, NovaSeq X Series, XLEAP-SBS, and Complete Long Reads. Within these products, there are areas where Illumina clearly has delivered on major pain points and others where I feel marketing and reality have become detached. While some of these improvements are substantial, I'd argue they also are safe and predictable - which isn't necessarily a negative in the current market environment.
I'd be remiss not to mention several others who've posted on this subject already, including Varro Analytics, Keith Robison, Shawn Baker, and Nava Whiteford. I'm sorry if I missed anyone. Let's start with a brief overview of the hardware announcements.
NovaSeq 6000 Dx
"The launch of NovaSeq™ 6000 Dx as the first FDA-registered and CE-marked in vitro diagnostic |
The NovaSeq 6000 Dx seems like a slam dunk to me. Because FDA-cleared devices like the 6000 Dx are compatible with FDA-cleared testing kits, the instrument confers an attractive operating margin for labs that can run it near full utilization. Since the instrument includes an onboard DRAGEN server, labs can spend less on high-performance computing infrastructure or cloud fees.
Though the centralized lab model has its critics, cleared instruments like the 6000 Dx broaden access to molecular diagnostic testing. This is especially true outside of the United States, where cleared devices may be the only way to test a population. The instrument announcement didn't surprise me, however. One of the first-ever web results for 'NovaSeq Dx' was its prediction by James Hadfield on his Enseqlopedia.com page. Good job!
NovaSeq X Series
"NovaSeq X Series [...]"
The NovaSeq X Series, including the X and X Plus, was a more unexpected unveiling. As mentioned above, others have summarized elsewhere the details of both platforms better than I could. I'd recommend looking at the product specification page in its Apple-esque glory. The X Plus version features a dual flow-cell design, doubling per-instrument throughput compared to the X for a ~25% increase in capital expense. I suspect this adoption cycle will mirror that of the NovaSeq 5000 and 6000, the latter being much more popular. The X Series and 6000 Series overlap modestly in terms of throughput - the middle-throughput X Series flow-cell, which supports a 30X short-read-only human genome, hits shelves sometime in the waning quarters of 2023. Curiously, Illumina hasn't stated a release timeline for the 2x50 or 2x100 paired-end kits on the 25B flow-cell. I imagine customers hankering for ultra-high-throughput single-cell counting assays might be somewhat disappointed - an application Illumina has cited as a potential growth driver.
In some ways, today's Illumina appears to be a vast empire now bordered by half a dozen highly-motivated, reasonably well-resourced factions. It's difficult to defend that much border territory simultaneously. Despite their relatively small sizes, some competitors' capabilities impart significant advantages over the status quo - whether along the sequencer 'weight class' spectrum or the inexorably growing application space. The NovaSeq X Series is a push back against high-throughput competition as well as an attempt to catalyze another slide along the price-elasticity curve. I'm reasonably confident Illumina is correct - that growing sample volumes crossed with deeper sequencing should result in higher consumables pull-through for the X Series as it becomes the firm's flagship sequencer.
Many of the coordinated sub-system-level innovations contained within the X Series help to preserve Illumina's cost-of-goods-sold advantage across current and future offerings. Overall, I'm excited to watch this adoption cycle, though it's reminiscent of the Illumina playbook of old: scale flow-cell density to generate a wave of high-margin revenue from a subset of customers that generate a disproportionately large percentage of global sequencing volume. I'll save the onerous rundown of what I believe are the major rate-limiters for sequencing adoption for another time. The unit price-per-genome isn't near the top of that list, but provider education, clinical decision support, robust insurance coverage, and billing and coding infrastructure sure are.
XLEAP-SBS
"The creation of XLEAP-SBS™ Chemistry, formerly referred to as Chemistry X, delivers fundamentally new sequencing by synthesis [...]"
XLEAP-SBS is Illumina's go-forward short-read chemistry. As summarized above, XLEAP improves over standard SBS along several axes, though I'm reluctant to call it fundamentally new. At its core, XLEAP still is SBS - incorporate, detect, cleave, de-block, rinse, repeat - just better this time. The new chemistry is nearly twice as fast, meaning labs can crank through more samples per wall-clock hour. Processing more samples per day per instrument compresses the payback period for a new NovaSeq X Series. Of course, this assumes sample procurement and batching aren't limiting the customer - and you know what they say about assuming.
During its presentation, Illumina touted the stability and resilience of XLEAP reagents, enabling what I perceived to be their most exciting advancement - ambient temperature shipping! XLEAP reagents and consumables can be consolidated and lyophilized, resulting in a 90% reduction in both packaging weight and waste. I cannot overstate how this eases logistics and shipping COGS for Illumina. Meanwhile, lab personnel can rest assured their consumables won't thaw if delivered over the weekend. There's also been a lot of refrigerator-generated pain on Twitter, so the news is timely. Territories with insufficient cold-chain logistics now can more easily step into the world of high-throughput sequencing. This could galvanize the creation of population databases from less-represented parts of the world, ensuring more medically-relevant genomics for all.
XLEAP's ostensible accuracy improvements seem murky to me. Illumina claims a 3X accuracy increase over standard SBS but doesn't state by what measure. The company briefly showed reductions in phasing and pre-phasing error, meaning that the thousands of DNA molecules within a sequencing cluster are more in sync with each other - yielding greater signal-to-noise and fewer base-calling errors. Ultimately, the gold-standard accuracy metric for NGS is the beloved Q-score. For those unaware, Q-scores are logarithmic - not linear.
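To make the logarithm concrete - a back-of-the-envelope sketch, not Illumina's published math - a Phred Q-score is defined as Q = -10 × log10(per-base error probability), so a claimed 3X reduction in error rate moves the score by only 10 × log10(3) ≈ 4.8 points:

```python
# Phred quality: Q = -10 * log10(per-base error probability).
import math

def q_score(error_prob: float) -> float:
    return -10 * math.log10(error_prob)

q30_error = 1e-3                      # Q30 = 1 error per 1,000 bases
improved = q30_error / 3              # the claimed 3X accuracy gain

print(round(q_score(q30_error), 1))   # 30.0
print(round(q_score(improved), 1))    # 34.8 -> only a ~4.8 point bump
```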
Using this scale, XLEAP-SBS achieves a nominal ~4.8 point Q-score increase. Illumina's product specification page for the NovaSeq X Series doesn't reflect even this modest increase, however. For its popular 2x150 paired-end kit, the company states that 85% of bases are at or above Q30. This is unchanged from the NovaSeq 6000 reagents. I'm confused as to why Illumina listed accuracy in its press release if the fold-improvement doesn't manifest on the product page. Perhaps they're referring to variant calling accuracy, which is measured by an F1-score - the harmonic mean of a classifier's precision and recall. I'll address this a bit later. Regardless, how does XLEAP compare with the competition's read-level accuracy claims?
Some vendors layer UMI-based consensus correction onto short-read technologies in hopes of turning artifact-laden sequence reads into higher-quality consensus reads. I'll save a full discussion on this, including the advantages and disadvantages of UMI correction, for another time.
Unlike all other groups I'm aware of, PacBio and Element use sequencing-by-binding chemistries in their short-read platforms. In my opinion, SBB is fundamentally different than any incarnation of SBS - contact, detect, dissociate, de-block, incorporate, rinse, repeat. While less proven in users' hands compared to SBS, SBB promises a much higher ceiling for data quality. Element claims >90% of bases above Q30 on its 2x150 paired-end kit, a marked improvement over XLEAP-SBS's 85%. Meanwhile, PacBio claims >90% of bases above Q40 on its 1x200 single-end kit - more than an order of magnitude higher than the XLEAP 2x100 paired-end kit. Examples of both companies' quality traces are below.
Caveat emptor! The above data are from internal runs by both companies. Of course, what actually happens in customers' hands is more important than what a company claims. For clarity, Element's system just became available and PacBio plans to launch its short-read sequencer during the first half of 2023 with an external beta slated for this Fall. We'll see how the market reacts to these launches as raw short-read data quality is not a knob that has been turned historically. Illumina does not plan to port XLEAP-SBS to its mid-throughput machines until 2024, suggesting they're also in wait-and-see mode.
As some have pointed out, gone is XLEAP's claim of 2X longer read lengths. With SBS, base accuracy and read length often pull in different directions, so I guess something had to give. Illumina still mentions the possibility of longer short reads using XLEAP, though the 2x250 kit from the NovaSeq 6000 is notably missing on the NovaSeq X Series product page. I imagine some are disappointed not to see a 2x300 or 2x400 kit! I emphasize longer because a 600-bp read compared to an 800-bp read is the difference between a dozen and a baker's dozen of doughnuts, whereas a native long-read is an entire bakery! Now, on the matter of long reads...
Complete Long-Reads
"Illumina® Complete Long-Reads, formerly known as Infinity, will accelerate access to the remaining 5% of genic regions with a scalable, high-throughput, low DNA-input long read technology. Further, the technology provides a complete and accurate representation of the genome at the single molecule level without needing a new platform. In addition, it offers contiguous long reads to close key gaps. Two products will be launched with full end-to-end workflows in 2023. The first product, currently in early access, is a whole-genome assay that uses long contiguous reads to further extend the coverage and completeness of our existing 30x genome. The second is an enrichment panel targeting only the difficult to map regions of the genome. When combined with the 30x genome on NovaSeq 6000 and NovaSeq X Series, the panel delivers a significant improvement in throughput and cost versus on-market long read technology for human whole-genome sequencing."
Earlier I mentioned there were places I felt marketing and reality had become detached - a lot of that resides in this section. I won't bury the lede - some Illumina customers will use Complete Long Reads, an offering whose acronym (CLR) amusingly collides with PacBio's continuous long reads - a now-defunct data type replaced by HiFi reads. Regardless of my data quality, workflow, and price concerns - scale and distribution matter in any industry. I sincerely hope Illumina's CLR technology helps fuel the growing awareness of and demand for longer-range information in multi-omics. Do I think CLR can replace native long-read technologies? No, I do not.
In my view, the sudden re-emergence of synthetic long-read (SLR) approaches is telling - multiple short-read players, including Illumina and Element, have built or bought SLR workflows, ostensibly enabling long-read sequencing on a short-read platform. Does it sound too good to be true? History says yes.
Each SLR workflow is slightly different but involves the same process of computationally reassembling short reads into the original, longer molecules from whence they came. While no one is certain exactly what Illumina is doing, Twitter and sell-side sleuths have converged on something resembling Sequencing Assembly via Mutagenesis (SAM). At the Festival of Genomics (FoG), Illumina presented an IGV screenshot of its long-read assay with the title 'Longas', likely referring to Longas Technologies' Morphoseq - a SAM SLR approach Illumina may have stealthily acquired. I linked to a PacBio article because Illumina seems to have pulled the FoG slides.
Understandably, Illumina attempted to distinguish CLR from other synthetic long-read approaches - including its own last attempt in 2014. Management said CLR is not a 'strobe read' approach - an ancient PacBio technology that became defunct after the company improved its polymerase photoshielding capabilities. Illumina also said that CLR isn't synthetic. I'm happy to eat crow in the future - but CLR is absolutely a synthetic long-read method no matter how buttoned up the library prep and bioinformatics steps become. No flow-cell, cluster-based SBS platform can generate accurate data past a few hundred bases, let alone 6,000-7,000 bases - the reported read N50 value for CLR. If Illumina had done the revolutionary engineering necessary to enable multi-kilobase sequencing, why isn't it affecting the core short-read workflows? Why wasn't it mentioned in the presentation?
While 6-7kb CLR reads are an improvement over standard short-reads, they're a far cry from true long reads when it comes to structural variant calling, small variant calling in hard-to-sequence loci, and long-range phasing of genetic variation. There's also the issue of data quality. What systematic errors imbued by the library prep will persist in the final data, if any? Unfortunately, I don't believe Illumina has published any data to corroborate its claims in the press statement above. In the next paragraph, I'll touch only briefly on genome assembly since clinical sequencing is actually reference-guided re-sequencing.
De novo assembly is a much taller order than clinical re-sequencing. There's no hiding behind a reference - it's just blue sky and sequence data. Read length, read quality, and even coverage matter a lot for assembly, as evidenced by the Telomere-to-Telomere consortium's work. Pangenome references are beginning to make their way into tools like the DRAGEN suite, though the generation of pangenomes and acceptance into clinical pipelines seem rate-limiting.
Genome assembly isn't where CLR is being positioned, as far as I can tell. SV calling, small variant calling in hard-to-sequence loci, and long-range phasing are much more relevant for clinical WGS today. Illumina's webpage claims that a CLR genome analyzed with DRAGEN yields a combined F1 score of 99.87% for small variant detection. Meanwhile, DeepVariant paired with PacBio HiFi reads achieves an F1 score of 99.89% - outperforming CLR + DRAGEN for small variant detection on this truth set. Granted, DeepVariant currently is not productized nearly to the same extent as DRAGEN.
Of course, this score only reflects small variants - not structural variants, the largest class of genetic variation by affected bases. Moreover, the mutagenic library prep may impart artifacts into CLR data - which could prove problematic for complex SVs like tandem repeats that require ultra-precise, intra-SV resolution. Once again - we must see the data!
Buckle up because this is an opinionated paragraph. I dislike the notion that short-reads leave just 5% of 'genic regions' uncharacterized. Over the past year, Illumina has referred to these regions as 'small edge cases', as if a sprinkling of CLR data is the final oomph needed to reveal all the genome's secrets. For some researchers and clinicians, these regions are all they focus on! What is or isn't an 'edge case' shouldn't be determined by the limits of a technology, regardless of its ubiquity. Ultimately, biology and all its beautiful difficulties should. On that topic, what about the >97% of the human genome harboring non-coding variation? What about structural variation? What about the fact that our genomes are diploid? Even in mappable regions, will CLR be able to tease apart subtly-divergent haplotypes along entire gene bodies? What about base modifications like methylation that can affect disease etiology? What about other epigenetic features like chromatin accessibility? What about full-length RNA isoform expression, alternative splicing, and genome annotation? In my view, we are far too early into our human genetics journey to claim victory over the higher-order complexities linking genotype to phenotype. But I digress.
Some Twitter commenters have pointed out workflow and sample compatibility considerations with CLR, and I think it's fair to mention them. Once again, we're not certain exactly what Illumina is doing, so read with caution. Both the WGS and targeted enrichment workflows seem specific to humans. Whether the company will support non-human applications is unknown right now. I've wondered if the mutagenic PCR conditions are universal or if they change based on the organism or sequence complexity. That might explain the human-centrism. Moreover, I'm curious if CLR will enable full-length RNA isoform sequencing - an active area of rare disease research currently intractable with short-reads alone.
Illumina is right to call out the relatively high DNA input requirements on native long-read platforms. This matters in some clinical and research settings as not all sample types will yield enough starting DNA. Moreover, recontacting and getting more DNA from a patient can be difficult. Illumina specifies that 50ng of DNA is sufficient for CLR WGS. Of course, CLR is a PCR-based assay. This will be an interesting area to watch over the next year.
During its follow-up investor call, Illumina didn't quantify how cost or throughput would compare against native long-read platforms, likely waiting for them to play their respective next hands. Both PacBio and Nanopore will have conferences later in 2022. I won't speculate on those companies here. I will, however, attempt to guess what CLR could look like - and once again I'm happy to have my napkin math swatted away.
Unlike linked reads, synthetic long-reads exhibit a dislocation between physical coverage and sequence cost, resulting in an oversampling penalty. There's one round of coverage to assemble short-reads into synthetic long-reads and another round to cover the reference with synthetic long-reads. I've heard fold-penalty estimates ranging between 5X and 10X. Looking at Illumina's recent investor presentation on Page 37, they suggest the NovaSeq X Plus can process 3,000 CLR WGS samples per year - down from 20,000 30X short-read WGS samples. This implies a fold-penalty of 6.66X, which I will round to 7X.
For the CLR WGS workflow, let's assume you buy a NovaSeq X Plus. If you need 7X short-reads to create synthetic long-reads and 30X synthetic long-reads to cover the genome, that's 210X genome coverage at $2/GB. If a 30X genome is $200, then a 210X genome is $1,400 of reagents. Let's also assume a three-year amortization schedule for the $1.25M capital expense. The sample throughput drops by the same fold-penalty as coverage, so 20,000 genomes-per-year shrinks to ~3,000 genomes-per-year. That's 9,000 samples over three years. This adds ~$140 per sample - equating to roughly $1,540 per CLR-boosted genome, excluding service fees.
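For anyone who wants to check the napkin, here is the same arithmetic in a few lines of Python - every input is an assumption from the text above, not a vendor-quoted price:

```python
# Napkin math for a CLR-boosted genome; all inputs are assumptions.
fold_penalty = 7                    # rounded from 20,000 / 3,000 samples/year
reagents_30x = 200                  # assumed 30X short-read genome at $2/GB
reagents_clr = reagents_30x * fold_penalty   # 210X coverage -> $1,400

capex = 1_250_000                   # assumed NovaSeq X Plus capital expense
samples_3yr = 3_000 * 3             # ~3,000 CLR genomes/year, 3-year schedule
amortization = capex / samples_3yr  # ~$139 per sample

print(f"~${reagents_clr + amortization:,.0f} per CLR-boosted genome")  # ~$1,539
```

The same multiplication with a ~$600 genome on a fully amortized NovaSeq 6000 yields the ~$4,200 figure discussed next.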
Illumina is making CLR available on the NovaSeq 6000 as well, which enables a ~$600 short-read genome using the S4 flow-cell. Assuming full instrument amortization and the same fold-penalty, a current NovaSeq 6000 owner could generate a ~$4,200 CLR-boosted genome. During the second half of 2023, Illumina plans to launch an enrichment panel for hard-to-sequence loci. I'm unsure what the content will be or what the kit will cost. Regardless, the panel would certainly be cheaper than a full CLR-boosted genome. For Illumina to offer both suggests a trade-off - likely an improvement in per-sample price and throughput at the cost of variant calling power, phasing, and comprehensiveness.
Ultimately, the burden of proof is on synthetic long-read vendors. We need to see the data. The phrase "this time is different" gives me pause, given the quantity of ill-fated synthetic long-read protocols that came before. Why didn't they catch on when native long-reads were 100X more expensive, 100X lower throughput, and 100X less accurate? As the cost of all sequencing falls, ease of use, data quality, and the bioinformatics burden will matter more and more. It's not the cost of the human genome that matters - it's the value delivered to the medical system. To fully elucidate that value and make it clear to all healthcare stakeholders, we'll need all tools at our disposal.
Conclusion
I believe Illumina likely will remain the dominant NGS vendor for the foreseeable future for many reasons that I didn't write about in this blog - structural entrenchment, service & support, sales infrastructure, network effects, ecosystem partnerships - the list goes on. At the same time, I don't believe one hammer can hit all nails in perpetuity. Market share is poised to shift more dramatically during the 2020s than in the 2010s. New markets and sequencing applications may be built atop a different technology - growing the overall pie rather than fragmenting it.
I purposefully didn't discuss Illumina as a stock, though the current market climate likely shaped parts of the firm's announcements and messaging. Investors know the competition is blossoming in an environment where top-line growth is being eagle-eyed. The bottom-line is being scrutinized even more. It's no wonder, then, that Illumina would protect its high-throughput, gross profit engine. As I said earlier, this seems like a rational and prudent move that should play out similarly to the NovaSeq 6000 cycle. I'm excited to see technological swords and marketing shields clash once again - and you should be too. Monopolies do not beget a blazing pace of innovation, and that's what we need this early in the 'omics revolution.