Updated docs to commit db4e81eafe041dd1f06f69f6b7699b88cc92ab88.
Circle-CI-website committed Jan 31, 2024
1 parent f23e556 commit 5696515
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions seminar/index.html
@@ -124,11 +124,11 @@ <h4>How to apply as a speaker</h4>
<p>The seminar is a great opportunity to present your recent work to a large international audience.
If you want to apply as a speaker, please use the contact in the registration confirmation email.</p>
<h4>Next seminar</h4>
-<h6> Title: TBD </h6> 7 February 2024 5:30 p.m. - 6:30 p.m. Central European Time
+<h6> Title: HyenaDNA: Long-range Genomic Sequence Modeling at Single Nucleotide Resolution </h6> 7 February 2024 5:30 p.m. - 6:30 p.m. Central European Time
<p>Speaker: <strong><a href="http://erictnguyen.com/">Eric Nguyen, Christopher Ré lab</a></strong>, Stanford University</p>
<strong>Abstract:</strong>
<p align="justify">
-TBD
+Genomic (DNA) sequences encode an enormous amount of information for gene regulation and protein synthesis. Similar to natural language models, researchers have proposed foundation models in genomics to learn generalizable features from unlabeled genome data that can then be fine-tuned for downstream tasks such as identifying regulatory elements. Due to the quadratic scaling of attention, previous Transformer-based genomic models have used 512 to 4k tokens as context (<0.001% of the human genome), significantly limiting the modeling of long-range interactions in DNA. In addition, these methods rely on tokenizers to aggregate meaningful DNA units, losing single-nucleotide resolution, where subtle genetic variations can completely alter protein function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large language model based on implicit convolutions, was shown to match attention in quality while allowing longer context lengths and lower time complexity. Leveraging Hyena’s new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single-nucleotide level, an up to 500x increase over previous dense attention-based models. HyenaDNA scales sub-quadratically in sequence length (training up to 160x faster than Transformer), uses single nucleotide tokens, and has full global context at each layer. We explore what longer context enables, including the first use of in-context learning in genomics for simple adaptation to novel tasks without updating pretrained model weights. On fine-tuned benchmarks from the Nucleotide Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 17 datasets using a model with orders of magnitude fewer parameters and less pretraining data. On the GenomicBenchmarks, HyenaDNA surpasses SotA on all 8 datasets by an average of +9 accuracy points.
</p>

<h4>Upcoming speakers</h4>
