
Commit

update cv and add amigolut. update bib for tdc trets paper
oliviaweng committed Nov 30, 2024
1 parent a0210c3 commit cd1ee6d
Showing 4 changed files with 12 additions and 10 deletions.
5 changes: 4 additions & 1 deletion index.html
@@ -34,7 +34,10 @@ <h3><script type="text/javascript" src="date.js"></script> </h3>
<h2 id="publications">Publications</h2>
<ol>
<li>
-<p>Colin Drewes, Tyler Sheaves, <strong>Olivia Weng</strong>, Keegan Ryan, William Hunter, Christopher McCarty, Ryan Kastner, Dustin Richmond. <a href="https://dl.acm.org/doi/pdf/10.1145/3666092">Turn on, Tune in, Listen up: Maximizing Side-Channel Recovery in Cross-Platform Time-to-Digital Converters</a>. In <em>ACM Transactions on Reconfigurable Technology and Systems (TRETS)</em>. To appear.</p>
+<p><strong>Olivia Weng</strong>, Marta Andronic, Danial Zuberi, Jiaqing Chen, Caleb Geniesse, George A. Constantinides, Nhan Tran, Nicholas Fraser, Javier Mauricio Duarte, Ryan Kastner. <a href="/">Greater than the Sum of its LUTs: Scaling Up LUT-based Neural Networks with AmigoLUT</a>. In submission.</p>
+</li>
+<li>
+<p>Colin Drewes, Tyler Sheaves, <strong>Olivia Weng</strong>, Keegan Ryan, William Hunter, Christopher McCarty, Ryan Kastner, Dustin Richmond. <a href="https://dl.acm.org/doi/pdf/10.1145/3666092">Turn on, Tune in, Listen up: Maximizing Side-Channel Recovery in Cross-Platform Time-to-Digital Converters</a>. In <em>ACM Transactions on Reconfigurable Technology and Systems (TRETS) 17, 3, Article 49</em>. September 2024.</p>
</li>
<li>
<p><strong>Olivia Weng</strong>, Andres Meza, Quinlan Bock, Benjamin Hawks, Javier Campos, Nhan Tran, Javier Duarte, Ryan Kastner. <a href="https://dl.acm.org/doi/pdf/10.1145/3665334">FKeras: A Sensitivity Analysis Tool for Edge Neural Networks</a>. In <em>ACM Journal on Autonomous Transportation Systems 1, 3, Article 15</em>. September 2024.</p>
3 changes: 1 addition & 2 deletions index.xml
@@ -32,8 +32,7 @@ Before coming to UCSD, I received my BS in Computer Science at the University of
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://oliviaweng.github.io/projects/</guid>
-<description>FKeras: A Sensitivity Analysis Tool for Edge Neural Networks JATS
-Many scientific applications require neural networks (NNs) to operate correctly in safety-critical or high radiation environments, including automated driving, space, and high energy physics. For example, physicists at the Large Hadron Collider want to deploy an autoencoder to filter their experimental data at a high data rate (~40TB/s) in a high radiation environment. Thus, the autoencoder hardware must be both efficient and robust.</description>
+<description>AmigoLUT: Scaling Up LUT-based Neural Networks with Ensemble Learning Applications including high-energy physics and cybersecurity require extremely high throughput and low latency neural network inference on FPGAs. Lookup Table (LUT)-based NNs like LogicNets address these constraints by mapping neural networks directly to LUTs, achieving inference latency on the order of nanoseconds. However, it is difficult to implement larger, more performant LUT-based NNs because LUT resource usage increases exponentially with respect to the number of LUT inputs.</description>
</item>

</channel>
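The scaling argument in the AmigoLUT description above (LUT cost exponential in fan-in, ensemble cost linear in member count) can be sanity-checked with a toy calculation. The fan-in and member counts below are made-up illustrative numbers, not figures from the paper:

```python
# Toy sanity check of the LUT scaling argument (illustrative numbers only,
# not figures from the AmigoLUT paper).

def truth_table_entries(fan_in: int) -> int:
    # A K-input Boolean function needs a truth table with 2**K entries,
    # so LUT cost grows exponentially in fan-in.
    return 2 ** fan_in

# One "wide" LUT neuron with fan-in 12:
wide_cost = truth_table_entries(12)         # 4096 entries

# An ensemble of 4 small models whose neurons have fan-in 6:
ensemble_cost = 4 * truth_table_entries(6)  # 4 * 64 = 256 entries

print(wide_cost, ensemble_cost)
```

Halving the fan-in shrinks each member's cost by a factor of 64, while adding members only adds cost linearly, which is why ensembling small LUT-based NNs can reach higher accuracy within a fixed FPGA budget.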
14 changes: 7 additions & 7 deletions projects/index.html
@@ -30,20 +30,20 @@ <h1>Projects</h1>



+<h2 id="amigolut-scaling-up-lut-based-neural-networks-with-ensemble-learning">AmigoLUT: Scaling Up LUT-based Neural Networks with Ensemble Learning</h2>
+<p>Applications including high-energy physics and cybersecurity require extremely high throughput and low latency neural network inference on FPGAs.
+Lookup Table (LUT)-based NNs like <a href="https://github.com/Xilinx/logicnets">LogicNets</a> address these constraints by mapping neural networks directly to LUTs, achieving inference latency on the order of nanoseconds.
+However, it is difficult to implement larger, more performant LUT-based NNs because LUT resource usage increases exponentially with respect to the number of LUT inputs.
+Our work <em>AmigoLUT</em> creates ensembles of smaller LUT-based NNs such that they scale up linearly with respect to the number of models to achieve higher accuracy within the resource constraints of an FPGA.</p>
<h2 id="fkeras-a-sensitivity-analysis-tool-for-edge-neural-networks">FKeras: A Sensitivity Analysis Tool for Edge Neural Networks</h2>
-<p><a href="https://dl.acm.org/doi/pdf/10.1145/3665334">JATS</a></p>
+<p><a href="https://dl.acm.org/doi/pdf/10.1145/3665334">JATS'24</a></p>
<p>Many scientific applications require neural networks (NNs) to operate correctly in safety-critical or high radiation environments, including automated driving, space, and high energy physics.
For example, physicists at the Large Hadron Collider want to deploy an autoencoder to filter their experimental data at a high data rate (~40TB/s) in a high radiation environment.
Thus, the autoencoder hardware must be both efficient and robust.</p>
<p>However, efficiency and robustness are often in conflict with each other.
To address these opposing demands, we must understand the fault tolerance inherent in NNs.
To identify where and why this inherent redundancy exists in a NN, we present <a href="https://github.com/KastnerRG/fkeras">FKeras</a>, an open-source fault tolerance library for Keras that measures the fault tolerance of NNs at the bit level, using metrics such as the gradient and the Hessian.
Once we identify which parts of the NN are insensitive to radiation faults, we need not protect them, reducing the resources spent on robust hardware.</p>
-<h2 id="ensemblelut-evaluating-ensembles-of-logicnets">EnsembleLUT: Evaluating Ensembles of LogicNets</h2>
-<p>Applications including high-energy physics and cybersecurity require extremely high throughput and low latency neural network inference on FPGAs.
-<a href="https://github.com/Xilinx/logicnets">LogicNets</a> addresses these constraints by mapping neurons directly to LUTs, achieving inference latency on the order of nanoseconds.
-However, it is difficult to implement larger, more performant neural networks as LogicNets because LUT usage increases exponentially with respect to neuron fan-in (i.e., synapse bitwidth X number of synapses).
-Our work <em>EnsembleLUT</em> creates ensembles of smaller LogicNets such that we scale up LogicNets linearly with respect to the number of models to achieve higher accuracy within the resource constraints of an FPGA.</p>
<h2 id="tailor-altering-skip-connections-for-resource-efficient-inference">Tailor: Altering Skip Connections for Resource-Efficient Inference</h2>
<p><a href="https://arxiv.org/abs/2102.01351">SLOHA'21</a>, <a href="https://dl.acm.org/doi/10.1145/3543622.3573172">FPGA'23</a>, <a href="https://dl.acm.org/doi/pdf/10.1145/3624990">TRETS'24</a></p>
<p>Deep neural networks employ skip connections&mdash;identity functions that combine the outputs of different layers&mdash;to improve training convergence; however, these skip connections are costly to implement in hardware because they consume valuable resources.
@@ -56,7 +56,7 @@ <h2 id="pentimento-data-remanence-in-cloud-fpgas">Pentimento: Data Remanence in
The data constituting an FPGA pentimento is imprinted on the device through bias temperature instability effects on the underlying transistors.
Measuring this degradation using a time-to-digital converter allows an attacker to (1) extract proprietary details or keys from an encrypted FPGA design image available on the AWS marketplace and (2) recover information from a previous user of a cloud-FPGA.</p>
<h2 id="maximizing-channel-capacity-in-time-to-digital-converters">Maximizing Channel Capacity in Time-to-Digital Converters</h2>
-<p><a href="https://ieeexplore.ieee.org/abstract/document/9444070">FCCM'21</a>, <a href="https://dl.acm.org/doi/pdf/10.1145/3543622.3573193">FPGA'23</a>, <a href="https://dl.acm.org/doi/pdf/10.1145/3666092">TRETS (To appear)</a></p>
+<p><a href="https://ieeexplore.ieee.org/abstract/document/9444070">FCCM'21</a>, <a href="https://dl.acm.org/doi/pdf/10.1145/3543622.3573193">FPGA'23</a>, <a href="https://dl.acm.org/doi/pdf/10.1145/3666092">TRETS'24</a></p>
<p>Side-channel leakage poses a major security threat in multi-tenant environments.
In FPGA systems, one tenant can instantiate a voltage fluctuation sensor that measures minute changes in the power distribution network and infer information about co-tenant computation and data.
In this project, we present the <em>Tunable Dual-Polarity Time-to-Digital Converter</em>&mdash;a voltage fluctuation sensor with three dynamically tunable parameters: the sample duration, sample clock phase, and sample clock frequency.
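The bit-level sensitivity idea behind FKeras can be sketched with a toy first-order analysis (hypothetical code, not FKeras's actual API; FKeras operates on real Keras models and also supports Hessian-based metrics): rank (weight, bit) pairs by how much a single bit flip is expected to change the loss.

```python
import numpy as np

# Toy stand-in for FKeras-style bit-level sensitivity ranking (illustrative
# only; the model, encoding, and scoring rule here are assumptions).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                 # toy inputs
w = rng.normal(size=4)                       # "trained" weights
y = X @ w + 0.1 * rng.normal(size=64)        # noisy targets

def loss(weights):
    return float(np.mean((X @ weights - y) ** 2))

# Numerical gradient of the loss with respect to each weight.
eps = 1e-5
grad = np.array([
    (loss(w + eps * np.eye(4)[i]) - loss(w - eps * np.eye(4)[i])) / (2 * eps)
    for i in range(4)
])

# In an 8-bit fixed-point encoding with 6 fractional bits, flipping bit b of
# weight i perturbs it by about 2**(b - 6); to first order, the loss then
# changes by |grad[i]| * 2**(b - 6).  Rank (weight, bit) pairs by that score:
# low-scoring bits tolerate radiation faults and need no protection.
frac_bits = 6
score = {(i, b): abs(grad[i]) * 2.0 ** (b - frac_bits)
         for i in range(4) for b in range(8)}
ranked = sorted(score, key=score.get, reverse=True)
print(ranked[:3])   # most fault-sensitive (weight, bit) pairs
```

As expected, the highest-order bit of the weight with the largest gradient magnitude tops the ranking, while low-order bits of low-gradient weights fall to the bottom.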
Binary file modified weng_cv.pdf
Binary file not shown.
