Update schedule.qmd
Minor: formatting
bechang authored Dec 12, 2023
1 parent 4a33def commit 07131b3
Showing 1 changed file with 8 additions and 6 deletions.

schedule.qmd

@@ -60,27 +60,29 @@ The readings will be classified into the following order of recommendation:

# Final Project Presentations

## Tue 12/12

- 2:00pm-2:35pm: Abhishek Purshothama and Christian Fontenot: Goal-Directed Abstract Interpretation of Distributed Systems with **P**âtissier
- 2:35pm-3:10pm: David Baines and Matt Buchholz: Improving convergence time in PL-focused LLMs in limited training data environments via optimized syntactical learning task selection

## Thu 12/14

- 2:00pm-2:35pm: Scott McCall and Zilong Li: Developing Programming Languages for Ease of Use with LLMs
- 2:35pm-3:10pm: Karthik Sairam, Lawerence Khadka, and Emily Parker: Static Analysis of React Hooks

## Abhishek Purshothama and Christian Fontenot: Goal-Directed Abstract Interpretation of Distributed Systems with **P**âtissier

Reasoning about the correctness of parameterized systems, such as distributed systems, is challenging because an analysis framework needs to reason about an unspecified number of components. However, these systems exhibit a significant amount of regularity in both structure and communication, which can be identified and leveraged to simplify the reasoning. We present a goal-directed abstract interpretation approach for the verification of distributed systems. This approach is implemented in Pâtissier, a framework for reasoning about safety properties of programs written in the P language.

## David Baines and Matt Buchholz: Improving convergence time in PL-focused LLMs in limited training data environments via optimized syntactical learning task selection

Large language models (LLMs) have gained popularity in recent years for aiding programmers in coding tasks, such as code generation and summarization. However, as new programming languages are continually introduced and adopted, existing code LLMs may not generalize well to unseen languages; further, existing code LLM training techniques, which typically rely on large amounts of data, may not be well-suited to an emerging, low-resource programming language. In this paper, we draw inspiration from models for low-resource natural languages (NLs); we explore how syntax-driven pre-training tasks can augment the performance of a code LLM on downstream tasks, and how the relative effectiveness of modified pre-training changes with the amount of data available. We demonstrate how these augmented training techniques can help bootstrap a code LLM for an emerging programming language, with the goal of reducing the barrier to developing tools for a new programming language (e.g. an autocomplete model for programming in said language).
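
The abstract names syntax-driven pre-training tasks but does not spell one out. Purely as an illustration of what such a task could look like, here is a minimal TypeScript sketch of one possibility: predicting the syntactic category of a masked token. The token kinds, data shapes, and function names are assumptions for illustration, not the authors' actual training setup.

```typescript
// Hypothetical sketch: constructing one syntax-driven pre-training example.
// Token kinds, masking scheme, and shapes are illustrative assumptions.

type TokenKind = "keyword" | "identifier" | "literal" | "operator" | "punctuation";

interface Token {
  text: string;
  kind: TokenKind;
}

interface SyntaxTaskExample {
  input: string[];   // token sequence with one position masked
  position: number;  // index of the masked token
  label: TokenKind;  // syntactic category the model must predict
}

// Turn a tokenized snippet into a "predict the syntactic category of the
// masked token" example -- a task that needs only a parser, not a large corpus.
function makeSyntaxExample(tokens: Token[], maskIndex: number): SyntaxTaskExample {
  const input = tokens.map((t, i) => (i === maskIndex ? "<MASK>" : t.text));
  return { input, position: maskIndex, label: tokens[maskIndex].kind };
}

// Example: a snippet from some emerging language, tokenized elsewhere.
const snippet: Token[] = [
  { text: "let", kind: "keyword" },
  { text: "x", kind: "identifier" },
  { text: "=", kind: "operator" },
  { text: "42", kind: "literal" },
];

console.log(makeSyntaxExample(snippet, 1));
// { input: ["let", "<MASK>", "=", "42"], position: 1, label: "identifier" }
```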

## Scott McCall and Zilong Li: Developing Programming Languages for Ease of Use with LLMs

In this paper, we discuss the importance of prompting in large language models and how performance on certain tasks varies greatly with the quality and manner of the prompt itself, which can make interacting with these models difficult. Addressing these limitations is important given the increasingly mainstream use of large language models such as ChatGPT in a wide variety of contexts. Improving the performance of these models is difficult because they contain billions of parameters, so fine-tuning them is impractical for most users and impossible for closed-source models. Prompting is a good approach for retrieving satisfactory responses from large language models, and it is user-friendly because natural language is enough. However, many prompting methods depend on specific templates and on calls to language models' APIs. Our goal is to develop a programming language for interacting with an LLM such as ChatGPT in which constraints and other conditions can be attached to the prompt, providing a layer of abstraction that makes it easier to interact with the LLM and obtain higher-quality results.
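
The abstract does not fix a concrete language design, so purely as an illustration of the kind of abstraction described, here is a minimal TypeScript sketch in which constraints are both rendered into the prompt and checked against the model's response. The `PromptConstraint` interface, the constraint helpers, and the `callModel` parameter are hypothetical, not the authors' design.

```typescript
// Hypothetical sketch of a constraint-carrying prompt abstraction.
// Interfaces, helpers, and the callModel hook are illustrative assumptions.

interface PromptConstraint {
  describe(): string;              // rendered into the prompt text
  check(output: string): boolean;  // validated against the model's answer
}

const maxWords = (n: number): PromptConstraint => ({
  describe: () => `Answer in at most ${n} words.`,
  check: (output) => output.trim().split(/\s+/).length <= n,
});

const mustMention = (term: string): PromptConstraint => ({
  describe: () => `The answer must mention "${term}".`,
  check: (output) => output.toLowerCase().includes(term.toLowerCase()),
});

// The "layer of abstraction": the user states the task and constraints;
// prompt assembly, validation, and retrying are handled underneath.
async function ask(
  task: string,
  constraints: PromptConstraint[],
  callModel: (prompt: string) => Promise<string>, // e.g. a thin ChatGPT API wrapper
  retries = 2,
): Promise<string> {
  const prompt = [task, ...constraints.map((c) => c.describe())].join("\n");
  for (let attempt = 0; attempt <= retries; attempt++) {
    const output = await callModel(prompt);
    if (constraints.every((c) => c.check(output))) return output;
  }
  throw new Error("No response satisfied all constraints");
}
```

A caller would supply their own model wrapper and write, for example, `ask("Summarize the README", [maxWords(50), mustMention("install")], callModel)`, without hand-crafting the prompt template or the validation loop.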

## Karthik Sairam, Lawerence Khadka, and Emily Parker: Static Analysis of React Hooks

Hooks provide an ergonomic model for reasoning about state and effects in React.js components. However, they come with their own set of rules that developers must abide by manually. Without strict compile-time verification of those rules, subtle bugs, slower-than-expected performance, and infinite re-rendering of the UI can occur, completely unbeknownst to the developer until the app is actually run. In this paper, we model the behaviour of these hooks in terms of the React Concurrent Fiber Architecture to identify such bugs much earlier in the development cycle.
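
The paper's analysis itself is not reproduced here, but a concrete instance of the failure mode it targets is easy to state. In the hypothetical component below (TypeScript/React, illustrative only), an object dependency is recreated on every render, so the effect re-runs and re-sets state indefinitely, a loop that surfaces only at run time:

```tsx
import { useEffect, useState } from "react";

// Stand-in for a real data fetch (hypothetical helper).
async function fetchResults(opts: { query: string; limit: number }): Promise<string[]> {
  return [`${opts.query} result 1`, `${opts.query} result 2`].slice(0, opts.limit);
}

export function SearchResults({ query }: { query: string }) {
  const [results, setResults] = useState<string[]>([]);

  // BUG: `options` is a new object on every render, so the dependency below
  // never compares equal. The effect re-runs after each render, setResults
  // schedules another render, and the component re-renders forever.
  const options = { query, limit: 10 };

  useEffect(() => {
    fetchResults(options).then(setResults);
  }, [options]);
  // Fix: depend on [query] and build `options` inside the effect.

  return (
    <ul>
      {results.map((r) => (
        <li key={r}>{r}</li>
      ))}
    </ul>
  );
}
```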
