Final version with videos
rmichon committed Dec 4, 2023
1 parent 5ba6d2e commit e4008af
Showing 1 changed file with 42 additions and 5 deletions.
47 changes: 42 additions & 5 deletions docs/index.html
@@ -40,9 +40,11 @@
<a class="dropdown-item" href="#syfala">Dec. 1: Faust/FPGA Workshop</a>
</div>
</li>
<!--
<li class="nav-item">
<a class="nav-link" href="#coming">Coming to PAW/Contact</a>
</li>
-->
<li class="nav-item dropdown">
<a data-toggle="dropdown" class="nav-link dropdown-toggle" href="">Previous Editions</a>
<div class="dropdown-menu">
@@ -63,7 +65,7 @@
<img class="first-slide" src="img/banner.jpg" alt="First slide">
<div class="container">
<div class="carousel-caption">
<a style="font-size: 40px;" class="btn btn-lg btn-primary" href="https://forms.gle/ALfpkYEoC3GGpaWy9" role="button">Register!</a>
<a style="font-size: 40px;" class="btn btn-lg btn-primary" href="#program-overview" role="button">Watch the Talks!</a>
<!--<a style="font-size: 40px;" class="btn btn-lg btn-primary" href="https://www.youtube.com/@grame-centrenationaldecrea9100/streams" role="button">Watch PAW-23 Live!</a>-->
</div>
</div>
@@ -75,15 +77,19 @@
<div class="container">
<h1 id="about"><b>PAW 2023</b><br><i>AI and Audio Programming Languages</i><br><small>Marie Curie Library, INSA Lyon (France)<br>Dec. 2, 2023</small></h1>

<p>The Programmable Audio Workshop (PAW) is a yearly one day FREE event gathering members of the programmable audio community around scientific talks and hands-on workshops. The 2023 edition of PAW is hosted by the <a href="https://team.inria.fr/emeraude">INRIA/INSA/GRAME-CNCM Emeraude Team</a> at the <a href="https://maps.app.goo.gl/k8NDxRkdSZZCmTG67">Marie Curie Library of INSA Lyon</a> (France) on December 2nd, 2023. The theme of this year's PAW is "Artificial Intelligence and Audio Programming Languages" with a strong focus on computer music languages (i.e., Faust, ChucK, and PureData). The main aim of PAW-23 is to give an overview of the various ways artificial intelligence is used and approached in the context of Domain Specific Languages (DSL) for real-time audio Digital Signal Processing (DSP).</p>
<p>The Programmable Audio Workshop (PAW) is a yearly, one-day, FREE event gathering members of the programmable audio community around scientific talks and hands-on workshops. The 2023 edition of PAW was hosted by the <a href="https://team.inria.fr/emeraude">INRIA/INSA/GRAME-CNCM Emeraude Team</a> at the <a href="https://maps.app.goo.gl/k8NDxRkdSZZCmTG67">Marie Curie Library of INSA Lyon</a> (France) on December 2nd, 2023. The theme was "Artificial Intelligence and Audio Programming Languages," with a strong focus on computer music languages (namely Faust, ChucK, and PureData). The main aim of PAW-23 was to give an overview of the various ways artificial intelligence is used and approached in the context of Domain-Specific Languages (DSLs) for real-time audio Digital Signal Processing (DSP).</p>

<!--
<p>PAW is completely free, but the number of in-person seats is limited. If you wish to attend PAW in person, please, register as soon as possible at <a href="https://forms.gle/ALfpkYEoC3GGpaWy9">PAW 2023 REGISTRATION</a>. PAW will also be streamed and recorded. A streaming link will be posted on this website soon before the event (there's no need for you to register if you plan on attending PAW remotely).</p>
-->

<hr>

<h1 id="program-overview">Program Overview</h1>

<!--
<p style="font-size: 1.5rem; text-align: center;"><b>Streaming Link: <a href="https://www.youtube.com/@grame-centrenationaldecrea9100/streams">https://www.youtube.com/@grame-centrenationaldecrea9100/streams</a></b></p>
-->

<div class="row">

@@ -260,37 +266,65 @@ <h3>All Day (9h - 18h30): Installations/Demos</h3>

<hr>

<h1 id="program-details">Program Details</h1>
<h1 id="program-details">Program Details and Videos</h1>

<h2 id="braun-talk"><b>09:30:</b> <a href="#braun">David Braun</a> — Machine Learning with Faust and JAX</h2>

<center>
<iframe width="800" height="450" src="https://www.youtube.com/embed/046Gi7WhCYY?si=eOeZWyyR8dVzoIr_" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</center>

<p>In most examples of modern machine learning, ML practitioners use Python to design complex mathematical models that can be auto-differentiated and then optimized via stochastic gradient descent in order to maximize some objective. Audio engineers, however, don't use Python because it lacks the elegant syntax and powerful libraries of an audio domain-specific language (DSL) such as Faust. We present a pipeline, one of the first of its kind, that bridges the gap between a library-rich audio DSL and a powerful auto-diff ML framework. This Faust-to-<a href="https://jax.readthedocs.io/en/latest/">JAX</a> pipeline allows audio engineers to auto-differentiate DSP functions that would have been too time-consuming to re-implement in Python or too difficult to differentiate manually. Once Faust code is converted to JAX, the XLA compiler produces optimized code that scales well in cloud-computing systems. We present several early experiments showing our pipeline's potential to optimize audio-related objectives.</p>
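<p>To make the underlying idea concrete, here is a minimal, hypothetical sketch in plain JAX (not the Faust-to-JAX pipeline itself, and not generated from Faust code): a one-pole lowpass filter is written as a differentiable function and its coefficient is recovered by gradient descent on a matching objective.</p>

<pre><code class="language-python">
import jax
import jax.numpy as jnp

def one_pole_lowpass(coeff, x):
    # y[n] = (1 - coeff) * x[n] + coeff * y[n-1], expressed with lax.scan
    def step(y_prev, x_n):
        y_n = (1.0 - coeff) * x_n + coeff * y_prev
        return y_n, y_n
    _, y = jax.lax.scan(step, 0.0, x)
    return y

def loss(coeff, x, target):
    # Mean squared error between the filtered signal and the target signal
    return jnp.mean((one_pole_lowpass(coeff, x) - target) ** 2)

x = jax.random.normal(jax.random.PRNGKey(0), (4096,))  # white-noise input
target = one_pole_lowpass(0.9, x)                       # "unknown" filter to recover
coeff = 0.1                                             # initial guess

grad_fn = jax.jit(jax.grad(loss))
for _ in range(500):                                    # plain gradient descent
    coeff = coeff - 0.05 * grad_fn(coeff, x, target)

print(coeff)  # should move toward 0.9
</code></pre>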

<h2 id="puckette-talk"><b>10:00: </b><a href="#puckette">Miller Puckette</a> — PureData and AI</h2>

<center>
<iframe width="800" height="450" src="https://www.youtube.com/embed/xKItGyGlVMI?si=-uVpEivtwBq1rgSr" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</center>

<p>Almost twenty years ago, Davide Morelli introduced Pure Data bindings for the FANN machine learning library. The resulting Pd objects include a multi-layer perceptron object, ann_mlp, that can train and/or run MLPs natively on standard machine architectures (Intel or ARM). In this talk I'll give a demo of how to use ann_mlp to train additive synthesis models, either to imitate existing sounds or to allow low-dimensional interpolation of user-supplied synthetic sounds. This work was inspired by research by Wessel and Lee from the 1990s, as well as by more recent work by Sam Pluta.</p>
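<p>The general idea — a small MLP mapping a low-dimensional control value to the partial amplitudes of an additive synthesizer — can be sketched outside of Pd. The example below is a purely illustrative JAX version (it does not use ann_mlp or FANN, and the two "preset" spectra are made up): the network is fit to two presets and can then be queried at intermediate control values.</p>

<pre><code class="language-python">
import jax
import jax.numpy as jnp

N_PARTIALS = 16

def init_mlp(key, sizes=(1, 32, N_PARTIALS)):
    # Small fully-connected network: 1 control input -> 16 partial amplitudes
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((0.1 * jax.random.normal(sub, (n_in, n_out)), jnp.zeros(n_out)))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return jax.nn.softplus(x @ w + b)  # non-negative partial amplitudes

def loss(params, controls, spectra):
    pred = jax.vmap(lambda c: mlp(params, c))(controls)
    return jnp.mean((pred - spectra) ** 2)

# Two hand-made "presets" at control values 0 and 1; the trained network
# interpolates between them for intermediate control values.
harmonics = jnp.arange(1, N_PARTIALS + 1)
controls = jnp.array([[0.0], [1.0]])
spectra = jnp.stack([1.0 / harmonics,                                       # sawtooth-like
                     jnp.where(harmonics % 2 == 1, 1.0 / harmonics, 0.0)])  # square-like

params = init_mlp(jax.random.PRNGKey(0))
grad_fn = jax.jit(jax.grad(loss))
for _ in range(2000):
    grads = grad_fn(params, controls, spectra)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.05 * g, params, grads)

print(mlp(params, jnp.array([0.5])))  # amplitudes for an in-between timbre
</code></pre>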

<h2 id="betancur-talk"><b>11:20: </b><a href="#betancur">Celeste Betancur</a> — (X)AI in Live Coding Environments: Pandora’s Dream</h2>

<center>
<iframe width="800" height="450" src="https://www.youtube.com/embed/HpRGy275U-c?si=JodAeJKVAGPdsz2a" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</center>

<p>Pandora's Dream is a versatile live coding playground that opens up a world of possibilities by integrating ChAI (ChucK for AI), OpenGL, and the ChucK language. Pandora's Dream is a use case for bringing simple, explainable machine learning and AI models into live performances. With this in mind, it is possible to analyze and extract audio features such as Chroma, MFCC, and Centroid (among many others) and then train models such as KNN, HMM, SVM, and MLP (as well as the new Wekinator object). Here, importance is given to what the system contributes to the performance rather than to the algorithms themselves. In fact, the features, algorithms, and data used are not the most advanced or sophisticated, and in the end the model depends on the performer's decisions. Pandora's Dream is centered on abstract musical data rather than on sample-by-sample audio generation. Finally, it is worth noting that all training stages can be done and redone during the live performance to adjust, limit, or expand the model.</p>
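<p>ChAI provides these analysis and learning objects directly inside ChucK. As a language-neutral illustration of the underlying pattern — extract a small feature vector per audio frame, then classify it against hand-labelled examples with a k-nearest-neighbour model — here is a hypothetical Python/JAX sketch (not ChAI code, with made-up feature choices and toy data):</p>

<pre><code class="language-python">
import jax
import jax.numpy as jnp

SR = 44100

def spectral_centroid(frame):
    # Centroid of the magnitude spectrum: a rough "brightness" feature
    mag = jnp.abs(jnp.fft.rfft(frame))
    freqs = jnp.fft.rfftfreq(frame.shape[0], 1.0 / SR)
    return jnp.sum(freqs * mag) / (jnp.sum(mag) + 1e-9)

def features(frame):
    # Two-dimensional feature vector: brightness and RMS level
    return jnp.array([spectral_centroid(frame), jnp.sqrt(jnp.mean(frame ** 2))])

def knn_predict(query, train_feats, train_labels, k=3):
    # Majority label among the k nearest training feature vectors
    dists = jnp.linalg.norm(train_feats - query, axis=1)
    nearest = jnp.argsort(dists)[:k]
    return jnp.bincount(train_labels[nearest]).argmax()

# Toy "rehearsal" data: three sine frames (label 0) and three noise frames (label 1)
t = jnp.arange(1024) / SR
sines = [jnp.sin(2 * jnp.pi * f * t) for f in (220.0, 440.0, 880.0)]
noises = [0.3 * jax.random.normal(jax.random.PRNGKey(i), (1024,)) for i in range(3)]
train_feats = jnp.stack([features(f) for f in sines + noises])
train_labels = jnp.array([0, 0, 0, 1, 1, 1])

live_frame = jnp.sin(2 * jnp.pi * 330.0 * t)
print(knn_predict(features(live_frame), train_feats, train_labels))  # expected: 0
</code></pre>

<p>As in Pandora's Dream, the point of such a model is not sophistication: the labels, features, and retraining schedule all stay under the performer's control and can be redone mid-performance.</p>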

<h2 id="carre-talk"><b>12:00: </b><a href="#carre">Benoît Carré</a> — AI and Music Composition</h2>

<center>
<iframe width="800" height="450" src="https://www.youtube.com/embed/zGh5bMMx2x0?si=CqSRt_x-sB37_6P2" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</center>

<p>Artificial intelligence is useful in tools dedicated to music production, such as sound processing, the audio separation of instruments, or virtual instruments like Inspired by Nature in Ableton Live. A.I. made the news when musicians posted "fake Drake songs" online. Voice cloning technology is the latest, most visible, and most spectacular advance. But what are its limitations? Is A.I., already very powerful for voice generation, equally effective for composition? What does it lack to produce eight really interesting bars from start to finish? What is the state of the art in text-to-music A.I.? What does it need to attract musicians en masse? What about database annotation, and the results it offers in the subsequent interaction? These are all questions I ask myself as I experiment with these tools. I'll share a few examples that illustrate my explorations, and we can discuss the limitations and potential they inspire.</p>

<br>
<hr>

<h2 id="braun-workshop"><b>14:00: </b><a href="#braun">David Braun</a> — Faust and AI Workshop</h2>

<center>
<iframe width="800" height="450" src="https://www.youtube.com/embed/VIlCY7wRahM?si=EHozxrNYq54qjSYI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</center>

<p>Hands-on workshop following up on the tools presented during the morning session.</p>

<h2 id="puckette-workshop"><b>15:00: </b><a href="#puckette">Miller Puckette</a> — PureData and AI Workshop</h2>

<center>
<iframe width="800" height="450" src="https://www.youtube.com/embed/Yh9J5XeqPvg?si=9AIyqf2J_m4FoivD" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</center>

<p>Hands-on workshop following up on the tools presented during the morning session.</p>

<h2 id="betancur-workshop"><b>16:30: </b><a href="#betancur">Celeste Betancur</a> — Chuck and AI Workshop</h2>

<center>
<iframe width="800" height="450" src="https://www.youtube.com/embed/dYWQU4-jqCg?si=Xor74oc6teBDpMM6" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</center>

<p>Hands-on workshop following up on the tools presented during the morning session.</p>

<hr>
@@ -325,6 +359,7 @@ <h2 id="carre">Benoît Carré</h2>
<a href="https://linktr.ee/skyggemusic">https://linktr.ee/skyggemusic</a>
</p>

<!--
<h1 id="coming">Coming to PAW + Contact</h1>
<p style="font-size: 2rem;"><b>Participants must register online</b>: <a href="https://forms.gle/ALfpkYEoC3GGpaWy9">PAW-23 REGISTRATION</a>.</p>
@@ -343,6 +378,7 @@ <h1 id="coming">Coming to PAW + Contact</h1>
<li>L'atelier Salengro: <a href="https://goo.gl/maps/RKEnYXLx7Bd8aWhA9">https://goo.gl/maps/RKEnYXLx7Bd8aWhA9</a> (bakery, cheap option)</li>
</ul>
</p>
-->

<h1 id="syfala">Bonus Event: Faust/FPGA Workshop<br>on Dec. 1, 2023 @ CITI Lab (Lyon)</h1>

@@ -363,10 +399,11 @@ <h3>Practical Information</h3>
<li><b>When:</b> December 1st 2023, 9h-18h</li>
</ul>

<!--
<h3>Registration</h3>

<p>The workshop is full: we don't take registrations anymore. Sorry :(.</p>
-->

<!--
<p>Registration to this <b>IN-PERSON ONLY</b> workshop is mandatory. A 20 euros fee including lunch, coffee breaks, etc. will have to be paid by participants (additional information on how to make the payment will be sent to participants after registration). The workshop is limited to 10 people (first come, first served).</p>
@@ -402,7 +439,7 @@ <h3>Technical Instructions and Prerequisites</h3>
<ul>
<li><b>What you will need to bring:</b> Attendees will work on their own laptop. This laptop should have WiFi connectivity, an Ethernet port, and a USB 2 port (or a USB 2 adapter if needed).</li>
<li><b>What you need to know:</b> We expect participants to have the following prerequisites: basic C++ programming, SSH connection configuration, Ethernet network configuration, and basic bash shell commands.</li>
<li><b>We expect participants</b> to register to the workshop before November 15th, 2023.</li>
<!--<li><b>We expect participants</b> to register to the workshop before November 15th, 2023.</li>-->
</ul>
</ul>
</p>
