Commit a8e3e5f
avillalpando authored and committed on Apr 1, 2022
1 parent: 0626f27

Showing 8 changed files with 11,511 additions and 84 deletions.

misc/notebooks/DressedQuantumCircuit/Dressed_QNLP_demo.ipynb: 11,200 additions, 0 deletions (large diff not rendered by default)
...notebooks/DressedQuantumCircuit/Processing text data with quantum computers webinar .pptx: binary file added (+2.58 MB, not shown)

# Neural Networks in Dressed Quantum Circuits for NLP

## Dressed Quantum Circuits

The idea behind this Jupyter notebook is to extend the work found in https://pennylane.ai/qml/demos/tutorial_quantum_transfer_learning.html. In transfer learning, the first layers of a pretrained neural network are reused to solve a problem different from the one the network was trained on, adding new layers to the model that specialise in the new task.

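As a minimal sketch of that pattern (an assumed setup for illustration, not code from this repository), a pretrained image model can be frozen and given a new trainable head; the linked PennyLane tutorial replaces such a head with a quantum circuit:

```python
import torch.nn as nn
from torchvision.models import resnet18

# Load a pretrained backbone and freeze its layers so only the new
# head is trained.
backbone = resnet18(weights="IMAGENET1K_V1")
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new task-specific head
# (the output size 4 is an arbitrary placeholder for the new task).
backbone.fc = nn.Linear(backbone.fc.in_features, 4)
```
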
We are using a BERT model to retrieve the context-dependent embeddings for the words present in a sentence. Which layer is best to retrieve the embeddings from is unclear and will need to be investigated. Once we have those vectors, they propagate through a feedforward network that first reduces the dimensionality to an intermediate representation (in the notebook it is set to 20), and the following layers then continue reducing the dimensionality of the vectors until reaching the number of parameters needed by the tensor representation of the quantum circuit offered by the Lambeq library for that word in a specific sentence.

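A minimal sketch of this embedding-and-reduction step, assuming bert-base-uncased, the last hidden layer as the embedding source, and a placeholder count of 3 circuit parameters per word (in the real pipeline the count comes from the word's Lambeq circuit in that sentence):

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

# Context-dependent embeddings: one 768-dim vector per token.
inputs = tokenizer("Alice prepares the qubit", return_tensors="pt")
with torch.no_grad():
    hidden = bert(**inputs, output_hidden_states=True).hidden_states
embeddings = hidden[-1][0]  # last layer, shape (n_tokens, 768)

# Feedforward reducer: 768 -> 20 (intermediate) -> n_params per word.
n_params = 3  # placeholder: depends on the word's circuit
reducer = nn.Sequential(
    nn.Linear(768, 20),
    nn.Tanh(),
    nn.Linear(20, n_params),
)
circuit_params = reducer(embeddings)  # shape (n_tokens, n_params)
```
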
Some benefits of this approach are:

* Any sentence structure and any word contained in the BERT model used can be processed by the full pipeline. There is no need to store parameter values for a dictionary of words.

* It is possible to generalize to different NLP tasks.

* If the dimensionality of the category space is changed, the NN can be re-scaled to reuse the model for the new circuit dimensionality.

## Issues

* Pytket loses the tensor nature of the parameters, giving an output consisting of a list of floats or simply counts. Differentiable circuits in PennyLane could be a solution (see the sketch after this list).

* It is not clear whether we gain any quantum advantage with this method, as a classical NN has to be trained.
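
A minimal sketch of the PennyLane route, using a toy two-qubit ansatz with three parameters rather than a circuit Lambeq would actually generate: the Torch interface keeps the expectation value differentiable, so gradients flow back through the circuit into the feedforward network that produced the parameters.

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="torch")
def word_circuit(params):
    # Toy ansatz standing in for a word's circuit.
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(params[2], wires=1)
    return qml.expval(qml.PauliZ(1))

# In the full pipeline these would come from the feedforward reducer.
params = torch.randn(3, requires_grad=True)
value = word_circuit(params)
value.backward()    # gradients reach params, and any network upstream
print(params.grad)
```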