From 9c5541d1e4a2289df36b18dbe4624201ee3f6bb9 Mon Sep 17 00:00:00 2001
From: Raghava Uppuluri
Date: Mon, 15 Aug 2022 01:19:03 -0400
Subject: [PATCH 1/2] added shortcode

---
 .../wiki/Active Projects/robot-arm/software.md | 15 ++++++++-------
 layouts/shortcodes/mermaid.html                |  4 ++++
 2 files changed, 12 insertions(+), 7 deletions(-)
 create mode 100644 layouts/shortcodes/mermaid.html

diff --git a/content/wiki/Active Projects/robot-arm/software.md b/content/wiki/Active Projects/robot-arm/software.md
index 2e3276d..00e0892 100644
--- a/content/wiki/Active Projects/robot-arm/software.md
+++ b/content/wiki/Active Projects/robot-arm/software.md
@@ -21,7 +21,7 @@ From this, you can then:
 
 ## System Overview
 
-```mermaid
+{{< mermaid >}}
 graph
 subgraph Behavior Planning
 BP1[Behavior Planner]
@@ -94,9 +94,10 @@ graph
 class BP1,LL1,BP2,BP3,V2 not_started
 class PC1 in_progress
 class S1,S2,LL2,LL3,V1,PC2,PC3 done
-```
-```mermaid
+{{< /mermaid >}}
+
+{{< mermaid >}}
 graph
 l1[Not Started]
 l2[In Progress]
 l3[Done]
 classDef not_started fill:#ff6666
 classDef in_progress fill:#ffd966
@@ -107,7 +108,7 @@ classDef done fill:#81ff9b
 class l1 not_started
 class l2 in_progress
 class l3 done
-```
+{{< /mermaid >}}
 
 ## High level
 
@@ -116,7 +117,7 @@
 After the robot turns on or at any given point of time, what should the robot do at a high level to play chess?
 Overall, it should output executable commands that return true or false if they are completed successfully and then output the next command. A preliminary decision flowchart that a robot can make is modeled here:
 
-```mermaid
+{{< mermaid >}}
 graph
 a1[Scanning for change in board state]
 p1[Virtual Human]
@@ -128,7 +129,7 @@
 p2[Real Human]
 a2{change in board state}
 a3[Plan Path]
 a4[Execute Path]
 a1 --> a2
 a2 -- Yes --> p1
 a2 -- No --> a1
 p1 -- Next move --> a3
 p2 -- Next move --> a3
 a3 --> a4 --> a1
-```
+{{< /mermaid >}}
 
 To implement the behavior planner, a Finite State Machine (FSM) and/or a Behavior Tree (BT) can be used, which both have tradeoffs in **modularity** and **reactivity**, ([read more](https://roboticseabass.com/2021/05/08/introduction-to-behavior-trees/), scroll to the last section for ).
 
@@ -360,4 +361,4 @@ TODO
 - [Camera calibration ROS package]()
 
 ### Encoders
-- [Arduino Encoder Package](https://www.arduino.cc/reference/en/libraries/encoder/)
\ No newline at end of file
+- [Arduino Encoder Package](https://www.arduino.cc/reference/en/libraries/encoder/)
diff --git a/layouts/shortcodes/mermaid.html b/layouts/shortcodes/mermaid.html
new file mode 100644
index 0000000..0746d34
--- /dev/null
+++ b/layouts/shortcodes/mermaid.html
@@ -0,0 +1,4 @@
+
+ {{.Inner}} +
+

From 9a06ba14eb63ac25fdb8f45a405a290fcbaa9f01 Mon Sep 17 00:00:00 2001
From: Raghava Uppuluri
Date: Mon, 15 Aug 2022 01:32:34 -0400
Subject: [PATCH 2/2] bugfix

---
 content/wiki/Active Projects/robot-arm/software.md | 4 ++--
 layouts/shortcodes/mermaid.html                    | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/content/wiki/Active Projects/robot-arm/software.md b/content/wiki/Active Projects/robot-arm/software.md
index 00e0892..facef2f 100644
--- a/content/wiki/Active Projects/robot-arm/software.md
+++ b/content/wiki/Active Projects/robot-arm/software.md
@@ -162,13 +162,13 @@ Playing chess is more than just picking and placing pieces. The robot needs to a
 
 We can do this using a computer vision techniques with an image of the current chessboard as an input and the FEN notation of to board as an output. The following diagram shows this visually:
 
-```mermaid
+{{< mermaid >}}
 graph LR
 a1[Real chessboard] --> a2[2D chessboard]
 a2 --> a3[FEN Notation]
 a3 --> a4[Chess Engine]
 a2 --> a5[Virtual human player]
-```
+{{< /mermaid >}}
 Some external projects we plan to use to complete the above:
 - [Real chessboard -> 2D chessboard](https://github.com/maciejczyzewski/neural-chessboard)
 - [2D chessboard -> FEN Notation](https://github.com/Elucidation/tensorflow_chessbot)
diff --git a/layouts/shortcodes/mermaid.html b/layouts/shortcodes/mermaid.html
index 0746d34..3dd8ff7 100644
--- a/layouts/shortcodes/mermaid.html
+++ b/layouts/shortcodes/mermaid.html
@@ -1,4 +1,4 @@
{{.Inner}}
-
+
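Note on the shortcode these two patches introduce: only the `{{.Inner}}` line of `layouts/shortcodes/mermaid.html` is visible above, so the sketch below shows what a minimal Hugo mermaid shortcode of this kind typically looks like. It is an illustration, not the verbatim contents of either commit; the wrapper element, the CDN URL, and the initialization call are assumptions.

```html
<!-- layouts/shortcodes/mermaid.html (illustrative sketch, not the exact file from these commits) -->
<!-- mermaid.js renders any element with class="mermaid", so the shortcode simply wraps its inner content. -->
<div class="mermaid">
  {{.Inner}}
</div>

<!-- Assumed here for completeness: load and start mermaid; the version and URL are placeholders. -->
<script src="https://unpkg.com/mermaid@9/dist/mermaid.min.js"></script>
<script>
  mermaid.initialize({ startOnLoad: true });
</script>
```

With a template along these lines, the `{{< mermaid >}}` ... `{{< /mermaid >}}` calls that the patches substitute for the old ```mermaid fences in software.md render as client-side mermaid diagrams. In practice the two script tags are usually loaded once from the site's base template or a partial rather than emitted by every shortcode call.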