My technical coding blog is at https://sarkologist.github.io/blog/
Do you sometimes feel you have added too many bullet points, and that each top-level bullet should be its own heading, with its child bullets becoming a new section under that heading? And of course the new headings should be subheadings of the heading the bullets were originally a section of.
Here's the code that performs the markdown transformation:
```haskell
unindentBulletIntoSubheader :: Text -> Text
unindentBulletIntoSubheader = execState $
  zoom (text . many' headerTitleContent . _1 . _HeaderTitleContent) $ do
    (headerLevel, _, _) <- get
    zoom (_3 . text . many' (bullet <%> header headerLevel) . _1) $ do
      let f (Left (Bullet bulletLevel content)) =
            if bulletLevel == 0
              then Right (Header (headerLevel + 1) content)
              else Left (Bullet (bulletLevel - 1) content)
          f (Right x) = Right x
      modify f
```
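For instance, given a section under a level-2 heading, the transform promotes each top-level bullet to a level-3 heading and unindents its children. An illustrative input/output pair I constructed (not taken from the test suite):

```
## projects
- texty
  - partial parse
  - optics-based
```

becomes

```
## projects
### texty
- partial parse
- optics-based
```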
- "partial" in the sense of not parsing more structure than necessary, contra a full parse
- e.g. for markdown: don't parse the italics inside a header if you are only interested in the raw text inside, but do parse them if you need to transform them
- bidirectional: not only parse but render
- fusion: does all parse-transform-render in one pass! like with a hylomorphism!
- optics-based: everything is a traversal, so it is compatible with most `lens` combinators (see the sketch after this list)
- blog post explaining what it does: https://sarkologist.github.io/blog/posts/2022-11-17-texty-composable-partial-parse-transform-render.html
- blog post explaining how it works: https://sarkologist.github.io/blog/posts/2022-11-18-texty-how-it-works.html
- see test examples here: https://github.com/sarkologist/text-transforms/blob/master/tests/TextyTest.hs
- code is here: https://github.com/sarkologist/text-transforms/blob/master/src/Texty.hs
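A sketch of the optics point above, reusing the traversals from the snippet. Illustrative only: I'm assuming the second component of `_HeaderTitleContent` is the title text; the exact types are in the repo.

```haskell
import Control.Lens (over, _1, _2)
import Data.Text (Text)
import qualified Data.Text as T
import Texty

-- upper-case every header title, leaving the rest of the document untouched
upcaseHeaderTitles :: Text -> Text
upcaseHeaderTitles =
  over (text . many' headerTitleContent . _1 . _HeaderTitleContent . _2) T.toUpper
```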
- gist
- like `optparse-applicative` but for environment variables (a sketch of the idea follows below)
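A minimal sketch of the idea, with hypothetical names and API (`EnvParser`, `var`, `DbConfig` are all made up here, not the gist's actual code): an applicative parser over environment variables, by analogy with optparse-applicative's `Parser`.

```haskell
import Control.Applicative (liftA2)
import System.Environment (lookupEnv)

newtype EnvParser a = EnvParser { runEnvParser :: IO (Either String a) }

instance Functor EnvParser where
  fmap f (EnvParser p) = EnvParser (fmap (fmap f) p)

instance Applicative EnvParser where
  pure = EnvParser . pure . Right
  EnvParser pf <*> EnvParser pa = EnvParser (liftA2 (<*>) pf pa)

-- a required environment variable, failing with its name if unset
var :: String -> EnvParser String
var name = EnvParser $
  maybe (Left ("missing env var: " ++ name)) Right <$> lookupEnv name

-- configs compose applicatively, just like optparse-applicative parsers
data DbConfig = DbConfig { dbHost :: String, dbUser :: String }

dbConfig :: EnvParser DbConfig
dbConfig = DbConfig <$> var "DB_HOST" <*> var "DB_USER"

main :: IO ()
main = runEnvParser dbConfig >>= either putStrLn (putStrLn . dbHost)
```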
- reads csv lines describing a trajectory and streams out the closest distance encountered so far to a given point
- uses the `pipes` library for streaming (see the sketch below)
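A minimal sketch of the approach, assuming a hypothetical "x,y" line format and a fixed target point; the gist's actual parsing and output differ.

```haskell
import Pipes
import qualified Pipes.Prelude as P

target :: (Double, Double)
target = (0, 0)

distanceTo :: (Double, Double) -> Double
distanceTo (x, y) = sqrt ((x - fst target) ^ 2 + (y - snd target) ^ 2)

-- hypothetical line format: "x,y"
parseLine :: String -> (Double, Double)
parseLine s = case break (== ',') s of
  (x, _ : y) -> (read x, read y)
  _          -> error "malformed line"

main :: IO ()
main = runEffect $
      P.stdinLn                        -- stream csv lines from stdin
  >-> P.map (distanceTo . parseLine)   -- distance of each point to the target
  >-> P.scan min (1 / 0) id            -- running minimum: closest so far
  >-> P.drop 1                         -- drop the seed value (infinity)
  >-> P.print                          -- stream out the closest-so-far distances
```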
This script parses a csv-esque file for a list of items, counts the items, and prints the counts in descending order.
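A minimal sketch of such a script in Haskell; that the item is the first csv field is my assumption, and the actual script may well differ.

```haskell
import Data.List (sortBy)
import qualified Data.Map.Strict as M
import Data.Ord (Down (..), comparing)

main :: IO ()
main = do
  rows <- lines <$> getContents
  let firstField = takeWhile (/= ',')
      counts = M.fromListWith (+) [(firstField r, 1 :: Int) | r <- rows]
  mapM_
    (\(item, n) -> putStrLn (item ++ ": " ++ show n))
    (sortBy (comparing (Down . snd)) (M.toList counts))
```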
```scala
def makeTableSchema(descriptor: Descriptor): TableSchema

def makeTableRow(msg: Message,
                 customRow: (FieldDescriptor, Yoneda[Repeated, Any]) => Yoneda[Repeated, Any] =
                   { case (_, x) => x }): TableRow
```
Converts protobuf schemas and values to Google BigQuery: repo
- ensures unambiguous converted BigQuery `TableRow`s (a notorious protobuf issue)
- fuses multiple transformations on `repeated` values
```scala
/**
 * - recursively traverse `Message`, producing `A`
 * - `Yoneda` is for efficient `.map`-ing:
 *   it composes mapped functions without applying them until `Yoneda.run` is called
 * - note that this is a *paramorphism* instead of just a *catamorphism*, i.e.
 *   each recursive step keeps the `Message` value at that level.
 *   this is helpful e.g. to tell at runtime whether the `Message` is a leaf
 */
def foldMessage[A](
    recurse: Seq[(Yoneda[Repeated, (A, Message)], FieldDescriptor)] => A,
    base: (AnyRef, FieldDescriptor) => A)(message: Message): A = {
```
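The `Yoneda` fusion is perhaps easiest to see in Haskell's kan-extensions package, which the Scala code mirrors for `Repeated` values (illustrative only, not the library's code):

```haskell
import Data.Functor.Yoneda (liftYoneda, lowerYoneda)

-- the two maps are composed, not applied; the list is traversed exactly once,
-- when lowerYoneda finally runs the composed function
fused :: [Int]
fused = lowerYoneda (fmap (+ 1) (fmap (* 2) (liftYoneda [1, 2, 3])))

main :: IO ()
main = print fused  -- [3,5,7]
```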
Extensively property-tested with ScalaCheck:
- generate arbitrary protobuf values to test the conversion
- check that the converted value is compatible with the converted schema
  - by generating, from a generated BigQuery `TableRow`, paths to traverse its structure, then using those same paths to attempt to walk the corresponding `TableSchema`
- check that distinct protobuf `Message`s produce distinct BigQuery `TableRow`s
  - by generating paths from two generated `Message`s, walking the converted `TableRow`s using those paths, and expecting different values
Library that enhances Beam pipelines with the ability to write metrics to InfluxDB: repo
Implements the Solace message bus API and request/response with ZIO for concurrency/fault-tolerance: repo
- gist
- in the fashion of: blog post by Gabriel Gonzalez
These are some pieces of code I wrote at a previous place of work: libraries written to factor out repeated code I encountered in the codebase. They are meant to showcase my understanding of coding and library design in the natural context of the work I have done.
One is a JavaScript library for wrapping UI event handlers and the callbacks associated with the asynchronous HTTP requests they make. It puts up a loading indicator and disables form submission for the duration of the async request, but otherwise leaves page functionality unchanged. There was previously no such general functionality; it had only been implemented ad hoc in the pages where accidental form submission or slow async requests had been discovered to be a problem.
The other is a Java convenience library for outputting the HTML source for comboboxes. The preexisting library code for this was a mess of copy-and-paste methods catering to variants of combobox contents. I reduced it to two natural methods called in the fluent style, with zero loss of specific functionality. The code reduction is a factor of ten, though the old code has been omitted to protect the guilty.
- The code was the best I could conceive given:
- The use of old (pre-ECMAScript 5 and pre-Java 1.5) versions of the languages
- The prevailing quality and level of abstraction of client code
- The prevailing conventions within the team
- my level of competence :D
- The purpose of the above disclaimer is not to indict my previous employer but solely to place the code in the context in which it was written