# Abstract Musical Model

A model of music, ignorant of graphical representations.

## Performance Context

```swift
struct PerformanceContext {
    let performerID: String
    let instrumentID: String
    let voice: Int
}
```
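The extraction examples at the end of this document filter with `performanceContexts.contains(PerformanceContext(event))`, which requires `PerformanceContext` to be `Equatable`. A minimal sketch, assuming field-wise equality is what we want:

```swift
// Assumption: two performance contexts are equal when all three fields match.
extension PerformanceContext: Equatable {
    static func == (lhs: PerformanceContext, rhs: PerformanceContext) -> Bool {
        return lhs.performerID == rhs.performerID
            && lhs.instrumentID == rhs.instrumentID
            && lhs.voice == rhs.voice
    }
}
```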

## Temporal Context

```swift
typealias TemporalInterval<DurationType> = Interval<DurationType>

let durationInterval = TemporalInterval<Duration>(start: a, end: b)
let metricalDurationInterval = TemporalInterval<MetricalDuration>(start: a, end: b)
```
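The `Interval` type is not defined in this document. A hypothetical minimal sketch, assuming it simply stores its two endpoints, with containment (used later as `durationSpan.contains(...)`) available for comparable duration types:

```swift
// Hypothetical sketch: an interval is simply its two endpoints.
struct Interval<T> {
    let start: T
    let end: T
}

// Containment only makes sense for comparable duration types.
extension Interval where T: Comparable {
    func contains(_ value: T) -> Bool {
        return value >= start && value <= end
    }
}
```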

## Musical Information

### Event

An atomic action uttered by a single Voice.

```swift
public final class Event { }

// Ensure that an `Event` can be used as a `Key` value in a `Dictionary`.
// Identity-based: each `Event` instance is its own key.
extension Event: Hashable {

    public var hashValue: Int {
        return ObjectIdentifier(self).hashValue
    }

    public static func == (lhs: Event, rhs: Event) -> Bool {
        return lhs === rhs
    }
}
```

### Attributes

#### Atomic

An atomic Event may be associated with any number of Attribute values.

```swift
struct Pitch { ... }
struct Dynamic { ... }
struct Articulation { ... }
struct OSCMessage { ... }
...
```
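These bodies are deliberately elided. As one hypothetical example, the `Pitch` used later in this document is constructed as `Pitch(noteNumber: 60)` (middle C), so a minimal sketch might be:

```swift
// Hypothetical minimal Pitch: a MIDI note number (60.0 = middle C).
struct Pitch {
    let noteNumber: Double
}
```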

#### Spanning

A spanning Attribute connects two atomic Event objects.

```swift
struct DynamicSpanner { ... }
struct TransitionSpanner { ... }
```
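The spanner bodies are also elided. As one hypothetical example, a `DynamicSpanner` (e.g. a crescendo) might carry only the dynamic levels at its two ends, while the connection to the two `Event` objects lives in the database described below:

```swift
// Hypothetical body for a spanning dynamic (e.g. a crescendo from pp to f).
// The spanner carries only its own payload; which events it starts and ends
// on is recorded in the database (`spannerStart` / `spannerEnd`).
struct DynamicSpanner {
    let startDynamic: Dynamic
    let endDynamic: Dynamic
}
```

Note that because the database keys lookups by the spanner itself (`spannerStart[spanner] = eventA`), spanner types will also need to be `Hashable`.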

## Database

In order to relate Attribute values to Event objects, we generate a database composed of many Dictionary values.

### Database generation

Each of these relationships is stored in a Dictionary with (Event, Attribute) as a (key, value) pair.
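As a concrete example (with hypothetical names), pitch content could be keyed directly by the events that carry it:

```swift
// Hypothetical: a single Dictionary relating events to their pitch content.
var pitches: [Event: [Pitch]] = [:]

let event = Event()
pitches[event] = [Pitch(noteNumber: 60)]
```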

First, we can partially specialize these Dictionary types as:

```swift
typealias Attribution<Attribute> = Dictionary<Event, Attribute>
```

Then, we can store these in a Dictionary keyed by an AttributeIdentifier, as:

```swift
typealias AttributeIdentifier = String
var attributions: [AttributeIdentifier: Attribution<Any>] = [:]
```

### Types of Attributions

As the number of possible attributes grows, so too can the number of generated mappings:

#### Atomic Attributions

```swift
var information: [AttributeIdentifier: Attribution<Any>] = [:]
information["pitch"] = Attribution<[Pitch]>(...)
information["dynamic"] = Attribution<Dynamic>(...)
information["articulation"] = Attribution<[Articulation]>(...)
information["oscMessage"] = Attribution<[OSCMessage]>(...)
```

#### Spanning Attributions

```swift
let eventA = Event()
let eventB = Event()
let spanner = SpannerType()

// Each event knows which spanners are attached to it ...
spanners[eventA] = [..., spanner, ...]
spanners[eventB] = [..., spanner, ...]

// ... and each spanner knows its start and end events.
spannerStart[spanner] = eventA
spannerEnd[spanner] = eventB
```

#### Performance context

```swift
var performanceContext: [AttributeIdentifier: Attribution<PerformanceContext>] = [:]
```

#### Temporal context

```swift
let temporalContext = TemporalInterval<MetricalDuration>(start: MetricalDuration(2,4), end: MetricalDuration(5,4))
```

### Putting it together

We can enter the following:

  • "Jill" is playing Tuba.
  • She plays a middle-c with a staccato articulation at a pp dynamic.
  • She plays this from the 2nd 1/4-note-beat to the 5th 1/4-note-beat

as:

```swift
let event = Event()

let performanceContext = PerformanceContext(
    performerID: "Jill",
    instrumentID: "Tuba",
    voice: 0
)

let durationContext = TemporalInterval<MetricalDuration>(
    start: MetricalDuration(2,4),
    end: MetricalDuration(5,4)
)

information["articulation"]![event] = [Articulation.staccato]
information["pitch"]![event] = [Pitch(noteNumber: 60)]
information["dynamic"]![event] = Dynamic([.p, .p])
```

### Discussion

- Spanner types (slur from Event a -> Event b)
- Metrical organization (rhythmical information)
- Duration vs MetricalDuration (a sketch of the latter follows this list)
- Consider collapsing common types (PerformanceContext, TemporalContext) into pre-fab types
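Neither `Duration` nor `MetricalDuration` is defined in this document. Judging from the usage above (`MetricalDuration(2,4)` through `MetricalDuration(5,4)` for the 2nd through 5th 1/4-note beats), `MetricalDuration` is presumably a beats-over-subdivision value, while `Duration` presumably measures clock time. A minimal sketch of the former, under that assumption:

```swift
// Hypothetical: a metrical duration expressed as `beats` over a `subdivision`
// (e.g. MetricalDuration(5,4) = five 1/4-note beats).
struct MetricalDuration: Comparable {

    let beats: Int
    let subdivision: Int

    init(_ beats: Int, _ subdivision: Int) {
        self.beats = beats
        self.subdivision = subdivision
    }

    // Compare by cross-multiplying, so 2/4 < 5/4 and 1/2 == 2/4.
    static func < (lhs: MetricalDuration, rhs: MetricalDuration) -> Bool {
        return lhs.beats * rhs.subdivision < rhs.beats * lhs.subdivision
    }

    static func == (lhs: MetricalDuration, rhs: MetricalDuration) -> Bool {
        return lhs.beats * rhs.subdivision == rhs.beats * lhs.subdivision
    }
}
```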

### Database extraction

To reconstruct a given Event, we can iterate through an array of Event values, and then query each Attribution for the values it associates with each Event.

```swift
for event in events {
    let attributes: [Any] = information.flatMap { _, attribution in attribution[event] }
    // display attributes
}
```

A primary concern of the dn-m renderer is to display filtered versions of a full score. We can inject this filtering step directly into the loop:

```swift
let informationToShow: [AttributeIdentifier] = ["pitch", "articulation"]
for event in events {
    let attributes = information
        .lazy
        .filter { identifier, _ in informationToShow.contains(identifier) }
        .flatMap { _, attribution in attribution[event] }
    // display only desired attributes
}
```

Further, we can perform more complex filters. For example, given a desired durational span and performance context, as well as the informational constraints, show only the applicable information.

```swift
let durationSpan: DurationSpan = ...
let informationToShow = ["pitch", "articulation"]
let performanceContexts = [...]
for event in events {
    let attributes = information
        .lazy
        .filter { identifier, _ in informationToShow.contains(identifier) }
        .filter { _ in durationSpan.contains(TemporalContext(event)) }
        .filter { _ in performanceContexts.contains(PerformanceContext(event)) }
        .flatMap { _, attribution in attribution[event] } // drop attributions that do not contain this event
    // display only desired attributes
}
```

### Discussion

- Is there any way to preserve the Type of attributes, or must each be deconstructed upon receipt? (See the sketch below.)
- At some point, a diffing algorithm may be helpful in the updating of the graphics.
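As things stand, values come back out of the database as `Any` and must be downcast upon receipt. A minimal sketch of that deconstruction, assuming the "pitch" attribution from the examples above:

```swift
// Hypothetical: recover the typed value from the type-erased attribution.
if let pitches = information["pitch"]?[event] as? [Pitch] {
    // use the strongly-typed [Pitch] value
    print(pitches.map { $0.noteNumber })
}
```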