A coding impromptu

This post is a rolling collection of algorithms and computational ideas I like, implemented in BQN:

Table of Contents

Extrapolating Perlis' remark1, it's likely that a group of 50 individuals would devise 35 to 40 distinct solutions to even the simplest problem in BQN. Therefore, I will frequently juxtapose my implementations with those of seasoned BQNators2.

The infamous _while_ modifier

I call it infamous because it made me feel stupid twice: first when I encountered it in code, and again when I read its docs. To understand its behaviour, you need to be familiar with quite a bit of BQN, especially the functional programming and combinators aspects. As a newbie at the time, I found it quite daunting. It took me about five scattered attempts over several months to get it. Looking back, the difficulty wasn't so much BQN's syntax, but my struggle to express complex recursion, which modifier recursion definitely is.

An unrolling of the first two steps reveals that up to 2⋆n evaluations of 𝔽 can occur at recursion level n. This follows by noting that, within the BQN combinator, the left function of the rightmost atop determines the 𝔽 for the subsequent step, in accordance with 𝔽_𝕣_𝔾:

_w0_ ← {π”½βŸπ”Ύβˆ˜π”½_𝕣_π”Ύβˆ˜π”½βŸπ”Ύπ•©}
_w1_ ← {(π”½βŸπ”Ύβˆ˜π”½)βŸπ”Ύβˆ˜(π”½βŸπ”Ύβˆ˜π”½)_w0_π”Ύβˆ˜(π”½βŸπ”Ύβˆ˜π”½)βŸπ”Ύπ•©}
_w2_ ← {((π”½βŸπ”Ύβˆ˜π”½)βŸπ”Ύβˆ˜(π”½βŸπ”Ύβˆ˜π”½))βŸπ”Ύβˆ˜((π”½βŸπ”Ύβˆ˜π”½)βŸπ”Ύβˆ˜(π”½βŸπ”Ύβˆ˜π”½))_w0_π”Ύβˆ˜((π”½βŸπ”Ύβˆ˜π”½)βŸπ”Ύβˆ˜(π”½βŸπ”Ύβˆ˜π”½))βŸπ”Ύπ•©}

Another way to clarify the concept is to implement the same logic both as a function and as a 1-modifier, and then compare these implementations with the two 2-modifiers (one exhibiting linear and the other a logarithmic number of stack frames):

Whiles ← {Fβ€ΏGπ•Šπ•©:
  Wfun ← {π•ŽβŸGβˆ˜π•ŽΛ™βŠΈπ•Šβˆ˜π•ŽβŸG𝕩}
  _wom ← {π”½βŸGβˆ˜π”½_π•£βˆ˜π”½βŸG𝕩}
  _wtmlog_ ← {π”½βŸπ”Ύβˆ˜π”½_𝕣_π”Ύβˆ˜π”½βŸπ”Ύπ•©}
  _wtmlin_ ← {π•Šβˆ˜π”½βŸπ”Ύπ•©}
  ⟨f Wfun 𝕩, f _wom 𝕩, f _wtmlog_ g 𝕩, f _wtmlin_ g ⎊"SO"π•©βŸ©
}

Let’s test it with a simple iteration that exceeds CBQN’s recursion limit, triggering a stack overflow:

⟨1⊸+, 5000⊸β‰₯⟩ Whiles 0
⟨ 5001 5001 5001 "SO" ⟩
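To make the stack behaviour concrete for readers coming from other languages, here is a rough Python analogue (my own sketch, not part of the original BQN): a functional while written with plain recursion piles up one stack frame per iteration, while the loop form it emulates does not:

```python
def while_rec(f, g, x):
    """Functional while via plain recursion: one stack frame per step."""
    return while_rec(f, g, f(x)) if g(x) else x

def while_iter(f, g, x):
    """Same contract, constant stack depth."""
    while g(x):
        x = f(x)
    return x

inc, below = (lambda x: x + 1), (lambda x: x <= 5000)
print(while_iter(inc, below, 0))   # 5001
try:
    while_rec(inc, below, 0)
except RecursionError:
    print("SO")                    # mirrors the "SO" entry above
```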

Z algorithm

This is a very efficient procedure that computes, for each position in a string, the length of the longest substring starting there that matches a prefix of the string, all in linear time. The imperative implementation reads:

ZI ← {π•Šs:
  lβ€Ώrβ€Ώz ← 0βš‡0 0β€Ώ0β€Ώs
  z ⊣ {
    v ← r(⊒1⊸+β€’_while_{(𝕩+𝕨)<β‰ s ? =Β΄βŸ¨π•©,𝕩+π•¨βŸ©βŠ‘Β¨<s ; 0}<β—Ά({zβŠ‘Λœπ•©-l}⌊-+1)β€Ώ0)𝕩
    r <β—Ά@β€Ώ{π•Š: l↩𝕩-v+1 β‹„ r↩𝕩} 𝕩+v-1
    z v⌾(π•©βŠΈβŠ‘)↩
  }Β¨ ↕≠s
}
ZI "abacabadabacaba"
⟨ 15 0 1 0 3 0 1 0 7 0 1 0 3 0 1 ⟩
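For reference, here is the same procedure in Python (a sketch of mine, mirroring the l, r, z state of ZI):

```python
def z_array(s):
    """Z algorithm: z[i] is the length of the longest common
    prefix of s and s[i:], computed in O(len(s))."""
    n = len(s)
    z = [0] * n
    z[0] = n
    l = r = 0                      # window [l, r) matching a prefix
    for i in range(1, n):
        if i < r:                  # reuse a previously matched prefix
            z[i] = min(r - i, z[i - l])
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1              # extend the match naively
        if i + z[i] > r:           # window moved forward
            l, r = i, i + z[i]
    return z

print(z_array("abacabadabacaba"))
# [15, 0, 1, 0, 3, 0, 1, 0, 7, 0, 1, 0, 3, 0, 1]
```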

Two algorithmic improvements can be made here: iterate only over the indices where the character equals the first character, and search to extend the count only when the match reaches the end of r:

ZFun ← {π•Šs:
  CountEq ← { 1⊸+β€’_while_((≠𝕨)βŠΈβ‰€β—ΆβŸ¨βŠ‘βŸœπ•¨β‰‘βŠ‘βŸœπ•©,0⟩) 0 }
  l←r←0 β‹„ Ulr ← {(rβŒˆβ†©π•¨+𝕩)>r ? l↩𝕨 β‹„ 𝕩; 𝕩}
  SearchEq ← ⊣ Ulr ⊒ + + CountEqβ—‹(β†“βŸœs) ⊒
  Set ← {iπ•Šπ•©: ((r-i) (i SearchEq 0⌈⊣)βŸβ‰€ (i-l)βŠ‘π•©)⌾(iβŠΈβŠ‘) 𝕩 }
  (⌽1↓/βŠ‘βŠΈ=s) Set´˜ β†‘Λœβ‰ s
}

I came up with two array versions, with quadratic and cubic time complexities respectively:

ZAQ ← Β―1↓↓(+´·∧`⊣=β‰ βŠΈβ†‘)Β¨<
ZAC ← (+´∧`)Β¨<=β†•βˆ˜β‰ {Β«βŸπ•¨π•©}⌜<
(ZAQ≑ZAC)β—Ά@β€ΏZAC "abacabadabacaba"
⟨ 15 0 1 0 3 0 1 0 7 0 1 0 3 0 1 ⟩

With further refinements, the earlier solutions can be transformed into:

ZAQβ€ΏZAC ← {(+´∧`)¨𝕏}Β¨ βŸ¨β‰ β†‘β†“=βŒ½βˆ˜β†‘, <=«⍟(β†•βˆ˜β‰ )⟩

Longest increasing sub-sequence

This problem can be solved in \(O(n\log n)\) using dynamic programming. Here is an imperative implementation which is quadratic, but can be optimized:

LISI ← {
  kβ€Ώdp ← Β―1β€Ώ(βˆžΒ¨π•©)
  {i ← βˆ§Β΄β—Ά(βŠ‘βŠβŸœ0)β€Ώ{π•Š:k+↩1} dp<𝕩 β‹„ dp π•©βŒΎ(iβŠΈβŠ‘)↩}Β¨ 𝕩
  +´∞>dp
}
LISIΒ¨ ⟨0β€Ώ1β€Ώ0β€Ώ3β€Ώ2β€Ώ3, 10β€Ώ9β€Ώ2β€Ώ5β€Ώ3β€Ώ7β€Ώ101β€Ώ18, 7β€Ώ7β€Ώ7β€Ώ7β€Ώ7⟩
⟨ 4 4 1 ⟩

A more elegant array solution is:

LISA ← +Β΄βˆžβ‰ βˆžΒ¨{π•¨βŒΎ((βŠ‘π•©β‹π•¨-1)βŠΈβŠ‘)𝕩}´⌽
LISAΒ¨ ⟨0β€Ώ1β€Ώ0β€Ώ3β€Ώ2β€Ώ3, 10β€Ώ9β€Ώ2β€Ώ5β€Ώ3β€Ώ7β€Ώ101β€Ώ18, 7β€Ώ7β€Ώ7β€Ώ7β€Ώ7⟩
⟨ 4 4 1 ⟩

Let's explain this optimized version, so we can truly appreciate its beauty:

 +Β΄βˆžβ‰ βˆžΒ¨{π•¨βŒΎ((βŠ‘π•©β‹π•¨-1)βŠΈβŠ‘)𝕩}´⌽ 
 β”‚ β”‚ β”‚ β”‚β”‚    β”‚ β”‚ β”‚  β”‚ β”‚  β”‚ 
 β”‚ β”‚ β”‚ {┼────┼─┼─┼──┼─┼─´│ 
 β”‚ β”‚ ∞¨ β”‚    β”‚ β”‚ β”‚  β”‚ β”‚ β”‚β”‚ 
 β”‚ β”‚  β””β”€β”Όβ”€β”€β”€β”€β”Όβ”€β”Όβ”€β”Όβ”€β”€β”Όβ”€β”Όβ”€β”ΌβŒ½ 
 β”‚ βˆžβ‰ β”€β”€β”€β”Όβ”€β”€β”€β”€β”Όβ”€β”Όβ”€β”Όβ”€β”€β”Όβ”€β”Όβ”€β”˜  
 +Β΄ β”‚   β”‚    β”‚ β”‚ β”‚  β”‚ β”‚    
  β””β”€β”˜   β”‚    β”‚ β”‚ β”‚  β”‚ β”‚    
        β”‚    β”‚ 𝕨-1  β”‚ β”‚    
        β”‚    π•©β‹β”€β”˜   β”‚ β”‚    
        β”‚   βŠ‘β”€β”˜     β”‚ β”‚    
        β”‚   β””β”€β”€β”€β”€β”€β”€βŠΈβŠ‘ β”‚    
        π•¨βŒΎβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚    
         β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€π•©    
β•Άβ”€β”€β”€β”€β”€β”€β”€β”€β”˜

The full expression is structured as a two-train: we sum all finite entries from the result of the rightmost three-train. The three-train is a right fold over the reversed input, with an initial array of ∞ and the same length as the input. In each step of the fold, we modify the right argument using under: we perform a binary search with strict comparison to find where the next element should go. The element is either placed at the end of the unfilled region, or it replaces the first element that is greater than 𝕨. Since BQN uses a based array model, we pick the enclosed atom from this operation's result. So it goes3.
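The binary-search placement just described is perhaps easiest to see in an imperative sketch; assuming Python for illustration (my own sketch), the tails list plays the role of the ∞-filled array:

```python
from bisect import bisect_left

def lis_length(xs):
    """O(n log n) longest strictly increasing subsequence:
    tails[k] holds the smallest possible tail value of an
    increasing subsequence of length k+1."""
    tails = []
    for x in xs:
        i = bisect_left(tails, x)   # first tail >= x
        if i == len(tails):
            tails.append(x)         # extend the longest run
        else:
            tails[i] = x            # lower an existing tail
    return len(tails)

print([lis_length(v) for v in
       ([0,1,0,3,2,3], [10,9,2,5,3,7,101,18], [7,7,7,7,7])])
# [4, 4, 1]
```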

N-queens problem

This problem is the archetypal example of backtracking. Initially, I tried to solve it using a function that places the queens on the full board, hoping that it would lead to a more array-oriented solution:

8 {((∨⌜´0⊸=)∨(0=-⌜´)∨0=+⌜´) 𝕩-Β¨<↕𝕨} 2β€Ώ3
β”Œβ”€                 
β•΅ 0 1 0 1 0 1 0 0  
  0 0 1 1 1 0 0 0  
  1 1 1 1 1 1 1 1  
  0 0 1 1 1 0 0 0  
  0 1 0 1 0 1 0 0  
  1 0 0 1 0 0 1 0  
  0 0 0 1 0 0 0 1  
  0 0 0 1 0 0 0 0  
                  β”˜

This resulted in a more complicated algorithm, so I decided to go for the classical Wirth implementation:

NQ ← {π•Šn:
  Vβ€ΏP ← {βŠ£π•(⊒∾-β‹ˆ+)´∘⊒}Β¨ βŸ¨βˆ¨Β΄βŠ‘Β¨Λœ, {1⌾(π•©βŠΈβŠ‘)𝕨}¨⟩
  {n≠𝕩 ? +Β΄(𝕨V⊒)β—ΆβŸ¨(𝕩+1)π•ŠΛœπ•¨P⊒,0⟩∘(π•©β‹ˆβŠ’)Β¨ ↕n ; 1
  }˜´ (0β‹ˆ0Γ—Β·β†•Β¨βŠ’βˆΎΒ·β‹ˆΛœ+˜)n 
}

Which nicely compares with the OEIS sequence:

a000170 ← 1β€Ώ0β€Ώ0β€Ώ2β€Ώ10β€Ώ4β€Ώ40β€Ώ92
a000170 ≑ NQΒ¨ 1+↕8
1

And of course, in the implementation above I could have used a single array instead of three, but I find the resulting validation and position functions very aesthetic the way they are.
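The same Wirth-style backtracking can be sketched in Python with one set per attack direction (my illustration, not a translation of the BQN):

```python
def n_queens(n):
    """Count N-queens solutions by backtracking over rows,
    tracking occupied columns and both diagonal directions."""
    cols, d1, d2 = set(), set(), set()
    def place(row):
        if row == n:
            return 1
        total = 0
        for c in range(n):
            if c in cols or (row - c) in d1 or (row + c) in d2:
                continue
            cols.add(c); d1.add(row - c); d2.add(row + c)
            total += place(row + 1)
            cols.remove(c); d1.remove(row - c); d2.remove(row + c)
        return total
    return place(0)

print([n_queens(k) for k in range(1, 9)])   # OEIS A000170
# [1, 0, 0, 2, 10, 4, 40, 92]
```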

Majority element

The Boyer–Moore algorithm finds the majority element (an element that appears more than βŒŠnΓ·2βŒ‹ times in an array of length n) in linear time. If such an element exists, then it is equal to the mode of the data, and for this task we have a nice array solution. The original implementation could be expressed as:

BM ← {v←0 β‹„ Iβ†βŠ’βŠ£=β—Ά{π•Š:v+↩1}β€Ώ{π•Š:v-↩1} β‹„ 0{π•Š:v=0}β—ΆβŸ¨I,IΛœβŠ£βŸ©Β΄π•©}
BM 6β€Ώ1β€Ώ3β€Ώ1β€Ώ3β€Ώ3β€Ώ4β€Ώ3β€Ώ3β€Ώ5
3

The previous fold tracks the majority element as state; a more elegant approach maintains the number of votes instead:

BM ← {e←@ β‹„ 0{𝕩=0 ? e↩𝕨⋄1 ; 𝕩+Β―1⋆e≒𝕨}´𝕩 β‹„ e}
BM 6β€Ώ1β€Ώ3β€Ώ1β€Ώ3β€Ώ3β€Ώ4β€Ώ3β€Ώ3β€Ώ5
3
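The vote-counting formulation corresponds to this classic imperative loop (Python used for illustration; my sketch):

```python
def majority(xs):
    """Boyer-Moore majority vote: one pass, O(1) extra space.
    The result is only meaningful if a majority element exists."""
    votes, candidate = 0, None
    for x in xs:
        if votes == 0:
            candidate, votes = x, 1
        else:
            votes += 1 if x == candidate else -1
    return candidate

print(majority([6, 1, 3, 1, 3, 3, 4, 3, 3, 5]))   # 3
```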

An identity on the naturals

Some time ago, while working on performance optimization of linear algebra operations with Boolean arrays, I encountered an interesting summation property for an array \(a\) of length \(n\):

\begin{equation*} \sum_{i \mid a_i \neq 0} \; \sum_{j=i+1}^{n-1} f_j = \sum_{j=0}^{n-1} f_j \sum_{i < j \mid a_i \neq 0} 1 \end{equation*}

It turns out that the RHS can be elegantly transformed into a scan, giving rise to a beautiful identity that applies to all natural numbers, not just Booleans as I initially thought:

(+`≑·+Β΄/β‰€βŸœ<βŠ’Λœ) β€’rand.Range˜ 1e3
1

This identity holds because βŠ’Λœ represents the indices i of the list, and since +Β΄(/𝕩)=i ←→ iβŠ‘π•©, the fold sums all the elements in 𝕩 up to i, for i in the range of the length of the list. Ergo, a scan.
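A quick numerical check of the same identity, sketched in Python (my own illustration; /𝕩 becomes a list of repeated indices):

```python
import random

x = [random.randrange(10) for _ in range(1000)]

# BQN's /x: index i repeated x[i] times (already sorted ascending)
rep = [i for i, v in enumerate(x) for _ in range(v)]

# RHS: for each index i, how many entries of /x are <= i
rhs, count, j = [], 0, 0
for i in range(len(x)):
    while j < len(rep) and rep[j] <= i:
        count += 1; j += 1
    rhs.append(count)

# LHS: the plus-scan of x
lhs, s = [], 0
for v in x:
    s += v; lhs.append(s)

print(lhs == rhs)   # True
```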

Depth of nested lists

Studying tree algorithms in APL, I learned about the depth vector representation. If the nested object in consideration is a string, the best approach is using boolean masks. However, when dealing with a BQN list, recursion becomes necessary to determine the depth of nested elements. Here’s how it can be implemented:

{=β—ΆβŸ¨β‹ˆ0, 1+Β·βˆΎπ•ŠΒ¨βŸ©π•©} ⟨1, ⟨2, ⟨3⟩, ⟨4, ⟨5, ⟨6, 7⟩⟩⟩⟩, 1⟩
⟨ 1 2 3 3 4 5 5 1 ⟩
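The recursion is short enough to restate in Python (a sketch of mine, assuming nesting is represented by lists):

```python
def depths(xs):
    """Flatten a nested list into the depth of each leaf."""
    out = []
    for x in xs:
        if isinstance(x, list):
            out.extend(d + 1 for d in depths(x))   # recurse, one level deeper
        else:
            out.append(1)                          # leaf at this level
    return out

print(depths([1, [2, [3], [4, [5, [6, 7]]]], 1]))
# [1, 2, 3, 3, 4, 5, 5, 1]
```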

H-index

This metric is one of the reasons for the deplorable state of modern academia, and for the headaches of outsiders trying to get in. Consider that Peter Higgs has an estimated h-index of only 12. By contrast, a random professor nowadays boasts an h-index ten times as high, with exponentially less impact. Enough ranting; let's concentrate on finding an elegant way to implement this useless thing:

HL ← (+Β΄βˆ˜Β«βŠ’Λœβ‰€+`⌾⌽)Β·/βΌβ‰ βŠΈβŒŠ
HS ← +´∨β‰₯1+βŠ’Λœ
(HL≑HS)β—Ά@β€ΏHL 14β€Ώ14β€Ώ11β€Ώ9β€Ώ5β€Ώ5β€Ώ1β€Ώ1β€Ώ1β€Ώ1β€Ώ0
5

If someone ever publishes that much, sorting eventually becomes slower:

HLβ€ΏHS {π•Žβ€’_timed𝕩}Β¨< 1e8 β€’rand.Range 1e3
⟨ 0.083824959 0.21801262700000001 ⟩

A testament to the idea that the simplest solution in BQN is often the most efficient: I initially clipped my citations array with {β‰ Β¨βŠ”β‰ βˆ˜π•©Β¨βŒΎ(β‰₯βŸœβ‰ βˆ˜π•©βŠΈ/)𝕩}, which is just /βΌβ‰ βŠΈβŒŠ.
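Both strategies translate directly to Python; here is my sketch of the sorting version next to a clip-and-count version in the spirit of HL (all names are mine):

```python
def h_index_sort(citations):
    """h = number of papers with at least that many citations."""
    s = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(s) if c >= i + 1)

def h_index_linear(citations):
    """Counting variant: clip counts at n, then scan from the top."""
    n = len(citations)
    buckets = [0] * (n + 1)
    for c in citations:
        buckets[min(c, n)] += 1      # the clipping step
    papers = 0
    for h in range(n, -1, -1):
        papers += buckets[h]
        if papers >= h:
            return h
    return 0

cites = [14, 14, 11, 9, 5, 5, 1, 1, 1, 1, 0]
print(h_index_sort(cites), h_index_linear(cites))   # 5 5
```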

Trapping rain water

This is a classical interview problem that can be solved in linear time. Interestingly, it admits a very elegant array solution:

(+´⊒-˜⌈`⌾⌽⌊⌈`) [0,1,0,2,1,0,1,3,2,1,2,1]
6

That is, we take the minimum of max-scans from the left and from the right, and subtract the corresponding height. Reducing the resulting array gives the amount of trapped water.
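The scan formulation reads almost the same in Python (an illustrative sketch of mine):

```python
from itertools import accumulate

def trapped(heights):
    """Water above each bar = min(max-scan from the left,
    max-scan from the right) minus the bar's own height."""
    left = list(accumulate(heights, max))
    right = list(accumulate(reversed(heights), max))[::-1]
    return sum(min(l, r) - h for l, r, h in zip(left, right, heights))

print(trapped([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]))   # 6
```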

A closely related problem is container with most water, which unfortunately is not so easy to solve in linear time using an array approach (one can easily implement the imperative two pointers solution in BQN, but it will probably be slow). Here are two solutions, one \(O(n^2)\) and the other \(O(n\log n)\), both tacit:

⟨⌈´∘β₯ŠβŒŠβŒœΛœΓ—Β·-βŒœΛœβŠ’Λœ, βŒˆΒ΄βˆ¨Γ—(⌈`⊸-⌈⊒-⌊`)βˆ˜β’βŸ© {10 π•Žβ€’_timed𝕩}Β¨< β€’rand.Range˜1e4
⟨ 0.080050875 4.14558e¯5 ⟩

Computing edit distances

The Levenshtein (or edit) distance is a measure of the similarity between two strings. It is defined by the following recurrence, which is the basis of dynamic programming algorithms like Wagner–Fischer:

\begin{align*} d_{i0} &= i, \quad d_{0j} = j, \\ d_{ij} &= \min \begin{cases} d_{i-1,j-1} + \mathbf{1}_{s_i \neq t_j} \\ d_{i-1,j} + 1 \\ d_{i,j-1} + 1 \end{cases} \end{align*}

There is an elegant implementation of a variation of the Wagner–Fischer algorithm in the BQNcrate. It has been particularly challenging for me to understand itβ€”not due to the clarity of the primitives, but rather because of the clever transformation employed. I believe that this variant can be derived by shifting the distance matrix. Given two strings \(s\) and \(t\) of lengths \(n\) and \(m\), respectively, we define a new distance matrix as follows:

\begin{equation*} p_{ij} = d_{ij} + n - i + m - j \end{equation*}

Under this transformation, the recurrence relation becomes:

\begin{align*} p_{i0} &= p_{0j} = m + n, \\ p_{ij} &= \min \begin{cases} p_{i-1,j-1} + \mathbf{1}_{s_i \neq t_j} - 2 \\ p_{i-1,j} \\ p_{i,j-1} \end{cases} \end{align*}

The above recurrence can be easily identified in the 3-train's middle function, which is folded over the table of the costs (table comparing the characters). Note that we compare insertions and substitutions, and then we can do a min scan over the result to get the deletions, which gives a vectorised implementation.
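Before the shifted variant, it may help to see the plain recurrence as a rolling-row DP; a Python sketch of mine (not the BQNcrate code):

```python
def levenshtein(s, t):
    """Classic Wagner-Fischer DP, keeping one row at a time."""
    prev = list(range(len(t) + 1))               # d[0][j] = j
    for i, cs in enumerate(s, 1):
        cur = [i]                                # d[i][0] = i
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j - 1] + (cs != ct),   # substitution
                           prev[j] + 1,                # deletion
                           cur[j - 1] + 1))            # insertion
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))   # 3
```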

The only part I can't quite piece together is the construction of the cost table, which is done by reversing \(t\). Given that the final result for \(p_{ij}\) is located in the bottom-right corner and we use foldr, I would have expected \(s\) to be the one reversed instead. However, both approaches work, as demonstrated by the following code:

_l ← {Β―1βŠ‘(1⊸+β₯Š+)β—‹β‰ (⌊`⊒⌊⊏⊸»∘⊒-0∾1+⊣)˝𝔽}
T ← ⌽⊸(=⌜)_l≑=⌜⟜⌽_l
Tβ—‹{@+97+𝕩‒rand.Range 25}Β΄ 1e4β€Ώ1e5
1

I suspect the above can be explained by the following properties of the Levenshtein distance:

  • \(L(s,t) = L(t,s)\)
  • \(L(s,t) = L(\text{rev}(s),\text{rev}(t))\)
  • \(L(\text{rev}(s),t) = L(s,\text{rev}(t))\)

If you know why both formulations work, please let me know!

Solving the cubic equation

This function computes the real roots of an arbitrary cubic equation. Initially, the equation is transformed into its depressed form via an appropriate substitution. Depending on the sign of the discriminant, the roots are then determined using Cardano's method when the discriminant is positive, or ViΓ¨te’s trigonometric method when it is negative. In the case where the discriminant is zero, the proportionality to the square of the Vandermonde polynomial implies that a repeated root is present, and the roots are resolved through direct analytical methods. We have chosen those methods to avoid using complex numbers, which are not yet supported in BQN.

Cub ← {aβ€Ώbβ€Ώcβ€Ώd:
  (bΓ·3Γ—a)-Λœβ€’math{
    𝕩>0 ? +´𝕩(𝕗.Cbrt+β‹ˆ-)⟜√˜-qΓ·2;
    𝕩=0 ? 0⊸=β—ΆβŸ¨Β―1β€Ώ2β€Ώ2Γ·ΛœΒ·π•—.CbrtΓ—βŸœ4,3⊸β₯ŠβŸ©q;
    (2Γ—βˆš-pΓ·3)×𝕗.Cos(2Γ—Ο€Γ—β†•βŠΈΓ·3)-˜3Γ·Λœπ•—.Acos(√-3Γ·p)Γ—1.5Γ—qΓ·p
  }(27÷˜p⋆3)+4Γ·ΛœΓ—Λœq←(dΓ·a)-(27÷˜3β‹†ΛœbΓ·a)+3÷˜bΓ—a÷˜p←(cΓ·a)-3Γ·ΛœΓ—ΛœbΓ·a
}

The above implementation only works when a≒0; it will yield NaN otherwise. Here are some tests for the four possible branches:

CubΒ¨ ⟨1β€Ώ0β€ΏΒ―7β€Ώ6, 1β€ΏΒ―1β€ΏΒ―8β€Ώ12, 1β€ΏΒ―6β€Ώ12β€ΏΒ―8, 1β€Ώ3β€Ώ0β€ΏΒ―1⟩ 
⟨ ⟨ 2.0000000000000004 1 ¯3.0000000000000004 ⟩ ⟨ ¯2.9999999999999996 1.9999999999999998 1.9999999999999998 ⟩ ⟨ 2 2 2 ⟩ ⟨ 0.532088886237956 ¯0.6527036446661387 ¯2.879385241571817 ⟩ ⟩
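The same case analysis in Python (my sketch; cbrt is defined by hand so the sign survives for negative arguments):

```python
import math

def cbrt(x):
    """Real cube root, keeping the sign of x."""
    return math.copysign(abs(x) ** (1 / 3), x)

def cubic_roots(a, b, c, d):
    """Real roots of a*x^3 + b*x^2 + c*x + d, with the same case
    split as Cub: Cardano for positive discriminant, direct
    formulas when it vanishes, Viete's trigonometric method
    otherwise. Assumes a != 0."""
    p = c / a - b * b / (3 * a * a)
    q = d / a + 2 * b ** 3 / (27 * a ** 3) - b * c / (3 * a * a)
    shift = -b / (3 * a)
    disc = q * q / 4 + p ** 3 / 27
    if disc > 0:                          # one real root
        s = math.sqrt(disc)
        ts = [cbrt(-q / 2 + s) + cbrt(-q / 2 - s)]
    elif disc == 0:                       # repeated roots
        r = cbrt(4 * q)
        ts = [-r, r / 2, r / 2]
    else:                                 # three distinct real roots
        m = 2 * math.sqrt(-p / 3)
        theta = math.acos(3 * q / (2 * p) * math.sqrt(-3 / p)) / 3
        ts = [m * math.cos(theta - 2 * math.pi * k / 3) for k in range(3)]
    return [t + shift for t in ts]

print(cubic_roots(1, 0, -7, 6))   # roughly [2, 1, -3]
```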

QR decomposition

I put some effort golfing this QR decomposition implementation, and I got a very satisfying 98 chars one-liner. Ungolfed a bit, it looks like this:

QR ← +Λβˆ˜Γ—βŽ‰1β€Ώβˆž{
  1=βŠ’Β΄β‰’π•© ? π•©βŠΈΓ·βŸœβŠ‘βŠΈβ‹ˆβˆš+ΛΓ—Λœπ•©;
  ∾˘{(q𝔽𝕨)β‹ˆ(r𝔽t)∾0π”½βŸk𝕩}Β΄π•Šπ”½{π•˜-𝕩𝔽tβ†©π•©β‰βŠΈπ”½π•˜}(kβ†“Λ˜π•©)βŠ‘qβ€Ώrβ†π•Šπ•©β†‘Λ˜Λœkβ†βŒˆ2Γ·ΛœβŠ’Β΄β‰’π•©βŠ£t←@
}

The function works like this: it recursively computes the QR decomposition of a matrix by first handling the base case (normalizing a single column) then splitting the matrix into two halves. The first half is decomposed into \(Q_0\) and \(R_0\), and the second half is orthogonalized against \(Q_0\) by subtracting its projection, yielding a residual matrix that is itself decomposed into \(Q_1\) and \(R_1\). Finally, the overall orthogonal matrix \(Q\) is formed by horizontally concatenating \(Q_0\) and \(Q_1\), and the upper triangular \(R\) is assembled as a block matrix combining \(R_0\), the projection coefficients, and \(R_1\):

\begin{equation*} Q \, R = \begin{pmatrix} Q_0 & Q_1 \end{pmatrix} \begin{pmatrix} R_0 & T \\ 0 & R_1 \end{pmatrix} = Q_0 R_0 + Q_0 T + Q_1 R_1, \end{equation*}

We can test it with random matrices:

(⊒∾⟜<m-+Λβˆ˜Γ—βŽ‰1β€ΏβˆžΒ΄) QR m ← 3β€Ώ3β€’rand.Range 0
β”Œβ”€                                                                                                                                        
Β· β”Œβ”€                                                             β”Œβ”€                                                            β”Œβ”€         
  β•΅  0.8157427013276365 Β―0.577946856084976 0.02326535562123689   β•΅ 0.9106163258394209 0.7411115590785274  0.7652096291273813   β•΅ 0 0 0    
    0.32843727859545113 0.4297133155667652 Β―0.8411155809122974                      0  0.709988720748101 0.15322713799622295     0 0 0    
      0.476122672490509 0.6937751061879561  0.5403547934222346                      0                  0 0.36577814222564664     0 0 0    
                                                               β”˜                                                             β”˜         β”˜  
                                                                                                                                         β”˜
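The recursive halving scheme can be restated in Python on plain lists of columns (an illustrative sketch of mine, not a translation of the golfed code):

```python
import math

def qr(cols):
    """Recursive block QR on a list of column vectors."""
    if len(cols) == 1:                    # base case: normalize the column
        v = cols[0]
        n = math.sqrt(sum(x * x for x in v))
        return [[x / n for x in v]], [[n]]
    k = (len(cols) + 1) // 2              # split the columns in half
    q0, r0 = qr(cols[:k])
    a1 = cols[k:]
    # projection coefficients T = Q0^T A1
    t = [[sum(qi * ai for qi, ai in zip(q, a)) for a in a1] for q in q0]
    # residual = A1 - Q0 T, orthogonal to span(Q0)
    res = [[a[i] - sum(q0[j][i] * t[j][m] for j in range(k))
            for i in range(len(a))] for m, a in enumerate(a1)]
    q1, r1 = qr(res)
    # Q = [Q0 Q1],  R = [[R0, T], [0, R1]]
    q = q0 + q1
    r = [row + trow for row, trow in zip(r0, t)] + \
        [[0.0] * k + row for row in r1]
    return q, r
```

Since the residual is exactly A1 minus its projection onto Q0, the two halves of Q stay mutually orthogonal, which is what makes the block assembly of R valid.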

Fast Fourier Transform

Below is an implementation of the radix-2 Cooley–Tukey FFT algorithm. The function leverages BQN's headers to define the inverse transform in a succinct way using the property:

\begin{equation*} \text{iFFT}[\mathbf{x}] = \frac{1}{N}\text{FFT}^{*}[\mathbf{x}^{*}] \end{equation*}

We also define a namespace for dealing with complex numbers, in particular the Cis function:

z ← {
  _p ⇐ {(-´𝔽¨)β‹ˆ(+´𝔽¨)⟜⌽}
  Cβ€ΏE ⇐ βŸ¨β‹ˆβŸœ-´˘, β€’math{𝕗.Cosβ‰Λ˜π•—.Sin}⟩
}
FFT ← {π•ŠβΌ: z.C{β‰ Γ·ΛœΒ·π”½π”Ύβˆ˜π”½}π•Šπ•©; (1=β‰ )β—ΆβŸ¨(+∾-)⟜(βŠ’Γ—z._p˘·z.E∘-Ο€Γ—β†•βŠΈΓ·βˆ˜β‰ )Β΄(π•ŠΒ¨βŠ’βŠ”Λœ2|βŠ’Λœ), βŠ’βŸ©π•©}

Let's confirm that the inverse returns back the original list:

(+´∘β₯ŠβŠ’-FFT⁼∘FFT) 0β€’rand.Range˜2β‹ˆΛœ2⋆10
1.914614300435602eΒ―14

We could also compare with the discrete Fourier transform, which despite being \(O(N^2)\) should have a nice array formulation:

DFT ← β‰Λ˜Β΄<˘{π”½βˆ˜β‰+Λβˆ˜Γ—βŽ‰1β€Ώβˆž z._pΛœΒ·π”½1β€Ώ0⍉⁼·z.E Β―2Γ—Ο€Γ—β‰ Γ·ΛœΒ·Γ—βŒœΛœβŠ’Λœ}
(+´∘β₯ŠFFT-DFT) 0β€’rand.Range˜2β‹ˆΛœ2⋆10
Β―2.8412011632283907eΒ―10

In the DFT code above, I got into a big mess with the complex numbers, because the z namespace was too tightly coupled with the FFT implementation. I had to do a bunch of enclosing and coupling to get the same shape. With proper complex numbers support it would be something like:

DFT ← ⊒+Λβˆ˜Γ—βŽ‰1β€ΏβˆžΛœΒ·β‹†Β―2Γ—Ο€Γ—β‰ Γ·ΛœΒ·Γ—βŒœΛœβŠ’Λœ

Tensor n-mode product

The n-mode product is a key ingredient for computing the Tucker decomposition of a tensor. For this we can use the HOSVD algorithm: a method that has been rediscovered several times. For example, in the nuclear quantum dynamics community it is known as POTFIT and was published before the often cited De Lathauwer paper, see arXiv:1309.5060 for a discussion. For a tensor \(\mathcal{X}\) and a matrix \(U\) we define:

\begin{equation*} (\mathcal{X} \times_n U)_{i_1,\dots,i_{n-1},\,j,\,i_{n+1},\dots,i_N} = \sum_{i_n=1}^{I_n} x_{i_1,\dots,i_n,\dots,i_N}\, u_{j,i_n}. \end{equation*}

In BQN's parlance, we can express it as:

{+Λβˆ˜Γ—βŽ‰1β€ΏβˆžβŸœπ•©βŒΎ(β‰βŸπ•—)𝕨}

A beautiful example of notation as a tool of thought, in my opinion: this deferred 1-modifier (itself a compact melange of six modifiers) computes the 𝕗-mode product of a tensor 𝕨 and a matrix 𝕩. It works by moving the 𝕗-axis to the front, then multiplying 𝕨 and 𝕩 without the need for explicit unfolding, courtesy of the rank operator, and moving the last axis of the result back to 𝕗, all gracefully managed by under.
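As a cross-check of the definition, here is a Python sketch (my own, on a flat row-major tensor) that contracts the chosen axis against the second axis of u:

```python
import math
from itertools import product

def n_mode(x, shape, u, mode):
    """n-mode product: contract axis `mode` of the row-major
    tensor x (with dimensions `shape`) against the second axis
    of the matrix u (given as a list of rows)."""
    out_shape = list(shape)
    out_shape[mode] = len(u)

    def flat(ix, shp):                      # row-major linear index
        k = 0
        for i, s in zip(ix, shp):
            k = k * s + i
        return k

    out = [0.0] * math.prod(out_shape)
    for ix in product(*map(range, out_shape)):
        src, s = list(ix), 0.0
        for r in range(shape[mode]):        # the sum over i_n
            src[mode] = r
            s += x[flat(src, shape)] * u[ix[mode]][r]
        out[flat(ix, out_shape)] = s
    return out, out_shape
```

For a 2-way "tensor", the mode-0 product is ordinary matrix multiplication, and multiplying any mode by an identity matrix leaves the tensor unchanged.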

Footnotes:

1

Almost Perfect Artifacts Improve only in Small Ways: APL is more French than English, Alan J. Perlis (1978). From jsoftware's papers collection.

2

Initially, I intended to rigorously attribute all contributions, but this quickly filled the text with footnotes. I often get help streamlining my solutions from Marshall Lochbaum (the BQN creator), dzaima (the CBQN developer), and other fine folks from the BQN matrix room, thank you all! Please check the logs for more context.

3

Don’t believe me? Just ask Kilgore Trout!